We all love Ehcache. But the rise of real-time Big Data means you want to keep larger amounts of data in memory with low, predictable latency. In this webinar,
we explain how BigMemory Go can turbocharge your Ehcache deployment.
Capped collections in MongoDB allow you to limit the size and number of documents in a collection. This is useful for applications such as logging, news feeds, or caching, where collections need to be bounded. Documents in a capped collection cannot be deleted individually, and updates may not grow a document; in exchange, documents are returned in insertion order and the collection is prevented from growing out of control. Geospatial indexing allows finding documents close to a location point, making it useful for location-based services. Real-world examples of using capped collections include activity feeds, event logging, caching, and data pages.
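The bounded, insertion-ordered behavior described above can be illustrated with a short sketch. This is not MongoDB code; Python's `collections.deque` is standing in for a capped collection here, and the cap of 3 is an arbitrary example value.

```python
from collections import deque

# A capped collection keeps insertion order and discards the oldest
# entries once the cap is reached -- deque(maxlen=...) mimics that.
log = deque(maxlen=3)  # cap of 3 "documents", chosen arbitrarily

for event in ["login", "click", "purchase", "logout"]:
    log.append({"event": event})

# The oldest entry ("login") has been evicted; order is preserved.
print([d["event"] for d in log])  # → ['click', 'purchase', 'logout']
```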
A high-performance, scale-out HPC cloud: AIST Super Green Cloud (Ryousei Takano)
The document contains configuration instructions for creating a cluster in a cloud computing environment called myCluster. It specifies creating a frontend node and 16 compute nodes using specified templates, compute and disk offerings. It also defines the cluster name, zone, network, and SSH key to use. The cluster can then be started and later destroyed along with a configuration file.
The document discusses building cloud-native applications that are flexible, redundant, and infinitely scalable. It covers using a message bus as the core of the architecture and how to design distributed online transaction processing (OLTP) and online analytical processing (OLAP) databases for the cloud. Requirements for the system include no data loss, the ability to update seamlessly, and not becoming a bottleneck.
The BigMemory Revolution in Financial Services (Software AG)
Dozens of financial institutions — including 30% of Fortune 500 banks and credit card companies — already use Terracotta BigMemory Max to speed fraud detection, meet previously unthinkable service level agreements (SLAs), and revolutionize performance around risk analysis, portfolio tracking, and compliance. In this webcast, you'll learn how BigMemory Max can keep ALL of your data in machine memory for instant, anytime access.
Software AG - From Suggestion to Process Improvement - ProcessForum Nordic, N... (Software AG)
The document outlines a process for continuous process improvement that includes defining problems, measuring current performance, analyzing root causes, improving processes by identifying solutions, controlling new processes, and collaborating throughout by sharing ideas and knowledge. Employees can submit improvement suggestions that are prioritized and delegated for evaluation and implementation if approved. Communication of changes is emphasized.
The Software AG TECHcommunity is an online community for technology professionals with over 37,000 members from 100 countries. It provides resources for sharing knowledge about Software AG products including product news, documentation, code samples, best practices, discussion forums, and opportunities for collaboration. Members can discover, share, and discuss solutions as well as build their professional network.
This document discusses several key questions to consider when creating an offline application, including what functionality should be available offline, how to store application data locally, and how to handle synchronization between offline and online data. It provides examples of different storage options for offline data, such as the Application Cache, the Service Worker Cache API, Web Storage, Web SQL, the File System API, and IndexedDB. It also discusses approaches for resolving conflicts when synchronizing offline and online data, such as using timestamps or optimistic/pessimistic locking strategies. The document is an informative resource for developers building offline-capable web applications.
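One of the conflict-resolution approaches mentioned, timestamp-based "last write wins", can be sketched in a few lines. The record shape and the `updated_at` field name are invented for illustration, not taken from any particular sync framework.

```python
def resolve(local: dict, remote: dict) -> dict:
    """Last-write-wins merge: the copy with the newer timestamp survives.
    Each record is assumed to carry an 'updated_at' field (epoch seconds)."""
    return local if local["updated_at"] >= remote["updated_at"] else remote

# An edit made offline, and the copy the server holds for the same record.
offline_edit = {"id": 7, "title": "draft v2", "updated_at": 1700000200}
server_copy  = {"id": 7, "title": "draft v1", "updated_at": 1700000100}

winner = resolve(offline_edit, server_copy)
print(winner["title"])  # → draft v2
```

A real sync layer would also need clock-skew handling or version vectors; this only shows the basic rule.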
Rahul Singh of Anant Corporation covers three common problems in DataStax/Cassandra operations that stem from data modeling and outlines strategies and best practices for dealing with them.
This is the official tutorial from Oracle (jillisacebi75827)
This is the official tutorial from Oracle.
http://docs.oracle.com/javase/tutorial/jdbc/
Here is a good tutorial for getting started with SQLite.
http://www.tutorialspoint.com/sqlite/sqlite_java.htm
Chapter 34 in the Liang text. He uses MySQL. Getting started with SQLite might be a little
easier, but he does a good job of defining the issues in not too many pages.
For this assignment you can use SQLite OR MySQL.
There are numerous videos on YouTube that demonstrate how to do this. Some are better than
others. When you find one that is helpful, post a link to it on the discussion board.
We have been working with the front end (GUI) and the middle (creating and manipulating
collections of objects), and now we will add on the back end: the persistent storage of data in
your applications. This exercise is to get you comfortable with connecting to a DB and adding,
deleting, and retrieving data. I encourage you to play with this one; do more than the minimum.
SQLite is a very small database. It is included by default in Android and iOS. It is surprisingly
powerful for such a small footprint. It can be frustrating to see what’s going on – what is in the
DB, did the query work correctly? MySQL is often called a community database. It belongs to
Oracle, but they allow anyone to use it for free. The recent versions of the MySQL Workbench,
which lets you see what’s going on in your database, are really very nice – starting to look like
the Access front end.
Create a connection to a relational database using SQLite or MySQL.
Create a single database table to hold information.
Let’s make a simple class called Person for this exercise.
Person
firstName (String)
lastName (String)
age (int)
ssn (long)
creditCard (long)
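The assignment targets Java via JDBC, but the same flow can be sketched with Python's built-in sqlite3 module. The table and column names below are one possible mapping of the Person fields, not a required schema.

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Person:
    first_name: str
    last_name: str
    age: int
    ssn: int          # stored as INTEGER; never use real SSNs here
    credit_card: int  # likewise, made-up numbers only

conn = sqlite3.connect(":memory:")  # a file path would persist the data
# IF NOT EXISTS makes creation idempotent, so it is safe to run every time.
conn.execute("""
    CREATE TABLE IF NOT EXISTS person (
        id          INTEGER PRIMARY KEY AUTOINCREMENT,
        first_name  TEXT,
        last_name   TEXT,
        age         INTEGER,
        ssn         INTEGER,
        credit_card INTEGER
    )
""")
```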
Note that once you have the DB created, you don’t want to do this again every time you run your
test program. The easiest way to deal with this – for this assignment – is to comment out the code
that creates the database and the table while you experiment with the following.
(Aside: I chose ssn and creditCard as fields here so that you might think about the persistent
storage of sensitive data. There are some pretty strict laws governing the storage of certain data.
Please don’t use any actual social security numbers or credit card numbers in this exercise.)
Demonstrate the insertion of a record into the database. Insert several records.
Write a method called insertPerson(Person person) that adds a person object to your database.
Create another object of type Person, and demonstrate calling your method, passing the object to
the method.
Demonstrate the retrieval of information from the database. Use SQL SELECT statements to
retrieve a particular Person from the database.
Write a method called selectPerson that returns a Person object. This method retrieves the data
for a Person from the database. We also need to pass a parameter to identify which person. You
can use ‘name’ if you like, or, if you find it easier, the database-generated key.
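A selectPerson sketch to match, again in Python's built-in sqlite3 rather than JDBC; looking up by last name is just one choice of identifying parameter, and the seeded row is invented sample data.

```python
import sqlite3
from dataclasses import dataclass
from typing import Optional

@dataclass
class Person:
    first_name: str
    last_name: str
    age: int
    ssn: int
    credit_card: int

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE IF NOT EXISTS person (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    first_name TEXT, last_name TEXT, age INTEGER,
    ssn INTEGER, credit_card INTEGER)""")
conn.execute("INSERT INTO person (first_name, last_name, age, ssn, credit_card)"
             " VALUES ('Grace', 'Hopper', 45, 111223333, 4111111111111111)")

def select_person(last_name: str) -> Optional[Person]:
    """Fetch one Person by last name; returns None when no row matches."""
    row = conn.execute(
        "SELECT first_name, last_name, age, ssn, credit_card "
        "FROM person WHERE last_name = ?",
        (last_name,),
    ).fetchone()
    return Person(*row) if row else None

p = select_person("Hopper")
```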
Big Data Expo 2015 - Gigaspaces Making Sense of it all (BigDataExpo)
NoSQL engines are often limited in the types of queries they can support due to the distributed nature of the data. In this session we will learn patterns for overcoming this limitation and combining multiple query semantics with NoSQL-based engines. We will demonstrate specifically a combination of key/value, SQL-like, document-model, and graph-based queries, as well as more advanced topics such as handling partial updates and querying through projection. We will also demonstrate how to create a mash-up between those APIs, i.e., write fast through the key/value API and execute complex queries on that same data through SQL.
Adopting Hadoop to manage your Big Data is an important step, but not the end-solution to your Big Data challenges. Here are some of the additional considerations you must face:
Choosing the right cloud for the job: The massive computing and storage resources that are needed to support Big Data applications make cloud environments an ideal fit, and more than ever, there is a growing number of choices of cloud infrastructure types and providers. Given the diverse options, and the dynamic environments involved, it becomes ever more important to maintain the flexibility for all your IT needs.
Big Data is a complex beast: it involves many different moving parts, in large clusters, and is continually growing and evolving. Managing such an environment manually is not a viable option. The question is, how can you achieve automation of all this complexity?
The world beyond Hadoop: Big Data is not just Hadoop – there is a whole rapidly growing ecosystem to contend with, including NoSQL, data processing, and analytics tools, as well as your own application services. How can you manage deployment, configuration, scaling, and failover of all the different pieces in a consistent way?
In this session, you'll learn how to deploy and manage your Hadoop cluster on any Cloud, as well as manage the rest of your big data application stack using a new open source framework called Cloudify.
Building an Analytic Extension to MySQL with ClickHouse and Open Source (Altinity Ltd)
This is a joint Percona-Altinity webinar.
In this webinar we will discuss suggestions and tips on how to recognize when MySQL is overburdened with analytics and can benefit from ClickHouse’s unique capabilities.
We will then walk through important patterns for integrating MySQL and ClickHouse which will enable the building of powerful and cost-efficient applications that leverage the strengths of both databases.
Building an Analytic Extension to MySQL with ClickHouse and Open Source.pptx (Altinity Ltd)
Building an Analytic Extension to MySQL with ClickHouse and Open Source
In this webinar Percona and Altinity offer suggestions and tips on how to recognize when MySQL is overburdened with analytics and can benefit from ClickHouse’s unique capabilities.
Also, they will walk you through important patterns for integrating MySQL and ClickHouse which will enable the building of powerful and cost-efficient applications that leverage the strengths of both databases.
Hybrid solutions – combining in memory solutions with SSD - Christos Erotocritou (JAXLondon_Conference)
The document provides an overview of different technologies used in big data solutions, including SQL, NoSQL, in-memory data grids (IMDG), key/value stores, and stream processing technologies. It discusses how SSD technology can help shape the big data landscape by enabling greater scale at lower costs. The document then discusses building a hybrid big data solution using an IMDG and SSD to create a common high-speed data store. It provides examples of using rich query semantics like nested queries, projections, and aggregations with data grids. Finally, it briefly discusses orchestration in big data, including deploying and managing big data applications over their lifecycle.
Petascale Analytics - The World of Big Data Requires Big Analytics (Heiko Joerg Schick)
The document discusses big data and analytics technologies. It describes how new technologies like Hadoop and MapReduce enable processing of extremely large datasets. It also discusses future technologies like exascale computing and storage class memory that will be needed to manage increasing data volumes and support real-time analytics.
Apache Cassandra is a free and open-source, distributed, wide column store, NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure.
The document discusses the TidyUp for Mac software, which cleans up the Mac operating system by removing duplicate and damaged files. It searches for duplicate files based on name, date, size, and other attributes. The software has a graphical interface and works on all Mac OS versions. It enhances performance by freeing up disk space and removing items causing errors like "low disk space." The process involves installing TidyUp, performing a quick scan, grouping duplicate items, reviewing the "smart basket," and removing selected items manually or automatically.
This presentation will give you a brief overview of Stellar Drive ToolBox 2.
Keep your Mac healthy with this power-packed maintenance and optimizing toolbox. Stellar Drive ToolBox is a set of 12 must-have disk utilities to manage, secure, protect, and optimize your Mac.
Aleksandr Tereshchuk - Memory Analyzer Tool and memory optimization tips in Android (UA Mobile)
The document discusses memory analysis and optimization tips for Android applications. It introduces the Eclipse Memory Analyzer tool, which can be used to analyze memory usage and find memory leaks. Some key points covered include the low RAM sizes on Android devices, how the garbage collector works, how to analyze an object's shallow size and retained size, finding memory dominators, and common memory leak pitfalls like non-static handlers and large bitmaps stored in static fields. The document encourages analyzing heap dumps in the Memory Analyzer tool to optimize memory usage and find leaks.
In this presentation, Gil Tene (CTO of Azul Systems, and a JVM mechanic) discusses examples of how the freedom the JVM has in re-interpreting the meaning of code can have dramatic implications for performance and other code behavior.
This document provides an overview and definitions of CQRS (Command Query Responsibility Segregation). It discusses:
1. CQRS separates commands, which change data, from queries, which retrieve data without modifying it. This avoids polluting domain models with query logic.
2. Queries access a read-optimized data store directly via DTOs (data transfer objects) without involving domain models or object-relational mapping. Commands update data by invoking command handlers and raising domain events.
3. Internal events are raised in response to commands and can be persisted for audit logging, recovery, and data analysis purposes. External events are published to notify other systems of changes.
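The command/query split and event flow summarized above can be sketched in a few lines. The class, handler, and store names below are invented for illustration and do not come from any particular CQRS framework.

```python
from dataclasses import dataclass

# --- write side: a command expresses intent to change data ---
@dataclass
class RenameUser:
    user_id: int
    new_name: str

events = []                          # internal event log (audit / recovery)
write_store = {1: {"name": "old"}}   # domain state the command handler owns

def handle_rename(cmd: RenameUser) -> None:
    """Command handler: mutate state, then raise an internal event."""
    write_store[cmd.user_id]["name"] = cmd.new_name
    events.append(("UserRenamed", cmd.user_id, cmd.new_name))

# --- read side: queries serve plain DTOs from a read-optimized store ---
read_store = {1: {"user_id": 1, "name": "old"}}

def apply_events() -> None:
    """Project internal events onto the read model."""
    for kind, user_id, name in events:
        if kind == "UserRenamed":
            read_store[user_id]["name"] = name

def query_user(user_id: int) -> dict:
    return dict(read_store[user_id])  # a DTO copy, no domain object exposed

handle_rename(RenameUser(1, "new"))
apply_events()
```

In a real system the projection would run asynchronously, which is why CQRS reads are usually described as eventually consistent.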
Paper presented at the 12th International Conference on Digital Preservation, November 2-6, 2015. University of North Carolina at Chapel Hill.
Abstract:
An increasing amount of scientific work is performed in silico, such that the entire process of investigation, from experiment to publication, is performed by computer. Unfortunately, this has made the problem of scientific reproducibility even harder, due to the complexity and imprecision of specifying and recreating the computing environments needed to run a given piece of software. Here, we consider from a high level what techniques and technologies must be put in place to allow for the accurate preservation of the execution of software. We assume that there exists a suitable digital archive for storing digital objects; what is missing are frameworks for precisely specifying, assembling, and executing software with all of its dependencies. We discuss the fundamental problems of managing implicit dependencies and outline two broad approaches: preserving the mess, and encouraging cleanliness. We introduce three prototype tools for preserving software executions: Parrot, Umbrella, and Prune.
Conference: HP Big Data Conference 2015
Session: Real-world Methods for Boosting Query Performance
Presentation: "Extra performance out of thin air"
Presenter: Konstantine Krutiy, Principal Software Engineer / Vertica Whisperer
Company: Localytics
Description:
Learn how to get extra performance out of Vertica from areas you never expected.
This presentation will illustrate how you can improve performance of your Vertica cluster without extra budget.
All you need is ingenuity, knowledge of Vertica internals, and the ability to challenge conventional wisdom.
We will show you real world examples on gaining performance by eliminating unneeded work, eliminating unneeded system waits and making your system operate more efficiently.
Visit my blog http://www.dbjungle.com for more Vertica insights
BigDataCloud meetup - July 8th - Cost effective big-data processing using Ama... (BigDataCloud)
This document discusses using Amazon Elastic MapReduce (EMR) for cost-effective big data processing. It describes the author's experience using EMR to process 1TB of log data per week for a startup. Key advantages of EMR include only paying for usage, no hardware to maintain, and ability to customize cluster resources for different jobs. The author outlines best practices learned, such as splitting logs by type and processing in smaller windows, as well as next steps like using spot instances and NoSQL for improved performance and cost savings.
NA Adabas & Natural User Group Meeting April 2023 (Software AG)
The Adabas & Natural Health Check provides customers with a no-cost, half to one day remote or onsite review of their Adabas and Natural environment. Software AG experts evaluate the customer's operating environment, Adabas performance, Natural usage, and integration points to identify opportunities for reengagement, modernization, optimization, and preparing for upcoming product upgrades. The health check includes a review of key metrics and configurations to understand resource utilization and pain points for the customer's technical staff.
Adabas & Natural Virtual User Group Meeting NAM 2022 (Software AG)
The innovations keep coming! Discover what’s new on the Adabas & Natural product roadmap that can help you optimize, modernize & transform your systems. Hear how customers successfully embraced digital transformation using APIs and data integration. Get tips from our services and solutions experts on how to address staffing challenges, end of maintenance, and demand for data for analytics.
Join your peers and experts from Software AG to explore:
• Adabas & Natural 2050+ innovations & roadmap
• Mainframe modernization and cross-agency data sharing at DELJIS
• Bi-directional API implementation at TRS
• Options to train new talent and address staffing gaps
• End of support considerations for Natural 8 on z/OS
• How to liberate data for modern data analytics
• Adabas & Natural for z/OS License Key Management
To learn more about Software AG Adabas & Natural, please visit www.adabasnatural.com
Related content
Similar to 5 Reasons to Upgrade Ehcache to BigMemory Go
Presenter: Konstantine Krutiy, Principal Software Engineer / Vertica Whisperer
Company: Localytics
Description:
Learn how to get extra performance out of Vertica from areas you never expected.
This presentation will illustrate how you can improve performance of your Vertica cluster without extra budget.
All you need is ingenuity, knowledge of Vertica internals, and the ability to challenge conventional wisdom.
We will show you real world examples on gaining performance by eliminating unneeded work, eliminating unneeded system waits and making your system operate more efficiently.
Visit my blog http://www.dbjungle.com for more Vertica insights
BigDataCloud meetup - July 8th - Cost effective big-data processing using Ama...BigDataCloud
This document discusses using Amazon Elastic MapReduce (EMR) for cost-effective big data processing. It describes the author's experience using EMR to process 1TB of log data per week for a startup. Key advantages of EMR include only paying for usage, no hardware to maintain, and ability to customize cluster resources for different jobs. The author outlines best practices learned, such as splitting logs by type and processing in smaller windows, as well as next steps like using spot instances and NoSQL for improved performance and cost savings.
Similaire à 5 Reasons to Upgrade Ehcache to BigMemory Go (20)
NA Adabas & Natural User Group Meeting April 2023Software AG
The Adabas & Natural Health Check provides customers with a no-cost, half to one day remote or onsite review of their Adabas and Natural environment. Software AG experts evaluate the customer's operating environment, Adabas performance, Natural usage, and integration points to identify opportunities for reengagement, modernization, optimization, and preparing for upcoming product upgrades. The health check includes a review of key metrics and configurations to understand resource utilization and pain points for the customer's technical staff.
Adabas & Natural Virtual User Group Meeting NAM 2022Software AG
The innovations keep coming! Discover what’s new on the Adabas & Natural product roadmap that can help you optimize, modernize & transform your systems. Hear how customers successfully embraced digital transformation using APIs and data integration. Get tips from our services and solutions experts on how to address staffing challenges, end of maintenance, and demand for data for analytics.
Join your peers and experts from Software AG to explore:
• Adabas & Natural 2050+ innovations & roadmap
• Mainframe modernization and cross-agency data sharing at DELJIS
• Bi-directional API implementation at TRS
• Options to train new talent and address staffing gaps
• End of support considerations for Natural 8 on z/OS
• How to liberate data for modern data analytics
• Adabas & Natural for z/OS License Key Management
To learn more about Software AG Adabas & Natural, please visit www.adabasnatural.com
Modernization - Capabilities of NaturalONE, Mainframe data integration, and Cloud/hybrid cloud architectures | Devops deployments, cloud architectures, and application/data integration best practices.
The document discusses Aha!, a modern brainstorming tool that provides an intuitive interface for creating and organizing ideas. It allows users to view existing ideas, provides filtering and sorting capabilities, and enables feedback through public commenting. The tool distinguishes between public ideas and private back-office functions for designated users. It was selected by the AN-PM team to replace their existing brainstorming method and has also garnered interest from other business units within the company for its modern features.
One Path to a Successful Implementation of NaturalONESoftware AG
One path to a successful implementation of NaturalONE | Software AG
Join the Natural Administration team from Texas Comptroller of Public Accounts and discover how they overcame programmer resistance to successfully implement and thrive using NaturalONE and DevOPs. Get tips and techniques as well as real-world samples of architecture, configuration and implementations.
The Texas Comptroller of Public Accounts successfully implemented NaturalONE in the spring of 2019, deploying the NaturalONE client to 40+ Windows 10 laptops, and upgraded to mainframe Natural V9 a few months later. We had a rocky start and a lot of resistance from senior programmers, but we survived and are thriving – even the programmer with Natural 1.2 mainframe editing experience has made the leap and is editing Natural code in NaturalONE.
Join us as we share our experienced-based insights on the following topics:
- How to get your programming staff to accept the change to NaturalONE
- Overview of TX CPA NDV Architecture for Application Development Life Cycle
- Sample an NDV configuration reference guide provided to NaturalONE users
- Discuss differences between configuration files for NDV batch server and NDV server with the CICS adapter
- How to set up NDV Monitor (NATMOPI)
- Review pre-requisites/restrictions to adhere to for NDV CICS Adapter
- External Security Configuration requirements you won’t want to miss
- How do I DEBUG code in NaturalONE? (Just an overview reference)
- Lessons learned from issues we encountered, so you can have a smoother implementation
- Tips and techniques for using NaturalONE features that highlight the power of the NaturalONE IDE
To learn more about Software AG’s NaturalOne, please visit https://www.softwareag.com/en_corporate/platform/adabas-natural/devops.html
Apama, Terracotta, webMethods Upgrade: Avoiding Common Pitfalls Software AG
Get some valuable tips and techniques to optimize your upgrade process, including:
• The single most commonly overlooked source of upgrade information (and where to find the rest)
• Highlights of the upgrade guide (including a new section on databases)
• Supported upgrade paths and the optimal sequence of events for a smooth upgrade transition
• Tips on database migration
• When to install fixes
• Managing widely dispersed information
Ten Disruptive Digital Trends Retailers Need To Know Software AG
The document discusses 10 digital trends that will disrupt retailers in the coming year. It notes that retailers will have fewer stores but offer more products by expanding inventory through endless aisles, marketplaces, and hub-and-spoke networks. Retailers will also focus on personalized, real-time customer experiences driven by more customer data sources and dynamic pricing adjusted in real time based on analytics. Advanced analytics will also be used to improve operations through better queue management, reduced inventory, and optimized customer service.
Command Central provides unified management and monitoring for webMethods products. It allows centralized, scalable, and automated management of software installations, configurations, fixes, and more. Some key uses of Command Central include centralized monitoring of environments, scripted maintenance activities, managing development environments through templates, and elastic expansion of environments.
Innovation World 2015 General Session - Dr. Wolfram JostSoftware AG
Software AG's Chief Technology Officer, Dr. Wolfram Jost's General Session Presentation from Innovation World 2015.
https://www.youtube.com/watch?v=6aZsRW5I_t4
VEA: ARIS and Alfabet Journey Together Software AG
VEA is a global service provider and partner that delivers solutions for business process management, IT portfolio planning, and enterprise architecture using ARIS and Alfabet. VEA advises clients throughout their BPM, ITPM, and EA initiatives to enable planning and architecture. VEA has enabled over 100 global customers as the only third-party provider of accelerators for effective deliverables and information delivery. VEA's services are managed through a Center of Excellence to ensure advanced platform support.
The document discusses trends in customer centricity, including fractured views of customers, the increasing amount of available information obscuring real knowledge, and privacy/security concerns. It notes three big trends: 1) signals of customer intent beyond transactions provide early warnings for trends, 2) internet of things generates more data about customer behavior, and 3) context is needed to understand customer objectives rather than just demographics or location data. The future is about gaining insights into customers through their intent and a holistic understanding of their behaviors and contexts.
This document provides an overview and demonstration of WebMethods Integration Cloud for hybrid integration. It discusses trends driving hybrid integration, such as the proliferation of SaaS applications and overloaded integration teams. It then reviews hybrid integration options using WebMethods Integration Cloud and the Integration Server. New features in the October 2015 release are highlighted. The document concludes with a demonstration of building simple integration flows in the cloud and hybrid integration scenarios using connectors, Amazon SQS, and connecting on-premise systems with Integration Cloud.
The document discusses ARIS, a software product for business process management and customer experience management. It provides information on the ARIS 2015 update and 2016 roadmap, including growth and successes in 2015, new features in the 9.8.2 release, and the vision and strategy going forward. Key areas of focus include improved usability, performance, and scalability; enhanced customer experience management capabilities; expanded mobile and API functionality; and tighter integration with other products like SAP and Alfabet.
Apama and Terracotta World: Getting Started in Predictive Analytics Software AG
The document provides an overview of predictive analytics and Terracotta and Apama products. It discusses key highlights and strategic focus areas for Terracotta and Apama in 2016-2017, including delivering an in-memory data fabric platform, enhancing integration with digital business platform products, and enabling internet of things integration and streaming analytics. The document also introduces four speakers on predictive analytics.
In-Memory Data Management Goes Mainstream - OpenSlava 2015Software AG
Manish Devgan's presentation from the OpenSlava 2015 Conference. The presentation will cover Ehcache and Terracotta Server, its recent milestones, and how it continues to help developers easily leverage in-memory storage for current and emerging workloads.
Watch the full presentation here: http://bit.ly/1MGwGUv
Thingalytics uses real-time analytics and algorithms to guide organizations through analyzing large amounts of data generated by the Internet of Things, enabling them to optimize operations, identify opportunities and minimize threats. It works by continuously monitoring data from connected devices and sensors to spot patterns and make small adjustments that improve performance. Any organization involved in the Internet of Things can benefit from Thingalytics to gain insights from their data and devices.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Trusted Execution Environment for Decentralized Process MiningLucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
Dive into the realm of operating systems (OS) with Pravash Chandra Das, a seasoned Digital Forensic Analyst, as your guide. 🚀 This comprehensive presentation illuminates the core concepts, types, and evolution of OS, essential for understanding modern computing landscapes.
Beginning with the foundational definition, Das clarifies the pivotal role of OS as system software orchestrating hardware resources, software applications, and user interactions. Through succinct descriptions, he delineates the diverse types of OS, from single-user, single-task environments like early MS-DOS iterations, to multi-user, multi-tasking systems exemplified by modern Linux distributions.
Crucial components like the kernel and shell are dissected, highlighting their indispensable functions in resource management and user interface interaction. Das elucidates how the kernel acts as the central nervous system, orchestrating process scheduling, memory allocation, and device management. Meanwhile, the shell serves as the gateway for user commands, bridging the gap between human input and machine execution. 💻
The narrative then shifts to a captivating exploration of prominent desktop OSs, Windows, macOS, and Linux. Windows, with its globally ubiquitous presence and user-friendly interface, emerges as a cornerstone in personal computing history. macOS, lauded for its sleek design and seamless integration with Apple's ecosystem, stands as a beacon of stability and creativity. Linux, an open-source marvel, offers unparalleled flexibility and security, revolutionizing the computing landscape. 🖥️
Moving to the realm of mobile devices, Das unravels the dominance of Android and iOS. Android's open-source ethos fosters a vibrant ecosystem of customization and innovation, while iOS boasts a seamless user experience and robust security infrastructure. Meanwhile, discontinued platforms like Symbian and Palm OS evoke nostalgia for their pioneering roles in the smartphone revolution.
The journey concludes with a reflection on the ever-evolving landscape of OS, underscored by the emergence of real-time operating systems (RTOS) and the persistent quest for innovation and efficiency. As technology continues to shape our world, understanding the foundations and evolution of operating systems remains paramount. Join Pravash Chandra Das on this illuminating journey through the heart of computing. 🌟
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
3. Plummeting RAM prices and exploding volumes of valuable data make real-time Big Data possible
In-Memory: maximize inexpensive memory (a steep drop in the price of RAM)
Big Data: unlock the value in your data (an explosion in the volume of business data)
4. “Memory is the new disk. The obvious thing to do is to exploit that technology.”
— The New York Times, Sep. 9, 2012
5. Terracotta BigMemory Go makes ALL of your data instantly available
Stores “big” amounts of data in machine memory for ultra-fast access
Snaps into enterprise applications
Easily scales up on a single server
6. REASON 2
BigMemory Go includes Ehcache and eliminates garbage collection pauses and tuning.
7. With Ehcache, you’re limited to a few GBs in RAM. With BigMemory Go, use all your RAM.
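The off-heap idea behind these claims can be illustrated with plain JDK code. This is only a conceptual sketch, not the BigMemory implementation: a direct ByteBuffer is allocated outside the garbage-collected heap, so the GC never scans its contents, but data must be serialized to bytes on the way in and deserialized on the way out.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class OffHeapSketch {
    // Illustrative sketch only; BigMemory manages off-heap storage internally.
    // Store a string in off-heap memory and read it back. Direct buffers live
    // outside the Java heap, so the GC never walks their contents -- the core
    // idea behind off-heap stores that avoid GC pauses.
    static String roundTrip(String value) {
        ByteBuffer offHeap = ByteBuffer.allocateDirect(1024); // off-heap allocation
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        offHeap.putInt(bytes.length); // length prefix
        offHeap.put(bytes);           // serialized payload
        offHeap.flip();               // switch from writing to reading
        byte[] out = new byte[offHeap.getInt()];
        offHeap.get(out);
        return new String(out, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("cached-value")); // prints cached-value
    }
}
```

The cost visible even in this sketch is the serialization round trip on every access, which off-heap stores such as BigMemory handle for you behind the standard Ehcache interface.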
9. BigMemory Go does almost everything BigMemory Max does, but on standalone JVMs

BigMemory Go (scale up): make your app’s data instantly available in your server’s RAM. Includes:
Terracotta Management Console: advanced in-memory monitoring/control
Fast search: powerful API for searching in-memory stores
Automatic Resource Control: tiered stores that keep data where it’s needed
Ehcache interface: Java’s de facto API
Fault-tolerant, fast restartable store

BigMemory Max (scale out): keep ALL your data instantly available in distributed RAM. Everything in BigMemory Go PLUS:
Distributed scale: manages in-memory data across servers
Data consistency: keeps data in sync across your array
Full fault tolerance and fast restart: mirrors data for 99.999% availability
14. Reading and writing happens the same way as with Ehcache

CacheManager manager = CacheManager.create(managerConfiguration);
Cache bigMemory = manager.getCache("bm-crud");

// create
final Person tim = new Person("Tim Doe", 35, Person.Gender.MALE,
    "eck street", "San Mateo", "CA");
bigMemory.put(new Element("1", tim));

// read
final Element element = bigMemory.get("1");
System.out.println("Element value: " + element.getObjectValue());

// update
final Person pamelaJones = new Person("Pamela Jones", 23,
    Person.Gender.FEMALE, "berry st", "Parsippany", "LA");
bigMemory.put(new Element("1", pamelaJones));

// delete
bigMemory.remove("1");
15. Plus, you can easily define searchable attributes and execute queries

// Find the number of people who live in New Jersey.
Attribute<String> state = bigMemory.getSearchAttribute("state");
Query newJerseyCountQuery = bigMemory.createQuery()
    .addCriteria(state.eq("NJ"))
    .includeAggregator(Aggregators.count()); // request the count aggregate

// Execute query and print results.
System.out.println("Count of people from NJ: "
    + newJerseyCountQuery.execute().all().iterator().next()
        .getAggregatorResults());
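A note on setup: the "state" attribute queried above must first be declared searchable in the cache configuration. A minimal sketch in Ehcache-style XML (the cache name and sizes here are illustrative assumptions; by default the attribute value is extracted from the matching bean property of the cached object):

```xml
<cache name="bm-crud"
       maxBytesLocalHeap="64M"
       maxBytesLocalOffHeap="32G">
  <searchable>
    <!-- Indexes the "state" bean property of cached Person objects -->
    <searchAttribute name="state"/>
  </searchable>
</cache>
```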
16. BONUS REASON
The Terracotta Management Console (TMC) in BigMemory Go gives you visibility and control of in-memory data.
17. The TMC in BigMemory Go is a web-based control and viewing platform for in-memory stores.
18. See how much data is in your local Java heap and local off-heap.
19. Create virtual data stores, controlling exactly how much memory each will use.
20. DOUBLE BONUS REASON
You can add BigMemory Go to your Ehcache deployment with as few as two lines of config.
21. All there is to it:

<ehcache … name="crud-config">
  <cache name="crud"
         maxBytesLocalHeap="64M"
         maxBytesLocalOffHeap="32G">
  </cache>
</ehcache>
22. TRIPLE BONUS REASON
BigMemory Go gives you 32GB of in-memory capacity … FREE
Download:
http://terracotta.org/products/bigmemorygo
23. What could you do with instant access to all of your data?
24. BigMemory powers real-time Big Data apps across many industries
Fraud detection slashed from 45 minutes to mere seconds
Media streamed in real time to millions of devices
Customer service transaction throughput increased by 10x
Flight reservation load on mainframes reduced 80%
Automobile traffic updates delivered to millions of global customers in real time
Terracotta Enterprise Customers
25. Q&A
Questions? Type them in the “Question” panel or in the chat window.
Download (32GB free) + Learn More:
http://terracotta.org/products/bigmemorygo