The document discusses caching and new features in Ehcache 2 and Hibernate caching. It describes reasons for caching like offloading resources, improving performance, and enabling scalability. It discusses how caching works by leveraging principles of locality of reference and Pareto distributions. It also covers challenges like data size, staleness, and maintaining coherency in a clustered environment.
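The locality and Pareto arguments can be made concrete with a toy simulation (an illustrative Python sketch, not taken from the document): a small LRU cache serving a skewed, Zipf-like access stream achieves a high hit rate even though it holds only a fraction of the keys.

```python
import random
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache: evicts the least-recently-used key when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)     # refresh recency on a hit
            return True
        if len(self.data) >= self.capacity:
            self.data.popitem(last=False)  # evict the least-recently-used entry
        self.data[key] = None
        return False

def hit_rate(n_keys=1000, n_accesses=20000, cache_frac=0.2, seed=42):
    rng = random.Random(seed)
    # Pareto-like skew: key k is accessed with weight proportional to 1/(k+1)
    weights = [1.0 / (k + 1) for k in range(n_keys)]
    stream = rng.choices(range(n_keys), weights=weights, k=n_accesses)
    cache = LRUCache(int(n_keys * cache_frac))
    hits = sum(cache.get(k) for k in stream)
    return hits / n_accesses
```

With the parameters above, caching 20% of the keys yields a hit rate well above 50%, which is the practical payoff of locality of reference.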
This document proposes a No Heap model for remote objects in distributed real-time Java systems. The model avoids garbage collection by containing objects within scoped memory areas. Remote objects have a creation context in immortal memory and invocation contexts in local temporary memory areas. This allows safe nesting of contexts by forbidding references from creation to invocation contexts while allowing references the other way. The model is compared to traditional garbage-collected remote objects and is shown to require less time because it avoids garbage collection overhead.
TROSGi is a framework that enhances OSGi with real-time Java support through three integration levels (L0, L1, L2). L0 provides minimal access to real-time Java. L1 adds a real-time characterization service. L2 introduces admission control, fault tolerance, and composition services. The services were implemented and tested on a real-time Java prototype, showing improved performance over non-real-time approaches. Ongoing work focuses on further implementation improvements and integrating other languages.
The document discusses using asynchronous remote invocations for distributed real-time Java applications. It proposes four asynchronous models - fire-and-forget, sync-with-server, result-callback, and poll-object - that offer performance advantages over the default synchronous model. Empirical results show asynchronous approaches reduce response time, memory usage, and bandwidth compared to synchronous remote invocations.
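The four models can be imitated with standard thread-pool futures to show the control flow each one implies (a hedged Python sketch; `remote_call` and the executor are stand-ins, not the DRTSJ API, and sync-with-server is omitted since it differs only in when the server acknowledges):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def remote_call(x):
    """Stand-in for a remote invocation (hypothetical service)."""
    time.sleep(0.01)
    return x * 2

executor = ThreadPoolExecutor(max_workers=4)

# Fire-and-forget: submit the invocation and never look at the result.
executor.submit(remote_call, 1)

# Result-callback: the client is notified when the result arrives.
def on_done(future):
    print("callback got", future.result())
executor.submit(remote_call, 2).add_done_callback(on_done)

# Poll-object: the client keeps a handle and checks it between other work.
handle = executor.submit(remote_call, 3)
while not handle.done():
    time.sleep(0.001)          # do other work between polls
print("polled result:", handle.result())
```

The performance advantage the summary reports comes from the caller not blocking for the full round trip in the first three models.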
This document proposes an extension to the Real-Time Specification for Java (RTSJ) called RealtimeThread++. It aims to simplify the dual threading model of RTSJ by having a single thread type that can dynamically decide whether to run with or without garbage collector interference. This added flexibility could help avoid garbage collector priority inversions, enrich event handling programming, and enhance real-time distributed architectures in the Distributed RTSJ. The extension requires some underlying virtual machine support for activation/deactivation of reading barriers and checking local variables when changing memory environments. Performance evaluations show the absolute and relative penalties of using this extended threading model.
This document proposes a new type of region called AGCMemory for the Real-Time Specification for Java (RTSJ) to improve portability while maintaining predictability. AGCMemory reduces the "floating garbage" problem seen in RTSJ regions by automatically recycling objects created during method invocations. It uses constant-time barriers and internal data structures to track object allocations and support partial deallocation in a way that is more intuitive than nested scopes. The authors evaluate AGCMemory and discuss implementing it in an RTSJ virtual machine and evaluating its performance trade-offs.
1) The document presents a Synchronous Scheduling Service (SSS) that introduces time-triggered orientation for distributed real-time Java applications.
2) The SSS is based on the Flexible Time-Triggered protocol and is supported as a new service in the Distributed Real-Time Specification for Java.
3) It provides a more predictable network management which is useful for high-integrity applications.
This document discusses the development of a Nova proxy driver for Red Hat Enterprise Virtualization Manager (RHEV-M) and oVirt. The driver allows OpenStack to manage virtual machines running on RHEV-M and oVirt by mapping Nova functions like boot, delete, and reboot to operations in RHEV-M like creating, shutting down, and removing virtual machines. It also maps Glance functions to managing virtual machine templates. The driver architecture, implemented functions, demonstration setup, next steps, and acknowledgements are covered in the document.
Dimemas and Multi-Level Cache Simulations (Mário Almeida)
This report describes the simulation and benchmarking steps taken to predict the parallel performance of an application using Dimemas and cache-level simulations. Using Dimemas [3], the timing behaviour of the NAS [1] integer sort was simulated for the architecture of the Barcelona Supercomputing Center machine, MareNostrum [4]. Performance was evaluated as a function of the architecture's latency, bandwidth, connectivity, and CPU speed. For the cache-level simulations, Intel's Pin tool was used to benchmark a simple parallel application as a function of the cache and cluster sizes.
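Simulators like Dimemas model a point-to-point transfer with a linear cost in latency and bandwidth; a minimal version of that model (illustrative, with made-up numbers) is:

```python
def transfer_time(size_bytes, latency_s, bandwidth_bytes_per_s):
    """Linear cost model used by simulators such as Dimemas: T = L + size/BW."""
    return latency_s + size_bytes / bandwidth_bytes_per_s

# A 1 MiB message over a 10 Gb/s link with 5 microsecond latency (made-up values)
t = transfer_time(1 << 20, 5e-6, 10e9 / 8)
```

Sweeping latency and bandwidth through such a model is essentially what "performance as a function of the architecture" means in the report.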
This document discusses Cassandra performance improvements using Acunu technology. It describes how Acunu powers Cassandra with features like doubling arrays to improve insert and range query performance by over 100x and 3.5x respectively compared to standard Cassandra. It also discusses Acunu's monitoring, operations and open source aspects.
This document discusses cache and concurrency considerations for Apache Cassandra. It covers metrics and monitors for cache performance, how the JVM performs in big data systems, examples of Cassandra in real-world systems like Facebook and Twitter, techniques for achieving fast writes and reads, and tools for optimizing performance. It emphasizes locality, non-blocking collections, and techniques for handling garbage collection and compactions efficiently.
The document discusses a note on using PowerPoint slides from a textbook on computer networking. It provides the slides freely for educational use and asks only that the source is cited if used substantially unaltered and any copyright is noted if posted online. The slides are from the textbook "Computer Networking: A Top-Down Approach" by Kurose and Ross from Pearson/Addison-Wesley.
Tungsten University: Setup and Operate Tungsten Replicators (Continuent)
Do you have the background necessary to take full advantage of Tungsten Replicator in your environments? Tungsten offers enterprise-quality replication features in an open source package hosted on Google Code. This virtual course teaches you how to set up innovative topologies that solve complex replication problems. We start with single MySQL servers running MySQL replication and show a simple migration path to Tungsten.
Course Topics
- Checking host and MySQL prerequisites
- Downloading code from http://code.google.com/p/tungsten-replicator/
- Installation using the tungsten-installer utility
- Transaction filtering using standard filters as well as customized filters you write yourself
- Enabling and managing parallel replication
- Configuring multi-master and fan-in using multiple replication services
- Backup and restore integration
- Troubleshooting replication problems
- Logging bugs and participating in the Tungsten Replicator community
Replication is a powerful technology that takes knowledge and planning to use effectively. We give you the background that makes replication easier to set up, and allows you to take full advantage of the Tungsten Replicator benefits. Learn how to configure and use it more effectively for your projects in the cloud as well as on-premises hardware.
How to Make Ruby CGI Script Faster - Quick Tricks to Speed Up CGI (kwatch)
The document discusses ways to make Ruby CGI scripts faster. It explains that process invocation and library loading are the main reasons CGI scripts are slow. Various case studies are presented on optimizing code by lazy-loading libraries, avoiding unnecessary objects, and parsing query strings efficiently. Benchmark results show performance improvements from these techniques.
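The lazy-loading idea transfers directly to other interpreted languages; a minimal Python sketch (using `json` as a stand-in for an expensive library) defers the import until first use so that short-lived scripts never pay for it:

```python
import importlib

_heavy = None
def get_heavy():
    """Load the expensive library only on first use, not at interpreter startup."""
    global _heavy
    if _heavy is None:
        _heavy = importlib.import_module("json")  # stand-in for a heavy library
    return _heavy

# Startup does no loading; only requests that need the library pay its cost.
assert get_heavy() is get_heavy()   # subsequent calls reuse the loaded module
```

For a CGI process that is spawned per request, deferring work like this directly shortens the dominant startup phase the article identifies.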
This document provides an overview and summary of the key concepts in the lecture on parallel programming with Spark. It discusses Spark concepts like RDDs (Resilient Distributed Datasets), transformations and actions. It provides examples of Spark operations like map, filter, reduceByKey for word count. It also summarizes how Spark jobs are executed, with the Spark context creating worker threads and executors that perform tasks scheduled by the task scheduler, while optimizing for data locality and caching. The document concludes with examples of creating a Spark context and running a complete word count program in Spark.
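The laziness of transformations versus actions can be sketched with generators (illustrative Python, not the Spark API): nothing is computed until an action forces evaluation.

```python
def rdd_map(f, data):
    """Transformation: lazy, returns a new 'RDD' (here, a generator)."""
    return (f(x) for x in data)

def rdd_filter(p, data):
    return (x for x in data if p(x))

def collect(data):
    """Action: forces evaluation of the whole pipeline."""
    return list(data)

log = []
def traced(x):
    log.append(x)      # record when the work actually happens
    return x * x

squares = rdd_filter(lambda x: x % 2 == 0, rdd_map(traced, range(5)))
assert log == []                       # transformations did no work yet
assert collect(squares) == [0, 4, 16]  # the action triggers the computation
assert log == [0, 1, 2, 3, 4]
```

Real Spark adds what the generators cannot: partitioning the data across executors, scheduling tasks near their data, and caching intermediate results.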
1. The Trans-Pacific Grid Datafarm testbed provides 70 terabytes of disk capacity and 13 gigabytes per second of disk I/O performance across clusters in Japan, the US, and Thailand.
2. Using the GNET-1 network testbed device, the Trans-Pacific Grid Datafarm achieved stable transfer rates of up to 3.79 gigabits per second during a file replication experiment between Japan and the US, near the theoretical maximum of 3.9 gigabits per second.
3. Precise pacing of network traffic flows using inter-frame gap controls on the GNET-1 device allowed for high-speed, lossless utilization of long-haul trans-Pacific network links.
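The effect of an inter-frame gap on throughput is simple arithmetic: the effective rate scales by frame_size / (frame_size + gap). A sketch with hypothetical numbers (the 1500-byte frames and 10 Gb/s line rate are assumptions, not figures from the experiment):

```python
def paced_rate(frame_bytes, gap_bytes, line_rate_bps):
    """Effective rate when a fixed inter-frame gap follows every frame."""
    return line_rate_bps * frame_bytes / (frame_bytes + gap_bytes)

# Pacing a hypothetical 10 Gb/s line down to ~3.9 Gb/s with 1500-byte frames:
# solving frame/(frame+gap) = 0.39 gives a gap of roughly 2346 bytes.
rate = paced_rate(1500, 2346, 10e9)    # close to 3.9e9 bits per second
```

Because the gap is enforced in hardware per frame, the flow never bursts above the paced rate, which is what keeps the long-haul link lossless.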
This document discusses cache memory. It describes the location, capacity, unit of transfer, access methods, and physical types of caches. Common cache organizations include direct mapping, set associative mapping, and replacement algorithms like LRU. Write policies can be write-through or write-back. Example architectures discussed include Pentium 4 and PowerPC caches.
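Direct mapping can be illustrated in a few lines (a sketch, not tied to any of the architectures discussed): each block maps to exactly one line, so two addresses that share an index evict each other.

```python
def direct_mapped_accesses(addresses, num_lines=8, block_size=4):
    """Direct-mapped cache: line index = (addr // block_size) % num_lines."""
    tags = [None] * num_lines
    hits = []
    for addr in addresses:
        block = addr // block_size
        index = block % num_lines       # each block maps to exactly one line
        tag = block // num_lines
        hits.append(tags[index] == tag)
        tags[index] = tag               # fill on miss, refresh on hit
    return hits

# A repeated sequential pass hits once the lines are filled...
assert direct_mapped_accesses([0, 4, 8, 0, 4, 8]) == [False, False, False, True, True, True]
# ...but addresses sharing an index evict each other (conflict misses).
assert direct_mapped_accesses([0, 32, 0]) == [False, False, False]
```

Set-associative mapping avoids the second failure mode by giving each index several candidate lines, at the price of needing a replacement policy such as LRU.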
Jeff Dean presents designs, lessons, and advice from building large distributed systems at Google. He discusses how computing is shifting to small devices and large consolidated computing farms. Google's data centers contain thousands of servers organized into racks and clusters. MapReduce is introduced as a programming model that applies to many large-scale computing problems by hiding details like parallelization, load balancing, and fault tolerance in a runtime library. It involves mapping data, shuffling and sorting, then reducing to produce output.
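The map, shuffle-and-sort, and reduce phases can be imitated in miniature (illustrative Python; real MapReduce distributes each phase across machines and hides the failure handling):

```python
from collections import defaultdict

def map_phase(mapper, records):
    return [kv for r in records for kv in mapper(r)]

def shuffle(pairs):
    """Group all values for the same key, as the runtime does between phases."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return sorted(groups.items())

def reduce_phase(reducer, groups):
    return {k: reducer(k, vs) for k, vs in groups}

# Classic word count expressed as map -> shuffle/sort -> reduce
records = ["the quick fox", "the fox"]
mapped = map_phase(lambda line: [(w, 1) for w in line.split()], records)
counts = reduce_phase(lambda k, vs: sum(vs), shuffle(mapped))
assert counts == {"fox": 2, "quick": 1, "the": 2}
```

The point of the model is that only the mapper and reducer are problem-specific; parallelization, load balancing, and fault tolerance live in the runtime.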
The document discusses accessing the virtual router in a CloudStack KVM environment. It explains that the virtual router can be accessed by SSHing into its link local IP address from the compute node it is running on. When logging in, it is shown that the link local IP address may change if the virtual router is restarted. The internal interfaces, routing tables, and network configuration of the running virtual router are then displayed and examined.
TokyoCabinet and TokyoTyrant are open source databases written by Mikio Hirabayashi. TokyoCabinet provides embedded databases using hash, B+ tree, fixed-length, and table formats. TokyoTyrant builds on TokyoCabinet and provides a client-server database with a server and client command line interface. Both have Java APIs and support features like putting, getting, listing, and removing key-value pairs.
The document discusses Foreman, an open source tool that provides provisioning, configuration management, and reporting functions through a single interface. It can manage a server's full lifecycle from installation to updates and integrates with tools like Puppet, DNS, DHCP, and libvirt/RHEV. The document includes a demo of Foreman's inventory, node classifier, reporting, API, and user management features. It also provides information on Foreman's architecture, community, installation process, and some organizations currently using Foreman.
The document discusses dense linear algebra solvers and algorithms. It provides an overview of existing software for dense linear algebra including LINPACK, EISPACK, LAPACK, ScaLAPACK, PLASMA, and MAGMA. It then discusses challenges with dense linear algebra on modern hardware including distributed memory, heterogeneity, and the high cost of communication. It introduces tile algorithms as an approach to address these challenges compared to traditional LAPACK algorithms.
Ehcache is an open source Java caching library that provides fast, scalable caching for applications. It allows for in-process caching with single nodes or distributed caching across multiple nodes. Ehcache provides features like memory and disk storage, replication, search capabilities, and integration with Terracotta for distributed caching. It uses common caching patterns like cache-aside, read-through, write-through, and cache-as-sor. Ehcache has a simple API and is lightweight, scalable, and standards-based.
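The cache-aside and read-through patterns named above differ only in who performs the load on a miss; a minimal sketch (plain Python dictionaries standing in for Ehcache and the system of record):

```python
class CacheAside:
    """Cache-aside: the application checks the cache and loads on a miss."""
    def __init__(self):
        self.cache = {}
    def get(self, key, load_from_sor):
        if key not in self.cache:
            self.cache[key] = load_from_sor(key)   # the app talks to the SoR
        return self.cache[key]

class ReadThrough:
    """Read-through: the cache itself knows how to load from the SoR."""
    def __init__(self, loader):
        self.loader = loader
        self.cache = {}
    def get(self, key):
        if key not in self.cache:
            self.cache[key] = self.loader(key)     # cache loads on the app's behalf
        return self.cache[key]

sor = {"a": 1}                        # stand-in for the system of record
aside = CacheAside()
assert aside.get("a", sor.get) == 1
sor["a"] = 2
assert aside.get("a", sor.get) == 1   # stale until invalidated: the app's job here
```

Read-through centralizes the loading logic, which is why it pairs naturally with write-through, where the cache also forwards writes to the system of record.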
Developing High Performance and Scalable ColdFusion Application Using Terraco... (ColdFusionConference)
This presentation discusses using Terracotta Ehcache to scale ColdFusion applications. It covers caching basics and options like on-heap, off-heap, and distributed caching. Attendees will learn how to configure Ehcache and Terracotta to enable distributed caching for ColdFusion to improve performance and scalability. Real-world customer examples are provided that demonstrate how Terracotta Ehcache helped online payment processors detect fraud faster and assisted Healthcare.gov in reducing response times.
Agenda:
Red Hat JBoss and SAP Collaboration
Red Hat JBoss - Overview
SAP Netweaver Gateway
SAP PartnerEdge program for Application Development
Call to Action
Q&A
Predicting Defects in SAP Java Code: An Experience Report (tilman.holschuh)
Which components of a large software system are the most defect-prone? In a study on a large SAP Java system, we evaluated and compared a number of defect predictors, based on code features such as complexity metrics, static error detectors, change frequency, or component imports, thus replicating a number of earlier case studies in an industrial context. We found the overall predictive power to be lower than expected; still, the resulting regression models successfully predicted 50–60% of the 20% most defect-prone components.
SAP's Java IDE is based on Eclipse and provides tools for developing Java applications on SAP systems. It includes a J2EE toolset with support for standard Java EE features and integration with SAP's J2EE engine. A model abstraction layer and graphics layer allow development objects to be logically presented and diagrammed. The tools are integrated with SAP infrastructure components like the design time repository, build service, and software logistics.
Distributed Caching Using the JCACHE API and ehcache, Including a Case Study ... (elliando dias)
This document summarizes a presentation on distributed caching using the JCACHE API and ehcache. The presentation covers how to use ehcache to cache web pages, database queries, and configure distributed caching across multiple servers. It also discusses the JSR 107 JCACHE specification and its implementation in ehcache. The presentation concludes with a case study of caching at Wotif.com.
SAP Integration with Red Hat JBoss Technologies (hwilming)
SAP ERP provides different approaches to integrate Java applications with business logic written in ABAP. With JBoss Fuse, the SOA Platform, and Data Services Platform, Red Hat offers flexible middleware solutions for service-oriented integration and orchestration. As a leading provider of integrated solutions and longtime Premier Partner, akquinet has a long history of projects integrating individual applications based on JBoss with standard ERP software such as SAP or Navision.
Based on various real-world examples, we will show different ways to integrate SAP ABAP backends with JBoss Middleware. We will discuss the pros and cons of integrating Java EE applications using (a) the REST-based approach with NetWeaver Gateway, (b) JBoss Data Services Platform with NetWeaver Gateway, (c) SOAP-based web services, and (d) Remote Function Calls with the Java EE Connector Architecture (JCA) and the SAP Java Connector (JCo) library.
The document discusses scalability patterns for applications using Terracotta. It introduces concepts of Terracotta like shared memory across nodes and transactional updates. It then describes several patterns for scalable applications using Terracotta like data affinity caching to collocate data and processing, asynchronous write-behind for throughput, and asynchronous messaging and processing for decoupling components. Code examples and Terracotta modules for implementing the patterns are also provided.
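The asynchronous write-behind pattern can be sketched with a queue and a flusher thread (illustrative Python, not the Terracotta module's API): the caller's write returns after the in-memory update, and the slow store write happens later.

```python
import queue
import threading

class WriteBehindCache:
    """Writes land in memory immediately; a worker flushes them to the
    backing store asynchronously (write-behind)."""
    def __init__(self, store):
        self.store = store
        self.cache = {}
        self.pending = queue.Queue()
        threading.Thread(target=self._flusher, daemon=True).start()
    def put(self, key, value):
        self.cache[key] = value          # fast in-memory write
        self.pending.put((key, value))   # queue the slow store write
    def _flusher(self):
        while True:
            key, value = self.pending.get()
            self.store[key] = value      # slow write, off the caller's path
            self.pending.task_done()
    def flush(self):
        self.pending.join()              # wait until queued writes have landed

store = {}
wbc = WriteBehindCache(store)
wbc.put("order:1", "paid")
wbc.flush()
assert store["order:1"] == "paid"
```

The throughput gain comes from decoupling the caller from store latency; the trade-off, as with any write-behind scheme, is a window in which queued writes can be lost on failure.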
This document provides an overview of using Java Server Pages (JSPs), resources, and internationalization in SAP Portals. It discusses how JSPs are compiled to Portal Components and integrated with HTMLB tags. It describes two methods for JSP integration: JSPDynpage, which uses a controller class and beans, and JSPNative, which compiles a single JSP directly to a component. The document also reviews using different types of resources like images, scripts, and XML files from components and recommends a file structure. Finally, it mentions internationalization at a high level.
Ehcache is a widely used open source Java caching library. It allows caching to speed up CPU and I/O bound applications and improve scalability. Ehcache supports different caching topologies like standalone, distributed, and replicated caches. It provides options for memory-based, off-heap, and disk-based storage. Common caching patterns like cache-aside, read-through, and write-behind are supported. Ehcache can be configured based on size, count, or percentage and includes eviction policies and persistence. It enables search, replication between nodes, and high availability.
Practical SAP pentesting workshop (NullCon Goa) - ERPScan
All business processes are generally contained in ERP systems. Any information an attacker might want is stored in a company’s ERP. This information can include financial, customer or public relations, intellectual property, personally identifiable information and more. And SAP is the most popular business application vendor with more than 250000 customers worldwide.
The workshop conducted by Alexander Polyakov, CTO of ERPScan, at NullCon Goa Conference is a practical SAP pentesting guide.
Integrating SAP the Java EE Way - JBoss One Day talk 2012hwilming
Cuckoo is an open source Resource Adapter for SAP that is compatible to the Java Connector Architecture (JCA) version 1.5.
It enables developers of Java EE applications to call functions in a SAP backend, making use of Java EE features like Container Managed Transactions and Security.
Hibersap helps developers of Java applications to call business logic in SAP backends. It defines a set of Java annotations to map SAP function modules to Java classes as well as a small, clean API to execute these function modules and handle transaction and security aspects.
Hibersap's programming model is quite similar to those of modern O/R mappers, significantly speeding up the development of SAP interfaces and making it much more fun to write the integration code.
A presentation on how automatic memory management and adaptive compilation impact on latency of applications. Includes some ideas on how to minimise these affects.
Building High Scalability Apps With TerracottaDavid Reines
Senior Architect David Reines will present the simple yet powerful clustering capabilities of Terracotta. David will include a brief overview of the product, an in-depth discussion of Terracotta Distributed Shared Objects, and a live load test demonstrating the importance of a well designed clustered application.
David Reines is a Senior Consultant at Object Partners Inc. He has lead the development efforts of several mission-critical enterprise applications in the Twin Cities area. During this time, he has worked very closely with numerous commercial and open source JEE technologies. David has always favored a pragmatic approach to selecting enterprise application technologies and is currently focusing on building highly-concurrent distributed applications using Terracotta.
Intégration Hybris / SAP
SAP JAVA Connector
PLAN
Introduction
Solution d’intégration Asynchrone
Solution d’intégration Synchrone
SAP Java Connector
Abréviation : SAP JCO
L’objectif:
Définir UN middleware QUI assure la communication avec SAP.
Supporter l’implémentation des applications Desktop & Web.
Caractéristiques SAP JCO :
basé sur JNI - Java Native Interface- CE qui permet d’accéder à bibliothèque CPI-C (Common Programming Interface - Communications) .
EFFectue des apples à des function En mode inbound (Java client appel BAPI OU RFM) OU outbound (ABAP calls external Java Server).
SAP Jco est mutli-Platforms.
Architecture SAP JAVA CoNNECTOR
SAP JCO BAPI
Business Application Programming Interface : des interfaces de programmation normalisées qui permettent aux programmes externes d'avoir accès aux données et aux processus de gestion du système SAP.
SAP JCO JAR
Etablissement de connexion .
Execution des Functions.
accès Et La navigation dans les tables.
Mapping ENTRE ABAP et JAVa data types.
Programmation multithreading.
Gestion des exceptions.
Développement BAPI
Exemple BAPI Stock :
Paramètres BAPI INPUT
Tester BAPI Dans SAP
Télécharger et installer SAP GUI ( SAP logon) :
Tester BAPI Dans SAP
Configuration SAP GUI ( SAP logon) :
Tester BAPI Dans SAP
Connexion SAP GUI ( SAP logon) :
Tester BAPI Dans SAP
Tester BAPI Dans SAP
Tester BAPI Dans SAP
Configurer DESTINATION RFC
L’ajout des extensions SAP Comme DES dépendances de projet dans le fichier localextensions.xml.
Création ou modification de l’impex de création de la RFC destination : sap.impex
Développer BAPI Dans Hybris
Les étapes à suivre :
Récupérer Une Connexion.
Récupérer La fonction BAPI.
Définir les paramètres d’import de la. Fonction BAPI.
Exécuter la fonction.
Récupérer les paramètres d’Export de la fonction.
Récupérer Stock
Conclusion
L’intégration entre SAP / Hybris S’impose Jour après Jour
; Personne N’est à l’abris de cette mutation.
L'intégration SAP / Hybris s’effectue Selon deux mode Synchrone à l’aide de SAP JAVA Connector et Asynchrone à l’aide de DataHUB.
SAP JAVA Connector se base Sur la Notion des BAPIs: des interfaces de programmation normalisées qui permettent aux programmes externes d'avoir accès aux données et aux processus de gestion du système SAP.
SAP LOGON GUI Permet de Tester les BAPI DANS SAP.
Pour plus de détails sur hybris-SAP Solution Integration , Rendez-vous sur : https://wiki.hybris.com/display/release5/Getting+Started+with+hybris-SAP+Solution+Integration
MERCI Pour Votre Attention
Terracotta is a high performance open source Java clustering technology which includes native support via plug-ins to seamlessly integrate with applications based on Hibernate, EHCache, Spring and more. In this session, learn how to integrate Terracotta into your Hibernate application in just a few simple steps. Then visualize your application in real-time and tune it's performance using the Terracotta developer console, a sophisticated runtime visualization, profiling and debugging tool included with the core Terracotta kit.
The document discusses Apache Traffic Server, an open source reverse proxy, caching server, and load balancer. It provides a history of Traffic Server, compares its features to other proxy servers, and describes how it works as a forward and reverse proxy. The document also discusses common performance issues like TCP handshakes, congestion control, and DNS lookups that proxies aim to optimize. It provides configuration examples and outlines Traffic Server's capabilities and directions for future development.
This document provides an introduction and overview of Apache Traffic Server, an open source reverse proxy, caching server, and load balancer. It discusses the history of Traffic Server, its key features compared to other proxy servers, and how it addresses common performance issues through an asynchronous event-driven architecture using multiple threads and caching. The document also covers Traffic Server configuration files and some future directions, concluding that Traffic Server is a versatile and fast tool supported by an active community.
The document discusses the Reactor Pattern and Event-Driven Programming using EventMachine and Thin as examples. It provides an overview of how Thin and EventMachine work together using the Reactor Pattern to provide scalable concurrent networking. Key aspects covered include how EventMachine acts as a reactor that handles events asynchronously using threads, and how Thin integrates with EventMachine by registering request handlers and processing requests concurrently.
This document provides an overview of Konrad Malawski's presentation on reactive stream processing with Akka Streams. The presentation covers Reactive Streams concepts like back pressure, the Reactive Streams specification and protocol, and how Akka Streams implements reactive stream processing using concepts like linear flows, flow graphs, and integration with Akka actors. It also discusses future plans for Akka Streams including API stabilization, improved testability, and potential features like visualizing flow graphs and distributing computation graphs.
This document summarizes the server.xml file that describes how to start the Tomcat Server. It outlines the main elements in the server.xml file including <Server>, <Service>, <Connector>, <Engine>, <Host>, and <Context>. Each element and its attributes are described, such as the <Connector> element representing the endpoint that receives and responds to client requests, with attributes like port and protocol. An example server.xml file structure is also provided.
The document discusses the reactor pattern and event-driven programming. It explains that the reactor pattern uses an event loop to process I/O asynchronously without blocking. EventMachine is given as an example of a reactor implementation in Ruby. It also explains how the thin web server uses EventMachine and the reactor pattern to handle requests asynchronously by delegating I/O to EventMachine and processing requests with thin handlers.
This document describes a tutorial on using semantic metadata with Grid services. The tutorial will cover:
1. Setting up a Globus container and deploying various semantic services and an operation provider for sticky notes.
2. Attaching RDF semantic bindings to sticky notes to represent their metadata.
3. Querying the semantic bindings of sticky notes using SPARQL or other query languages. The queries can exploit relationships defined in an ontology.
The hands-on exercises will guide participants in building a semantically-aware Grid service by completing the various setup and configuration steps. Participants will learn how to attach, query, and infer over semantic metadata for Grid resources.
Information Flow on the Intranet at Region Västra GötalandKristian Norling
Presentation made at Search Solutions 2011, 16th of november in London.
At Region Västra Götaland the goal to give the right person, the right information at the right time, place and context on the intranet, led to a holistic view of information flow including the life cycle of information. Even though the search platform plays a very important part of the information flow, it exists in an ecosystem of systems supporting the intranet. Kristian will talk about how that ecosystem works in practice and show some examples.
This document describes a tutorial on using semantic metadata with Grid services. The tutorial will cover:
1. Setting up a Globus container and deploying various semantic services and operation providers to enable semantic capabilities for Grid resources like sticky notes.
2. Attaching RDF metadata to sticky note resources using semantic bindings.
3. Querying the semantic bindings of resources using SPARQL or other query languages and making inferences over the metadata by using an ontology.
The hands-on exercises will guide participants in deploying the necessary software components, adding semantic description and querying capabilities to a sticky note service, and executing queries that leverage an ontology to infer additional information from the semantic metadata.
(ATS3-PLAT06) Handling “Big Data” with Pipeline Pilot (MapReduce/NoSQL)BIOVIA
Pipeline Pilot has wrangled large volumes of scientific data for many years. The emergence of "Big Data" challenges in other fields has brought many new tools and techniques to the table. This session will demonstrate various approaches to handling big data in Pipeline Pilot and show now Pipeline Pilot can integrate with "NoSQL" data stores such as Apache Cassandra and MongoDB. The second half of this session will be focus on audience participation and open discussion around big data tools and techniques to help inform our community and our future product road map.
Netty is a Java framework that provides tools for developing high performance and event-driven network applications. It uses non-blocking I/O and zero-copy techniques to minimize overhead and maximize throughput and scalability. Netty provides buffers, codecs, pipelines and handlers that allow building applications as a stack of processing layers. Example applications include a discard server and an HTTP file server that demonstrate Netty's core features and event-driven architecture.
The document provides an overview of performance tuning Apache Tomcat, including adjusting logging configuration to reduce duplicate logs, understanding how TCP and HTTP protocols impact performance, choosing an optimal connector (BIO, NIO, or APR) based on the application workload, and configuring connectors to optimize throughput and request processing.
Slash n: Technical Session 3 - Storage @ Scale: Quest for the mythical silver...slashn
The document discusses storage solutions and issues. It introduces metadata management and different storage needs for processing, delivery, and archival. A solution is proposed to separate metadata and storage. Problems with scaling capacity, high availability, and client coupling are noted. Alternative storage system architectures are presented, including options for optimized storage services, read caches, and replacing components. Experiments with GlusterFS and Ceph are mentioned.
Kafka Summit NYC 2017 - Introducing Exactly Once Semantics in Apache Kafkaconfluent
The document introduces Apache Kafka's new exactly once semantics that provide exactly once, in-order delivery of records per partition and atomic writes across multiple partitions. It discusses the existing at-least once delivery semantics and issues around duplicates. The new approach uses idempotent producers, sequence numbers, and transactions to ensure exactly once delivery and coordination across partitions. It also provides up to 20% higher throughput for producers and 50% for consumers through more efficient data formatting and batching. The new features are available in Apache Kafka 0.11 released in June 2017.
We present Spark Serving, a new spark computing mode that enables users to deploy any Spark computation as a sub-millisecond latency web service backed by any Spark Cluster. Attendees will explore the architecture of Spark Serving and discover how to deploy services on a variety of cluster types like Azure Databricks, Kubernetes, and Spark Standalone. We will also demonstrate its simple yet powerful API for RESTful SparkSQL, SparkML, and Deep Network deployment with the same API as batch and streaming workloads. In addition, we will explore the "dual architecture": HTTP on Spark. This architecture converts any spark cluster into a distributed web client with the familiar and pipelinable SparkML API. These two contributions provide the fundamental spark communication primitives to integrate and deploy any computation framework into the Spark Ecosystem. We will explore how Microsoft has used this work to leverage Spark as a fault-tolerant microservice orchestration engine in addition to an ETL and ML platform. And will walk through two examples drawn from Microsoft's ongoing work on Cognitive Service composition, and unsupervised object detection for Snow Leopard recognition.
Introducing Exactly Once Semantics To Apache KafkaApurva Mehta
Here are slides from my talk on introducing exactly once semantics to Apache Kafka. The talk was given at the Kafka Summit NYC, 8 May 2017.
The slides dive into the design of transactions in Apache Kafka.
Similaire à The new ehcache 2.0 and hibernate spi (20)
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Nunit vs XUnit vs MSTest Differences Between These Unit Testing Frameworks.pdfflufftailshop
When it comes to unit testing in the .NET ecosystem, developers have a wide range of options available. Among the most popular choices are NUnit, XUnit, and MSTest. These unit testing frameworks provide essential tools and features to help ensure the quality and reliability of code. However, understanding the differences between these frameworks is crucial for selecting the most suitable one for your projects.
leewayhertz.com-AI in predictive maintenance Use cases technologies benefits ...alexjohnson7307
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
The New Ehcache 2.0 and Hibernate SPI
1. Caching, and what’s new in Ehcache 2 and the Hibernate Caching Provider
Thursday, 17 June 2010
2. Why Cache?
Reasons to cache:
• Offload - reducing the amount of resources consumed, and hence cost
• Performance - increasing the speed of processing
• Scale out - distributed caching is a leading scale-out architecture
www.terracotta.org 2
3. Truncating the request-response loop
[Architecture diagram: requests pass through load balancers to an HTTPD tier, then to application servers running Ehcache, backed by a Terracotta Server Array (stripes 1…n) in front of a MySQL database.]
8. Amdahl’s Law
Amdahl's law, named after Gene Amdahl, gives the overall system speedup resulting from a speedup in one part of the system:
overall speedup = 1 / ((1 - proportion sped up) + (proportion sped up / speedup of that part))
To apply Amdahl’s law you must measure the components of system time and the before-and-after effect of the performance change made. It is thus an empirical approach.
Not recommended is the alternative approach: “when all you have is a hammer, every problem looks like a nail”. Often the bottleneck is something completely new, yet very few developers take the time to make careful measurements.
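As an illustrative sketch (not from the deck), the formula can be applied directly. The 60%/10x figures below are hypothetical measurements, chosen only to show the arithmetic:

```java
// Illustrative sketch: applying Amdahl's law to a caching change.
// The 60% / 10x figures are hypothetical, not measurements from this deck.
public class Amdahl {
    // Overall system speedup when `proportion` of total time is sped up by `factor`.
    static double speedup(double proportion, double factor) {
        return 1.0 / ((1.0 - proportion) + proportion / factor);
    }

    public static void main(String[] args) {
        // Suppose profiling shows 60% of request time is database access,
        // and a cache makes that portion 10x faster on average.
        System.out.printf("Overall speedup: %.2fx%n", speedup(0.6, 10.0));
    }
}
```

Note that even a 10x improvement of one component yields only about a 2.2x overall speedup here, which is why measuring the proportions first matters.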
9. Cache Efficiency
cache efficiency = cache hits / total hits
➡ High efficiency = high offload
➡ High efficiency = high performance
10. Why does caching work?
Locality of Reference
Pareto Distributions
11. Locality of Reference
Many computer systems exhibit the phenomenon of locality of reference: data that is near other data, or that has just been used, is more likely to be used again.
Temporal locality - the reuse of specific data and/or resources within relatively small time durations.
Spatial locality - the use of data elements within relatively close storage locations.
This is, for example, the reason for hierarchical memory design in computers.
12. Pareto Distributions
Chris Anderson, of Wired Magazine, coined the term The Long Tail to
refer to Ecommerce systems.
The mathematical term is a Pareto Distribution aka Power Law
Distribution.
13. Another Problem...
But...
What if the data set is too large to fit in the cache?
What about staleness of data?
14. Coherency with the SOR (system of record)
Classically solved with automatic expiry; Ehcache has both TTL and TTI.
Better:
Eternal caching plus a cache invalidation protocol
Write-through or write-behind caches, where the SOR gets updated in-line with the cache. The Hibernate read-write and transactional strategies are examples, as is the Ehcache CacheWriter.
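Automatic expiry is configured per cache in ehcache.xml via TTL and TTI; the cache name and values below are illustrative:

```xml
<!-- Illustrative ehcache.xml fragment: entries expire 10 minutes after
     creation (TTL) or 5 minutes after last access (TTI), whichever comes first. -->
<cache name="productCache"
       maxElementsInMemory="10000"
       eternal="false"
       timeToLiveSeconds="600"
       timeToIdleSeconds="300"/>
```

Setting eternal="true" instead disables expiry entirely, which is the starting point for the invalidation-protocol approach above.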
15. Why run a cluster?
Availability, most often n+1 redundancy
Scale out
But this creates a new cascade of problems:
• N * problem
• Cluster coherency problem
• CAP theorem limits
16. The N * Problem
On a single node, work is done once and then cached; cache hits offload. But in a cluster the work must be done N times, where N is the number of nodes.
The solution to the N * problem is a replicated or distributed cache:
replicated cache - the data is copied to each node; all data is held in each node
distributed cache - the most used data is held in a node, and the balance of the data is held outside the application node
17. Cluster Coherency Problem
Each cache makes independent changes, so the caches diverge.
Solved partly by a replicated or distributed cache.
But without locking, race conditions will cause incoherencies.
The solution is a coherent, distributed or replicated cache.
18. CAP Theorem and PACELC
The CAP theorem, also known as Brewer's theorem, states that it is impossible for a distributed computer system to simultaneously provide all three of the following guarantees: consistency, availability and tolerance to partitions.
A better explanation of the tradeoffs is PACELC: if there is a partition (P), how does the system trade off between availability and consistency (A and C); else (E), when the system is running normally in the absence of partitions, how does it trade off between latency (L) and consistency (C)?
There is no right answer, but the properties to be traded off will be different for different applications. So the solution must be configurable.
1. http://dbmsmusings.blogspot.com/2010/04/problems-with-cap-and-yahoos-little.html
19. About Ehcache
The world's most widely used Java cache
Founded in 2003
Apache 2.0 License
Integrated by lots of projects, products
Hibernate Provider implemented 2003
Web Caching 2004
Distributed Caching 2006
Greg Luck becomes co-spec lead of JSR107
JCACHE (JSR107) implementation 2007
REST and SOAP APIs 2008
SourceForge Project of the Month March 2009
Acquired by Terracotta 2009; Integrated with Terracotta Server
Ehcache 2.0 March 2010
Forrester Wave “Leader” May 2010
25. Ehcache 2 - New Features
Hibernate 3.3+ Caching SPI
The old SPI was heavily synchronized and not well suited to clusters
The new SPI uses a CacheRegionFactory
Fully cluster-safe with the Terracotta Server Array
Unification of the Ehcache and Terracotta 3.2 providers
JTA, i.e. a transactional caching strategy
Cache as an XAResource
Detects the most common transaction managers; others are configurable
Works with Spring, EJB and manual transactions
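With Hibernate 3.3+, the new SPI is selected by pointing Hibernate at the Ehcache region factory, e.g. in hibernate.properties (a minimal sketch; query-cache use is optional):

```properties
# Minimal sketch: enable Ehcache 2 as the Hibernate second-level cache via the new SPI
hibernate.cache.region.factory_class=net.sf.ehcache.hibernate.EhCacheRegionFactory
hibernate.cache.use_second_level_cache=true
hibernate.cache.use_query_cache=true
```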
26. Ehcache 2 - New Features
Write-behind
Offloads databases with high write workloads
CacheWriter interface to implement
cache.putWithWriter(...) and cache.removeWithWriter(...)
Write-through and write-behind modes
Batching, coalescing and very configurable
Standalone, with an in-memory write-behind queue
With the TSA: HA, durability and distributed workload balancing
Bulk Loading
Incoherent mode for startup or periodic cache loading
10x faster
No change to the API (put, load etc.)
setCoherent(), isCoherent(), waitForCoherent()
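The batching and coalescing behaviour can be pictured with a small self-contained sketch (plain Java with invented names, not the Ehcache CacheWriter API): multiple writes to the same key coalesce in the buffer, and a flush writes the surviving entries to the backing store in one batch.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative write-behind buffer: coalesces writes per key, flushes in batches.
// This sketches the concept only; Ehcache's real write-behind queue differs.
public class WriteBehindBuffer<K, V> {
    private final Map<K, V> pending = new LinkedHashMap<>(); // insertion-ordered queue

    public synchronized void put(K key, V value) {
        pending.put(key, value); // a later write to the same key coalesces with earlier ones
    }

    // Drain the buffer, writing each surviving entry to the store; returns the batch size.
    public synchronized int flush(Map<K, V> store) {
        int batch = pending.size();
        store.putAll(pending);   // one batched round-trip instead of one write per put
        pending.clear();
        return batch;
    }
}
```

In the real feature the flush runs asynchronously on a background thread, which is what decouples database write latency from cache put latency.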
27. Ehcache 2 - New Features ...cont.
New CAP configurability - on a per-cache basis:
coherent - run coherent, or incoherent (faster)
synchronousWrites - true for HA; false is faster
copyOnRead - true to stop interactions between threads outside of the cache
Cluster events - notification of partition and reconnection
NonStopCache - a decorated cache favouring availability
UnlockedReadsView - a decorated cache favouring speed
Management
Dynamic configuration of common cache settings from JMX and the DevConsole
New web-based monitoring with UI and API
28. Ehcache 2.0 Monitoring Options
JMX
• is built in to Ehcache, but...
• JMX needs to use a portmapper
• Slow
• Machines may be headless
Terracotta Dev Console (if using Terracotta)
Ehcache Console, new in 2.1
29. Including Web Services
Simple + Performant + Coherent + HA + Scaleable
[Architecture diagram: Java applications embed Ehcache and connect to paired Terracotta Servers; non-Java clients (PHP, C, C# and Ruby apps) reach an Ehcache Server in a web container over REST/HTTP.]
30. Terracotta Developer Console
Cache hit ratios
Hit/miss rates
Hits on the database
Cache puts
Detailed efficiency of cache regions
Dramatically simplifies tuning and operations, and shows the database offload.
31. Ehcache Console
Web based
Configuration
Efficiency
Memory use
Comes with supported versions
API to connect operations monitoring
42. REST Performance
[Bar chart: time for Get, Multi-Get, Put and Remove operations, Ehcache Server vs. Memcache.]
Source: MemcacheBench with Java clients. Time for 10,000 operations.
43. Ehcache with Terracotta vs the Rest
Application: tests done with Owners = 25K and 125K, which translates to total objects of 0.3 M and 1.5 M. Minimal tuning.
Cluster configuration:
8 client JVMs (1.75G heap)
1 (+0) Terracotta Servers (6G heap)
MySql: sales18.
44. Ehcache with Terracotta vs the Rest
Ehcache
Replicated with RMI not included, because it is not coherent
Single TSA server
15 threads, and some runs with 100 threads
IMDG
15 threads
Cache deployed in partitioned mode
Tests were also done with replicated mode, which did well for small cache sizes but failed to complete with larger cache sizes, so it is not included.
memcached
15 threads
1 server
45. Hibernate - Read Only TPS
49. Test Source
The code behind the benchmarks is in the Terracotta Community SVN repository.
Download: https://svn.terracotta.org/repo/forge/projects/ehcacheperf/
(Terracotta Community login required)
50. Performance Conclusions
With Hibernate, using Spring Pet Clinic, after the app servers and DBs were tuned by independent 3rd parties:
30-95% database load reduction
80 times the read-only performance of MySQL
Notably lower latency: 1.5 ms versus 120 ms for the database (25k)
51. What about NoSQL?
Ehcache + Terracotta configured with persistence gives you a NoSQL store with limited features (e.g. no search).
TerraStore - a new open source project from Sergio Bossa: a document-oriented NoSQL store based on Terracotta.