The document discusses the limitations of traditional tier-based architectures for building scalable applications. It presents GigaSpaces' solution of using processing units that collocate services, data, and messaging in memory to minimize latency and maximize throughput. The key advantages are:
1) Services and data are collocated in a processing unit for minimal latency without network hops
2) Async persistence is used to store state changes for compliance/reporting while keeping data and services in memory
3) Resiliency is built-in through automated SLA-driven failover and redundancy between primary and backup processing units
2. Today’s Reality – Tier-Based Architecture
[Diagram: each tier has its own, separate technology and implementation, and each stateful tier is a bottleneck.]
Bottlenecks form in every area where state is stored; the architecture can’t scale linearly!
3. Traditional Architecture - path to complexity…
Legend: A – Auction Service, B – Bid Service, T – Trade Service, I – Info Service, T – Timer Service
[Diagram: the Bidder places a bid, which is validated and processed, and then gets the bid result back; the auction timer closes the auction and the accepted trade is processed for the Auction Owner. Every step crosses service boundaries.]
4. Traditional Architecture - path to complexity…
[Diagram: the business tier (Auction, Bid, Trade, Info and Timer services) serving the Bidder and Auction Owner, with a separate back-up behind each tier.]
- Separate failover strategy and implementation for each tier
- Redundancy doubles network traffic
- Bottlenecks are created
- Latency is increased
5. Do you see the Problem?
[Diagram: the business tier fans out into back-ups of back-ups as the Bidder and Auction Owner load grows.]
- Scalability is not linear
- Scalability management nightmare
7. Step 1 – Create a Processing Unit
[Diagram: the Auction, Bid, Trade, Info and Timer services are collocated inside a single Processing Unit, which replaces the business tier between the Bidder and the Auction Owner.]
- Collapse the tiers
- Collocate the services
- Manage data in memory
- Single model for design, deployment and management
- No integration effort
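The collocation idea can be sketched in plain Java: services become local method calls that share one in-memory data structure, so a bid is validated, processed, and read back without a single network hop. The class and method names (`ProcessingUnit`, `placeBid`, `highestBid`) are illustrative only, not GigaSpaces APIs.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ProcessingUnit {
    // Shared in-memory state: services and data live in the same process.
    private final Map<String, List<Integer>> bidsByAuction = new ConcurrentHashMap<>();

    /** Bid service: validates and stores a bid as a local method call. */
    public boolean placeBid(String auctionId, int amount) {
        if (amount <= 0) return false;                       // validate
        bidsByAuction.computeIfAbsent(auctionId,
                k -> Collections.synchronizedList(new ArrayList<>()))
                .add(amount);                                // process in memory
        return true;
    }

    /** Info service: reads the collocated data directly, no remote call. */
    public int highestBid(String auctionId) {
        List<Integer> bids = bidsByAuction.getOrDefault(auctionId, List.of());
        return bids.stream().mapToInt(Integer::intValue).max().orElse(0);
    }
}
```

The point of the sketch is the call graph: what used to be three tiers talking over the network is now two method calls against one heap.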
8. Step 2 – Async Persistency
[Diagram: inside the Processing Unit, the Bidder places a bid, which is validated and processed; the Auction Owner gets bid results while trades and results are processed. State changes flow out of the unit asynchronously.]
Collocation of data, messaging and services in memory:
- Minimum latency (no network hops)
- Maximum throughput
Persist asynchronously for compliance & reporting purposes:
- Storing state
- Register orders
- etc.
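A minimal write-behind sketch of the async-persistency step: state changes complete in memory immediately, while a background thread drains them to a store. The `persisted` list stands in for the compliance/reporting database; all names are illustrative, not a real persistence API.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncPersistence {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();
    private final AtomicInteger recorded = new AtomicInteger();
    final List<String> persisted = new CopyOnWriteArrayList<>(); // stand-in DB

    public AsyncPersistence() {
        Thread writer = new Thread(() -> {
            try {
                while (true) persisted.add(pending.take()); // drain asynchronously
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();         // shutdown
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    /** Returns as soon as the change is recorded in memory. */
    public void recordStateChange(String change) {
        recorded.incrementAndGet();
        pending.add(change);
    }

    /** Sketch-only helper: waits until the writer has caught up. */
    public void awaitDrain() {
        while (persisted.size() < recorded.get()) Thread.onSpinWait();
    }
}
```

The caller's latency is one queue insert; durability for compliance and reporting happens off the critical path.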
9. Step 3 – Resiliency
[Diagram: an SLA-driven container runs the Processing Unit alongside an identical backup Processing Unit.]
- Single, built-in failover/redundancy investment strategy
- Fewer points of failure
- Automated, SLA-driven failover/redundancy mechanism
- Continuous high availability
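The primary/backup pattern behind this slide can be sketched as follows. Replication and failure detection are collapsed into direct method calls for clarity; in the real platform an SLA-driven container monitors health and promotes the backup automatically. All names here are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

public class FailoverPair {
    static class Replica {
        final Map<String, String> state = new HashMap<>();
        boolean alive = true;
    }

    private Replica primary = new Replica();
    private Replica backup = new Replica();

    /** Every write goes to the primary and is replicated to the backup. */
    public void write(String key, String value) {
        primary.state.put(key, value);
        backup.state.put(key, value);   // sync replication (simplified)
    }

    /** Reads go to the primary; if it is down, the backup is promoted. */
    public String read(String key) {
        if (!primary.alive) {           // automated failover (simplified)
            Replica promoted = backup;
            backup = primary;
            primary = promoted;
        }
        return primary.state.get(key);
    }

    public void killPrimary() { primary.alive = false; }
}
```

Because the backup already holds a full copy of the state, failover is a role switch rather than a recovery procedure.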
11. Step 4 – Scale
[Diagram: multiple Processing Units run side by side, each with its own backup.]
Write once, scale anywhere:
- Linear scalability
- Single monitoring and management engine
- Automated, SLA-driven deployment and management (scaling policy, system requirements, Space cluster topology)
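"Write once, scale anywhere" rests on content-based routing: the same processing-unit code runs in N partitions, and a routing key (here the auction id) decides which partition owns each bid, so adding partitions adds capacity without code changes. This is a sketch with illustrative names, not the platform's routing API.

```java
import java.util.HashMap;
import java.util.Map;

public class PartitionedCluster {
    private final Map<String, Integer>[] partitions;

    @SuppressWarnings("unchecked")
    public PartitionedCluster(int partitionCount) {
        partitions = new Map[partitionCount];
        for (int i = 0; i < partitionCount; i++) partitions[i] = new HashMap<>();
    }

    /** Routing key -> owning partition; all bids for one auction stay together. */
    private int route(String auctionId) {
        return Math.floorMod(auctionId.hashCode(), partitions.length);
    }

    public void placeBid(String auctionId, int amount) {
        partitions[route(auctionId)].merge(auctionId, amount, Math::max); // keep highest bid
    }

    public int highestBid(String auctionId) {
        return partitions[route(auctionId)].getOrDefault(auctionId, 0);
    }
}
```

Because routing keeps all state for one auction in one partition, each bid is still processed with collocated data, and throughput grows with the partition count.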
13. The Processing Unit – Scalability Unit
[Diagram: a single Processing Unit next to a scaled-out set of Processing Units.]
Scaling involves only a configuration change: no code changes!
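As an illustration of "config change only", a deployment descriptor in the spirit of GigaSpaces' sla.xml might look like this; the exact element and attribute names are shown as an assumption, and scaling out means editing `number-of-instances`, not the service code:

```xml
<!-- Illustrative sketch only: going from 2 to 4 partitions is a change
     to number-of-instances; the deployed code is untouched. -->
<os-sla:sla cluster-schema="partitioned-sync2backup"
            number-of-instances="2"
            number-of-backups="1"
            max-instances-per-vm="1"/>
```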
14. The Processing Unit – High-Availability Unit
[Diagram: the primary Processing Unit (business logic in active mode) synchronously replicates its state to the backup Processing Unit (business logic in standby mode).]
15. The Processing Unit - Database Integration
[Diagram: the primary Processing Unit synchronously replicates to the backup; asynchronous replication flows to a mirror process, which persists state to the database through an ORM layer. On startup, an initial load populates the Processing Unit from the database.]
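The two database-facing flows on this slide, initial load at startup and asynchronous mirroring at runtime, can be sketched together. `MirrorStore` stands in for the relational database behind the ORM; all names are illustrative, not a real mirror or ORM API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DatabaseIntegration {
    /** Stand-in for the relational database behind the ORM. */
    static class MirrorStore {
        final Map<String, String> rows = new HashMap<>();
    }

    private final Map<String, String> memory = new HashMap<>();
    private final List<String> replicationLog = new ArrayList<>();
    private final MirrorStore db;

    public DatabaseIntegration(MirrorStore db) {
        this.db = db;
        memory.putAll(db.rows);       // initial load: DB -> memory on startup
    }

    public void write(String key, String value) {
        memory.put(key, value);       // served from memory immediately
        replicationLog.add(key);      // queued for the mirror
    }

    /** The mirror process drains the log to the database asynchronously. */
    public void runMirrorCycle() {
        for (String key : replicationLog) db.rows.put(key, memory.get(key));
        replicationLog.clear();
    }

    public String read(String key) { return memory.get(key); }
}
```

Reads and writes never wait on the database: the mirror catches the database up on its own schedule, and the initial load is the only moment the database sits on the critical path.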