Red Hat Storage Day Seattle: Stretching A Gluster Cluster for Resilient Message Persistence and File Storage at McMaster University
1. • One of four Canadian universities ranked among the top 100 in the world (Shanghai Jiaotong University Academic Ranking of World Universities, August 2016)
• Founded in 1887 in Toronto
• Moved to Hamilton in 1930
• Over 32,000 full-time staff, faculty & students
2. Systems Renewal - Mosaic Administrative Systems
In 2012, McMaster began a multi-year ERP project to replace its legacy systems.
Oracle PeopleSoft Financials, Human Resources, Campus Solutions, Data Warehouse and related modules were selected for implementation.
3. McMaster and Red Hat
▪Academic “departmental” license for RHEL (since converted to a full site license)
▪Instance-based Gluster, JBoss Fuse, EUS (and possibly Ceph in our future…)
4. Mosaic and Red Hat Gluster Storage
▪Deployment of most PeopleSoft components on RHEL, with Gluster for shared/replicated storage where required
▪Middleware components originally on RHEL and JBoss Fuse, with Gluster for shared/replicated message persistence storage
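One common way to get replicated message persistence in this kind of setup is ActiveMQ's shared-file-system master/slave pattern over a Gluster mount. The sketch below is illustrative only: the volume name, mount point, and paths are assumptions, not the deck's actual configuration.

```shell
# Mount the replicated Gluster volume on each broker host
# (volume name "mqpersist" and mount point are assumptions)
mount -t glusterfs gl11.nie.vm:/mqpersist /mnt/mqpersist

# Then point each broker's KahaDB store at the shared mount, in activemq.xml:
#   <persistenceAdapter>
#     <kahaDB directory="/mnt/mqpersist/kahadb"/>
#   </persistenceAdapter>
# The first broker to obtain the KahaDB file lock becomes master;
# the others block on the lock and take over if the master fails.
```

With this pattern the brokers themselves hold no state; everything rides on the shared storage, which is why the resilience of the Gluster volume matters so much here.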
5. Physical Technical Infrastructure
• Mosaic should continue to function even with the loss of one room (at reduced capacity)
• Message Oriented Middleware (JBoss Fuse with ActiveMQ)
• PeopleSoft file storage (report repository and user attachments)
6. User access/presentation layer
• Citrix Netscaler active/passive VPX instances
Stack components (web/app)
• Largely independent (with help of load balancer/ADC)
Databases
• Data guard for replication to other site
• Clusterware/RAC for fault tolerance and load distribution
User application file storage (attachments&reports)
MOM message persistence
How Close were we?
7. What about a [supportable] stretch replica cluster?
• All nodes must be on the same logical network – no storage traffic routing
• Latency must be low
• Maximum of two storage sites (not including a possible quorum device at a third neutral site) **
• Red Hat Architecture Review
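Under those constraints, a supportable stretch cluster is essentially a replica volume whose bricks happen to sit in different rooms on the same logical network. A minimal sketch, with hostnames, volume name, and brick paths all assumed for illustration:

```shell
# Two-site stretched replica-2 volume (all names/paths are assumptions)
gluster peer probe site-b-node1
gluster volume create mosaicvol replica 2 \
  site-a-node1:/bricks/mosaic site-b-node1:/bricks/mosaic
gluster volume start mosaicvol
```

Note that replica 2 across two sites is exactly the configuration that is vulnerable to split-brain without a quorum mechanism, which is what the next slide addresses.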
8. **Speaking of quorum devices…
▪Current installation vulnerable to split-brain
❑ (knock wood) this hasn’t been an issue…
▪New version supports a third node in the trusted pool
that doesn’t contribute a storage brick
❑ Need to make sure quorum ratio is set
▪Upstream Gluster supports an arbiter node
❑ File metadata stored with the arbiter, but not file contents
❑ Roadmap for support in RHGS product?
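A sketch of how the brick-less third node and quorum ratio described above might be configured; the hostname and volume name are assumptions:

```shell
# Add a third, brick-less node to the trusted pool to break ties
# (hostname is an assumption)
gluster peer probe quorum-node.example.com

# Enforce server-side quorum: bricks stop serving writes when the
# node loses contact with a majority of the trusted pool
gluster volume set mosaicvol cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51%
```

With three peers and a ratio above 50%, a site that becomes isolated takes its bricks offline rather than accepting divergent writes.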
9. Replicated file storage and MOM message persistence: Sandbox Environment
• Three RHEL 7.2 VMs
• gl10.nie.vm (arbiter); gl11.nie.vm and gl12.nie.vm as storage bricks and ActiveMQ brokers (and test clients)
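The sandbox layout above maps naturally onto a replica 3 arbiter 1 volume, where the last brick listed is the arbiter. The hostnames come from the slide; the volume name and brick paths are assumptions:

```shell
# Arbiter volume across the three sandbox VMs (paths/name are assumptions)
gluster volume create sandboxvol replica 3 arbiter 1 \
  gl11.nie.vm:/bricks/sandbox \
  gl12.nie.vm:/bricks/sandbox \
  gl10.nie.vm:/bricks/sandbox-arb
gluster volume start sandboxvol

# The arbiter brick on gl10 holds file metadata only, not file contents,
# so it can break ties for quorum at a fraction of the storage cost.
```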