Scaling MySQL using multi-master synchronous replication

Marco “the Grinch” Tusa
Percona Live London 2013
About Me

Introduction

Marco “The Grinch”
• Former Pythian cluster technical leader
• Former MySQL AB PS (EMEA)
• Love programming
• History of religions
• Ski; snowboard; scuba diving; mountain trekking
Agenda
• Customer requirements
• Installation and initial setup
• Applying the customer scenario to solution
• Numbers and considerations
• Scaling out test and efforts
• Scaling in test and efforts
• Geographic distribution
Introduction
Many Galera Talks
• PERCONA XTRADB CLUSTER IN A NUTSHELL :
HANDS ON TUTORIAL
Tutorial Monday
• Galera Cluster 3.0 New Features. Seppo Jaakola
Presentation Tuesday
• HOW TO UNDERSTAND GALERA REPLICATION
Alexey Yurchenko Presentation Tuesday
Introduction
A journey started 2 years ago
• First work done as POC in November 2011

• First implementation in production January 2012
• Many more after
• Last done: 12 clusters of 5 nodes with 18 to 24 application servers attached

Introduction
Historical Real life case
Customer mentions the need to scale for writes.
My first thought went to NDB.
Customer had specific constraints:
• Amazon EC2;
• No huge instances (medium preferred);
• Number of instances increases during peak seasons;
• Number of instances must be reduced during regular periods;
• Customer uses InnoDB as the storage engine in the current platform and will not change;
Customer requirements
Refine the customer requirements

Challenging architecture design, and proof of concept on a real case study using a synchronous solution.
The customer asked us to investigate and design a MySQL architecture to support their application serving shops around the globe, scaling out and in based on sales seasons. We share our experience by presenting the results of our POC.

High level outline, customer numbers:
• Range of operations/s from 20 to 30,000 (5,000 inserts/sec)
• Selects/inserts 70/30%
• Application servers from 2 to ∞
• MySQL servers from 2 to ∞
• Operations from 20 bytes to max 1MB (text)
• Data set dimension 40GB (old data is archived every year)
• Geographic distribution (3 -> 5 zones), partial dataset

Customer requirements
My Motto
Use the right tool for the job

Customer requirements
Scaling Up vs. Out
Scaling Up Model
• Requires more investment
• Not flexible, and not a good fit with MySQL

Scaling Out Model
• Scales through small investments
• Flexible
• Fits the MySQL model (HA, load balancing, etc.)
Scaling Reads vs. Writes

Reads
• The easy way to scale in MySQL, if the percentage of writes is low

Writes
• Classic replication is not the solution: a single applier process, no data consistency check
• Parallel replication by schema is not the solution
• Semi-synchronous replication is not the solution either
Synchronous Replication in MySQL

MySQL Cluster (NDB)
• Really synchronous
• Data distribution and internal partitioning
• The only real solution giving you 99.999% availability (at most ~5 minutes of downtime per year)
• NDB Cluster is more than a simple storage engine (use the API if you can)

Galera replication
• Virtually synchronous
• No data consistency check (optimistic locking)
• Data replicated at commit
• Uses InnoDB
Options Overview
Choosing the solution

Did I say I love NDB Cluster?
– But it is not a good fit here because:
• EC2 dimension (1 CPU, 3.7GB RAM);
• Customer does not want to change from InnoDB;
• Developers would need training to get the maximum out of it;

– Galera could be a better fit because:
• It can fit in the EC2 dimension;
• It uses InnoDB;
• No additional knowledge needed when developing the solution;
Options Overview
Architecture Design

Final architecture: simple and powerful
• Application layer in the cloud
• Load balancer distributing requests in round-robin
• Data layer in the cloud
• MySQL instances geographically distributed

Architecture AWS blocks

Instances EC2
Web servers
• Small instance
• Local EBS

Data servers
• Medium instance: 1 CPU, 3.7GB RAM
• 1 EBS volume for the OS
• 6 EBS volumes in RAID0 for data

Be ready to scale OUT
• Create an AMI
• Keep the AMI updated on a regular basis
Architecture EC2 blocks
Why not ephemeral storage
• RAID0 across 6 EBS volumes performs faster;
• The RAID approach mitigates possible temporary degradation;
• Ephemeral is … ephemeral, all data will get lost;

Numbers from a rough comparison (hdparm):
(ebs) Timing buffered disk reads: 768 MB in 3.09 seconds = 248.15 MB/sec
(eph) Timing buffered disk reads: 296 MB in 3.01 seconds = 98.38 MB/sec
(ebs) Timing O_DIRECT disk reads: 814 MB in 3.20 seconds = 254.29 MB/sec
(eph) Timing O_DIRECT disk reads: 2072 MB in 3.00 seconds = 689.71 MB/sec

Architecture Installation and numbers
Why not ephemeral storage (cont.)

Architecture Installation and numbers
Storage on EC2
Multiple EBS volumes in RAID0, or use Provisioned IOPS.

Amazon EBS Provisioned IOPS volumes:
• $0.125 per GB-month of provisioned storage
• $0.10 per provisioned IOPS-month

Amazon EBS Standard volumes:
• $0.10 per GB-month of provisioned storage
• $0.10 per 1 million I/O requests

Architecture Installation and numbers
Instances EC2

How we configure the EBS:
• Use the Amazon EC2 API Tools (http://aws.amazon.com/developertools/351)
• Create 6 EBS volumes
• Attach them to the running instance
• Build the RAID0 array as root:

sudo mdadm --verbose --create /dev/md0 --level=0 --chunk=256 --raid-devices=6 /dev/xvdg1 /dev/xvdg2 /dev/xvdg3 /dev/xvdg4 /dev/xvdg5 /dev/xvdg6
echo 'DEVICE /dev/xvdg1 /dev/xvdg2 /dev/xvdg3 /dev/xvdg4 /dev/xvdg5 /dev/xvdg6' | sudo tee -a /etc/mdadm.conf
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf

• Create an LVM volume on top, to allow easy growth of the data size
• Format using ext3 (no journaling)
• Mount it using noatime,nodiratime
• Run hdparm -t [--direct] <device> to check it performs properly

Installation
Instances EC2 (cont.)

You can install MySQL using RPMs or, if you want an easier life and faster upgrades (or downgrades), do:
• Create a directory like /opt/mysql_templates
• Get the MySQL binary distribution and expand it into /opt/mysql_templates
• Create a symbolic link /usr/local/mysql pointing to the version you want to use
• Create the symbolic links in the /usr/bin directory as well, i.e.:

for bin in `ls /usr/local/mysql/bin/`; do ln -s /usr/local/mysql/bin/$bin /usr/bin/$bin; done

Installation
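The template/symlink pattern above can be exercised end to end in a scratch directory; the sketch below uses throwaway paths and fake mysqld scripts (all version numbers and paths are illustrative, not from the deck) to show how an upgrade is just re-pointing one symlink:

```shell
#!/bin/sh
set -e
# Scratch root standing in for the real /opt and /usr prefixes.
ROOT=$(mktemp -d)

# /opt/mysql_templates holds one expanded binary tarball per version.
for v in 5.5.34 5.6.14; do
  mkdir -p "$ROOT/opt/mysql_templates/mysql-$v/bin"
  printf '#!/bin/sh\necho %s\n' "$v" > "$ROOT/opt/mysql_templates/mysql-$v/bin/mysqld"
  chmod +x "$ROOT/opt/mysql_templates/mysql-$v/bin/mysqld"
done
mkdir -p "$ROOT/usr/local" "$ROOT/usr/bin"

# The "active" version is a single symbolic link.
ln -s "$ROOT/opt/mysql_templates/mysql-5.5.34" "$ROOT/usr/local/mysql"

# Link every binary into the PATH directory, as in the slide's loop.
for bin in "$ROOT/usr/local/mysql/bin/"*; do
  ln -s "$bin" "$ROOT/usr/bin/$(basename "$bin")"
done
"$ROOT/usr/bin/mysqld"    # prints: 5.5.34

# Upgrade (or downgrade): re-point one link; the /usr/bin links keep working.
rm "$ROOT/usr/local/mysql"
ln -s "$ROOT/opt/mysql_templates/mysql-5.6.14" "$ROOT/usr/local/mysql"
"$ROOT/usr/bin/mysqld"    # prints: 5.6.14
```

Because the /usr/bin links resolve through the /usr/local/mysql symlink, switching versions touches exactly one link.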
Create the AMI
Once the machines were ready and standardized:
  o Create an AMI for the MySQL-Galera data node;
  o Create an AMI for the application node;

The AMIs will be used for expanding the cluster and/or in case of crashes.

Installation
Problem in tuning - MySQL

MySQL optimal configuration for the environment:
• Correct buffer pool and InnoDB log size;
• Dirty pages;
• InnoDB flush log at TRX commit & concurrency;
• InnoDB write/read threads;
• Binary logs (no binary logs unless you really need them);
• Doublewrite buffer;

Setup
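As a rough illustration of where those knobs live, a my.cnf fragment for a 3.7GB medium instance might look like the following; every value here is a placeholder to be validated against the actual workload, not a recommendation from the deck:

```ini
[mysqld]
# Buffer pool and redo log sized for ~3.7GB of RAM (illustrative values)
innodb_buffer_pool_size        = 2G
innodb_log_file_size           = 512M
# Dirty-page flushing, commit durability and concurrency
innodb_max_dirty_pages_pct     = 75
innodb_flush_log_at_trx_commit = 2
innodb_thread_concurrency      = 0
# I/O threads
innodb_write_io_threads        = 8
innodb_read_io_threads         = 8
# Doublewrite buffer: a candidate to revisit on this storage
# innodb_doublewrite           = 0
# Binary logging stays off unless log-bin is explicitly set
```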
Problem in tuning - Galera

Galera optimal configuration for the environment:
• evs.send_window: maximum messages in replication at a time
• evs.user_send_window: maximum data messages in replication at a time
• wsrep_slave_threads: the number of threads used by Galera to apply the local queue
• gcache.size
• Flow control
• Network/keep-alive settings and WAN replication

Setup
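The wsrep options above are set through the provider options string in my.cnf; a hedged sketch follows (the option names are real Galera settings, the values and the provider path are placeholders that vary per distribution and workload):

```ini
[mysqld]
wsrep_provider         = /usr/lib64/galera/libgalera_smm.so
wsrep_slave_threads    = 16
wsrep_provider_options = "evs.send_window=512; evs.user_send_window=512; gcache.size=2G; gcs.fc_limit=128"
```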
Applying the customer scenario

How I did the tests, and what I used: Stresstool (my own Java development)
• Multi-threaded approach (each thread a connection);
• Configurable number of master tables;
• Configurable number of child tables;
• Variable (random) number of tables in a join;
• Can set the ratio between R/W/D threads;
• Tables can have any data type combination;
• Inserts can be done simple or batched;
• Reads can be done by RANGE, IN, equality;
• Operations by set of commands, not single SQL;

Test application
Applying the customer scenario (cont.)

How I did the tests:
• Application side
  • I gradually increased the number of threads per running Stresstool instance, then increased the number of instances.
• Data layer
  • Started with 3 MySQL nodes;
  • Up to 7 nodes;
• Level of requests
  • From 2 application blocks to 12;
  • From 4 threads per “application” block;
  • To 64 threads per “application” block (768 total);

Test application
Numbers

Table with numbers (writes) for a 3-node cluster and bad replication traffic

Bad commit behavior
Numbers in Galera replication
What happened to the replication?

Bad commit behavior
Changes in replication settings
The problem was in commit efficiency & flow control.

Reviewing the Galera documentation, I chose to change:
• evs.send_window=1024 (maximum packets in replication at a time);
• evs.user_send_window=1024 (maximum data packets in replication at a time);
• wsrep_slave_threads=48;

Bad commit behavior
Numbers After changes (cont.)

Table with numbers (writes) for 3-5-7 node clusters and increasing traffic

Using MySQL 5.5

Numbers After changes (cont.)
Table with numbers (writes) for 3-5-7 node clusters and increasing traffic

Using MySQL 5.5
Other problems…
This is what happens when one node starts to have issues:

Tests & numbers
Numbers After changes (cont.)

Rebuild the node, re-attach it to the cluster and the status is:

Tests & numbers
Numbers After changes (cont.)
Going further and removing Binary log writes:

Tests & numbers
Numbers for read traffic
Selects for 3-7 node clusters under increasing load

Tests & numbers
Many related metrics
From 4 – 92 threads

Tests & numbers Real HW
FC on real HW
From 4 – 92 threads

Tests & numbers Real HW
How to scale OUT
The effort to scale out is:
• Launch a new instance from the AMI (IST sync if gcache.size is big enough, otherwise SST);
• Always add nodes so as to keep an ODD number of nodes;
• Modify my.cnf to match the server ID and the IP of the master node;
• Start MySQL;
• Include the node's IP in the list of active IPs on the load balancer.

The whole operation normally takes less than 30 minutes.

Scaling
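Whether the joining node can use IST rather than a full SST depends on the donor still holding the missing writesets in its gcache; a quick way to watch the join from SQL (these are standard wsrep status variables, shown here as an illustrative check, not part of the original deck):

```sql
-- On the joining node: 'Synced' means it has fully joined the cluster
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';
-- Cluster size should reflect the newly added node
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
```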
How to scale IN
The effort to scale in is minimal:
• Remove the data node's IP from the load balancer (HAProxy);
• Stop MySQL;
• Stop/terminate the instance.

Scaling
How to Backup
If using provisioned IOPS and a single volume contains all the data, a snapshot is fine.
Otherwise I like Jay's solution:
http://www.mysqlperformanceblog.com/2013/10/08/taking-backups-percona-xtradb-cluster-without-stalls-flow-control/
Using wsrep_desync=OFF
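The trick in that post boils down to desyncing the node for the duration of the backup, so it can lag without triggering cluster-wide flow control; a sketch of the flow, run on the node being backed up (standard Galera statements, shown for illustration):

```sql
-- Let this node fall behind without triggering flow control
SET GLOBAL wsrep_desync = ON;
-- ... take the backup here (XtraBackup, snapshot, ...) ...
-- Re-enable normal flow control; the node catches up through its queue
SET GLOBAL wsrep_desync = OFF;
```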
Failover and HA
With MySQL and Galera, barring issues, all the nodes contain the same data, so failing over the whole service is not necessary.

Cluster in good health
Cluster with failed node

Failover is therefore mainly an operation at the load balancer (HAProxy works great), plus adding another new instance (from the AMI).
Geographic distribution
With Galera it is possible to set the cluster to replicate across Amazon zones.
I tested an implementation with 3 geographic locations:
• Master location (1 to 7 nodes);
• First distributed location (1 node, up to 3 on failover);
• Second distributed location (1 node, up to 3 on failover);
No significant delay was reported while the distributed nodes remained passive.

Good settings to play with:
evs.keepalive_period, evs.inactive_check_period, evs.suspect_timeout, evs.inactive_timeout, evs.install_timeout

Geographic distribution
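Those evs.* timeouts also go into the provider options string; a hedged WAN-flavored sketch (the option names are real Galera settings taking ISO-8601 durations, the values are placeholders to tune per link latency):

```ini
[mysqld]
wsrep_provider_options = "evs.keepalive_period=PT3S; evs.inactive_check_period=PT10S; evs.suspect_timeout=PT30S; evs.inactive_timeout=PT1M; evs.install_timeout=PT1M"
```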
Problems with Galera
During the tests we faced the following issues:
• MySQL data node crash, auto restart, recovery (Galera in a loop);
• A node falling behind the cluster: the replica side is not fully synchronous, so the local queue becomes too long, slowing down the whole cluster;
• A node acting slowly after restart, with no apparent issue and no way to get it behaving as it did before the clean shutdown; this happened randomly, possibly also an Amazon issue. The fastest solution: build another node and attach it in place of the failing one.

Conclusions
Did we match the expectations?
Our numbers were:
• From 1,200 to ~10,000 (~3,000 in prod) inserts/sec
• 27,000 reads/sec with 7 nodes
• From 2 to 12 application servers (with 768 requests/sec)
• On EC2 medium: 1 CPU and 3.7GB of RAM!!
  o In prod: Large, 7.5GB and 2 CPUs.
I would say mission accomplished!

Conclusions
Consideration about the solution

Pros
• Flexible;
• Uses a well-known storage engine;
• Once tuned, it is “stable” (as much as the cloud permits);

Cons
• !WAS! a new technology, not included in an official development cycle;
• Sometimes fails without a clear indication of why, but it is getting better;
• Replication is still not fully synchronous (on write/commit);

Conclusions
Monitoring
Keeping control of what is going on is very important.
Few tools are currently available that make sense to me:

Jay Janssen's myq_status:
https://github.com/jayjanssen/myq_gadgets/blob/master/myq_status
ClusterControl for MySQL:
http://www.severalnines.com/resources/user-guide-clustercontrol-mysql
Percona Cacti monitoring templates

Conclusions
Q&A
Thank you
To contact Me
marco.tusa@percona.com
marcotusa@tusacentral.net
To follow me
http://www.tusacentral.net/
https://www.facebook.com/marco.tusa.94

@marcotusa
http://it.linkedin.com/in/marcotusa/

Conclusions

Scaling with sync_replication using Galera and EC2

  • 1. Scaling MySQL using multi-master synchronous replication Marco “the Grinch” Tusa Percona Live London 2013
  • 2. About Me Introduction Marco “The Grinch” • Former Pythian cluster technical leader • Former MySQL AB PS (EMEA) • Love programming • History of religions • Ski; Snowboard; scuba diving; Mountain trekking
  • 3. Agenda • Customer requirements • Installation and initial setup • Applying the customer scenario to solution • Numbers, and considerations. • Scaling out test and efforts • Scaling in test and efforts • Geographic distribution Introduction
  • 4. Many Galera Talks • PERCONA XTRADB CLUSTER IN A NUTSHELL : HANDS ON TUTORIAL Tutorial Monday • Galera Cluster 3.0 New Features. Seppo Jaakola Presentation Tuesday • HOW TO UNDERSTAND GALERA REPLICATION Alexey Yurchenko Presentation Tuesday Introduction
  • 5. A journey started 2 years ago • First work done as a POC in November 2011 • First implementation in production January 2012 • Many more after • Most recent: 12 clusters of 5 nodes with 18 to 24 application servers attached Introduction
  • 6. Historical real-life case Customer mentions the need to scale for writes. My first thought went to NDB. Customer had specific constraints: • Amazon EC2; • No huge instances (medium preferred); • Number of instances increases during peak seasons; • Number of instances must be reduced during regular periods; • Customer uses InnoDB as the storage engine in his current platform and will not change; Customer requirements
  • 7. Refine the customer requirements Challenging architecture design, and proof of concept on a real case study using a synchronous solution. Customer asks us to investigate and design a MySQL architecture to support his application serving shops around the globe, scaling out and in based on sales seasons. We will share our experience by presenting the results of our POC. High-level outline, customer numbers: • Range of operations/s from 20 to 30,000 (5,000 inserts/sec) • Selects/Inserts 70/30 % • Application servers from 2 to ∞ • MySQL servers from 2 to ∞ • Operations from 20 bytes to max 1MB (text) • Data set dimension 40GB (old data is archived every year) • Geographic distribution (3 -> 5 zones), partial dataset Customer requirements
  • 8. My Motto Use the right tool for the job Customer requirements
  • 9. Scaling Up vs. Out Scaling Up Model • Requires more investment • Not flexible and not a good fit with MySQL Scaling Out Model • Scales by small investments • Flexible • Fits the MySQL model (HA, load balancing etc.)
  • 10. Scaling Reads vs. Writes • Reads: easy to do in MySQL if the % of writes is low • Writes: standard replication is not the solution (single apply process, no data consistency check) • Parallel replication by schema is not the solution • Semi-synchronous replication is not the solution either
  • 11. Synchronous Replication in MySQL MySQL Cluster (NDB Cluster) • Really synchronous • Data distribution and internal partitioning • The only real solution giving you 99.999% availability (5 minutes max downtime) • NDB Cluster is more than a simple storage engine (use the API if you can) Galera replication • Virtually synchronous • No data consistency check (optimistic locking) • Data replicated on commit • Uses InnoDB Options Overview
  • 12. Choosing the solution Did I say NDB Cluster? – But not a good fit here because: • EC2 dimension (1 CPU, 3.7GB RAM); • Customer does not want to change from InnoDB; • Developers would need training to get the most out of it; – Galera could be a better fit because: • Can fit in the EC2 dimension; • Uses InnoDB; • No additional knowledge needed when developing the solution; Options Overview
  • 14. Architecture Design Application layer in the cloud Load balancer distributing requests in round-robin Data layer in the cloud MySQL instances geographically distributed Architecture AWS blocks EC2 small instance EC2 medium instance
  • 15. Instances EC2 Web servers • Small instance • Local EBS Data servers • Medium instance, 1 CPU, 3.7GB RAM • 1 EBS for the OS • 6 EBS in RAID0 for data Be ready to scale OUT • Create an AMI • Update the AMI on a regular basis Architecture EC2 blocks
  • 16. Why not ephemeral storage • RAID0 across 6 EBS volumes performs faster; • The RAID approach will mitigate possible temporary degradation; • Ephemeral is … ephemeral: all data gets lost; Numbers, with a rough comparison: (ebs) Timing buffered disk reads: 768 MB in 3.09 seconds = 248.15 MB/sec (eph) Timing buffered disk reads: 296 MB in 3.01 seconds = 98.38 MB/sec (ebs) Timing O_DIRECT disk reads: 814 MB in 3.20 seconds = 254.29 MB/sec (eph) Timing O_DIRECT disk reads: 2072 MB in 3.00 seconds = 689.71 MB/sec Architecture Installation and numbers
  • 17. Why not ephemeral storage (cont.) Architecture Installation and numbers
  • 18. Why not ephemeral storage (cont.) Architecture Installation and numbers
  • 19. Why not ephemeral storage (cont.) Architecture Installation and numbers
  • 20. Storage on EC2 Multiple EBS in RAID0, or use Provisioned IOPS. Amazon EBS Standard volumes: $0.10 per GB-month of provisioned storage; $0.10 per 1 million I/O requests. Amazon EBS Provisioned IOPS volumes: $0.125 per GB-month of provisioned storage; $0.10 per provisioned IOPS-month. Architecture Installation and numbers
  • 21. Instances EC2 How we configured the EBS. • Use Amazon EC2 API Tools (http://aws.amazon.com/developertools/351) • Create 6 EBS volumes and attach them to the running instance • Run mdadm as root:
sudo mdadm --verbose --create /dev/md0 --level=0 --chunk=256 --raid-devices=6 /dev/xvdg1 /dev/xvdg2 /dev/xvdg3 /dev/xvdg4 /dev/xvdg5 /dev/xvdg6
echo 'DEVICE /dev/xvdg1 /dev/xvdg2 /dev/xvdg3 /dev/xvdg4 /dev/xvdg5 /dev/xvdg6' | sudo tee -a /etc/mdadm.conf
mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
  • Create an LVM volume on top to allow easy growth of the data size • Format using ext3 (no journaling) • Mount it using noatime,nodiratime • Run hdparm -t [--direct] <device> to check it works properly Installation
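The mount options above would end up in /etc/fstab along these lines — a sketch, where the LVM volume name and mount point are illustrative, not the ones used in the POC:

```ini
# /etc/fstab entry (sketch): device path and mount point are hypothetical
/dev/vg_data/lv_mysql  /var/lib/mysql  ext3  noatime,nodiratime  0  0
```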
  • 22. Instances EC2 (cont.) You can install MySQL using RPM, or, if you want an easier life and faster upgrades (or downgrades), do: • Create a directory like /opt/mysql_templates • Get the MySQL binary tarball and expand it in /opt/mysql_templates • Create a symbolic link /usr/local/mysql pointing to the version you want to use • Create the symbolic links also in the /usr/bin directory, i.e. (for bin in `ls /usr/local/mysql/bin/`; do ln -s /usr/local/mysql/bin/$bin /usr/bin/$bin; done) Installation
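The template layout can be sketched as below. Paths are rooted in a scratch directory so the sketch is safe to run as-is; on a real server the prefix would be / and the tarball a real MySQL binary distribution (the version name here is hypothetical):

```shell
# Simulate expanding a binary tarball under /opt/mysql_templates
PREFIX=$(mktemp -d)
VERSION="mysql-5.5.34-linux2.6-x86_64"   # hypothetical version directory
mkdir -p "$PREFIX/opt/mysql_templates/$VERSION/bin" "$PREFIX/usr/local" "$PREFIX/usr/bin"
touch "$PREFIX/opt/mysql_templates/$VERSION/bin/mysql" \
      "$PREFIX/opt/mysql_templates/$VERSION/bin/mysqladmin"

# Point /usr/local/mysql at the version in use; an upgrade or downgrade
# later is just re-pointing this one symlink.
ln -s "$PREFIX/opt/mysql_templates/$VERSION" "$PREFIX/usr/local/mysql"

# Expose the binaries on the PATH, as in the slide's for-loop
for bin in "$PREFIX"/usr/local/mysql/bin/*; do
  ln -s "$bin" "$PREFIX/usr/bin/$(basename "$bin")"
done
```

The win of this layout is that several server versions coexist under /opt/mysql_templates and switching between them never touches the binaries themselves.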
  • 23. Create the AMI Once the machines were ready and standardized: • Create an AMI for the MySQL–Galera data node; • Create an AMI for the application node; The AMIs will be used for expanding the cluster and/or in case of crashes. Installation
  • 24. Problem in tuning - MySQL MySQL optimal configuration for the environment • Set up correct buffer pool and InnoDB log size; • Dirty pages; • InnoDB write/read threads; • Binary logs (no binary logs unless you really need them); • Doublewrite buffer; • InnoDB flush log at TRX commit & concurrency;
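A my.cnf fragment in the spirit of that checklist. All values are illustrative for a medium instance with 3.7GB RAM, not the author's actual settings:

```ini
[mysqld]
# Correct buffer pool and log size for ~3.7GB RAM (illustrative values)
innodb_buffer_pool_size        = 2G
innodb_log_file_size           = 512M
# Dirty-page and I/O thread tuning
innodb_max_dirty_pages_pct     = 75
innodb_read_io_threads         = 4
innodb_write_io_threads        = 4
# Relax flush-per-commit; Galera replication provides durability elsewhere
innodb_flush_log_at_trx_commit = 2
# log-bin deliberately left out: no binary logs unless you really need them
```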
  • 25. Problem in tuning - Galera Galera optimal configuration for the environment • evs.send_window: maximum messages in replication at a time • evs.user_send_window: maximum data messages in replication at a time • wsrep_slave_threads: the number of threads used by Galera to apply the local queue • gcache.size • Flow Control • Network/keep-alive settings and WAN replication Setup
  • 26. Applying the customer scenario How I did the tests. What I have used: Stresstool (my development, Java) • Multi-thread approach (each thread a connection); • Configurable number of master tables; • Configurable number of child tables; • Variable (random) number of tables in join; • Can set the ratio between R/W/D threads; • Tables can have any data type combination; • Inserts can be done simple or batch; • Reads can be done by RANGE, IN, Equal; • Operations by set of commands, not single SQL; Test application
  • 27. Applying the customer scenario (cont.) How I did the tests. • Application side • I gradually increased the number of threads per running stresstool instance, then increased the number of instances. • Data layer • Start with 3 MySQL nodes; • Up to 7 nodes; • Level of requests • From 2 application blocks to 12; • From 4 threads per “application” block; • To 64 threads per “application” block (768); Test application
  • 28. Numbers Table with numbers (writes) for a 3-node cluster and bad replication traffic Bad commit behavior
  • 29. Numbers in Galera replication What happened to the replication? Bad commit behavior
  • 30. Changes in replication settings The problem was in commit efficiency & Flow Control. Reviewing the Galera documentation I chose to change: • evs.send_window=1024 (maximum packets in replication at a time); • evs.user_send_window=1024 (maximum data packets in replication at a time); • wsrep_slave_threads=48; Bad commit behavior
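The three changes translate into my.cnf like this (a sketch; the values are the ones above, the rest of the wsrep setup is assumed already in place):

```ini
[mysqld]
wsrep_slave_threads    = 48
wsrep_provider_options = "evs.send_window=1024;evs.user_send_window=1024"
```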
  • 31. Numbers after changes (cont.) Table with numbers (writes) for 3-5-7 nodes and increasing traffic. Using MySQL 5.5
  • 32. Numbers after changes (cont.) Table with numbers (writes) for 3-5-7 nodes and increasing traffic. Using MySQL 5.5
  • 33. Other problems… This is what happens when one node starts to have issues. Tests & numbers
  • 34. Numbers after changes (cont.) Rebuild the node, re-attach it to the cluster, and the status is: Tests & numbers
  • 35. Numbers After changes (cont.) Going further and removing Binary log writes: Tests & numbers
  • 36. Numbers for read traffic Selects for 3–7 node clusters under increasing traffic Tests & numbers
  • 37. Many related metrics From 4 – 92 threads Tests & numbers Real HW
  • 38. FC on real HW From 4 – 92 threads Tests & numbers Real HW
  • 39. How to scale OUT The effort to scale out is: • Launch a new instance from the AMI (IST sync if wsrep_local_cache_size is big enough, otherwise SST); • Always add nodes so as to keep an ODD number of nodes; • Modify the my.cnf to match the server ID and IP of the master node; • Start MySQL; • Include the node IP in the list of active IPs of the load balancer; • The whole operation normally takes less than 30 minutes. Scaling
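At the balancer, scaling out is then just one more server line in the HAProxy backend — a sketch with hypothetical node names and IPs:

```ini
# haproxy.cfg fragment (sketch): round-robin over the Galera nodes
listen galera
    bind 0.0.0.0:3306
    mode tcp
    balance roundrobin
    option tcpka
    server node1 10.0.0.11:3306 check
    server node2 10.0.0.12:3306 check
    server node3 10.0.0.13:3306 check
    # scale out: add the new node here and reload haproxy
    server node4 10.0.0.14:3306 check
```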
  • 40. How to scale IN The effort to scale in is minimal: • Remove the data node's IP from the load balancer (HAProxy); • Stop MySQL; • Stop/terminate the instance. Scaling
  • 41. How to Backup: If using provisioned IOPS and one single volume contains all the data, a snapshot is fine. Otherwise I like Jay's solution: http://www.mysqlperformanceblog.com/2013/10/08/taking-backups-percona-xtradb-cluster-without-stalls-flow-control/ Using wsrep_desync
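The approach in Jay's post boils down to toggling one variable around the backup — a sketch of the session on the node being backed up:

```sql
-- Before the backup: let this node lag without triggering cluster Flow Control
SET GLOBAL wsrep_desync = ON;
-- ... run the backup (e.g. xtrabackup) against this node ...
-- After the backup: the node catches up, then rejoins normal flow control
SET GLOBAL wsrep_desync = OFF;
```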
  • 42. Failover and HA With MySQL and Galera, unless there are issues, all the nodes should contain the same data, so failing over the whole service is not necessary. Cluster in good health Cluster with failed node Failover is mainly an operation at the load balancer (HAProxy works great), plus adding another new instance (from the AMI).
  • 43. Geographic distribution With Galera it is possible to set the cluster to replicate across Amazon zones. I have tested an implementation with 3 geographic locations: • Master location (1 to 7 nodes); • First distributed location (1 node, up to 3 on failover); • Second distributed location (1 node, up to 3 on failover); No significant delays were reported while the distributed nodes remained passive. • Good to play with: keepalive_period inactive_check_period suspect_timeout inactive_timeout install_timeout Geographic distribution
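Those knobs are evs.* settings inside wsrep_provider_options; the values below are only illustrative WAN-friendly timeouts (ISO 8601 durations), not the ones used in the POC:

```ini
# Sketch: relax group-communication timeouts for higher-latency WAN links
wsrep_provider_options = "evs.keepalive_period=PT3S;evs.inactive_check_period=PT10S;evs.suspect_timeout=PT30S;evs.inactive_timeout=PT1M;evs.install_timeout=PT1M"
```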
  • 44. Problems with Galera During the tests we faced the following issues: • MySQL data node crash, auto restart, recovery (Galera in a loop) • Node behind the cluster: the replica is not fully synchronous, so the local queue becomes too long, slowing down the whole cluster • Node acting slowly after restart, with no apparent issue and no way to have it behaving as before a clean shutdown; happening randomly, possibly also an Amazon issue. The fastest solution is to build another node and attach it in place of the failing one. Conclusions
  • 45. Did we match the expectations? Our numbers were: • From 1,200 to ~10,000 (~3,000 in prod) inserts/sec • 27,000 reads/sec with 7 nodes • From 2 to 12 application servers (with 768 requests/sec) • EC2 medium, 1 CPU and 3.7GB!! o In prod: Large, 7.5GB, 2 CPU. I would say mission accomplished! Conclusions
  • 46. Considerations about the solution Pro • Flexible; • Uses a well-known storage engine; • Once tuned, it is “stable” (if the Cloud permits it); Cons • !WAS! a new technology not included in an official cycle of development; • Sometimes fails without a clear indication of why, but it is getting better; • Replication is still not fully synchronous (on write/commit); Conclusions
  • 47. Monitoring Controlling what is going on is very important. Few tools are currently available that make sense to me: Jay Janssen https://github.com/jayjanssen/myq_gadgets/blob/master/myq_status ClusterControl for MySQL http://www.severalnines.com/resources/user-guide-clustercontrol-mysql Percona Cacti monitoring templates Conclusions
  • 48. Q&A
  • 49. Thank you To contact Me marco.tusa@percona.com marcotusa@tusacentral.net To follow me http://www.tusacentral.net/ https://www.facebook.com/marco.tusa.94 @marcotusa http://it.linkedin.com/in/marcotusa/ Conclusions