MySQL and Ceph: Head-to-Head Performance Lab


In this April 2016 session, Red Hat's Brent Compton and Kyle Bader compared the performance of MySQL on public and private clouds with a head-to-head look at (a) MySQL on Amazon AWS EBS, (b) MySQL on Amazon AWS EBS Provisioned IOPS, (c) MySQL on an OpenStack/Ceph private cloud (SuperMicro HDD-based Ceph storage), (d) MySQL on an OpenStack/Ceph private cloud (SuperMicro all-flash Ceph storage), and (e) MySQL on a single bare metal SuperMicro server (baseline).

  1. MySQL and Ceph (2:20pm – 3:10pm, Room 203)
     MySQL in the Cloud: Head-to-Head Performance Lab (1:20pm – 2:10pm, Room 203)
  2. WHOIS
     Brent Compton and Kyle Bader, Storage Solution Architectures, Red Hat
     Yves Trudeau, Principal Architect, Percona
  3. AGENDA: MySQL on Ceph / MySQL in the Cloud: Head-to-Head Performance Lab
     • MySQL on Ceph vs. AWS
     • Head-to-head: Performance
     • Head-to-head: Price/performance
     • IOPS performance nodes for Ceph
     • Why MySQL on Ceph
     • Ceph architecture
     • Tuning: MySQL on Ceph
     • HW architectural considerations
  4. MySQL on Ceph vs. AWS
  5. MYSQL ON CEPH STORAGE CLOUD: OPS EFFICIENCY
     • Shared, elastic storage pool
     • Dynamic DB placement
     • Flexible volume resizing
     • Live instance migration
     • Backup to object pool
     • Read replicas via copy-on-write snapshots
  6. MYSQL-ON-CEPH PRIVATE CLOUD: FIDELITY TO A MYSQL-ON-AWS EXPERIENCE
     • Hybrid cloud requires public/private cloud commonalities
     • Developers want DevOps consistency
     • Elastic block storage: Ceph RBD vs. AWS EBS
     • Elastic object storage: Ceph RGW vs. AWS S3
     • Users want deterministic performance
  7. HEAD-TO-HEAD PERFORMANCE
     30 IOPS/GB: AWS EBS P-IOPS target
  8. HEAD-TO-HEAD LAB: TEST ENVIRONMENTS
     AWS: EC2 r3.2xlarge and m4.4xlarge; EBS Provisioned IOPS and GP-SSD; Percona Server
     Private cloud: Supermicro servers; Red Hat Ceph Storage RBD; Percona Server
  9. SUPERMICRO CEPH LAB ENVIRONMENT
     OSD storage servers (5x OSD nodes): 5x SuperStorage SSG-6028R-OSDXXX
     • Dual Intel Xeon E5-2650v3 (10 cores each)
     • 32GB DDR3 SDRAM
     • 2x 80GB boot drives
     • 4x 800GB Intel DC P3700 (hot-swap U.2 NVMe)
     • 1x dual-port 10GbE network adaptor (AOC-STGN-i2S)
     • 8x Seagate 6TB 7200 RPM SAS (unused in this lab)
     • Mellanox 40GbE network adaptor (unused in this lab)
     MySQL client systems (12x client nodes): 12x SuperServer 2UTwin2 nodes
     • Dual Intel Xeon E5-2670v2 (cpuset limited to 8 or 16 vCPUs)
     • 64GB DDR3 SDRAM
     Storage server software: Red Hat Ceph Storage 1.3.2, Red Hat Enterprise Linux 7.2, Percona Server
     Monitor nodes; shared 10G SFP+ networking
  10. SYSBENCH BASELINE ON AWS EC2 + EBS (Sysbench requests)
                        100% Read   100% Write
        P-IOPS m4.4xl   7996        1680
        P-IOPS r3.2xl   7956        1687
        GP-SSD r3.2xl   950         267
  11. SYSBENCH REQUESTS PER MYSQL INSTANCE
                                                     100% Read   100% Write   70/30 RW
        P-IOPS m4.4xl                                7996        1680         n/a
        Ceph cluster, 1x "m4.4xl" (14% capacity)     67144       5677         20053
        Ceph cluster, 6x "m4.4xl" (87% capacity)     40031       1258         4752
  12. CONVERTING SYSBENCH REQUESTS TO IOPS: READ PATH
      Sysbench read → X% served from the InnoDB buffer pool → IOPS = (read requests – X%)
  13. CONVERTING SYSBENCH REQUESTS TO IOPS: WRITE PATH
      Sysbench write → 1x read: X% served from the InnoDB buffer pool → read IOPS = (read req – X%)
      → 1x write: log + double write buffer → write IOPS = (write req × 2.3)
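The two conversion slides above can be sketched as a small helper. The 2.3x write multiplier comes from the slide; the request rates and buffer-pool hit percentage in the example are made up for illustration.

```python
def sysbench_to_iops(read_reqs, write_reqs, buffer_pool_hit_pct):
    # Reads served from the InnoDB buffer pool never reach storage;
    # only the misses become back-end read IOPS.
    read_iops = read_reqs * (1 - buffer_pool_hit_pct / 100.0)
    # Each write request is amplified ~2.3x by the redo log and the
    # InnoDB doublewrite buffer (multiplier taken from the slide).
    write_iops = write_reqs * 2.3
    return read_iops, write_iops

# Hypothetical workload: 10,000 read req/s at a 90% buffer-pool hit
# rate, plus 1,000 write req/s.
reads, writes = sysbench_to_iops(10_000, 1_000, 90)
print(f"{reads:.0f} read IOPS, {writes:.0f} write IOPS")
```

The point of the conversion is that Sysbench request counts overstate what the storage actually sees on reads (buffer-pool hits are free) and understate it on writes (logging and doublewrite multiply each request).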
  14. AWS IOPS/GB BASELINE: ~AS ADVERTISED!
                        100% Read   100% Write
        P-IOPS m4.4xl   30.0        25.6
        P-IOPS r3.2xl   29.8        25.7
        GP-SSD r3.2xl   3.6         4.1
  15. IOPS/GB PER MYSQL INSTANCE
                                                     Reads   Writes
        P-IOPS m4.4xl                                30      26
        Ceph cluster, 1x "m4.4xl" (14% capacity)     252     78
        Ceph cluster, 6x "m4.4xl" (87% capacity)     150     19
  16. FOCUSING ON WRITE IOPS/GB: AWS THROTTLE WATERMARK FOR DETERMINISTIC PERFORMANCE
      P-IOPS m4.4xl: 26 | Ceph cluster, 1x "m4.4xl" (14% capacity): 78 | Ceph cluster, 6x "m4.4xl" (87% capacity): 19
  17. EFFECT OF CEPH CLUSTER LOADING ON IOPS/GB
                                       100% Write   70/30 RW
        Ceph cluster (14% capacity)    78           134
        Ceph cluster (36% capacity)    37           72
        Ceph cluster (72% capacity)    25           37
        Ceph cluster (87% capacity)    19           36
  18. A NOTE ON WRITE AMPLIFICATION: MYSQL ON CEPH WRITE PATH
      MySQL INSERT → InnoDB double write buffer (x2) → Ceph replication (x2) → OSD journaling (x2)
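The amplification stages on the slide multiply together, which is why a single MySQL insert can land on flash several times over. A minimal sketch of that arithmetic, using the 2x factors as drawn on the slide (production Ceph pools commonly use 3x replication, which would push the total higher):

```python
# Each stage in the MySQL-on-Ceph write path doubles the bytes written;
# the factors compound multiplicatively.
stages = {
    "InnoDB doublewrite buffer": 2,  # every page is written twice by InnoDB
    "Ceph replication": 2,           # 2x replicated pool, per the slide diagram
    "OSD journaling": 2,             # FileStore journals each write before committing
}

amplification = 1
for factor in stages.values():
    amplification *= factor

print(f"end-to-end write amplification: {amplification}x")
```

With 3x replication the middle factor becomes 3 and the end-to-end amplification rises to 12x, which is why write-heavy IOPS/GB numbers are so sensitive to pool configuration.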
  19. HEAD-TO-HEAD PERFORMANCE
      30 IOPS/GB: AWS EBS P-IOPS target
      25 IOPS/GB: Ceph at 72% cluster capacity (writes)
      78 IOPS/GB: Ceph at 14% cluster capacity (writes)
  20. HEAD-TO-HEAD PRICE/PERFORMANCE
      $2.50: target AWS EBS P-IOPS storage cost per IOP
  21. IOPS/GB ON VARIOUS CONFIGS (Sysbench write)
        AWS EBS Provisioned-IOPS                       31
        Ceph on Supermicro FatTwin (72% capacity)      18
        Ceph on Supermicro MicroCloud (87% capacity)   18
        Ceph on Supermicro MicroCloud (14% capacity)   78
  22. $/STORAGE-IOP ON THE SAME CONFIGS (Sysbench write)
        AWS EBS Provisioned-IOPS                       $2.40
        Ceph on Supermicro FatTwin (72% capacity)      $0.80
        Ceph on Supermicro MicroCloud (87% capacity)   $0.78
        Ceph on Supermicro MicroCloud (14% capacity)   $1.06
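The $/IOP comparison reduces to dividing each configuration's storage cost by the write IOPS it can deliver at a given loading. A minimal sketch of that arithmetic; the cost and capacity figures below are placeholders, not numbers from the deck:

```python
def storage_dollars_per_iop(storage_cost_usd, usable_gb, write_iops_per_gb):
    # Deliverable write IOPS scales with capacity, so $/IOP is just
    # cost divided by (usable capacity x measured write IOPS/GB).
    return storage_cost_usd / (usable_gb * write_iops_per_gb)

# Hypothetical cluster: $50,000 of storage hardware, 5,000 usable GB,
# delivering 10 write IOPS/GB under load.
print(storage_dollars_per_iop(50_000, 5_000, 10))
```

Note the tension the deck highlights: running a Ceph cluster lightly loaded raises IOPS/GB (78 at 14% capacity) but strands most of the capacity you paid for, so $/IOP is worse ($1.06) than at high utilization ($0.78 at 87%).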
  23. HEAD-TO-HEAD PRICE/PERFORMANCE
      $2.50: target AWS P-IOPS $/IOP (EBS only)
      $0.78: Ceph on Supermicro MicroCloud cluster
  24. IOPS PERFORMANCE NODES FOR CEPH
  25. ARCHITECTURAL CONSIDERATIONS: UNDERSTANDING THE WORKLOAD
      Traditional Ceph workload: $/GB, PBs, unstructured data, MB/sec
      MySQL Ceph workload: $/IOP, TBs, structured data, IOPS
  26. ARCHITECTURAL CONSIDERATIONS: FUNDAMENTALLY DIFFERENT DESIGN
      Traditional Ceph workload: 50-300+ TB per server; magnetic media (HDD); low CPU-core:OSD ratio; 10GbE -> 40GbE
      MySQL Ceph workload: < 10 TB per server; flash (SSD -> NVMe); high CPU-core:OSD ratio; 10GbE
  27. CONSIDERING CORE-TO-FLASH RATIO (IOPS/GB)
                                                         100% Write   70/30 RW
        Ceph cluster, 80 cores, 8 NVMe (87% capacity)    18           34
        Ceph cluster, 40 cores, 4 NVMe (87% capacity)    18           34
        Ceph cluster, 80 cores, 4 NVMe (87% capacity)    19           36
        Ceph cluster, 80 cores, 12 NVMe (84% capacity)   6            8
  28. SUPERMICRO MICRO CLOUD CEPH MYSQL PERFORMANCE SKU
      8x nodes in 3U chassis. Model: SYS-5038MR-OSDXXXP
      Per-node configuration:
      • CPU: single Intel Xeon E5-2630 v4
      • Memory: 32GB
      • NVMe storage: single 800GB Intel P3700
      • Networking: 1x dual-port 10G SFP+
      Each node: 1x CPU + 1x NVMe + 1x SFP
  29. SEE US AT PERCONA LIVE!
      • Hands on Test Drive: MySQL on Ceph (April 18, 1:30-4:30)
      • MySQL on Ceph (April 19, 1:20-2:10)
      • MySQL in the Cloud: Head-to-Head Performance (April 19, 2:20-3:10)
      • Running MySQL Virtualized on Ceph: Which Hypervisor? (April 20, 3:30-4:20)
  30. THANK YOU!
