Ceph Day Beijing - SPDK for Ceph
Ziye Yang, Senior Software Engineer
Notices and Disclaimers
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or
service activation. Learn more at intel.com, or from the OEM or retailer.
No computer system can be absolutely secure.
Software and workloads used in performance tests may have been optimized for performance only on Intel
microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems,
components, software, operations and functions. Any change to any of those factors may cause the results to vary. You
should consult other information and performance tests to assist you in fully evaluating your contemplated purchases,
including the performance of that product when combined with other products. For more complete information visit
http://www.intel.com/performance.
Intel, the Intel logo, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names
and brands may be claimed as the property of others.
© 2017 Intel Corporation.
• SPDK introduction and status update
• Current SPDK support in BlueStore
• Case study: Accelerate iSCSI service exported by Ceph
• SPDK support for Ceph in 2017
• Summary
The Problem: Software is becoming the bottleneck
The Opportunity: Use Intel software ingredients to
unlock the potential of new media
Storage media (I/O performance vs. latency):
HDD: <500 IO/s, >2 ms latency
SATA NAND SSD: >25,000 IO/s
NVMe* NAND SSD: >400,000 IO/s, <100 µs latency
Intel® Optane™ SSD: <100 µs latency
Storage Performance Development Kit
Scalable and Efficient Software Ingredients
• User space, lockless, polled-mode components
• Up to millions of IOPS per core
• Designed for Intel Optane™ technology latencies
Intel® Platform Storage Reference Architecture
• Optimized for Intel platform characteristics
• Open source building blocks (BSD licensed)
• Available via spdk.io
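To make the "user space, lockless, polled-mode" point concrete, here is a minimal sketch of the SPDK NVMe driver's I/O model: probe a local PCIe NVMe controller, allocate an I/O queue pair, submit a 4 KB read, and poll for its completion. It is illustrative only; it assumes an SPDK build providing spdk/env.h and spdk/nvme.h, uses current API names rather than the exact 17.03-era signatures, and omits most error handling.

```c
/* Minimal sketch of SPDK's user-space, polled-mode NVMe I/O path. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr;
static struct spdk_nvme_ns *g_ns;
static bool g_done;

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    return true; /* attach to every local NVMe controller found */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    g_ctrlr = ctrlr;
    g_ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1); /* first namespace */
}

static void
read_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
    g_done = true;
}

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "nvme_poll_demo";
    spdk_env_init(&opts);

    /* Enumerate and attach local PCIe NVMe devices with the user-space driver. */
    if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0 || g_ns == NULL) {
        return 1;
    }

    struct spdk_nvme_qpair *qpair = spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);
    void *buf = spdk_dma_zmalloc(4096, 4096, NULL); /* DMA-able buffer */

    /* Submit one read of LBA 0 and poll for completion: no interrupts and
     * no system calls in the I/O path. */
    if (spdk_nvme_ns_cmd_read(g_ns, qpair, buf, 0, 1, read_complete, NULL, 0) != 0) {
        return 1;
    }
    while (!g_done) {
        spdk_nvme_qpair_process_completions(qpair, 0);
    }

    printf("read completed via polled-mode user-space driver\n");
    spdk_dma_free(buf);
    spdk_nvme_ctrlr_free_io_qpair(qpair);
    spdk_nvme_detach(g_ctrlr);
    return 0;
}
```

The polling loop is the key difference from a kernel driver: completions are reaped by the application thread itself, which is what enables the lockless, per-core scaling claimed above.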
Architecture (layered diagram):
• Storage Protocols: iSCSI Target, NVMe-oF* Target, SCSI, vhost-scsi Target, vhost-blk Target, Object
• Storage Services: Block Device Abstraction (BDEV), Blobstore, BlobFS; bdev modules: Ceph RBD, Linux Async IO, Blob bdev, 3rd Party, NVMe
• Drivers: NVMe (NVMe* PCIe Driver) for NVMe devices, NVMe-oF* Initiator, Intel® QuickData Technology Driver
• Integration: RocksDB, Ceph
• Core: Application Framework
(Diagram legend: Released / Q2'17 / Pathfinding)
Benefits of using SPDK
• More performance from Intel CPUs, non-volatile media, and networking
• Faster TTM / less resources than developing components from scratch
• Up to 10X more IOPS/core for NVMe-oF* vs. the Linux kernel
• Up to 8X more IOPS/core for NVMe vs. the Linux kernel
• Up to 350% better tail latency for RocksDB workloads
• Future proofing: provides headroom as NVM technologies increase in performance
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance
SPDK Updates: 17.03 Release (Mar 2017)
Blobstore
• Block allocator for applications
• Variable granularity, defaults to 4KB
BlobFS
• Lightweight, non-POSIX filesystem
• Page caching & prefetch
• Initially limited to DB file semantic
requirements (e.g. file name and size)
RocksDB SPDK Environment
• Implement RocksDB using BlobFS
QEMU vhost-scsi Target
• Simplified I/O path to local QEMU
guest VMs with unmodified apps
NVMe over Fabrics Improvements
• Read latency improvement
• NVMe-oF Host (Initiator) zero-copy
• Discovery code simplification
• Quality, performance & hardening fixes
New components: broader set of use cases for SPDK libraries & ingredients
Existing components: feature and hardening improvements
Current status
Fully realizing new media performance requires software optimizations
SPDK positioned to enable developers to realize this performance
SPDK available today via http://spdk.io
Help us build SPDK as an open source community!
Current SPDK support in BlueStore
New features
 Support multiple threads doing I/O on NVMe SSDs via the SPDK user-space NVMe driver
 Support running SPDK I/O threads on designated CPU cores, specified in the configuration file (see the sketch below)
Upgrades in Ceph (now at SPDK 17.03)
 Upgraded SPDK to 16.11 in Dec 2016
 Upgraded SPDK to 17.03 in April 2017
Stability
 Fixed several compilation issues and runtime bugs encountered while using SPDK
In total, 16 SPDK-related patches have been merged into BlueStore (mainly in the NVMEDEVICE module)
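For the configuration-file support mentioned above, a minimal ceph.conf sketch, assuming the Luminous-era BlueStore/SPDK option names (bluestore_block_path with the spdk: prefix, bluestore_spdk_mem, bluestore_spdk_coremask); the device identifier, memory size, and core mask are placeholder values only:

```ini
# Hypothetical OSD section; values are placeholders.
[osd]
# The "spdk:" prefix tells BlueStore to use the SPDK user-space NVMe driver;
# the identifier after the prefix selects the NVMe device.
bluestore_block_path = spdk:55cd2e404c53dcba
# Hugepage memory (MB) reserved for the SPDK environment.
bluestore_spdk_mem = 512
# CPU cores on which SPDK I/O threads are allowed to run.
bluestore_spdk_coremask = 0x3
```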
(From iStaury’s talk in SPDK PRC meetup 2016)
Block service exported by Ceph via iSCSI protocol
 Cloud service providers that provision VM services can use iSCSI.
 If Ceph can export a block service with good performance, it is easy to glue those providers to a Ceph cluster solution.
Diagram: on the client, an application sits on multipath (dm-1 over sdx/sdy) on top of an iSCSI initiator; two iSCSI gateway nodes each run an iSCSI target backed by RBD; behind them, the Ceph cluster consists of multiple OSDs.
iSCSI + RBD Gateway
Ceph server
 CPU: Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHz
 Four Intel P3700 SSDs
 One OSD on each SSD, 4 OSDs in total
 4 pools with PG number 512, one 10G image per pool
iSCSI target server (librbd+SPDK / librbd+tgt)
 CPU: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
 Only one core enabled
iSCSI initiator
 CPU: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
Diagram: iSCSI initiator → iSCSI target server (iSCSI target + librbd) → Ceph server (OSD0-OSD3).
iSCSI + RBD Gateway
One CPU core, 4K random IOPS by number of fio streams (1 / 2 / 3 fio + images):
 TGT + 4k_randread:             10K / 20K / 20K
 SPDK iSCSI tgt + 4k_randread:  20K / 24K / 28K   (SPDK iSCSI tgt/TGT ratio: 140%)
 TGT + 4k_randwrite:            6.5K / 9.5K / 18K
 SPDK iSCSI tgt + 4k_randwrite: 14K / 19K / 24K   (SPDK iSCSI tgt/TGT ratio: 133%)
iSCSI + RBD Gateway
Two CPU cores, 4K random IOPS by number of fio streams (1 / 2 / 3 / 4 fio + images):
 TGT + 4k_randread:             12K / 24K / 26K / 26K
 SPDK iSCSI tgt + 4k_randread:  37K / 47K / 47K / 47K   (SPDK iSCSI tgt/TGT ratio: 181%)
 TGT + 4k_randwrite:            9.5K / 13.5K / 19K / 22K
 SPDK iSCSI tgt + 4k_randwrite: 16K / 24K / 25K / 27K   (SPDK iSCSI tgt/TGT ratio: 123%)
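The slides do not state how the "SPDK iSCSI tgt/TGT ratio" column is derived; the values are consistent with taking the IOPS ratio at the highest stream count in each table, for example:

$$\frac{28\text{K}}{20\text{K}} = 140\%,\quad \frac{24\text{K}}{18\text{K}} \approx 133\%,\quad \frac{47\text{K}}{26\text{K}} \approx 181\%,\quad \frac{27\text{K}}{22\text{K}} \approx 123\%$$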
Reading Comparison
Chart: 4K_randread IOPS (K) by configuration and stream count (1 / 2 / 3 streams):
 One core, TGT:         10 / 20 / 20
 One core, SPDK-iSCSI:  20 / 24 / 28
 Two cores, TGT:        12 / 24 / 26
 Two cores, SPDK-iSCSI: 37 / 47 / 47
Writing Comparison
Chart: 4K_randwrite IOPS (K) by configuration and stream count:
 One core, TGT:         6.5 / 9.5 / 18          (1 / 2 / 3 streams)
 One core, SPDK-iSCSI:  14 / 19 / 24
 Two cores, TGT:        9.5 / 13.5 / 19 / 22    (1 / 2 / 3 / 4 streams)
 Two cores, SPDK-iSCSI: 16 / 24 / 25 / 27
SPDK support for Ceph in 2017
To make SPDK really useful in Ceph, we will continue the following work with partners:
 Continued stability maintenance
– Version upgrades; fixing compile-time and run-time bugs.
 Performance enhancement
– Continue optimizing the NVMEDEVICE module according to customer and partner feedback.
 New feature development
– Occasionally pick up common requirements and feedback from the community and upstream those features into the NVMEDEVICE module.
Proposals/opportunities for better leveraging SPDK
Multiple OSDs on the same NVMe device by using SPDK (see the sketch after this list)
 Leverage SPDK's multi-process feature in the user-space NVMe driver.
 Risk: same as with the kernel driver, i.e., all OSDs on the device fail if the device fails.
Enhance cache support in NVMEDEVICE using SPDK
 Need a better cache/buffer strategy for read/write performance improvement.
Optimize RocksDB usage in BlueStore with SPDK's BlobFS/Blobstore
 Make RocksDB use SPDK's BlobFS/Blobstore instead of a kernel file system for metadata management.
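A minimal sketch of the multi-process proposal, assuming each OSD-like process initializes the SPDK environment with the same shm_id so the user-space NVMe driver's multi-process support lets several processes share one SSD. The helper function, process name, and core masks are illustrative, not actual Ceph code:

```c
/* Hypothetical per-OSD initialization; all values are placeholders. */
#include <stdbool.h>
#include <stddef.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    return true; /* attach to every local NVMe controller */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    /* stash the controller handle for this OSD's I/O threads */
}

int
init_shared_spdk_env(int osd_index)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "ceph-osd-spdk";                        /* per-process name */
    opts.shm_id = 1;               /* same ID in every OSD sharing the SSD */
    opts.core_mask = (osd_index == 0) ? "0x1" : "0x2";  /* pin I/O threads */
    spdk_env_init(&opts);

    /* Each process probes/attaches; the driver coordinates access to the
     * shared controller through the shared-memory region named by shm_id. */
    return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}
```

As the slide notes, the failure domain is unchanged: if the shared SSD fails, every OSD on it fails, just as with the kernel driver.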
Leverage SPDK to accelerate the block service exported by Ceph
Optimization in front of Ceph
 Use an optimized block service daemon, e.g., the SPDK iSCSI target or NVMe-oF target
 Introduce a cache policy in the block service daemon.
Storage optimization inside Ceph
 Use SPDK's user-space NVMe driver instead of the kernel NVMe driver (already available)
 Possibly replace "BlueRocksEnv + BlueFS" with "BlobfsEnv + Blobfs/Blobstore".
Accelerate block service exported by Ceph via SPDK (diagram):
 In front of Ceph: the Ceph RBD service is exported through an SPDK-optimized iSCSI target or NVMe-oF target, built on an SPDK Ceph RBD bdev module (leveraging librbd/librados) and an SPDK cache module. The diagram distinguishes existing SPDK apps/modules, existing Ceph services/components, and optimized modules still to be developed (TBD in the SPDK roadmap).
 Inside Ceph: BlueStore (alongside FileStore and KVStore) keeps metadata in RocksDB. Today RocksDB runs on BlueRocksEnv + BlueFS over the kernel or SPDK driver on the NVMe device; the proposal replaces this stack with SPDK BlobfsEnv + Blobfs/Blobstore over the SPDK NVMe driver.
 Open question: even replace RocksDB?
Summary
SPDK proves useful for exploiting the capabilities of fast storage devices (e.g., NVMe SSDs).
But it still needs a lot of development work to make SPDK useful for BlueStore at production quality.
Call for action:
 Contribute code in the SPDK community
 Leverage SPDK for Ceph optimization; you are welcome to contact the SPDK dev team for help and collaboration.
Vhost-scsi Performance
SPDK provides 1 million IOPS with 1 core and 8x VM performance vs. kernel!
Features and realized benefits:
 High-performance storage virtualization → increased VM density
 Reduced VM exits → reduced tail latencies
System Configuration: Target system: 2x Intel® Xeon® E5-2695v4 (HT off), Intel® Speed Step enabled, Intel® Turbo Boost Technology enabled, 8x 8GB DDR4 2133 MT/s, 1 DIMM per channel, 8x Intel® P3700 NVMe SSD (800GB),
4x per CPU socket, FW 8DV10102, Network: Mellanox* ConnectX-4 100Gb RDMA, direct connection between initiator and target; Initiator OS: CentOS* Linux* 7.2, Linux kernel 4.7.0-rc2, Target OS (SPDK): CentOS Linux 7.2, Linux
kernel 3.10.0-327.el7.x86_64, Target OS (Linux kernel): CentOS Linux 7.2, Linux kernel 4.7.0-rc2 Performance as measured by: fio, 4KB Random Read I/O, 2 RDMA QP per remote SSD, Numjobs=4 per SSD, Queue Depth: 32/job
Chart: CPU cores used per configuration (VM cores / I/O processing cores): QEMU virtio-scsi 10 / 17, kernel vhost-scsi 10 / 8, SPDK vhost-scsi 10 / 1.
Chart: I/Os handled per I/O processing core for QEMU virtio-scsi, kernel vhost-scsi, and SPDK vhost-scsi (axis up to 1,000,000).
Alibaba* Cloud ECS Case Study: Write Performance
Source: http://mt.sohu.com/20170228/n481925423.shtml
* Other names and brands may be claimed as the property of others
Ali Cloud sees 300% improvement
in IOPS and latency using SPDK
Charts: Random Write Latency (usec) and Random Write 4K IOPS vs. queue depth (1, 2, 4, 8, 16, 32), comparing General Virtualization Infrastructure with Ali Cloud High-Performance Storage Infrastructure with SPDK.
Alibaba* Cloud ECS Case Study: MySQL Sysbench
Source: http://mt.sohu.com/20170228/n481925423.shtml
* Other names and brands may be claimed as the property of others
Sysbench Update sees 4.6X QPS at
10% of the latency!
Charts: MySQL Sysbench latency (ms) and TPS/QPS for Select and Update, comparing General Virtualization Infrastructure with High Performance Virtualization with SPDK.
SPDK Blobstore vs. Kernel: Key Tail Latency
db_bench 99.99th percentile latency (µs), lower is better:
                                           Insert   Randread   Overwrite   Readwrite
 Kernel (256KB sync)                        366      6444       1675       122500
 SPDK Blobstore (20GB cache + readahead)    444      3607       1200        33052
SPDK Blobstore reduces tail latency by 3.7X (372% for readwrite; the chart callouts for insert, randread, and overwrite are 21%, 44%, and 28%).
SPDK Blobstore vs. Kernel: Key Transactions per Second
db_bench key transactions (keys per second), higher is better:
                                           Insert    Randread   Overwrite   Readwrite
 Kernel (256KB sync)                        547046     92582      51421      30273
 SPDK Blobstore (20GB cache + readahead)   1011245     99918      53495      29804
SPDK Blobstore improves insert throughput by 85% (chart callouts: 85%, 8%, 4%, ~0%).