SSD/NVM Technology Boosting Ceph Performance
Jack Zhang, Enterprise Architect, Intel
Xinxin Shu, Software Engineer, Intel
Ping She, Technical Marketing Engineer, Intel
10/2015
Agenda - Overview
• SSDs for Ceph today; the future of SSDs is here: NVM Express™
• 3D NAND and 3D XPoint™ for Ceph tomorrow
• Yahoo! case study with Intel NVMe SSD + Intel Cache Acceleration Software
• All-SSD Ceph performance data review
• Summary, Q&A
SSD as Journal and Caching
Typical SSD usage in Ceph today

 SSD as journal drive + caching drive
 Intel Cache Acceleration Software (CAS) with hinting technology is optimized for Ceph
   Example: Intel CAS + Intel P3700 for Yahoo! Ceph
 Separate the journal and the cache onto different SSDs

SSD : HDD ratio (recommendation)
 Not all SSDs are the same; different densities may have different performance
 Example, Intel SSDs as journals (see the sizing sketch below):
   SATA SSD: 1x S3700 : 5 HDDs
   PCIe/NVMe SSD: 1x P3700 : 20 HDDs
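A minimal sizing sketch in Python based on the recommended ratios above; only the ratios come from the slide, the helper itself is illustrative:

```python
import math

# HDDs that one journal SSD can serve, per the recommendation above.
JOURNAL_RATIOS = {"sata_s3700": 5, "nvme_p3700": 20}

def journal_ssds_needed(num_hdds: int, ssd_type: str) -> int:
    """Number of journal SSDs required for a given HDD count."""
    return math.ceil(num_hdds / JOURNAL_RATIOS[ssd_type])

print(journal_ssds_needed(40, "sata_s3700"))  # -> 8 x S3700
print(journal_ssds_needed(40, "nvme_p3700"))  # -> 2 x P3700
```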
The Future is Here – NVM Express™

What is NVMe? NVM Express™ is a standardized, high-performance software interface for PCI Express® solid-state drives.

• Built for SSDs: architected from the ground up for SSDs to be efficient, scalable, and manageable
• Developed to be lean: a streamlined protocol with a new, efficient queuing mechanism that scales across multi-core CPUs at low clock cycles per IO
• Ready for next-generation SSDs: a new storage stack with low latency and small overhead to take full advantage of next-generation NVM
• Industry standard: software and drivers that work out of the box
ALL SSD Solutions for Ceph

• All NVMe/PCIe SSDs
  Best performance, but:
  a) High cost, low capacity (today)
  b) Limited by NIC bandwidth
• NVMe + low-cost, high-density SATA SSDs
  a) Best TCO for performance; can achieve higher storage density
  b) Example: one P3700 800GB as the journal drive, 4x S3510 1.6TB as data drives (see the layout sketch below)
• SSDs for client nodes
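A sketch of the per-node layout from the example above; the device paths and partition numbering are assumptions for illustration, not a prescribed configuration:

```python
# One P3700 NVMe drive carved into four journal partitions, paired with
# four S3510 SATA data SSDs (all paths are hypothetical).
data_ssds = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # 4x S3510 1.6TB

osds = [
    {"data": dev, "journal": f"/dev/nvme0n1p{i + 1}"}  # P3700 800GB partitions
    for i, dev in enumerate(data_ssds)
]
for osd in osds:
    print(osd)
```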
Accelerating Solid State Storage in Computing Platforms
Intel Continues to Drive Technology: 2D NAND → 3D NAND (32 tiers)

• CAPACITY: enables high-density flash devices
• COST: achieves lower cost per gigabyte than 2D NAND at maturity
• CONFIDENCE: the 3D architecture increases performance and endurance
Moore’s Law Continues to Disrupt the Computing Industry

• 1992: first Intel® SSD for commercial usage, 12 MB
• 2017: >10 TB U.2 SSD, roughly 1,000,000x the capacity while shrinking the form factor
• Projected capacities: 2014: >6 TB; 2017: >10 TB; 2018: >30 TB; 2019: 1xx TB
Source: Intel projections on SSD capacity
3D XPoint™ Technology
A new class of non-volatile memory media: NAND-like densities and DRAM-like speeds

• 1000x faster than NAND¹
• 1000x the endurance of NAND¹
• 10x denser than DRAM¹

¹ Technology claims are based on comparisons of latency, density and write cycling metrics among memory technologies recorded on published specifications of in-market memory products against internal Intel specifications.
3D XPoint™ Technology Breaks the Memory/Storage Barrier

Latency and capacity relative to SRAM:
• SRAM (memory): latency 1X, size of data 1X
• DRAM (memory): latency ~10X, size of data ~100X
• 3D XPoint™ memory media: latency ~100X, size of data ~1,000X
• NAND SSD (storage): latency ~100,000X, size of data ~1,000X
• HDD (storage): latency ~10 millionX, size of data ~10,000X

Technology claims are based on comparisons of latency, density and write cycling metrics among memory technologies recorded on published specifications of in-market memory products against internal Intel specifications.
New Faster SSDs Emerge: Where Will They Be Used?
Applications' and businesses' goal: lower TCO and execute on larger datasets

Extend DRAM
• In-memory databases
• Key-value stores, memcache, RDMA replacement, new applications

Faster SSD
• Real-time analytics targeting NoSQL databases: fraud detection, ad bidding, real-time decision making, trading
• Cloud hosting: amazing QoS for better application SLAs
• Ultra High Definition and professional video production

Caching
• Cloud hosting, all-flash arrays, SAN
• Write buffer for RAID arrays
NAND Flash vs. 3D XPoint™ Technology for Ceph Tomorrow

• 3D MLC and TLC NAND: enables higher-capacity OSDs at a lower price
• 3D XPoint™ Technology: higher performance, opening up new use cases (DRAM extension, key/value…)
Intel® NVMe SSD + Intel® CAS Accelerate Ceph for Yahoo!

CHALLENGE:
• How can Yahoo! use Ceph as a scalable storage solution? (High latency and low throughput due to erasure coding, double writes, and a huge number of small files.)
• Using over-provisioning to address performance is costly.

SOLUTION:
• Intel® NVMe SSD: consistently amazing
• Intel® CAS 3.0 feature: hinting
• Intel® CAS 3.0 fine-tuned for Yahoo!: cache metadata

YAHOO! PERFORMANCE GOAL: 2X throughput, 1/2 latency

COST REDUCTION:
• CapEx savings (over-provisioning)
• OpEx savings (power, space, cooling…)
• Improved scalability planning (performance and predictability)

(Diagram: User → Web Server (Ceph client) → Ceph Cluster)
Yahoo! Ceph Challenges

Huge number of small files. Why?
• Write twice (each write hits the journal, then the data store)
• Erasure coding
  • Great for data efficiency vs. RAID1-style replication
  • But bad for the number of files and the size of each file
• Example:
  • Erasure coding (8+3) gives higher utilization: 63% (vs. 25% for 3-way replication)
  • A 1 MB photo becomes 11 x 128 KB chunks
  • Number of files: 64–128 million
  • One file access becomes 3–4 disk accesses

IO performance: all 8 erasure-coded data chunks must be received, so latency is decided by the slowest chunk. A worked check of the chunking arithmetic follows.
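A worked check of the 8+3 chunking arithmetic above. This is pure math from the slide's numbers; the slide's 63% utilization figure presumably folds in overhead beyond the raw k/(k+m) ratio, which comes out to ~73%:

```python
K, M = 8, 3                        # data chunks + coding chunks
object_size = 1024 * 1024          # a 1 MB photo
chunk = object_size // K           # 131072 bytes = 128 KB per chunk
total_files = K + M                # 11 files spread across 11 OSDs
raw_bytes = chunk * total_files    # bytes actually stored for 1 MB of data

print(f"{total_files} x {chunk // 1024} KB chunks, "
      f"raw utilization {object_size / raw_bytes:.0%}")  # 11 x 128 KB, 73%
```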
Intel CAS Linux 3.0 Feature: Hinting

Not all data is equal; the cache treats different classes of data differently. An illustrative sketch follows.
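The deck does not show Intel CAS's hinting interface, so the sketch below only illustrates the general idea of class-based cache admission; every name in it is hypothetical:

```python
from enum import Enum

class IoClass(Enum):
    METADATA = 1   # inodes, dentries, FS journal: small, hot, latency-critical
    DATA = 2       # bulk object data: large, mostly cold

def should_cache(io_class: IoClass, cache_free_fraction: float) -> bool:
    """Always admit metadata; admit bulk data only when there is headroom."""
    if io_class is IoClass.METADATA:
        return True
    return cache_free_fraction > 0.5

assert should_cache(IoClass.METADATA, 0.01)   # metadata cached even when full
assert not should_cache(IoClass.DATA, 0.10)   # cold data bypasses a full cache
```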
Compelling Solution. Now What?

• Considerations to adopt:
 Use an Intel NVMe SSD as the cache
 Intel CAS Linux 3.0 with the hinting feature will be released by the end of this year
 Supports RHEL, SLES, and CentOS; ext4, ext3, and XFS
 Intel will help fine-tune performance for your Ceph workload
 Sign up with us as an early-engagement customer
YAHOO! PERFORMANCE GOAL: 2X throughput, 1/2 latency

• To learn more:
 Ceph IDF 2015 demo: https://www.youtube.com/watch?v=vtIIbxO4Zlk
 Yahoo! guest speaker at IDF 2015: http://intelstudios.edgesuite.net//idf/2015/sf/aep/SSDS002/SSDS002.html
• Take-away message: 5% caching for 2X performance!
System Configuration

(Topology: four FIO client nodes (CLIENT 1–4), each running multiple FIO instances against a four-node Ceph cluster (CEPH1–CEPH4); each Ceph node hosts multiple OSDs.)

Client nodes
• 4 nodes with Intel® Xeon™ processor E5-2699 v3 @ 2.30GHz, 64GB memory
• OS: Ubuntu Trusty

Storage nodes
• 4 nodes with Intel® Xeon™ processor E5-2699 v3 @ 2.30GHz, 64GB memory
• Ceph version: 0.94.2
• OS: Ubuntu Trusty
• SSD setup:
  • 16x Intel SSD DC S3700 400 GB for OSDs
  • 4x Intel SSD DC P3600 for journals

A sample FIO job sketch follows.
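The deck does not include the FIO job definitions; here is a minimal sketch of a 4K random-write job against RBD, assuming fio is built with rbd support and that the pool and image (rbd, fio_test) already exist; those names are illustrative:

```python
import subprocess

# A 4K random-write job of the kind implied by the test setup above.
FIO_JOB = """\
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio_test
direct=1
time_based=1
runtime=300

[4k-randwrite]
rw=randwrite
bs=4k
iodepth=64
"""

with open("rbd_randwrite.fio", "w") as f:
    f.write(FIO_JOB)
subprocess.run(["fio", "rbd_randwrite.fio"], check=True)  # requires fio + librbd
```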
SSD Cluster vs. HDD Cluster

• Compared to an HDD cluster (40 HDDs with journals on SSD):
 For 4K random write, you would need ~32x the HDD cluster (~1,300 HDDs) to match the SSD cluster's performance
 For 64K sequential write, you would need ~1.5x the HDD cluster (~60 HDDs) to match it

For use cases that require high performance, using SSDs can significantly reduce TCO; a rough check follows.
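A back-of-envelope restatement of the equivalence claims above. The HDD counts come from the slide; the per-drive prices are invented placeholders, so treat the output as illustrative only:

```python
SSD_COUNT = 16                         # data SSDs in the test cluster
hdds_for_4k_randwrite = 32 * 40        # ~1,280 HDDs to match 4K random write
hdds_for_64k_seqwrite = int(1.5 * 40)  # ~60 HDDs to match 64K sequential write
price_ssd, price_hdd = 400.0, 150.0    # assumed $/drive (placeholder prices)

print(f"SSD media: ${SSD_COUNT * price_ssd:,.0f} vs "
      f"${hdds_for_4k_randwrite * price_hdd:,.0f} in HDDs for equal 4K IOPS")
```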
Comparison with Journal on the Same SSD

Deployments compared:
1. Journal on a separate PCIe SSD
2. Journal on the same SATA SSD

The separate PCIe journal is ~67% higher for 64K sequential write and ~14% higher for 4K random write.
Comparison with Journal on a Separate SATA SSD

Deployments compared:
1. Journal on a separate PCIe SSD
2. Journal on a separate SATA SSD

Both deliver the same IOPS for 4K random write; the PCIe journal is ~14% higher for 64K sequential write. Using an NVMe SSD as the journal also frees up drive slots, which may eliminate the need for external storage enclosures.
Summary
• NVMe™ was built for high-performance SSDs with the future in mind, and it is ready today; using a PCIe SSD as the journal delivers higher performance and can eliminate external storage enclosures
• Intel CAS with hinting is a leading caching solution
• All-SSD solutions deliver the best performance yet and a bright TCO story, while there is still plenty of room for further improvement
• 3D NAND is the building block for high-capacity, low-cost SSDs: a candidate for future OSD drives
• 3D XPoint™ technology delivers high performance and low latency, opening up new, faster NVM applications and usages
Q & A
Legal Notices and Disclaimers
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer.
No computer system can be absolutely secure.
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information
to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost
savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction.
This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain
the latest forecast, schedule, specifications and roadmaps.
Statements in this document that refer to Intel’s plans and expectations for the quarter, the year, and the future, are forward-looking statements that involve a number of risks and uncertainties. A detailed
discussion of the factors that could affect Intel’s results and plans is included in Intel’s SEC filings, including the annual report on Form 10-K.
The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.
Intel, Xeon and the Intel logo are trademarks of Intel Corporation in the United States and other countries.
*Other names and brands may be claimed as the property of others.
© 2015 Intel Corporation.
Legal Information: Benchmark and Performance Claims
Disclaimers
Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as
SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those
factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated
purchases, including the performance of that product when combined with other products.
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual
performance. Consult other sources of information to evaluate performance as you consider your purchase.
Test and System Configurations: See Back up for details.
For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
Risk Factors
The above statements and any others in this document that refer to plans and expectations for the first quarter, the year and the future are forward-looking statements that involve
a number of risks and uncertainties. Words such as "anticipates," "expects," "intends," "plans," "believes," "seeks," "estimates," "may," "will," "should" and their variations identify
forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Many factors could
affect Intel's actual results, and variances from Intel's current expectations regarding such factors could cause actual results to differ materially from those expressed in these
forward-looking statements. Intel presently considers the following to be important factors that could cause actual results to differ materially from the company's expectations.
Demand for Intel’s products is highly variable and could differ from expectations due to factors including changes in the business and economic conditions; consumer confidence
or income levels; customer acceptance of Intel’s and competitors’ products; competitive and pricing pressures, including actions taken by competitors; supply constraints and other
disruptions affecting customers; changes in customer order patterns including order cancellations; and changes in the level of inventory at customers. Intel’s gross margin
percentage could vary significantly from expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying products
for sale; changes in revenue levels; segment product mix; the timing and execution of the manufacturing ramp and associated costs; excess or obsolete inventory; changes in unit
costs; defects or disruptions in the supply of materials or resources; and product manufacturing quality/yields. Variations in gross margin may also be caused by the timing of Intel
product introductions and related expenses, including marketing expenses, and Intel’s ability to respond quickly to technological developments and to introduce new features into
existing products, which may result in restructuring and asset impairment charges. Intel's results could be affected by adverse economic, social, political and physical/infrastructure
conditions in countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters, infrastructure disruptions, health
concerns and fluctuations in currency exchange rates. Results may also be affected by the formal or informal imposition by countries of new or revised export and/or import and
doing-business regulations, which could be changed without prior notice. Intel operates in highly competitive industries and its operations have high costs that are either fixed or
difficult to reduce in the short term. The amount, timing and execution of Intel’s stock repurchase program and dividend program could be affected by changes in Intel’s priorities
for the use of cash, such as operational spending, capital spending, acquisitions, and as a result of changes to Intel’s cash flows and changes in tax laws. Product defects or errata
(deviations from published specifications) may adversely impact our expenses, revenues and reputation. Intel’s results could be affected by litigation or regulatory matters
involving intellectual property, stockholder, consumer, antitrust, disclosure and other issues. An unfavorable ruling could include monetary damages or an injunction prohibiting
Intel from manufacturing or selling one or more products, precluding particular business practices, impacting Intel’s ability to design its products, or requiring other remedies such
as compulsory licensing of intellectual property. Intel’s results may be affected by the timing of closing of acquisitions, divestitures and other significant transactions. A detailed
discussion of these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the company’s most recent reports on Form 10-Q, Form 10-K and
earnings release.
Rev. 1/15/15

Contenu connexe

Tendances

Ambedded - how to build a true no single point of failure ceph cluster
Ambedded - how to build a true no single point of failure ceph cluster Ambedded - how to build a true no single point of failure ceph cluster
Ambedded - how to build a true no single point of failure ceph cluster inwin stack
 
Ceph Day Beijing - SPDK for Ceph
Ceph Day Beijing - SPDK for CephCeph Day Beijing - SPDK for Ceph
Ceph Day Beijing - SPDK for CephDanielle Womboldt
 
Ceph Day San Jose - Red Hat Storage Acceleration Utlizing Flash Technology
Ceph Day San Jose - Red Hat Storage Acceleration Utlizing Flash TechnologyCeph Day San Jose - Red Hat Storage Acceleration Utlizing Flash Technology
Ceph Day San Jose - Red Hat Storage Acceleration Utlizing Flash TechnologyCeph Community
 
09 yong.luo-ceph in-ctrip
09 yong.luo-ceph in-ctrip09 yong.luo-ceph in-ctrip
09 yong.luo-ceph in-ctripYong Luo
 
Ceph Day Tokyo -- Ceph on All-Flash Storage
Ceph Day Tokyo -- Ceph on All-Flash StorageCeph Day Tokyo -- Ceph on All-Flash Storage
Ceph Day Tokyo -- Ceph on All-Flash StorageCeph Community
 
Ceph Day Beijing - Optimizing Ceph Performance by Leveraging Intel Optane and...
Ceph Day Beijing - Optimizing Ceph Performance by Leveraging Intel Optane and...Ceph Day Beijing - Optimizing Ceph Performance by Leveraging Intel Optane and...
Ceph Day Beijing - Optimizing Ceph Performance by Leveraging Intel Optane and...Danielle Womboldt
 
Why Software-Defined Storage Matters
Why Software-Defined Storage MattersWhy Software-Defined Storage Matters
Why Software-Defined Storage MattersColleen Corrice
 
Red Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super StorageRed Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super StorageRed_Hat_Storage
 
NGENSTOR_ODA_HPDA
NGENSTOR_ODA_HPDANGENSTOR_ODA_HPDA
NGENSTOR_ODA_HPDAUniFabric
 
Ceph Day KL - Ceph on All-Flash Storage
Ceph Day KL - Ceph on All-Flash Storage Ceph Day KL - Ceph on All-Flash Storage
Ceph Day KL - Ceph on All-Flash Storage Ceph Community
 
Red Hat® Ceph Storage and Network Solutions for Software Defined Infrastructure
Red Hat® Ceph Storage and Network Solutions for Software Defined InfrastructureRed Hat® Ceph Storage and Network Solutions for Software Defined Infrastructure
Red Hat® Ceph Storage and Network Solutions for Software Defined InfrastructureIntel® Software
 
Benefity Oracle Cloudu (4/4): Storage
Benefity Oracle Cloudu (4/4): StorageBenefity Oracle Cloudu (4/4): Storage
Benefity Oracle Cloudu (4/4): StorageMarketingArrowECS_CZ
 
Red Hat Storage Day Boston - Why Software-defined Storage Matters
Red Hat Storage Day Boston - Why Software-defined Storage MattersRed Hat Storage Day Boston - Why Software-defined Storage Matters
Red Hat Storage Day Boston - Why Software-defined Storage MattersRed_Hat_Storage
 
Developing a Ceph Appliance for Secure Environments
Developing a Ceph Appliance for Secure EnvironmentsDeveloping a Ceph Appliance for Secure Environments
Developing a Ceph Appliance for Secure EnvironmentsCeph Community
 
Ceph Day Melabourne - Community Update
Ceph Day Melabourne - Community UpdateCeph Day Melabourne - Community Update
Ceph Day Melabourne - Community UpdateCeph Community
 
Ceph Day Tokyo - Ceph on ARM: Scaleable and Efficient
Ceph Day Tokyo - Ceph on ARM: Scaleable and Efficient Ceph Day Tokyo - Ceph on ARM: Scaleable and Efficient
Ceph Day Tokyo - Ceph on ARM: Scaleable and Efficient Ceph Community
 
Ceph Day Taipei - Accelerate Ceph via SPDK
Ceph Day Taipei - Accelerate Ceph via SPDK Ceph Day Taipei - Accelerate Ceph via SPDK
Ceph Day Taipei - Accelerate Ceph via SPDK Ceph Community
 
10 reasons why to choose Pure Storage
10 reasons why to choose Pure Storage10 reasons why to choose Pure Storage
10 reasons why to choose Pure StorageMarketingArrowECS_CZ
 
Ceph Day KL - Delivering cost-effective, high performance Ceph cluster
Ceph Day KL - Delivering cost-effective, high performance Ceph clusterCeph Day KL - Delivering cost-effective, high performance Ceph cluster
Ceph Day KL - Delivering cost-effective, high performance Ceph clusterCeph Community
 

Tendances (20)

Ambedded - how to build a true no single point of failure ceph cluster
Ambedded - how to build a true no single point of failure ceph cluster Ambedded - how to build a true no single point of failure ceph cluster
Ambedded - how to build a true no single point of failure ceph cluster
 
Ceph Day Beijing - SPDK for Ceph
Ceph Day Beijing - SPDK for CephCeph Day Beijing - SPDK for Ceph
Ceph Day Beijing - SPDK for Ceph
 
Ceph Day San Jose - Red Hat Storage Acceleration Utlizing Flash Technology
Ceph Day San Jose - Red Hat Storage Acceleration Utlizing Flash TechnologyCeph Day San Jose - Red Hat Storage Acceleration Utlizing Flash Technology
Ceph Day San Jose - Red Hat Storage Acceleration Utlizing Flash Technology
 
09 yong.luo-ceph in-ctrip
09 yong.luo-ceph in-ctrip09 yong.luo-ceph in-ctrip
09 yong.luo-ceph in-ctrip
 
Ceph Day Tokyo -- Ceph on All-Flash Storage
Ceph Day Tokyo -- Ceph on All-Flash StorageCeph Day Tokyo -- Ceph on All-Flash Storage
Ceph Day Tokyo -- Ceph on All-Flash Storage
 
Ceph Day Beijing - Optimizing Ceph Performance by Leveraging Intel Optane and...
Ceph Day Beijing - Optimizing Ceph Performance by Leveraging Intel Optane and...Ceph Day Beijing - Optimizing Ceph Performance by Leveraging Intel Optane and...
Ceph Day Beijing - Optimizing Ceph Performance by Leveraging Intel Optane and...
 
Why Software-Defined Storage Matters
Why Software-Defined Storage MattersWhy Software-Defined Storage Matters
Why Software-Defined Storage Matters
 
Red Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super StorageRed Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super Storage
 
NGENSTOR_ODA_HPDA
NGENSTOR_ODA_HPDANGENSTOR_ODA_HPDA
NGENSTOR_ODA_HPDA
 
Ceph Day KL - Ceph on All-Flash Storage
Ceph Day KL - Ceph on All-Flash Storage Ceph Day KL - Ceph on All-Flash Storage
Ceph Day KL - Ceph on All-Flash Storage
 
Red Hat® Ceph Storage and Network Solutions for Software Defined Infrastructure
Red Hat® Ceph Storage and Network Solutions for Software Defined InfrastructureRed Hat® Ceph Storage and Network Solutions for Software Defined Infrastructure
Red Hat® Ceph Storage and Network Solutions for Software Defined Infrastructure
 
Benefity Oracle Cloudu (4/4): Storage
Benefity Oracle Cloudu (4/4): StorageBenefity Oracle Cloudu (4/4): Storage
Benefity Oracle Cloudu (4/4): Storage
 
Red Hat Storage Day Boston - Why Software-defined Storage Matters
Red Hat Storage Day Boston - Why Software-defined Storage MattersRed Hat Storage Day Boston - Why Software-defined Storage Matters
Red Hat Storage Day Boston - Why Software-defined Storage Matters
 
Developing a Ceph Appliance for Secure Environments
Developing a Ceph Appliance for Secure EnvironmentsDeveloping a Ceph Appliance for Secure Environments
Developing a Ceph Appliance for Secure Environments
 
Ceph Day Melabourne - Community Update
Ceph Day Melabourne - Community UpdateCeph Day Melabourne - Community Update
Ceph Day Melabourne - Community Update
 
Ceph Day Tokyo - Ceph on ARM: Scaleable and Efficient
Ceph Day Tokyo - Ceph on ARM: Scaleable and Efficient Ceph Day Tokyo - Ceph on ARM: Scaleable and Efficient
Ceph Day Tokyo - Ceph on ARM: Scaleable and Efficient
 
Ceph Day Taipei - Accelerate Ceph via SPDK
Ceph Day Taipei - Accelerate Ceph via SPDK Ceph Day Taipei - Accelerate Ceph via SPDK
Ceph Day Taipei - Accelerate Ceph via SPDK
 
Infinidat InfiniBox
Infinidat InfiniBoxInfinidat InfiniBox
Infinidat InfiniBox
 
10 reasons why to choose Pure Storage
10 reasons why to choose Pure Storage10 reasons why to choose Pure Storage
10 reasons why to choose Pure Storage
 
Ceph Day KL - Delivering cost-effective, high performance Ceph cluster
Ceph Day KL - Delivering cost-effective, high performance Ceph clusterCeph Day KL - Delivering cost-effective, high performance Ceph cluster
Ceph Day KL - Delivering cost-effective, high performance Ceph cluster
 

En vedette

Ceph Day Shanghai - Recovery Erasure Coding and Cache Tiering
Ceph Day Shanghai - Recovery Erasure Coding and Cache TieringCeph Day Shanghai - Recovery Erasure Coding and Cache Tiering
Ceph Day Shanghai - Recovery Erasure Coding and Cache TieringCeph Community
 
Ceph Day Shanghai - Ceph in Ctrip
Ceph Day Shanghai - Ceph in CtripCeph Day Shanghai - Ceph in Ctrip
Ceph Day Shanghai - Ceph in CtripCeph Community
 
Ceph Day Taipei - Bring Ceph to Enterprise
Ceph Day Taipei - Bring Ceph to EnterpriseCeph Day Taipei - Bring Ceph to Enterprise
Ceph Day Taipei - Bring Ceph to EnterpriseCeph Community
 
AF Ceph: Ceph Performance Analysis and Improvement on Flash
AF Ceph: Ceph Performance Analysis and Improvement on FlashAF Ceph: Ceph Performance Analysis and Improvement on Flash
AF Ceph: Ceph Performance Analysis and Improvement on FlashCeph Community
 
Ceph Day Seoul - Delivering Cost Effective, High Performance Ceph cluster
Ceph Day Seoul - Delivering Cost Effective, High Performance Ceph cluster Ceph Day Seoul - Delivering Cost Effective, High Performance Ceph cluster
Ceph Day Seoul - Delivering Cost Effective, High Performance Ceph cluster Ceph Community
 
Ceph Day Seoul - AFCeph: SKT Scale Out Storage Ceph
Ceph Day Seoul - AFCeph: SKT Scale Out Storage Ceph Ceph Day Seoul - AFCeph: SKT Scale Out Storage Ceph
Ceph Day Seoul - AFCeph: SKT Scale Out Storage Ceph Ceph Community
 
Ceph Day Seoul - Ceph on All-Flash Storage
Ceph Day Seoul - Ceph on All-Flash Storage Ceph Day Seoul - Ceph on All-Flash Storage
Ceph Day Seoul - Ceph on All-Flash Storage Ceph Community
 
Ceph Day Shanghai - Ceph Performance Tools
Ceph Day Shanghai - Ceph Performance Tools Ceph Day Shanghai - Ceph Performance Tools
Ceph Day Shanghai - Ceph Performance Tools Ceph Community
 
Ceph Day Taipei - Ceph Tiering with High Performance Architecture
Ceph Day Taipei - Ceph Tiering with High Performance Architecture Ceph Day Taipei - Ceph Tiering with High Performance Architecture
Ceph Day Taipei - Ceph Tiering with High Performance Architecture Ceph Community
 
iSCSI Target Support for Ceph
iSCSI Target Support for Ceph iSCSI Target Support for Ceph
iSCSI Target Support for Ceph Ceph Community
 
Ceph Day KL - Bluestore
Ceph Day KL - Bluestore Ceph Day KL - Bluestore
Ceph Day KL - Bluestore Ceph Community
 
Ceph Day Seoul - Community Update
Ceph Day Seoul - Community UpdateCeph Day Seoul - Community Update
Ceph Day Seoul - Community UpdateCeph Community
 
Ceph Day Tokyo - Bit-Isle's 3 years footprint with Ceph
Ceph Day Tokyo - Bit-Isle's 3 years footprint with Ceph Ceph Day Tokyo - Bit-Isle's 3 years footprint with Ceph
Ceph Day Tokyo - Bit-Isle's 3 years footprint with Ceph Ceph Community
 
Ceph Day Tokyo - High Performance Layered Architecture
Ceph Day Tokyo - High Performance Layered Architecture  Ceph Day Tokyo - High Performance Layered Architecture
Ceph Day Tokyo - High Performance Layered Architecture Ceph Community
 
Ceph Day Tokyo - Bring Ceph to Enterprise
Ceph Day Tokyo - Bring Ceph to Enterprise Ceph Day Tokyo - Bring Ceph to Enterprise
Ceph Day Tokyo - Bring Ceph to Enterprise Ceph Community
 
Ceph Day Tokyo - Delivering cost effective, high performance Ceph cluster
Ceph Day Tokyo - Delivering cost effective, high performance Ceph clusterCeph Day Tokyo - Delivering cost effective, high performance Ceph cluster
Ceph Day Tokyo - Delivering cost effective, high performance Ceph clusterCeph Community
 
Ceph Day Tokyo - Ceph Community Update
Ceph Day Tokyo - Ceph Community Update Ceph Day Tokyo - Ceph Community Update
Ceph Day Tokyo - Ceph Community Update Ceph Community
 
How to Become a Thought Leader in Your Niche
How to Become a Thought Leader in Your NicheHow to Become a Thought Leader in Your Niche
How to Become a Thought Leader in Your NicheLeslie Samuel
 

En vedette (19)

Ceph Day Shanghai - Recovery Erasure Coding and Cache Tiering
Ceph Day Shanghai - Recovery Erasure Coding and Cache TieringCeph Day Shanghai - Recovery Erasure Coding and Cache Tiering
Ceph Day Shanghai - Recovery Erasure Coding and Cache Tiering
 
librados
libradoslibrados
librados
 
Ceph Day Shanghai - Ceph in Ctrip
Ceph Day Shanghai - Ceph in CtripCeph Day Shanghai - Ceph in Ctrip
Ceph Day Shanghai - Ceph in Ctrip
 
Ceph Day Taipei - Bring Ceph to Enterprise
Ceph Day Taipei - Bring Ceph to EnterpriseCeph Day Taipei - Bring Ceph to Enterprise
Ceph Day Taipei - Bring Ceph to Enterprise
 
AF Ceph: Ceph Performance Analysis and Improvement on Flash
AF Ceph: Ceph Performance Analysis and Improvement on FlashAF Ceph: Ceph Performance Analysis and Improvement on Flash
AF Ceph: Ceph Performance Analysis and Improvement on Flash
 
Ceph Day Seoul - Delivering Cost Effective, High Performance Ceph cluster
Ceph Day Seoul - Delivering Cost Effective, High Performance Ceph cluster Ceph Day Seoul - Delivering Cost Effective, High Performance Ceph cluster
Ceph Day Seoul - Delivering Cost Effective, High Performance Ceph cluster
 
Ceph Day Seoul - AFCeph: SKT Scale Out Storage Ceph
Ceph Day Seoul - AFCeph: SKT Scale Out Storage Ceph Ceph Day Seoul - AFCeph: SKT Scale Out Storage Ceph
Ceph Day Seoul - AFCeph: SKT Scale Out Storage Ceph
 
Ceph Day Seoul - Ceph on All-Flash Storage
Ceph Day Seoul - Ceph on All-Flash Storage Ceph Day Seoul - Ceph on All-Flash Storage
Ceph Day Seoul - Ceph on All-Flash Storage
 
Ceph Day Shanghai - Ceph Performance Tools
Ceph Day Shanghai - Ceph Performance Tools Ceph Day Shanghai - Ceph Performance Tools
Ceph Day Shanghai - Ceph Performance Tools
 
Ceph Day Taipei - Ceph Tiering with High Performance Architecture
Ceph Day Taipei - Ceph Tiering with High Performance Architecture Ceph Day Taipei - Ceph Tiering with High Performance Architecture
Ceph Day Taipei - Ceph Tiering with High Performance Architecture
 
iSCSI Target Support for Ceph
iSCSI Target Support for Ceph iSCSI Target Support for Ceph
iSCSI Target Support for Ceph
 
Ceph Day KL - Bluestore
Ceph Day KL - Bluestore Ceph Day KL - Bluestore
Ceph Day KL - Bluestore
 
Ceph Day Seoul - Community Update
Ceph Day Seoul - Community UpdateCeph Day Seoul - Community Update
Ceph Day Seoul - Community Update
 
Ceph Day Tokyo - Bit-Isle's 3 years footprint with Ceph
Ceph Day Tokyo - Bit-Isle's 3 years footprint with Ceph Ceph Day Tokyo - Bit-Isle's 3 years footprint with Ceph
Ceph Day Tokyo - Bit-Isle's 3 years footprint with Ceph
 
Ceph Day Tokyo - High Performance Layered Architecture
Ceph Day Tokyo - High Performance Layered Architecture  Ceph Day Tokyo - High Performance Layered Architecture
Ceph Day Tokyo - High Performance Layered Architecture
 
Ceph Day Tokyo - Bring Ceph to Enterprise
Ceph Day Tokyo - Bring Ceph to Enterprise Ceph Day Tokyo - Bring Ceph to Enterprise
Ceph Day Tokyo - Bring Ceph to Enterprise
 
Ceph Day Tokyo - Delivering cost effective, high performance Ceph cluster
Ceph Day Tokyo - Delivering cost effective, high performance Ceph clusterCeph Day Tokyo - Delivering cost effective, high performance Ceph cluster
Ceph Day Tokyo - Delivering cost effective, high performance Ceph cluster
 
Ceph Day Tokyo - Ceph Community Update
Ceph Day Tokyo - Ceph Community Update Ceph Day Tokyo - Ceph Community Update
Ceph Day Tokyo - Ceph Community Update
 
How to Become a Thought Leader in Your Niche
How to Become a Thought Leader in Your NicheHow to Become a Thought Leader in Your Niche
How to Become a Thought Leader in Your Niche
 

Similaire à Ceph Day Shanghai - SSD/NVM Technology Boosting Ceph Performance

Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based HardwareRed hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based HardwareRed_Hat_Storage
 
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...Red_Hat_Storage
 
Ceph Day San Jose - All-Flahs Ceph on NUMA-Balanced Server
Ceph Day San Jose - All-Flahs Ceph on NUMA-Balanced Server Ceph Day San Jose - All-Flahs Ceph on NUMA-Balanced Server
Ceph Day San Jose - All-Flahs Ceph on NUMA-Balanced Server Ceph Community
 
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...Red_Hat_Storage
 
Webinar: The Bifurcation of the Flash Market
Webinar: The Bifurcation of the Flash MarketWebinar: The Bifurcation of the Flash Market
Webinar: The Bifurcation of the Flash MarketStorage Switzerland
 
Ceph Day Taipei - Ceph on All-Flash Storage
Ceph Day Taipei - Ceph on All-Flash Storage Ceph Day Taipei - Ceph on All-Flash Storage
Ceph Day Taipei - Ceph on All-Flash Storage Ceph Community
 
Impact of Intel Optane Technology on HPC
Impact of Intel Optane Technology on HPCImpact of Intel Optane Technology on HPC
Impact of Intel Optane Technology on HPCMemVerge
 
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B...
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B...Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B...
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B...Odinot Stanislas
 
Benchmarking Performance: Benefits of PCIe NVMe SSDs for Client Workloads
Benchmarking Performance: Benefits of PCIe NVMe SSDs for Client WorkloadsBenchmarking Performance: Benefits of PCIe NVMe SSDs for Client Workloads
Benchmarking Performance: Benefits of PCIe NVMe SSDs for Client WorkloadsSamsung Business USA
 
Reimagining HPC Compute and Storage Architecture with Intel Optane Technology
Reimagining HPC Compute and Storage Architecture with Intel Optane TechnologyReimagining HPC Compute and Storage Architecture with Intel Optane Technology
Reimagining HPC Compute and Storage Architecture with Intel Optane Technologyinside-BigData.com
 
Building Data Pipelines with SMACK: Designing Storage Strategies for Scale an...
Building Data Pipelines with SMACK: Designing Storage Strategies for Scale an...Building Data Pipelines with SMACK: Designing Storage Strategies for Scale an...
Building Data Pipelines with SMACK: Designing Storage Strategies for Scale an...DataStax
 
Implementation of Dense Storage Utilizing HDDs with SSDs and PCIe Flash Acc...
Implementation of Dense Storage Utilizing  HDDs with SSDs and PCIe Flash  Acc...Implementation of Dense Storage Utilizing  HDDs with SSDs and PCIe Flash  Acc...
Implementation of Dense Storage Utilizing HDDs with SSDs and PCIe Flash Acc...Red_Hat_Storage
 
RedisConf18 - Re-architecting Redis-on-Flash with Intel 3DX Point™ Memory
RedisConf18 - Re-architecting Redis-on-Flash with Intel 3DX Point™ MemoryRedisConf18 - Re-architecting Redis-on-Flash with Intel 3DX Point™ Memory
RedisConf18 - Re-architecting Redis-on-Flash with Intel 3DX Point™ MemoryRedis Labs
 
Intel ssd dc data center family for PCIe
Intel ssd dc data center family for PCIeIntel ssd dc data center family for PCIe
Intel ssd dc data center family for PCIeLow Hong Chuan
 
Ceph Day Taipei - Delivering cost-effective, high performance, Ceph cluster
Ceph Day Taipei - Delivering cost-effective, high performance, Ceph cluster Ceph Day Taipei - Delivering cost-effective, high performance, Ceph cluster
Ceph Day Taipei - Delivering cost-effective, high performance, Ceph cluster Ceph Community
 
Presentation architecting a cloud infrastructure
Presentation   architecting a cloud infrastructurePresentation   architecting a cloud infrastructure
Presentation architecting a cloud infrastructurexKinAnx
 
Presentation architecting a cloud infrastructure
Presentation   architecting a cloud infrastructurePresentation   architecting a cloud infrastructure
Presentation architecting a cloud infrastructuresolarisyourep
 
Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology Red_Hat_Storage
 
IMCSummit 2015 - Day 1 Developer Track - Evolution of non-volatile memory exp...
IMCSummit 2015 - Day 1 Developer Track - Evolution of non-volatile memory exp...IMCSummit 2015 - Day 1 Developer Track - Evolution of non-volatile memory exp...
IMCSummit 2015 - Day 1 Developer Track - Evolution of non-volatile memory exp...In-Memory Computing Summit
 
Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions
Ceph on Intel: Intel Storage Components, Benchmarks, and ContributionsCeph on Intel: Intel Storage Components, Benchmarks, and Contributions
Ceph on Intel: Intel Storage Components, Benchmarks, and ContributionsRed_Hat_Storage
 

Similaire à Ceph Day Shanghai - SSD/NVM Technology Boosting Ceph Performance (20)

Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based HardwareRed hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
 
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...
 
Ceph Day San Jose - All-Flahs Ceph on NUMA-Balanced Server
Ceph Day San Jose - All-Flahs Ceph on NUMA-Balanced Server Ceph Day San Jose - All-Flahs Ceph on NUMA-Balanced Server
Ceph Day San Jose - All-Flahs Ceph on NUMA-Balanced Server
 
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
 
Webinar: The Bifurcation of the Flash Market
Webinar: The Bifurcation of the Flash MarketWebinar: The Bifurcation of the Flash Market
Webinar: The Bifurcation of the Flash Market
 
Ceph Day Taipei - Ceph on All-Flash Storage
Ceph Day Taipei - Ceph on All-Flash Storage Ceph Day Taipei - Ceph on All-Flash Storage
Ceph Day Taipei - Ceph on All-Flash Storage
 
Impact of Intel Optane Technology on HPC
Impact of Intel Optane Technology on HPCImpact of Intel Optane Technology on HPC
Impact of Intel Optane Technology on HPC
 
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B...
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B...Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B...
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B...
 
Benchmarking Performance: Benefits of PCIe NVMe SSDs for Client Workloads
Benchmarking Performance: Benefits of PCIe NVMe SSDs for Client WorkloadsBenchmarking Performance: Benefits of PCIe NVMe SSDs for Client Workloads
Benchmarking Performance: Benefits of PCIe NVMe SSDs for Client Workloads
 
Reimagining HPC Compute and Storage Architecture with Intel Optane Technology
Reimagining HPC Compute and Storage Architecture with Intel Optane TechnologyReimagining HPC Compute and Storage Architecture with Intel Optane Technology
Reimagining HPC Compute and Storage Architecture with Intel Optane Technology
 
Building Data Pipelines with SMACK: Designing Storage Strategies for Scale an...
Building Data Pipelines with SMACK: Designing Storage Strategies for Scale an...Building Data Pipelines with SMACK: Designing Storage Strategies for Scale an...
Building Data Pipelines with SMACK: Designing Storage Strategies for Scale an...
 
Implementation of Dense Storage Utilizing HDDs with SSDs and PCIe Flash Acc...
Implementation of Dense Storage Utilizing  HDDs with SSDs and PCIe Flash  Acc...Implementation of Dense Storage Utilizing  HDDs with SSDs and PCIe Flash  Acc...
Implementation of Dense Storage Utilizing HDDs with SSDs and PCIe Flash Acc...
 
RedisConf18 - Re-architecting Redis-on-Flash with Intel 3DX Point™ Memory
RedisConf18 - Re-architecting Redis-on-Flash with Intel 3DX Point™ MemoryRedisConf18 - Re-architecting Redis-on-Flash with Intel 3DX Point™ Memory
RedisConf18 - Re-architecting Redis-on-Flash with Intel 3DX Point™ Memory
 
Intel ssd dc data center family for PCIe
Intel ssd dc data center family for PCIeIntel ssd dc data center family for PCIe
Intel ssd dc data center family for PCIe
 
Ceph Day Taipei - Delivering cost-effective, high performance, Ceph cluster
Ceph Day Taipei - Delivering cost-effective, high performance, Ceph cluster Ceph Day Taipei - Delivering cost-effective, high performance, Ceph cluster
Ceph Day Taipei - Delivering cost-effective, high performance, Ceph cluster
 
Presentation architecting a cloud infrastructure
Presentation   architecting a cloud infrastructurePresentation   architecting a cloud infrastructure
Presentation architecting a cloud infrastructure
 
Presentation architecting a cloud infrastructure
Presentation   architecting a cloud infrastructurePresentation   architecting a cloud infrastructure
Presentation architecting a cloud infrastructure
 
Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology
 
IMCSummit 2015 - Day 1 Developer Track - Evolution of non-volatile memory exp...
IMCSummit 2015 - Day 1 Developer Track - Evolution of non-volatile memory exp...IMCSummit 2015 - Day 1 Developer Track - Evolution of non-volatile memory exp...
IMCSummit 2015 - Day 1 Developer Track - Evolution of non-volatile memory exp...
 
Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions
Ceph on Intel: Intel Storage Components, Benchmarks, and ContributionsCeph on Intel: Intel Storage Components, Benchmarks, and Contributions
Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions
 

Dernier

Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxhariprasad279825
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfRankYa
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Wonjun Hwang
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii SoldatenkoFwdays
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 

Dernier (20)

Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdf
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 

Ceph Day Shanghai - SSD/NVM Technology Boosting Ceph Performance

  • 1. SSD/NVM Technology Boosting Ceph Performance JACK Zhang, Enterprise Architect, Intel Xinxin Shu, Software Engineer, Intel Ping She, Technical Mkt Engineer, Intel 10/2015
  • 2. Agenda - Overview • SSDs for Ceph today, The future SSDs is here -NVM Express™ • 3D NAND and 3D XPoint™ for Ceph tomorrow • Yahoo! Case study w/Intel NVMe SSD+Intel Cache Acceleration software • ALL SSD Ceph performance data reviews • Summary, Q&A 2
  • 3. Agenda • SSDs for Ceph today, The future SSDs is here -NVM Express™ • 3D NAND and 3D XPoint™ for Ceph tomorrow • Yahoo! Case study w/Intel NVMe SSD+Intel Cache Acceleration software • ALL SSD Ceph performance data reviews • Summary, Q&A 3
  • 4. SSD as Journal and Caching  SSD as Journal drive + Caching drive  Intel Cache Acceleration Software (CAS with hinting technology is optimized for Ceph Example, Intel CAS + Intel P3700 for Yahoo! Ceph  Separate Journal and Caching with different SSD SSD : HDD ratio (recommendation)  Not all SSDs are same, different density may have different performance  Example, Intel SSD as Journal SATA SSD: 1 S3700: 5 HDDs PCIe/NVMe SSD: 1 P3700 : 20 HDDs Typical SSD usage today at Ceph 4
  • 5. The Future is Here – NVM Express™ 1 2 3 4 5 Built for SSDs Architected from the ground up for SSDs to be efficient, scalable, and manageable Developed to be lean Streamlined protocol with new efficient queuing mechanism to scale for multi-core CPUs, low clock cycles per IO What is NVMe? NVM Express is a standardized high performance software interface for PCI Express® Solid-State Drives Ready for next generation SSDs New storage stack with low latency and small overhead to take full advantage of next generation NVM Industry standard software and drivers that work out of the box TM 5
  • 6. • All NVMe/PCIe SSDs Best performance but: a) High cost, low capacity (today) b) NIC bandwidth • NVMe + Low Cost SATA high density SSDs a) Best TCO for performance, can have higher storage density b) Example, P3700 800GB for Journal drive, 4x S3510 1.6TB as data • SSDs for Client nodes ALL SSD Solutions for Ceph 6
  • 7. Agenda • SSDs for Ceph today, The future SSDs is here -NVM Express™ • 3D NAND and 3D XPoint™ for Ceph tomorrow • Case study, Intel NVMe SSD+iCAS for Yahoo Ceph • ALL SSD Ceph performance data reviews • Summary, Q&A 7
  • 8. Accelerating Solid State Storage in Computing Platforms 3D NAND2D NAND 32 Tiers CAPACITY Enables high-density flash devices COST Achieves lower cost per gigabyte than 2D NAND at maturity CONFIDENCE 3D architecture increases performance and endurance Intel Continues to Drive Technology 8
  • 9. Moore’s Law Continues to Disrupt the Computing Industry U.2 SSD First Intel® SSD for Commercial Usage 2017 >10TB 1,000,000x the capacity while shrinking the form factor 1992 12MB Source: Intel projections on SSD capacity 2019201820172014 >6TB >30TB 1xxTB >10TB 9
  • 10. 3DXpoint™TECHNOLOGY A new class of non-volatile memory Media 1Technology claims are based on comparisons of latency, density and write cycling metrics amongst memory technologies recorded on published specifications of in-market memory products against internal Intel specifications 1000XfasterTHAN NAND1 1000XenduranceOF NAND1 10XdenserTHAN DRAM1 Nand-likedensitiesanddram-likespeeds 10
  • 11. NAND SSD Latency: ~100,000X Size of Data: ~1,000X Latency: 1X Size of Data: 1X SRAM Latency: ~10 MillionX Size of Data: ~10,000X HDD Latency: ~10X Size of Data: ~100X DRAM 3D XPoint ™ Memory Media Latency: ~100X Size of Data: ~1,000X STORAGE MEMORYMEMORYTechnology claims are based on comparisons of latency, density and write cycling metrics amongst memory technologies recorded on published specifications of in-market memory products against internal Intel specifications. 3DXpoint™TECHNOLOGYBreaks the Memory Storage Barrier 11
  • 12. New Faster SSDs Emerge, Where Will They be Used? 12 Extend DRAM In memory database Key value store, memcache, RDMA replacement, NEW Applications/Businesses goal is to lower TCO and execute larger datasets Faster SSD Real time analytics – targeting noSQL databases Fraud detection, ad bidding, real time decision making, trading Cloud hosting – amazing QoS for better application SLAs Ultra High Definition all professional video production Caching Cloud hosting, all flash array, SAN Write buffer for RAID array
  • 13. NAND Flash vs 3D XPoint™ Technology for Ceph tomorrow Enable higher capacity OSDs at lower price 3D MLC and TLC NAND Higher performance, opening up new use cases, DRAM extended, Key/value… 3D XPoint™ Technology 13
  • 14. Agenda • SSDs for Ceph today, The future SSDs is here -NVM Express™ • 3D NAND and 3D XPoint™ for Ceph tomorrow • Yahoo! Case study w/Intel NVMe SSD+Intel Cache Acceleration software • ALL SSD Ceph performance data reviews • Summary, Q&A 14
  • 15. Intel® NVMe SSD Accelerates Ceph for CHALLENGE: • How can I use Ceph for scalable storage solution? (High latency and low throughput due to erasure coding, write twice, huge number of small files) • Use over-provision to address performance is costly. Intel® CAS SOLUTION: • Intel® NVMe SSD – consistently amazing • Intel® CAS 3.0 feature – hinting • Intel® CAS 3.0 fine tuned for Yahoo! – cache metadata YAHOO! PERFORMANCE GOAL: 2X Throughput 1/2 LATENCY COST REDUCTION: • CapEx savings (over-provision ) OpEx savings (Power, Space, Cooling… ) • Improved scalability planning (Performance and Predictability ) User Web Server Client Ceph Cluster 15
  • 16. Yahoo! Ceph Challenges. Why a huge number of small files? • Data is written twice. • Erasure coding: great for data efficiency vs. RAID-1 replication, but bad for file count and per-file size. • Example: erasure coding (8+3) gives higher utilization, 63% (vs. 25% for 3-way replication); a 1MB photo becomes 11 chunks of 128KB each; 64–128 million files overall; one file access becomes 3–4 disk accesses. • IO performance: all 8 erasure-coded data chunks must be received, so latency is decided by the slowest chunk (worst latency, not best latency).
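To make that arithmetic concrete, here is a minimal sketch of the 8+3 chunking math. Note that the raw k/(k+m) ratio is ~73%, so the slide's 63% figure presumably folds in additional metadata and filesystem overhead; the 25% replication figure appears to count four total copies.

```python
# Back-of-envelope math for the 8+3 erasure-coding example (illustrative only).
K, M = 8, 3                         # 8 data chunks + 3 coding chunks
object_size_kb = 1024               # a 1MB photo

chunk_kb = object_size_kb // K      # 1MB / 8 = 128KB per chunk
chunks_written = K + M              # 11 chunks stored per object

raw_utilization = K / (K + M)       # ~72.7%; the slide's 63% presumably adds
                                    # metadata/filesystem overhead on top
replication_utilization = 1 / 4     # 25% if "3 replications" means 4 total copies

print(f"1MB object -> {chunks_written} x {chunk_kb}KB chunks")
print(f"raw EC utilization: {raw_utilization:.1%} vs replication: {replication_utilization:.0%}")
```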
  • 17. Intel CAS Linux 3.0 Feature – Hinting. Not all data is equal; as a cache, we treat different classes of data differently. (The original slide links to a demo video.)
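The deck does not describe Intel CAS internals, so the following is only a conceptual sketch of what classification-driven ("hinted") cache admission means; all names are hypothetical and this is not the Intel CAS API.

```python
# Conceptual sketch of hint-based cache admission (hypothetical names, not
# the Intel CAS API): classify each IO, and admit into cache only the
# classes that pay off, e.g. filesystem metadata for a small-file workload.
from collections import OrderedDict

class HintedCache:
    def __init__(self, capacity_blocks, cached_classes=("metadata",)):
        self.capacity = capacity_blocks
        self.cached_classes = set(cached_classes)
        self.blocks = OrderedDict()              # LRU order: block id -> data

    def access(self, block_id, io_class, read_backend):
        if block_id in self.blocks:              # cache hit
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        data = read_backend(block_id)            # miss: read from the HDD tier
        if io_class in self.cached_classes:      # admit only hinted classes
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict least-recently used
            self.blocks[block_id] = data
        return data
```

With a small-file erasure-coded workload like Yahoo!'s, steering metadata into a small cache is the idea behind the "5% caching for 2x performance" takeaway on the next slide.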
  • 18. Compelling Solution. Now What? • Considerations for adoption: use an Intel NVMe SSD as cache; Intel CAS Linux 3.0 with the hinting feature will be released by the end of this year; it supports RHEL, SLES, and CentOS on ext4, ext3, and xfs; Intel will help fine-tune performance for your Ceph workload; sign up with us as an early-engagement customer. (Yahoo! performance goal: 2x throughput, 1/2 latency.) • To learn more: Ceph IDF 2015 demo: https://www.youtube.com/watch?v=vtIIbxO4Zlk; Yahoo! guest speaker at IDF 2015: http://intelstudios.edgesuite.net//idf/2015/sf/aep/SSDS002/SSDS002.html • Takeaway message: 5% caching for 2x performance!
  • 19. Agenda • SSDs for Ceph today; the future of SSDs is here - NVM Express™ • 3D NAND and 3D XPoint™ for Ceph tomorrow • Yahoo! case study with Intel NVMe SSD + Intel Cache Acceleration Software • All-SSD Ceph performance data review • Summary, Q&A
  • 20. System Configuration. (Topology diagram: 4 FIO client nodes, CLIENT 1–4, driving a 4-node Ceph cluster, CEPH1–CEPH4, each storage node running multiple OSDs.) Client nodes: • 4 nodes with Intel® Xeon® processor E5-2699 v3 @ 2.30GHz, 64GB memory • OS: Ubuntu Trusty. Storage nodes: • 4 nodes with Intel® Xeon® processor E5-2699 v3 @ 2.30GHz, 64GB memory • Ceph version: 0.94.2 • OS: Ubuntu Trusty • SSD setup: 16 × DC S3700 400GB for OSDs, 4 × DC P3600 SSDs for journals.
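The deck does not publish its fio job files, so this is only an illustrative sketch of how the two workloads measured on the following slides (4K random write, 64K sequential write) could be driven against an RBD image. Pool name, image name, client name, queue depth, and runtime are all assumptions; fio must be built with the rbd ioengine and the pool/image must already exist.

```python
# Illustrative fio invocation against RBD (assumed parameters; not the
# deck's actual job files).
import subprocess

def run_fio(rw, bs, pool="rbd", image="fio_test", runtime_s=300):
    cmd = [
        "fio", f"--name={rw}-{bs}",
        "--ioengine=rbd", "--clientname=admin",
        f"--pool={pool}", f"--rbdname={image}",
        f"--rw={rw}", f"--bs={bs}",
        "--iodepth=32", "--numjobs=4",
        f"--runtime={runtime_s}", "--time_based", "--group_reporting",
    ]
    subprocess.run(cmd, check=True)

run_fio("randwrite", "4k")   # the 4K random-write workload
run_fio("write", "64k")      # the 64K sequential-write workload
```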
  • 21. SSD Cluster vs. HDD Cluster • Compared to an HDD cluster (40 HDDs with journals on SSD): for 4K random write, a ~32x larger HDD cluster (~1,300 HDDs) would be needed to match the SSD cluster's performance; for 64K sequential write, a ~1.5x larger HDD cluster (~60 HDDs) would be needed. For use cases that require high performance, using SSDs can significantly reduce TCO. (A back-of-envelope check of these multipliers follows.)
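A quick sanity check of the drive counts implied above; the 40-drive baseline and the 32x/1.5x factors come from the slide, everything else is arithmetic.

```python
# Rough check of the sizing multipliers quoted on the slide.
hdd_baseline_drives = 40     # baseline: 40 HDDs, journals on SSD
rand_factor = 32             # 4K random-write gap per the slide
seq_factor = 1.5             # 64K sequential-write gap per the slide

print(f"4K randwrite:  ~{hdd_baseline_drives * rand_factor} HDDs")       # ~1280, i.e. "~1,300"
print(f"64K seq write: ~{hdd_baseline_drives * seq_factor:.0f} HDDs")    # ~60
```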
  • 22. Comparison with Journal on the Same SSD. Two deployments were compared: (1) journal on a separate PCIe SSD, and (2) journal colocated on the same SATA SSD as the data. The separate PCIe journal is ~67% faster for 64K sequential write and ~14% faster for 4K random write.
  • 23. Comparison with Journal on a Separate SATA SSD. Two deployments were compared: journal on a separate PCIe SSD vs. journal on a separate SATA SSD. The PCIe journal delivers the same IOPS for 4K random write and ~14% higher performance for 64K sequential write. Using an NVMe SSD as the journal also frees up drive slots, potentially eliminating external storage enclosures. (A sketch of the journal-to-OSD layout follows.)
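For concreteness, here is a small sketch of the journal layout this test bed implies: 4 NVMe journal drives in front of 16 SATA SSD OSDs, i.e. 4 OSD journals per NVMe device. All device paths are assumed for illustration.

```python
# Sketch of the test bed's journal layout: 16 SATA SSD OSDs, each
# journaling to a partition on one of 4 NVMe drives (device paths are
# assumptions, not the deck's actual configuration).
osds = [f"/dev/sd{chr(ord('b') + i)}" for i in range(16)]   # sdb .. sdq
journals = [f"/dev/nvme{i}n1" for i in range(4)]            # nvme0n1 .. nvme3n1

for idx, osd_dev in enumerate(osds):
    journal_dev = journals[idx // 4]      # 4 OSDs share one NVMe drive
    partition = idx % 4 + 1               # one journal partition per OSD
    print(f"OSD {idx:2d}: data={osd_dev}  journal={journal_dev}p{partition}")
```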
  • 24. Agenda • SSDs for Ceph today; the future of SSDs is here - NVM Express™ • 3D NAND and 3D XPoint™ for Ceph tomorrow • Yahoo! case study with Intel NVMe SSD + Intel Cache Acceleration Software • All-SSD Ceph performance data review • Summary, Q&A
  • 25. Summary • NVMe™ was built for high-performance SSDs with the future in mind, and it is ready today: using a PCIe SSD as the journal delivers higher performance and can eliminate external storage enclosures. • Intel CAS with hinting is a leading caching solution. • All-SSD solutions provide the best performance yet and promising TCO, with plenty of room for further improvement. • 3D NAND is the building block for high-capacity, low-cost SSDs, a candidate for future OSD drives. • 3D XPoint™ technology delivers high performance and low latency, opening up new, faster NVM applications and usages.
  • 27. Legal Notices and Disclaimers Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer. No computer system can be absolutely secure. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance. Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction. This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps. Statements in this document that refer to Intel’s plans and expectations for the quarter, the year, and the future, are forward-looking statements that involve a number of risks and uncertainties. A detailed discussion of the factors that could affect Intel’s results and plans is included in Intel’s SEC filings, including the annual report on Form 10-K. The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document. Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate. Intel, Xeon and the Intel logo are trademarks of Intel Corporation in the United States and other countries. *Other names and brands may be claimed as the property of others. © 2015 Intel Corporation.
  • 28. Legal Information: Benchmark and Performance Claims Disclaimers Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. Test and system configurations: see the backup slides for details. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
  • 29. Risk Factors The above statements and any others in this document that refer to plans and expectations for the first quarter, the year and the future are forward-looking statements that involve a number of risks and uncertainties. Words such as "anticipates," "expects," "intends," "plans," "believes," "seeks," "estimates," "may," "will," "should" and their variations identify forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Many factors could affect Intel's actual results, and variances from Intel's current expectations regarding such factors could cause actual results to differ materially from those expressed in these forward-looking statements. Intel presently considers the following to be important factors that could cause actual results to differ materially from the company's expectations.

Demand for Intel’s products is highly variable and could differ from expectations due to factors including changes in the business and economic conditions; consumer confidence or income levels; customer acceptance of Intel’s and competitors’ products; competitive and pricing pressures, including actions taken by competitors; supply constraints and other disruptions affecting customers; changes in customer order patterns including order cancellations; and changes in the level of inventory at customers. Intel’s gross margin percentage could vary significantly from expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying products for sale; changes in revenue levels; segment product mix; the timing and execution of the manufacturing ramp and associated costs; excess or obsolete inventory; changes in unit costs; defects or disruptions in the supply of materials or resources; and product manufacturing quality/yields. Variations in gross margin may also be caused by the timing of Intel product introductions and related expenses, including marketing expenses, and Intel’s ability to respond quickly to technological developments and to introduce new features into existing products, which may result in restructuring and asset impairment charges.

Intel's results could be affected by adverse economic, social, political and physical/infrastructure conditions in countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters, infrastructure disruptions, health concerns and fluctuations in currency exchange rates. Results may also be affected by the formal or informal imposition by countries of new or revised export and/or import and doing-business regulations, which could be changed without prior notice. Intel operates in highly competitive industries and its operations have high costs that are either fixed or difficult to reduce in the short term. The amount, timing and execution of Intel’s stock repurchase program and dividend program could be affected by changes in Intel’s priorities for the use of cash, such as operational spending, capital spending, acquisitions, and as a result of changes to Intel’s cash flows and changes in tax laws. Product defects or errata (deviations from published specifications) may adversely impact our expenses, revenues and reputation.

Intel’s results could be affected by litigation or regulatory matters involving intellectual property, stockholder, consumer, antitrust, disclosure and other issues. An unfavorable ruling could include monetary damages or an injunction prohibiting Intel from manufacturing or selling one or more products, precluding particular business practices, impacting Intel’s ability to design its products, or requiring other remedies such as compulsory licensing of intellectual property. Intel’s results may be affected by the timing of closing of acquisitions, divestitures and other significant transactions. A detailed discussion of these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the company’s most recent reports on Form 10-Q, Form 10-K and earnings release. Rev. 1/15/15