QCT Ceph Solution – Design Consideration and Reference Architecture
Gary Lee
AVP, QCT
• Industry Trend and Customer Needs
• Ceph Architecture
• Technology
• Ceph Reference Architecture and QCT Solution
• Test Result
• QCT/Red Hat Ceph Whitepaper
2
AGENDA
3
Industry Trend and Customer Needs
4
• Structured Data -> Unstructured/Structured Data
• Data -> Big Data, Fast Data
• Data Processing -> Data Modeling -> Data Science
• IT -> DT
• Monolithic -> Microservice
5
Industry Trend
• Scalable Size
• Variable Type
• Long Lifetime
• Distributed Location
• Versatile Workload
6
• Affordable Price
• Available Service
• Continuous Innovation
• Consistent Management
• Neutral Vendor
Customer Needs
7
Ceph Architecture
Ceph Storage Cluster
8
[Diagram] A scale-out cluster of identical nodes connected over a cluster network; each node runs Ceph on Linux with commodity CPU, memory, SSD, HDD and NIC, and the cluster serves object, block and file access.
• Unified Storage
• Scale-out Cluster
• Open Source Software
• Open Commodity Hardware
9
End-to-end Data Path
[Diagram] App/Service -> Ceph client interface (RBD for block I/O, RADOSGW for object I/O, CephFS for file I/O) -> public network -> RADOS/cluster network -> OSD -> file system I/O -> disk I/O.
10
Ceph Software Architecture
Ceph Hardware Architecture
[Diagram] Clients and Ceph monitors connect to Nx Ceph OSD nodes (RCT or RCC) over a public network (e.g. 10GbE or 40GbE); a separate cluster network (e.g. 10GbE or 40GbE) links the OSD nodes.
12
Technology
13
• 2x Intel E5-2600 CPU
• 16x DDR4 Memory
• 12x 3.5” SAS/SATA HDD
• 4x SATA SSD + PCIe M.2
• 1x SATADOM
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 Mezz Card
• 1x PCIe x8 SAS Controller
• 1U
QCT Ceph Storage Server
D51PH-1ULH
14
• Mono/Dual Node
• 2x Intel E5-2600 CPU
• 16x DDR4 Memory
• 78x (mono node) or 2x 35x (dual node) SSD/HDD
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 SAS Controller
• 1x PCIe x8 HHLH Card
• 1x PCIe x16 FHHL Card
• 4U
QCT Ceph Storage Server
T21P-4U
15
• 1x Intel Xeon D SoC CPU
• 4x DDR4 Memory
• 12x SAS/SATA HDD
• 4x SATA SSD
• 2x SATA SSD for OS
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 Mezz Card
• 1x PCIe x8 SAS Controller
• 1U
QCT Ceph Storage Server
SD1Q-1ULH
• Standalone, without EC
• Standalone, with EC
• Hyper-converged, without EC
• High Core vs. High Frequency
• 1x OSD ≈ (0.3-0.5)x core + 2GB RAM (see the sizing sketch below)
16
CPU/Memory
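A minimal sizing sketch based on the rule of thumb above (roughly 0.3-0.5 core and 2 GB RAM per OSD). The helper function and the 16 GB OS/overhead headroom are illustrative assumptions, not QCT guidance.

```python
def size_osd_node(num_osds, cores_per_osd=0.5, ram_gb_per_osd=2, os_headroom_gb=16):
    """Return (cpu_cores, ram_gb) suggested for one Ceph OSD node."""
    cores = num_osds * cores_per_osd
    ram_gb = num_osds * ram_gb_per_osd + os_headroom_gb  # headroom for OS/monitoring is an assumption
    return cores, ram_gb

# e.g. a 12-HDD node such as the D51PH-1ULH
cores, ram_gb = size_osd_node(12)
print(f"12 OSDs -> ~{cores:.0f} cores, ~{ram_gb} GB RAM")  # ~6 cores, ~40 GB RAM
```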
• SSD:
– Journal
– Tier
– File System Cache
– Client Cache
• Journal ratios (see the sketch below)
– HDD : SSD (SATA/SAS) ≈ 4~5 : 1
– HDD : NVMe ≈ 12~18 : 1
17
SSD/NVMe
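A small sketch of the journal-device ratios above (4~5 HDDs per SATA/SAS SSD, 12~18 HDDs per NVMe). The function is illustrative; the two examples simply mirror the RCT-200 and RCT-400 drive counts.

```python
import math

def journal_devices_needed(num_hdds, hdds_per_journal_device):
    """Number of journal devices needed to front a given count of HDD-backed OSDs."""
    return math.ceil(num_hdds / hdds_per_journal_device)

# 12-HDD node with SATA SSD journals at a 4:1 ratio -> 3 SSDs (matches RCT-200)
print(journal_devices_needed(12, 4))   # 3
# 35-HDD node with NVMe journals at an 18:1 ratio -> 2 NVMe devices (matches RCT-400, per node)
print(journal_devices_needed(35, 18))  # 2
```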
• 2x NVMe ~40Gb
• 4x NVMe ~100Gb
• 2x SATA SSD ~10Gb
• 1x SAS SSD ~10Gb
• (20~25)x HDD ~10Gb
• ~100x HDD ~40Gb
18
NIC
10G/40G -> 25G/100G
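A back-of-the-envelope check of the device counts above: how many drives of a given effective throughput roughly fill one NIC link. The per-device throughput figures and the 90% link-efficiency factor are assumptions chosen to illustrate the slide's ratios, not measured values.

```python
def devices_per_link(link_gbps, dev_mbps, efficiency=0.9):
    """Roughly how many devices of a given effective throughput saturate one NIC link."""
    usable_mbps = link_gbps * 1000 / 8 * efficiency  # Gb/s -> usable MB/s
    return int(usable_mbps // dev_mbps)

# Assumed effective per-device throughput under a Ceph workload (MB/s), illustrative only.
print(devices_per_link(10, 50))    # HDDs on 10GbE  -> ~22  (slide: (20~25)x HDD)
print(devices_per_link(40, 50))    # HDDs on 40GbE  -> ~90  (slide: ~100x HDD)
print(devices_per_link(40, 2500))  # NVMe on 40GbE  -> ~1-2 (slide: 2x NVMe)
```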
• CPU Offload through RDMA/iWARP
• Erasure Coding Offload
• Allocate computing on different silicon areas
19
NIC
I/O Offloading
• Object Replication
– 1 Primary + 2 Replica (or more)
– CRUSH Allocation Ruleset
• Erasure Coding
– [k+m], e.g. 4+2, 8+3
– Better Data Efficiency
• k/(k+m) vs. 1/(1+replication)
20
Erasure Coding vs. Replication
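A quick comparison of usable-capacity fractions, directly instantiating the formulas on this slide: k/(k+m) for erasure coding vs. 1/(1+replicas) for replication.

```python
def ec_usable_fraction(k, m):
    """Usable fraction of raw capacity for a k+m erasure-coded pool."""
    return k / (k + m)

def replica_usable_fraction(replicas):
    """Usable fraction for 1 primary + N replica copies."""
    return 1 / (1 + replicas)

print(f"EC 4+2:         {ec_usable_fraction(4, 2):.0%}")       # 67%
print(f"EC 8+3:         {ec_usable_fraction(8, 3):.0%}")       # 73%
print(f"3x replication: {replica_usable_fraction(2):.0%}")      # 33%
```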
Workload profiles by cluster size (Small / Medium / Large):
• Throughput – transfer bandwidth; sequential R/W
• Capacity – cost per capacity; scalability
• IOPS – IOPS per 4K block; random R/W; hyper-converged?; desktop virtualization
• Latency – random R/W; Hadoop?
21
Workload and Configuration
22
Red Hat Ceph
• Intel ISA-L
• Intel SPDK
• Intel CAS
• Mellanox Accelio Library
23
Vendor-specific Value-added Software
24
Ceph Reference Architecture and QCT Solution
• Trade-off among Technologies
• Scalable in Architecture
• Optimized for Workload
• Affordable as Expected
Design Principle
1. Needs for scale-out storage
2. Target workload
3. Access method
4. Storage capacity
5. Data protection methods
6. Fault domain risk tolerance
26
Design Considerations
27
[Diagram] Workload spectrum from IOPS-bound to bandwidth (MB/sec)-bound: transactional databases (OLTP), data warehouse (OLAP), big data, scientific/HPC, block transfer, and audio/video streaming.
Storage Workload
Sizes: SMALL (500TB*), MEDIUM (>1PB*), LARGE (>2PB*)
Throughput optimized
• SMALL: QxStor RCT-200, 16x D51PH-1ULH (16U); 12x 8TB HDDs; 3x SSDs; 1x dual-port 10GbE; 3x replica
• MEDIUM: QxStor RCT-400, 6x T21P-4U/Dual (24U); 2x 35x 8TB HDDs; 2x 2x PCIe SSDs; 2x single-port 40GbE; 3x replica
• LARGE: QxStor RCT-400, 11x T21P-4U/Dual (44U); 2x 35x 8TB HDDs; 2x 2x PCIe SSDs; 2x single-port 40GbE; 3x replica
Cost/Capacity optimized
• QxStor RCC-400, Nx T21P-4U/Dual; 2x 35x 8TB HDDs; 0x SSDs; 2x dual-port 10GbE; Erasure Coding 4:2
IOPS optimized
• Future direction (SMALL, MEDIUM); NA (LARGE)
* Usable storage capacity
QCT QxStor Red Hat Ceph Storage Edition Portfolio
Workload-driven Integrated Software/Hardware Solution
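A sanity check of the usable-capacity tiers above, applying the replica/EC usable fractions to the raw drive counts in each configuration. This is a sketch only; it ignores Ceph overheads such as full-ratio headroom and journal space.

```python
def usable_tb(nodes, hdds_per_node, hdd_tb, usable_fraction):
    """Approximate usable capacity in TB for a given node count and protection scheme."""
    return nodes * hdds_per_node * hdd_tb * usable_fraction

# RCT-200, SMALL, 3x replica: 16 nodes x 12x 8TB HDDs
print(usable_tb(16, 12, 8, 1 / 3))     # ~512 TB  -> "SMALL (500TB*)"
# RCT-400, MEDIUM, 3x replica: 6 chassis x 2 nodes x 35x 8TB HDDs
print(usable_tb(6 * 2, 35, 8, 1 / 3))  # ~1120 TB -> "MEDIUM (>1PB*)"
# RCC-400, Erasure Coding 4:2: one dual-node chassis, 2x 35x 8TB = 560 TB raw
print(usable_tb(2, 35, 8, 4 / 6))      # ~373 TB usable per chassis
```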
Throughput-Optimized
• RCT-200: densest 1U Ceph building block; best reliability with a smaller failure domain
• RCT-400: scale at high scale (2x 280TB); obtain best throughput and density at once
• Use cases: block or object storage; 3x replication; video, audio, image repositories, and streaming media
Cost/Capacity-Optimized
• RCC-400: highest density, 560TB raw capacity per chassis, with greatest price/performance
• Use cases: typically object storage; erasure coding common for maximizing usable capacity; object archive
QCT QxStor Red Hat Ceph Storage Edition
Co-engineered with the Red Hat Storage team to provide an optimized Ceph solution
30
Ceph Solution Deployment
Using QCT QPT Bare Metal Provision Tool
31
Ceph Solution Deployment
Using QCT QPT Bare Metal Provision Tool
32
QCT Solution Value Proposition
• Workload-driven
• Hardware/software pre-validated, pre-optimized and
pre-integrated
• Up and running in minutes
• Balance between production (stable) and innovation
(up-streaming)
33
Test Result
[Diagram] Test topology: 5 Ceph nodes (Ceph 1-5, S2PH) and 10 client nodes (Client 1-10, S2B), attached with 10Gb links to the public network; a separate 10Gb cluster network connects the Ceph nodes.
General Configuration
• 5 Ceph nodes (S2PH), each with 2 x 10Gb links.
• 10 client nodes (S2B), each with 2 x 10Gb links.
• Public network: balanced bandwidth between client nodes and Ceph nodes.
• Cluster network: offloads traffic from the public network to improve performance.
Option 1 (w/o SSD)
a. 12 OSD per Ceph storage node
b. S2PH (E5-2660) x2
c. RAM : 128 GB
Option 2 (w/ SSD)
a. 12 OSD / 3 SSD per Ceph storage node
b. S2PH (E5-2660) x2
c. RAM : 12 (OSD) x 2GB = 24 GB
Testing Configuration (Throughput-Optimized)
[Diagram] Test topology: 2 Ceph nodes (Ceph 1-2, S2P) and 8 client nodes (Client 1-8, S2S) on a 10Gb public network; the Ceph nodes also have 40Gb links.
General Configuration
• 2 Ceph nodes (S2P), each with 2 x 10Gb links.
• 8 client nodes (S2S), each with 2 x 10Gb links.
• Public network: balanced bandwidth between client nodes and Ceph nodes.
• Cluster network: offloads traffic from the public network to improve performance.
Option 1 (w/o SSD)
a. 35 OSD per Ceph storage node
b. S2P (E5-2660) x2
c. RAM : 128 GB
Option 2 (w/ SSD)
a. 35 OSD / 2 PCI-SSD per Ceph storage node
b. S2P (E5-2660) x2
c. RAM : 128 GB
Testing Configuration (Capacity-Optimized)
Level            Component   Test Suite
Raw I/O          Disk        FIO
Network I/O      Network     iperf
Object API I/O   librados    radosbench
Object I/O       RGW         COSBench
Block I/O        RBD         librbdfio
36
CBT (Ceph Benchmarking Tool)
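The suites in the table above are standard tools driven by CBT from its own configuration. As a hedged illustration only, the object-API level can also be exercised directly with `rados bench`; the pool name and runtime below are placeholders, and this wrapper script is not part of CBT.

```python
import subprocess

def rados_bench(pool, seconds=60, mode="write"):
    """Run a basic RADOS-level write/seq/rand benchmark against an existing pool."""
    cmd = ["rados", "bench", "-p", pool, str(seconds), mode]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    # 'testpool' is a placeholder; create it first (e.g. `ceph osd pool create testpool 128`).
    print(rados_bench("testpool", seconds=60, mode="write"))
```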
37
Linear Scale Out
38
Linear Scale Up
39
Price, in terms of Performance
40
Price, in terms of Capacity
41
Protection Scheme
42
Cluster Network
43
QCT/Red Hat Ceph Whitepaper
44
http://www.qct.io/account/download/download?order_download_id=1022&dtype=Reference%20Architecture
QCT/Red Hat Ceph Solution Brief
https://www.redhat.com/en/files/resources/st-performance-sizing-guide-ceph-qct-inc0347490.pdf
http://www.qct.io/Solution/Software-Defined-Infrastructure/Storage-Virtualization/QCT-and-Red-Hat-Ceph-Storage-p365c225c226c230
QCT/Red Hat Ceph Reference Architecture
• The Red Hat Ceph Storage Test Drive lab in the QCT Solution Center provides a free hands-on experience. You'll be able to explore the features and simplicity of the product in real time.
• Concepts:
Ceph feature and functional test
• Lab Exercises:
Ceph Basics
Ceph Management - Calamari/CLI
Ceph Object/Block Access
46
QCT Offer TryCeph (Test Drive) Later
47
Remote access
to QCT cloud solution centers
• Easy to test, anytime and anywhere
• No facilities or logistics needed
• Configurations
• RCT-200 and newest QCT solutions
QCT Offer TryCeph (Test Drive) Later
• Ceph is Open Architecture
• QCT, Red Hat and Intel collaborate to provide
– Workload-driven,
– Pre-integrated,
– Comprehensively tested and
– Well-optimized solution
• Red Hat – Open Software/Support Pioneer
Intel – Open Silicon/Technology Innovator
QCT – Open System/Solution Provider
• Together We Provide the Best
48
CONCLUSION
www.QuantaQCT.com
Thank you!
www.QCT.io
Looking for
innovative cloud solution?
Come to QCT,
who else?
Speaker Notes
1. There are 3 SKUs based on small/medium/large scale. For larger scale, suggest customers adopt RCT-400 or RCC-400. QCT is planning a SKU optimized for IOPS-intensive workloads, to be launched in 2016 H2.