HKG15-401: Ceph and Software Defined Storage on ARM Servers
Presented by Yazen Ghannam <yazen.ghannam@linaro.org> and Steve Capper <steve.capper@linaro.org>
February 12, 2015
Outline
● Part 1: Introduction to Ceph
○ What is Ceph?
○ Lightning Introduction to Ceph Architecture
○ Replication
● Part 2: Linaro Work
○ Motivation & Goals
○ Linaro Austin Colocation Cluster
○ Performance Testing
○ Optimization Opportunities
○ Encountered Issues/Current Limitations
○ Future Work
○ Q & A
Part 1: Introduction to Ceph
What is Ceph?
● Ceph is a distributed object store with no single point of failure.
● It scales up to exabyte levels of storage and runs on commodity hardware.
● Ceph data are exposed as follows:
○ Ceph Object Store: RESTful interface with Amazon S3 and OpenStack Swift compliant APIs.
○ Ceph Block Device: Linux kernel driver available for clients. Also has libvirt support.
○ Ceph Filesystem: Linux kernel driver available for clients. Also has FUSE support.
Lightning Introduction to Ceph Architecture (1)
At the host level...
● We have Object Storage Devices (OSDs) and Monitors.
○ Monitors keep track of the components of the Ceph cluster (i.e., where the OSDs are).
○ The device, host, rack, row, and room are stored by the Monitors and used to compute a failure domain.
○ OSDs store the Ceph data objects.
● A host can run multiple OSDs, but it needs to be appropriately specced.
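For illustration, the stock Ceph CLI can show what the Monitors are tracking; these are generic Ceph commands (not specific to this cluster) and assume a node with an admin keyring:
# Cluster health plus a summary of Monitors, OSDs, and PGs.
ceph -s
# CRUSH hierarchy: which OSDs sit under which hosts (the failure domains).
ceph osd tree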
Lightning Introduction to Ceph Architecture (2)
At the block device level…
● An Object Storage Device (OSD) can be an entire drive, a partition, or a folder.
● OSDs must be formatted in ext4, XFS, or btrfs (experimental).
[Diagram: four Drive/Partition → Filesystem → OSD stacks, all feeding into Pools.]
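As a sketch of how a drive or partition becomes an OSD backing store (device name, hostname, and mount point are placeholders; the directory form of ceph-deploy osd shown here follows the quick-start flow of that era):
# On the OSD node: format the data partition and mount it where the OSD will live.
mkfs.xfs -f /dev/sda4
mkdir -p /var/local/osd0
mount /dev/sda4 /var/local/osd0
# From the admin node: prepare and activate that directory as an OSD.
ceph-deploy osd prepare seattle1:/var/local/osd0
ceph-deploy osd activate seattle1:/var/local/osd0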
Lightning Introduction to Ceph Architecture (3)
At the data organization level...
● Data are partitioned into pools.
● Pools contain a number of Placement Groups (PGs).
● Ceph data objects map to PGs (via a hash of the object name, modulo the pool's PG count).
● PGs then map to multiple OSDs.
[Diagram: pool "mydata" holding objects grouped into PG #1 and PG #2, each of which maps onto several OSDs.]
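A quick way to see this mapping in practice is the ceph osd map command, which reports the PG and the OSDs an object would land on; pool name, PG count, and object name below are placeholders:
# Create a small pool with 128 placement groups (illustrative value).
ceph osd pool create mydata 128
# Show which PG and which OSDs the object "foo" maps to.
ceph osd map mydata foo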
Lightning Introduction to Ceph Architecture (4)
At the client level…
● Objects can be accessed directly.
● Objects can be accessed through the Ceph Object Gateway.
● Pools can be used for CephFS (requires 2 pools: data & metadata).
● Pools can be used to create Rados Block Devices.
[Diagram: a client reaching Pools through a Rados Block Device (with a filesystem on top), CephFS, or the Ceph Object Gateway.]
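As an illustration of accessing objects directly with the rados CLI (pool, object, and file names are placeholders):
# Store a local file as an object, list the pool, and read the object back.
rados -p mydata put my-object ./local-file
rados -p mydata ls
rados -p mydata get my-object ./local-file.copy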
Lightning Introduction to Ceph Architecture (5)
[Diagram: the preceding pieces combined across three hosts. Each host has drives/partitions carrying filesystems and OSDs; the pools "mydata", "data", and "metadata" each hold objects in PG #1 and PG #2 spread over those OSDs; and a client reaches the pools via a Rados Block Device (with a filesystem on top), CephFS, or the Ceph Object Gateway.]
Replication
● Ceph pools, by default, are configured to replicate data between OSDs.
● This allows us to lose some OSDs and not lose data.
● The replication level states how many instances of the object are to reside on OSDs.
● Large objects will consume significant amounts of cumulative disk space if replicated.
● An alternative to replication is to adopt erasure coding.
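The replication level is the pool's "size" setting; as an illustration (pool name is a placeholder):
# Inspect and change how many copies of each object the pool keeps.
ceph osd pool get mydata size
ceph osd pool set mydata size 3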
Part 2: Linaro Work
Motivation & Goals
● Motivation
○ Ceph is intended to be massively scalable and to be used with commodity hardware.
○ Ceph clusters would ideally have lots of I/O (storage and network).
○ Ceph is a large system that interacts with many different pieces of software and hardware (e.g., Kernel, libraries, network).
○ Enterprise ARMv8 vendors are targeting the high-density, highly-scalable storage solutions market with relatively strong cores and lots of available I/O.
● Goals
○ Bring up a simple Ceph cluster on commodity ARMv8 hardware.
○ Look for CPU hotspots during performance testing.
■ Start with simple workloads, especially those that are part of Ceph.
○ Focus on optimizations specific to AArch64.
Linaro Austin Colocation Cluster (1): Hardware
● 4 systems
○ AMD Opteron A1100 (codenamed Seattle) x3
■ With Cryptographic Extension and CRC
■ 16GB RAM
■ 10GbE available
■ Monitor/OSD nodes
○ APM X-Gene Mustang
■ 16GB RAM
■ Client/Admin node
● Each node has 1 hard drive
○ 500GB 7200RPM
○ OSD Partition (390GB)
● Each node has 1 Ethernet connection on a 1Gb network
[Diagram: all four nodes connected to a common switch. AMD Seattle Node 1: MON, OSD, MDS; AMD Seattle Node 2: MON, OSD; AMD Seattle Node 3: MON, OSD; APM X-Gene Mustang: Admin, Client.]
Linaro Austin Colocation Cluster (2): Software
● Fedora 21
● Linux Kernel 3.17
○ arm64 CRC32 module (available in 3.19)
● Ceph v0.91
○ RPM packages built from Ceph git repo
○ Local YUM repo serving updated packages to cluster
● ceph-deploy
○ Used to easily deploy Ceph cluster, including package installs, setting up keys, etc.
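For context, a minimal ceph-deploy bring-up looks roughly like the following; hostnames are placeholders, and the repository/package options actually used on this cluster are not shown:
# From the admin node: define the monitors, install packages,
# bootstrap the monitors, and push the admin keyring to every node.
ceph-deploy new seattle1 seattle2 seattle3
ceph-deploy install seattle1 seattle2 seattle3 mustang
ceph-deploy mon create-initial
ceph-deploy admin seattle1 seattle2 seattle3 mustang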
Performance Testing (1): Collecting Data
● Single Node
○ “perf record {workload}”
● Cluster
○ Client Node: Execute {workload}
○ OSD Nodes: “perf top” for all system data.
○ OSD Nodes: “perf top -p {osd pid}” for only OSD process.
■ Useful when there is little system load.
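A concrete cluster run might look like the sketch below: rados bench generates load from the client while perf samples the OSD daemon on a storage node. The 60-second window and the single ceph-osd process per host are assumptions matching this cluster's layout:
# On the client node: generate write load against the rbd pool.
rados bench -p rbd 60 write --no-cleanup
# On an OSD node, in parallel: sample the OSD daemon with call graphs, then browse hotspots.
perf record -g -p "$(pidof ceph-osd)" -- sleep 60
perf report --stdio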
Performance Testing (2): Workloads, Ceph
● Rados Bench
○ rados bench -p rbd 300 write --no-cleanup
○ rados bench -p rbd 300 seq
○ rados bench -p rbd 300 rand
● OSD Bench
○ ceph tell osd.# bench
Performance Testing (3): Workloads, Other
● dd
○ rbd create name --size #MBs
○ rbd map name -p rbd
○ mkfs [options] /dev/rbd/rbd/name
○ mount /dev/rbd/rbd/name /mnt
○ dd if=/dev/zero of=/mnt/zerofile [options]
● Write a lot of objects (a runnable shell version follows this slide)
○ foreach file in a_lot_of_files:
rados put object-name-# $file -p data
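A runnable version of that pseudocode as a plain shell loop (the source directory and object-name prefix are placeholders):
# Store every file in the directory as a separately named object in the "data" pool.
i=0
for f in /path/to/a_lot_of_files/*; do
    rados put "object-name-$i" "$f" -p data
    i=$((i+1))
done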
Optimization Opportunities
● Known
○ CRC32C
■ Ceph (upstreaming)
■ Linux Kernel (upstream already, should arrive in 3.19)
● Possible
○ how memcpy is called (it is a CPU hotspot)
○ tcmalloc
○ Boost C++ libraries
○ rocksdb
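Before leaning on hardware CRC32C and crypto paths, it is worth confirming the CPU advertises them; on an ARMv8 system the Features line of /proc/cpuinfo lists flags such as crc32, aes, and pmull when the optional extensions are present (the output shown is illustrative):
# Confirm the ARMv8 CRC and Cryptographic Extensions are advertised.
grep -m1 Features /proc/cpuinfo
# e.g. Features : fp asimd evtstrm aes pmull sha1 sha2 crc32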
Encountered Issues/Current Limitations
● Issues
○ Linux Perf symbol decode
○ Python 2.7 hang when starting Ceph (now fixed)
● Limitations
○ I/O bound with single OSD on 7200RPM hard drive with 1Gb network
■ Ideal: 8+ SSDs per node; each SSD with an individual OSD
■ Ideal: 10Gb network to support the nodes
○ Only several nodes forming a Ceph cluster (due to lack of hardware)
■ Ideal: 10+ nodes forming a cluster
Future Work
● Teuthology (for ceph-qa)
● More workload profiling:
○ CephFS
○ Ceph Object Gateway (radosgw)
● Ceph prerequisites that could be investigated on AArch64:
○ Boost C++ Libraries
○ tcmalloc
Q & A
Backup Slides
Erasure coding - An Example
● Given a 1GB object, let’s split it into 2 x 512MB chunks (A and B).
● Now, let’s introduce a third 512MB chunk P (for parity), and compute each individual byte P[i] as follows:
P[i] = A[i] ^ B[i]
● We can now lose one of A, B, or P and still be able to reconstruct our original data (e.g., a lost B is recovered as B[i] = A[i] ^ P[i]).
● To get this level of redundancy with replication requires 2GB of disk space, as opposed to 1.5 GB with our parity coding.
[Diagram: the object split into chunks A and B, then stored as A, B, and parity chunk P.]
Erasure coding generalized
● Erasure codes can get more elaborate. One can split an object into k data chunks and compute m coding chunks.
● This allows us to lose m chunks before data loss.
● The object will reside on k + m OSDs.
● Each pool is configured either to replicate or to use erasure coding.
● The mathematics gets more complicated as m is increased and requires specialized Galois Field arithmetic routines.
● Thankfully, these have already been ported over to ARM (both 32-bit and 64-bit) using NEON.
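For illustration, an erasure-coded pool matching the k/m description above can be created from an erasure-code profile; profile name, pool name, and PG counts are placeholders:
# Define a profile with 2 data chunks and 1 coding chunk, then build a pool on it.
ceph osd erasure-code-profile set myprofile k=2 m=1
ceph osd pool create ecpool 128 128 erasure myprofile
# Inspect the resulting profile.
ceph osd erasure-code-profile get myprofile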
More about Linaro: http://www.linaro.org/about/
More about Linaro engineering: http://www.linaro.org/engineering/
How to join: http://www.linaro.org/about/how-to-join
Linaro members: www.linaro.org/members
Disclaimer and Attribution
The information presented in this document is for informational purposes only and may contain technical inaccuracies, omissions and typographical errors.
The information contained herein is subject to change and may be rendered inaccurate for many reasons, including but not limited to product and roadmap changes,
component and motherboard version changes, new model and/or product releases, product differences between differing manufacturers, software changes, BIOS
flashes, firmware upgrades, or the like. AMD assumes no obligation to update or otherwise correct or revise this information. However, AMD reserves the right to
revise this information and to make changes from time to time to the content hereof without obligation of AMD to notify any person of such revisions or changes.
AMD MAKES NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE CONTENTS HEREOF AND ASSUMES NO RESPONSIBILITY FOR ANY
INACCURACIES, ERRORS OR OMISSIONS THAT MAY APPEAR IN THIS INFORMATION.
AMD SPECIFICALLY DISCLAIMS ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE. IN NO EVENT WILL
AMD BE LIABLE TO ANY PERSON FOR ANY DIRECT, INDIRECT, SPECIAL OR OTHER CONSEQUENTIAL DAMAGES ARISING FROM THE USE OF ANY
INFORMATION CONTAINED HEREIN, EVEN IF AMD IS EXPRESSLY ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Trademark Attribution
AMD, the AMD Arrow logo and combinations thereof are trademarks of Advanced Micro Devices, Inc. in the United States and/or other jurisdictions. Other names used
in this presentation are for identification purposes only and may be trademarks of their respective owners.
©2015 Advanced Micro Devices, Inc. All rights reserved.
