Unit – I
INTRODUCTION
Covers: Server-Centric IT Architecture and its Limitations; Storage-Centric IT Architecture and its
advantages; Case study: Replacing a server with Storage Networks; The Data Storage and Data
Access problem; The Battle for size and access.
Server-Centric IT Architecture and its Limitations
 In conventional IT architectures, storage devices are normally only connected to a single server. To
increase fault tolerance, storage devices are sometimes connected to two servers, with only one
server actually able to use the storage device at any one time
 In both cases, the storage device exists only in relation to the server to which it is connected. Other
servers cannot directly access the data; they always have to go through the server that is connected
to the storage device. This conventional IT architecture is therefore called server-centric IT
architecture.
 In this approach, servers and storage devices are generally connected together by SCSI cables.
Limitation
 In conventional server-centric IT architecture storage devices exist only in relation to the one or two
servers to which they are connected. The failure of both of these computers would make it impossible
to access this data
 If a computer requires more storage space than is connected to it, it does not help that another
computer still has unused storage capacity attached to it.
 Consequently, ever more storage devices have to be connected to the computer. This raises the
problem that each computer can accommodate only a limited number of I/O cards.
 The length of SCSI cables is limited to a maximum of 25 m. This means that the storage capacity that
can be connected to a computer using conventional technologies is limited. Conventional
technologies are therefore no longer sufficient to satisfy the growing demand for storage capacity
Storage-Centric IT Architecture and its Advantages
 In storage networks, storage devices exist completely independently of any computer.
Several servers can access the same storage device directly over the storage network
without another server having to be involved.
 The idea behind storage networks is that the SCSI cable is replaced by a network that is
installed in addition to the existing LAN and is primarily used for data exchange between
computers and storage devices.
Advantages:
 The storage network permits all computers to access the disk subsystem and share it. Free
storage capacity can thus be flexibly assigned to the computer that needs it at the time
Case study: Replacing a server with Storage Networks
 In the following we will illustrate some advantages of storage-centric IT architecture using a case study: in
a production environment an application server is no longer powerful enough. The ageing computer must
be replaced by a higher-performance device. Whereas such a measure can be very complicated in a
conventional, server-centric IT architecture, it can be carried out very elegantly in a storage network.
The Data Storage and Data Access problem
Although we don’t always realize it, accessing information on a daily basis the way we do means
there must be computers out there that store the data we need, making certain it’s available when
we need it, and ensuring the data’s both accurate and up-to-date. Rapid changes within the
computer networking industry have had a dynamic effect on our ability to retrieve information, and
networking innovations have provided powerful tools that allow us to access data on a personal and
global scale.
With so much data to store and with such global access to it, the collision between networking
technology and storage innovations was inevitable. The gridlock of too much data coupled with too
many requests for access has long challenged IT professionals. To storage and networking vendors,
as well as computer researchers, the problem is not new. And as long as personal computing devices
and corporate data centers demand greater storage capacity to offset our increasing appetite for
access, the challenge will be with us.
The Challenge of Designing Applications
Non-Linear Performance in Applications
The major factors influencing non-linear performance are twofold. First is the availability of
sufficient online storage capacity for application data coupled with adequate temporary storage
resources, including RAM and cache storage for processing application transactions. Second is the
number of users who will interact with the application and thus access the online storage for
application data retrieval and storage of new data. Tied to this is the utilization of the temporary
online storage resources (over and above the required RAM and system cache) that the application
uses to process the planned number of transactions in a timely manner.
Storing and accessing data starts with the requirements of a business application.
In all fairness to the application designers and product developers, the choice of database is really
very limited. Most designs just note the type of database or databases required, be it relational or
non-relational. This decision in many cases is made from economic and existing infrastructure
factors. For example, how many times does an application come online using a database purely
because that's the existing database of choice for the enterprise? In other cases, applications may be
implemented using file systems, when they were actually designed to leverage the relational
operations of an RDBMS.
The Battle for size and access
The Problem: Size
Wider bandwidth is needed. The connection between the server and storage unit requires a faster
data transfer rate. The client/server storage model uses bus technology to connect and a device
protocol to communicate, limiting the data transfer to about 10 MB per second (maybe 40 MB per
second, tops).
The problem is size. The database and supporting online storage currently installed have exceeded
their limits, resulting in lagging requests for data and subsequently unresponsive applications. You
may be able to physically store 500 GB on the storage devices; however, it's unlikely the single server
will provide sufficient connectivity to service application requests for data in a timely fashion,
thereby bringing on the non-linear performance window quite rapidly.
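To put these figures into perspective, the short Python sketch below estimates how long a full pass over such a data store would take at the bus speeds quoted above. The 500 GB size and the 10-40 MB/s range come from this section; the calculation itself is only illustrative:

# Back-of-the-envelope estimate, illustrative only: how long a full pass over
# the data store takes on a single bus-attached channel at the quoted rates.

DATA_STORE_GB = 500          # size of the installed data store (from the text)
MB_PER_GB = 1024             # binary megabytes, for simplicity

def full_pass_hours(throughput_mb_per_s: float) -> float:
    """Hours needed to move the whole data store at the given transfer rate."""
    total_mb = DATA_STORE_GB * MB_PER_GB
    return total_mb / throughput_mb_per_s / 3600

for rate in (10, 40):
    print(f"{rate} MB/s -> {full_pass_hours(rate):.1f} hours for 500 GB")
# roughly 14 hours at 10 MB/s and 3.6 hours at 40 MB/s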
Solution: Storage networking enables faster data transfers, as well as the capability for servers to
access larger data stores through applications and systems that share storage devices and data.
The Problem: Access
The problem is access. There are too many users for the supported configuration. The network can neither deliver
the user transactions to the server nor respond in a timely manner. Given that the server cannot handle the
number of transactions submitted, the storage and server components are gridlocked in attempting to satisfy
requests for data to be read from or written to storage.
The single distribution strategy needs revisiting. A single distribution strategy can create an information
bottleneck at the disembarkation point. We will explore this later in Parts III and IV of this book, where the
application of SAN and NAS solutions is discussed. It's important to note, however, that a single distribution
strategy is only a logical term for placing user data where it is most effectively accessed. It doesn't necessarily
mean the data is placed in a single physical location.
Solution: With storage networking, user transactions can access data more directly, bypassing the overhead
of I/O operations and unnecessary data movement operations to and through the server.
INTELLIGENT DISK SUBSYSTEMS – 1
Covers: Architecture of Intelligent Disk Subsystems; Hard disks and Internal I/O Channels, JBOD,
Storage virtualization using RAID and different RAID levels
Architecture of Intelligent Disk Subsystems
 In contrast to a file server, a disk subsystem can be visualised as a hard disk server
 Servers are connected to the connection port of the disk subsystem using standard I/O
techniques such as Small Computer System Interface (SCSI), Fibre Channel or Internet SCSI
(iSCSI) and can thus use the storage capacity that the disk subsystem provides
 The internal structure of the disk subsystem is completely hidden from the server, which sees
only the hard disks that the disk subsystem provides to the server.
The connection ports are extended to the hard disks of the disk subsystem by means of internal I/O
channels (Figure 2.2). In most disk subsystems there is a controller between the connection ports
and the hard disks. The controller can significantly increase the data availability and data access
performance with the aid of a so-called RAID procedure. Furthermore, some controllers realise the
copying services instant copy and remote mirroring, as well as further additional services. The controller
uses a cache in an attempt to accelerate the server's read and write accesses.
 Disk subsystems are available in all sizes. Small disk subsystems have one to two connection
ports for servers or storage networks, six to eight hard disks and, depending on the disk capacity,
storage capacity of a few terabytes
 Regardless of storage networks, most disk subsystems have the advantage that free disk space
can be flexibly assigned to each server connected to the disk subsystem (storage pooling)
 All servers are either directly connected to the disk subsystem or indirectly connected via a
storage network
Hard disks and Internal I/O Channels
I/O channels can be designed with built-in redundancy in order to increase the fault-tolerance of a
disk subsystem. The following cases can be differentiated here:
• Active
In active cabling the individual physical hard disks are only connected via one I/O channel. If this
access path fails, then it is no longer possible to access the data.
• Active/passive
In active/passive cabling the individual hard disks are connected via two I/O channels (Figure 2.5,
right). In normal operation the controller communicates with the hard disks via the first I/O channel
and the second I/O channel is not used. In the event of the failure of the first I/O channel, the disk
subsystem switches from the first to the second I/O channel.
• Active/active (no load sharing)
In this cabling method the controller uses both I/O channels in normal operation. The hard disks are
divided into two groups: in normal operation the first group is addressed via the first I/O channel
and the second via the second I/O channel. If one I/O channel fails, both groups are addressed via
the other I/O channel.
• Active/active (load sharing)
In this approach all hard disks are addressed via both I/O channels in normal operation. The
controller divides the load dynamically between the two I/O channels so that the available hardware
can be optimally utilised. If one I/O channel fails, then the communication goes through the other
channel only.
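The difference between these cabling schemes can be sketched in a few lines of Python. This is only an illustration of the selection logic described above, not any real controller firmware; the class and channel names are invented for the example:

import itertools

class ChannelSelector:
    """Toy model of a controller choosing between two internal I/O channels."""

    def __init__(self, channels=("channel-1", "channel-2")):
        self.channels = list(channels)
        self.failed = set()
        self._round_robin = itertools.cycle(range(len(self.channels)))

    def mark_failed(self, channel):
        self.failed.add(channel)

    def active_passive(self):
        """Always use the first healthy channel; switch only after a failure."""
        for channel in self.channels:
            if channel not in self.failed:
                return channel
        raise RuntimeError("no I/O channel available")

    def active_active_load_sharing(self):
        """Spread requests over all healthy channels (simple round robin)."""
        healthy = [c for c in self.channels if c not in self.failed]
        if not healthy:
            raise RuntimeError("no I/O channel available")
        return healthy[next(self._round_robin) % len(healthy)]

selector = ChannelSelector()
print(selector.active_passive())               # channel-1
print(selector.active_active_load_sharing())   # alternates between channels
selector.mark_failed("channel-1")
print(selector.active_passive())               # falls back to channel-2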
JBOD: JUST A BUNCH OF DISKS
If we compare disk subsystems with regard to their controllers we can differentiate between three
levels of complexity:
(1) No controller;
(2) RAID controller and
(3) Intelligent controller with additional services such as instant copy and remote mirroring.
 If the disk subsystem has no internal controller, it is only an enclosure full of disks (a JBOD).
 In this instance, the hard disks are permanently fitted into the enclosure and the
connections for I/O channels and power supply are taken outwards at a single point.
Therefore, a JBOD is simpler to manage than a few loose hard disks
 Typical JBOD disk subsystems have space for 8 or 16 hard disks. A connected server
recognises all these hard disks as independent disks. Therefore, 16 device addresses are
required for a JBOD disk subsystem incorporating 16 hard disks
STORAGE VIRTUALISATION USING RAID
 A disk subsystem with a RAID controller offers greater functional scope than a JBOD disk
subsystem.
 RAID was originally called ‘Redundant Array of Inexpensive Disks’. Today RAID stands for
‘Redundant Array of Independent Disks’
 RAID has two main goals: to increase performance by striping and to increase fault-tolerance
by redundancy.
 Striping distributes the data over several hard disks and thus distributes the load over more
hardware. Redundancy means that additional information is stored so that the operation of
the application itself can continue in the event of the failure of a hard disk.
 A RAID controller can distribute the data that a server writes to the virtual hard disk amongst
the individual physical hard disks in various manners. These different procedures are known
as RAID levels
Different RAID Levels
 RAID 0: block-by-block striping
 RAID 0 distributes the data that the server writes to the virtual hard disk onto one physical
hard disk after another block-by-block (block-by-block striping)
 RAID 0 increases the performance of the virtual hard disk, but not its fault-tolerance. If a
physical hard disk is lost, all the data on the virtual hard disk is lost. To be precise, therefore,
the ‘R’ for ‘Redundant’ in RAID is incorrect in the case of RAID 0, with ‘RAID 0’ standing
instead for ‘zero redundancy’.
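As a small illustration of block-by-block striping (a sketch only, assuming a simple round-robin mapping rather than any particular controller's layout), consecutive blocks of the virtual hard disk land on consecutive physical disks:

# Illustrative sketch only: RAID 0 places consecutive virtual-disk blocks on
# consecutive physical disks, so a large request is spread over all spindles.

def raid0_location(block: int, num_disks: int) -> tuple[int, int]:
    """Return (physical disk index, block offset on that disk)."""
    return block % num_disks, block // num_disks

for block in range(8):
    disk, offset = raid0_location(block, num_disks=4)
    print(f"virtual block {block} -> disk {disk}, offset {offset}")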
 RAID 1: block-by-block mirroring
 In contrast to RAID 0, in RAID 1 fault-tolerance is of primary importance
 The basic form of RAID 1 brings together two physical hard disks to form a virtual hard disk by
mirroring the data on the two physical hard disks. If the server writes a block to the virtual
hard disk, the RAID controller writes this block to both physical hard disks.
 Performance increases are only possible in read operations.
 Performance in writing is affected. This is because the RAID controller has to send the data to both
hard disks.
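The behaviour can be sketched as follows (illustrative only; the class and its block-map representation are invented for the example). Every write is duplicated to both physical disks, which is the cost of mirroring, while reads can alternate between the two copies:

class Raid1:
    """Toy mirror of two physical disks, each modelled as a block-number map."""

    def __init__(self):
        self.disks = [dict(), dict()]
        self.read_from = 0

    def write(self, block, data):
        # Cost of mirroring: the data is sent to both disks.
        for disk in self.disks:
            disk[block] = data

    def read(self, block):
        # Reads alternate between the mirrors, which is why only read
        # performance benefits from the second disk.
        self.read_from = 1 - self.read_from
        return self.disks[self.read_from].get(block)

mirror = Raid1()
mirror.write(0, b"payload")
print(mirror.read(0), mirror.read(0))   # both copies return the same data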
 RAID 0+1/RAID 10: striping and mirroring combined
 The problem with RAID 0 and RAID 1 is that they increase either performance (RAID 0) or
fault-tolerance (RAID 1).
 It would be nice to have both performance and fault-tolerance. This is where RAID 0+1 and
RAID 10 come into play.
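A minimal sketch of the idea, assuming RAID 10 in the usual sense of striping across mirrored pairs (the mapping below is purely illustrative, not any vendor's layout):

# Illustrative sketch only: RAID 10 stripes blocks over mirror pairs, combining
# the parallelism of RAID 0 with the redundancy of RAID 1.

def raid10_location(block: int, num_pairs: int):
    """Return the mirror pair (two physical disks) holding the block, plus its offset."""
    pair = block % num_pairs
    return (2 * pair, 2 * pair + 1), block // num_pairs

for block in range(6):
    disks, offset = raid10_location(block, num_pairs=2)
    print(f"virtual block {block} -> mirrored on disks {disks}, offset {offset}")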
RAID 4 and RAID 5: parity instead of mirroring
 RAID 10 provides excellent performance at a high level of fault-tolerance. The problem with
this is that mirroring using RAID 1 means that all data is written to the physical hard disk
twice. RAID 10 thus doubles the required storage capacity
 The idea of RAID 4 and RAID 5 is to replace all mirror disks of RAID 10 with a single parity
hard disk
 The RAID controller calculates the parity block P_ABCD for the blocks A, B, C and D. If one of the
four data disks fails, the RAID controller can reconstruct the data of the defective disk using
the three other data disks and the parity disk.
 From a mathematical point of view the parity block is calculated with the aid of the logical
XOR operator
 The space saving offered by RAID 4 and RAID 5 comes at a price in relation to RAID 10. Changing a
data block changes the value of the associated parity block. This means that each write operation to
the virtual hard disk requires:
1. The physical writing of the data block
2. The recalculation of the parity block
3. The physical writing of the newly calculated parity block.
This extra cost for write operations in RAID 4 and RAID 5 is called the write penalty of RAID 4
or the write penalty of RAID 5.
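The parity calculation and the write penalty can be illustrated with a short Python sketch. It only demonstrates the XOR property, not a disk-subsystem implementation; the four-byte blocks are made-up sample data:

# Illustrative sketch of XOR parity as used by RAID 4/5 (not a real controller).
from functools import reduce

def xor_blocks(*blocks: bytes) -> bytes:
    """Bytewise XOR of equally sized blocks."""
    return bytes(reduce(lambda x, y: x ^ y, t) for t in zip(*blocks))

A, B, C, D = b"AAAA", b"BBBB", b"CCCC", b"DDDD"   # sample data blocks
P = xor_blocks(A, B, C, D)                        # parity block P_ABCD

# Reconstruction: if the disk holding C fails, C is the XOR of the survivors.
assert xor_blocks(A, B, D, P) == C

# Write penalty: changing B forces a parity update (old parity XOR old B XOR new B).
B_new = b"bbbb"
P_new = xor_blocks(P, B, B_new)
assert P_new == xor_blocks(A, B_new, C, D)
print("parity and reconstruction checks passed")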
How RAID 5 overcomes the limitations of RAID 4
To get around this performance bottleneck, RAID 5 distributes the parity blocks over all hard disks.
The figure above illustrates the procedure. As in RAID 4, the RAID controller writes the parity block
P_ABCD for the blocks A, B, C and D onto the fifth physical hard disk. Unlike RAID 4, however, in RAID 5
the parity block P_EFGH for the next four blocks E, F, G and H moves to the fourth physical hard disk.
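The rotation of the parity block can be sketched as a simple mapping (illustrative only; real controllers may use other rotation patterns). With five disks, stripe 0 keeps its parity on the fifth disk as in RAID 4, stripe 1 moves it to the fourth disk, and so on:

# Illustrative RAID 5 parity placement: the parity disk rotates per stripe so
# that parity writes are spread over all disks instead of one dedicated disk.

def raid5_parity_disk(stripe: int, num_disks: int) -> int:
    """0-based index of the disk holding the parity block of the given stripe."""
    return (num_disks - 1 - stripe) % num_disks

for stripe in range(5):
    print(f"stripe {stripe}: parity on disk {raid5_parity_disk(stripe, num_disks=5) + 1}")
# stripe 0 -> disk 5 (P_ABCD), stripe 1 -> disk 4 (P_EFGH), and so on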