Introduction to Cloud Computing, services
and deployment models
• Introduction to Cloud Computing – Origins and Motivation
• 3-4-5 rule of Cloud Computing
• Types of Clouds and Services
• Cloud Infrastructure and Deployment
Motivation
• Powerful multi-core processors
• General-purpose graphics processors
• Superior software methodologies
• Virtualization leveraging the powerful hardware
• Wider bandwidth for communication
• Large number of devices
• Explosion of domain applications
1. Web-scale problems
2. Web 2.0 and social networking
3. Information explosion
4. Mobile web
Evolution of the Web
• Explosive growth in applications: biomedical informatics, space exploration, business analytics, Web 2.0 social networking (YouTube, Facebook)
• Extreme-scale content generation: the e-science and e-business data flood
• Extraordinary rate of digital content consumption (“digital gluttony”): Apple iPhone, iPad, Amazon Kindle, Android, Windows Phone
• Exponential growth in compute capabilities: multi-core, storage, bandwidth, virtual machines (virtualization)
• Very short technology obsolescence cycles: Windows 8, Ubuntu, Mac; Java versions; C, C#; Python
• Newer architectures: web services, persistence models, distributed file systems/repositories (Google, Hadoop), multi-core, wireless and mobile
• Diverse knowledge and skill levels of the workforce
Technology Advances
• 64-bit processors and multi-core architectures
• Virtualization: bare metal, hypervisor, … hosting multiple VMs (VM0, VM1, …, VMn)
• Web services, SOA, WS standards and services interfaces
• Cloud applications: data-intensive, compute-intensive, storage-intensive
• Storage models: S3, BigTable, BlobStore, ...
• Bandwidth
What is Cloud Computing?
Cloud Computing is a general term used to describe a new class of network-based computing that takes place over the Internet:
 basically, a step on from utility computing
 a collection/group of integrated and networked hardware, software and Internet infrastructure (called a platform)
 using the Internet for communication and transport, it provides hardware, software and networking services to clients
These platforms hide the complexity and details of the underlying infrastructure from users and applications by providing a very simple graphical interface or API (Application Programming Interface).
What is Cloud Computing cont.…
In addition, the platform provides on-demand services that are always on: anywhere, anytime, any place.
Pay per use and as needed; elastic:
 scale up and down in capacity and functionality
The hardware and software services are available to
 the general public, enterprises, corporations and business markets
Drivers for the new Platform
http://blogs.technet.com/b/yungchou/archive/2011/03/03/chou-s-theories-of-cloud-computing-the-5-3-2-principle.aspx
Cloud Summary
• Shared pool of configurable computing resources
• On-demand network access
• Provisioned by the service provider
Cloud Summary…
Cloud computing is an umbrella term used to refer to Internet-based development and services.
A number of characteristics define cloud data, applications, services and infrastructure:
Remotely hosted: services or data are hosted on remote infrastructure.
Ubiquitous: services or data are available from anywhere.
Commodity model: the result is a utility computing model similar to that of traditional utilities, like gas and electricity: you pay for what you use!
Cloud Computing: Definition
The US National Institute of Standards and Technology (NIST) defines
cloud computing as follows:
Cloud computing is a model for enabling ubiquitous,
convenient, on-demand network access to a shared pool
of configurable computing resources (e.g., networks,
servers, storage, applications, and services) that can be
rapidly provisioned and released with minimal
management effort or service provider interaction.
NIST specifies 3-4-5 rule of Cloud Computing
3 cloud service models or service types for any cloud platform
4 deployment models
5 essential characteristics of cloud computing infrastructure
3-4-5 rule of Cloud Computing
Characteristics of Cloud Computing
 On-demand self-service
 Broad network access
 Resource pooling
 Rapid elasticity
 Measured service
4 Deployment Models
1. Public Cloud: Mega-scale cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
2. Private Cloud: The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.
3. Hybrid Cloud: The cloud infrastructure is a composition of two or more clouds (private or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability.
4. Community Cloud: According to NIST, the ‘infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise’. In short, a community cloud is a cloud service shared between multiple organizations with a common tie/goal/objective. E.g. OpenCirrus.
3 Cloud Service Models
• Software as a Service (SaaS), e.g. Salesforce CRM, LotusLive
• Platform as a Service (PaaS), e.g. Google App Engine
• Infrastructure as a Service (IaaS)
Software as a service features a complete application offered as a service on demand. A single instance of the software runs on the cloud and services multiple end users or client organizations. E.g. salesforce.com, Google Apps.
Software as a Service (SaaS)
Platform as a service encapsulates a layer of software and provides it as a
service that can be used to build higher-level services.
2 Perspectives on PaaS:
1. Producer: someone producing PaaS might build a platform by integrating an OS, middleware, application software, and even a development environment, which is then provided to a customer as a service.
2. Consumer: someone using PaaS sees an encapsulated service presented to them through an API. The customer interacts with the platform through the API, and the platform does what is necessary to manage and scale itself to provide a given level of service.
Platform as a Service
Infrastructure as a service delivers basic storage and
computing capabilities as standardized services over the
network.
Servers, storage systems, switches, routers, and other
systems are pooled and made available to handle
workloads that range from application components to
high-performance computing applications.
Infrastructure as a Service
Service Models Summary
Cloud Software as a Service (SaaS)
The capability provided to the consumer is to use the provider’s applications running on a cloud
infrastructure and accessible from various client devices through a thin client interface such as a Web
browser (e.g., web-based email). The consumer does not manage or control the underlying cloud
infrastructure, network, servers, operating systems, storage, or even individual application capabilities,
with the possible exception of limited user-specific application configuration settings.
Cloud Platform as a Service (PaaS)
The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-
created applications using programming languages and tools supported by the provider (e.g., Java,
Python, .Net). The consumer does not manage or control the underlying cloud infrastructure, network,
servers, operating systems, or storage, but the consumer has control over the deployed applications and
possibly application hosting environment configurations.
Cloud Infrastructure as a Service (IaaS)
The capability provided to the consumer is to rent processing, storage, networks, and other
fundamental computing resources where the consumer is able to deploy and run arbitrary software,
which can include operating systems and applications. The consumer does not manage or control the
underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and
possibly select networking components (e.g., firewalls, load balancers).
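The division of control in the three NIST descriptions above can be summarized programmatically. The sketch below is illustrative only (the layer names are informal labels of mine, not NIST terms): for each service model it records which layers the consumer controls, and derives what the provider manages.

```python
# Illustrative sketch: which layers the consumer controls under each
# NIST service model. Layer names are informal labels, not NIST terms.
LAYERS = ["application", "runtime/middleware", "os", "virtualization",
          "servers/storage/network"]

CONSUMER_CONTROLS = {
    # SaaS: at most limited user-specific application configuration
    "SaaS": set(),
    # PaaS: the deployed applications (and possibly hosting config)
    "PaaS": {"application"},
    # IaaS: everything above the virtualization layer
    "IaaS": {"application", "runtime/middleware", "os"},
}

def provider_controls(model):
    """Layers managed by the provider = all layers minus the consumer's."""
    return [l for l in LAYERS if l not in CONSUMER_CONTROLS[model]]

if __name__ == "__main__":
    for model in ("SaaS", "PaaS", "IaaS"):
        print(model, "provider manages:", provider_controls(model))
```

Reading the table bottom-up shows the progression: moving from SaaS to IaaS hands successively lower layers of the stack to the consumer, while the provider always retains the physical infrastructure.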
Key Technology is Virtualization
Cloud Infrastructures
Traditional stack: applications run on an operating system that runs directly on the hardware.
Virtualized stack: a hypervisor runs on the hardware and hosts multiple guest OSs (VM0 … VMn), each running its own applications.
Virtualization plays an important role as an enabling technology for datacentre implementation by abstracting compute, network, and storage service platforms from the underlying physical hardware.
Distributed Management of Virtual Machines
Reservation-Based Provisioning of Virtualized Resources
Provisioning to Meet SLA Commitments
Management of Virtualized Resources
Distributed Management of Virtual Machines
Managing VMs involves setting up custom software environments for VMs, setting up and managing networking for interrelated VMs, and reducing the various overheads involved in using VMs. Thus, VI managers must be able to efficiently orchestrate all these different tasks.
Reservation-Based Provisioning of Virtualized Resources
The demand for resources is known beforehand, so the computational resources must be available at exactly that time to process the data produced by the equipment.
Provisioning to Meet SLA Commitments
IaaS clouds can be used to deploy services that will be consumed by users other than the one that deployed the services. Cloud providers are typically not directly exposed to the service semantics or the SLAs that service owners may contract with their end users. The capacity requirements are, thus, less predictable and more elastic.
The key component of an IaaS cloud architecture is the cloud OS, which manages the physical and virtual infrastructures and controls the provisioning of virtual resources according to the needs of the user services.
A cloud OS’s role is to efficiently manage datacenter resources to deliver a flexible, secure, and isolated multitenant execution environment for user services that abstracts the underlying physical infrastructure and offers different interfaces and APIs for interacting with the cloud.
While local users and administrators can interact with the cloud using local interfaces and administrative tools that offer rich functionality for managing, controlling, and monitoring the virtual and physical infrastructure, remote cloud users employ public cloud interfaces that usually provide more limited functionality.
Cloud Infrastructure Anatomy
The cloud operating system is responsible for:
1. managing the physical and virtual infrastructure,
2. orchestrating and commanding service provisioning and deployment, and
3. providing federation capabilities for accessing and deploying virtual resources in remote cloud infrastructures.
• To provide an abstraction of the underlying infrastructure technology, the cloud OS can use adapters or drivers to interact with a variety of virtualization technologies. These include hypervisor, network, storage, and information drivers.
• The core cloud OS components, including the virtual machine (VM) manager, network
manager, storage manager, and information manager, rely on these infrastructure drivers
to deploy, manage, and monitor the virtualized infrastructures.
• In addition to the infrastructure drivers, the cloud OS can include different cloud drivers
to enable access to remote providers.
• OpenNebula (http://opennebula.org) is an example of an open cloud OS platform
focused on datacenter virtualization that fits with the architecture as seen above. Other
open cloud managers, such as OpenStack (http://openstack.org) and Eucalyptus
(www.eucalyptus.com), primarily focus on public cloud features.
Infrastructure and Cloud Drivers
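The driver-based design described above can be sketched as a simple adapter pattern. This is a hypothetical illustration (class and method names are invented, not taken from OpenNebula or OpenStack): the core VM manager is written against one common driver interface, so hypervisor-specific code stays confined to the drivers.

```python
# Hypothetical sketch of the cloud-OS driver abstraction: the core
# VM manager codes against one interface; per-hypervisor drivers
# (Xen, KVM, VMware, ...) plug in underneath.
from abc import ABC, abstractmethod

class HypervisorDriver(ABC):
    """Common interface the cloud-OS core components code against."""
    @abstractmethod
    def start_vm(self, name: str) -> str: ...

class KvmDriver(HypervisorDriver):
    def start_vm(self, name: str) -> str:
        # A real driver would invoke libvirt/KVM here.
        return f"kvm:{name}"

class XenDriver(HypervisorDriver):
    def start_vm(self, name: str) -> str:
        # A real driver would call the Xen toolstack here.
        return f"xen:{name}"

class VMManager:
    """Core component: unaware of which virtualization technology runs below."""
    def __init__(self, driver: HypervisorDriver):
        self.driver = driver

    def deploy(self, name: str) -> str:
        return self.driver.start_vm(name)

# Swapping virtualization technologies is a one-line change:
print(VMManager(KvmDriver()).deploy("web01"))   # kvm:web01
print(VMManager(XenDriver()).deploy("web01"))   # xen:web01
```

This is exactly why the cloud OS is not limited to a specific virtualization technology: only the drivers change, never the manager.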
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Cloud OS – Cont’d
• Virtualization is the simulation of a hardware platform, operating system, storage device or network resource.
• A cloud can exist without virtualization, although it will be difficult and inefficient.
• The cloud rests on the notions of “pay for what you use” and “infinite availability: use as much as you want”.
• These notions are practical only if we have
– a lot of flexibility
– efficiency in the back-end.
• This efficiency is readily available in virtualized environments and machines.
Importance of Virtualization in Cloud Computing
• Virtualization allows multiple operating system instances to run concurrently on a single computer
• It is a means of separating hardware from a single operating system.
• Each “guest” OS is managed by a Virtual Machine Monitor (VMM), also known as a hypervisor
• Because the virtualization system sits between the guest and the hardware, it can control the
guests’ use of CPU, memory, and storage, even allowing a guest OS to migrate from one machine to
another
• Instead of purchasing and maintaining an entire computer for one application, each application can
be given its own operating system, and all those operating systems can reside on a single piece of
hardware
• Virtualization allows an operator to control a guest operating system’s use of CPU, memory, storage,
and other resources, so each guest receives only the resources that it needs
Introduction to Virtualization
Introduction to Virtualization – cont’d
Before Virtualization
• Single OS image per machine
• Software and hardware tightly
coupled
• Running multiple applications on
same machine often creates
conflict
• Underutilized resources
• Inflexible and costly
infrastructure
After Virtualization
• Hardware-independence of
operating system and
applications
• Virtual machines can be
provisioned to any system
• Can manage OS and
application as a single unit by
encapsulating them into
virtual machines
• OS assumes complete control of the underlying hardware.
• Virtualization architecture provides this illusion through a hypervisor/VMM.
• Hypervisor/VMM is a software layer which:
• Allows multiple Guest OS (Virtual Machines) to run simultaneously on a
single physical host
• Provides hardware abstraction to the running Guest OSs and efficiently
multiplexes underlying hardware resources
Virtualization Architecture
A cloud OS defines the VM as the basic execution unit and the virtualized services (groups of VMs executing a multitier service) as the basic management entity.
This concept helps create scalable applications because the user can either add VMs as needed (horizontal scaling) or resize a VM (if supported by the underlying hypervisor technology) to satisfy a VM workload increase (vertical scaling).
Individual multitier applications are isolated from each other, but individual VMs in the same application are not, as they all can share a communication network and services when needed.
A VM consists of a set of parameters and attributes, including the OS kernel, VM image, memory and CPU capacity, network interfaces, etc.
Virtual Machine Manager
The VM manager is responsible for managing a VM’s entire life cycle and performing different VM actions (deploy, migrate, suspend, resume, shut down) according to user commands or scheduling strategies.
To perform these actions, the VM manager relies on the hypervisor drivers, which expose the basic functionality of underlying hypervisors such as Xen, KVM, and VMware, to avoid limiting the cloud OS to a specific virtualization technology.
The VM manager is also responsible for preserving the service-level agreements contracted with the users, which are usually expressed in terms of VM availability in infrastructure clouds.
To guarantee this availability, the VM manager should include different mechanisms for detecting VM crashes and automatically restarting the VM in case of failure.
Virtual Machine Manager – cont’d
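The life-cycle actions and the crash-restart behaviour described above can be sketched as a small state machine. This is a schematic model (the state names and transitions are my own simplification, not any specific VI manager's API):

```python
# Schematic VM life cycle: deploy, suspend/resume, shut down, and the
# availability mechanism of restarting a VM detected as crashed.
VALID = {
    "pending":   {"deploy"},
    "running":   {"suspend", "shutdown", "migrate", "crash"},
    "suspended": {"resume", "shutdown"},
    "crashed":   {"restart"},
    "stopped":   set(),
}
NEXT = {"deploy": "running", "suspend": "suspended", "resume": "running",
        "shutdown": "stopped", "migrate": "running", "crash": "crashed",
        "restart": "running"}

class VM:
    def __init__(self):
        self.state = "pending"

    def act(self, action: str) -> str:
        if action not in VALID[self.state]:
            raise ValueError(f"cannot {action} from {self.state}")
        self.state = NEXT[action]
        return self.state

def monitor(vm: VM) -> str:
    """VM-manager availability check: restart a crashed VM automatically."""
    if vm.state == "crashed":
        vm.act("restart")
    return vm.state

vm = VM()
vm.act("deploy")          # pending -> running
vm.act("crash")           # simulate a failure the monitor will detect
print(monitor(vm))        # running (auto-restarted)
```

Rejecting invalid transitions (e.g. resuming a VM that was never suspended) is what lets the manager preserve availability guarantees without corrupting VM state.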
A thin layer of software that generally provides virtual partitioning capabilities, running directly on the hardware but underneath higher-level virtualization services. Sometimes referred to as a “bare metal” approach.
Hypervisor / VMM
• Partitioning Kernel
▪ “Partition” is isolation boundary
▪ Few virtualization functions; relies on virtualization
stack
• Very thin layer of software
▪ Microkernel
▪ Highly reliable
▪ Basis for smaller Trusted Computing Base (TCB)
• No device drivers
▪ Drivers run in a partition
• Well-defined interface
▪ Allow others to create support for their OSes as guests
Hypervisor
Types of Virtualization
Two kinds of virtualization approaches are available, according to where the VMM runs:
• Hosted: the VMM runs within a host operating system
• Bare-metal: the VMM runs directly on top of the hardware
– fairly complex to implement, but good in performance
Virtual infrastructure (VI) management: the management of virtual machines distributed across a pool of physical resources.
Virtual machines require a fair amount of configuration, including preparation of the machine’s software environment and network configuration.
This configuration must be done on the fly, with as little time as possible between the moment the VMs are requested and the moment they are available to the user (e.g., for an application requiring a web server and a database server).
The virtual infrastructure manager must be capable of allocating resources efficiently, taking into account an organization’s goals (such as minimizing power consumption and other operational costs) and reacting to changes in the physical infrastructure.
VI management should provide both best-effort provisioning and advance reservations to guarantee quality of service (QoS) for applications that require resources at specific times.
Resource Management using Virtualization Technologies
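An advance reservation, as described above, amounts to checking that enough capacity is free over the entire requested time window before committing. A minimal sketch of that check (my own simplification, not a real VI manager's scheduler):

```python
# Minimal advance-reservation check: a host with fixed capacity accepts a
# reservation only if, at every instant of the requested window, already
# accepted reservations leave enough capacity free.
class Host:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.reservations = []        # list of (start, end, cpus)

    def _used(self, t: int) -> int:
        return sum(c for s, e, c in self.reservations if s <= t < e)

    def reserve(self, start: int, end: int, cpus: int) -> bool:
        # Usage is piecewise constant and only rises at reservation start
        # points, so checking those instants inside the window suffices.
        points = {start} | {s for s, _, _ in self.reservations
                            if start <= s < end}
        if all(self._used(t) + cpus <= self.capacity for t in points):
            self.reservations.append((start, end, cpus))
            return True
        return False

host = Host(capacity=8)
print(host.reserve(0, 10, 6))    # True: 6 of 8 CPUs for t in [0, 10)
print(host.reserve(5, 15, 4))    # False: would need 10 CPUs during [5, 10)
print(host.reserve(10, 15, 4))   # True: the first reservation has ended
```

Best-effort provisioning would, by contrast, simply queue or reject the second request at submission time; the reservation check is what makes "resources at exactly that time" enforceable.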
• An important cloud service model is “Infrastructure as a Service” (IaaS), which enables computing, storage and network resources to be delivered as a service.
• Here users are able to load any operating system and other software they need, and execute most of their existing enterprise services without many changes.
• However, the burden of maintaining the installed operating system and any middleware continues to fall on the user/customer. Ensuring the availability of the application is also the user’s job, since IaaS vendors only provide virtual hardware resources.
• Storage as a Service (StaaS) takes a detailed look at key Amazon storage services.
• Amazon Web Services (AWS), from Amazon.com, has a suite of cloud service products: S3, EC2, CloudWatch.
Introduction to IaaS
▪ Storage as a Service is one of the two major services offered by IaaS.
▪ It includes
▪ a simple storage service consisting of highly reliable and available storage. Example: Amazon S3
▪ simple and relational database services. Example: Amazon SimpleDB and RDS (Relational Database Service), which is a MySQL instance over a cloud.
Storage as a Service (StaaS)
▪ Data storage requirements are ever increasing in the enterprise/industry, for both:
▪ structured data, like relational databases, which are vital for e-commerce businesses;
▪ unstructured data in various documents (plans, strategy, etc.), which requires huge storage even in a small company.
▪ Enterprises may also have to store objects for their customers, e.g. an online photo album.
▪ The data also needs protection: both security and availability are to be provided on demand, in spite of various hardware, network and software failures.
Data Storage Needs
▪ This is the second of the two major services provided by IaaS.
▪ It makes extensive use of virtualization techniques to provide the computing resources requested by the user.
▪ Typically one or more virtual computers (networked together) are provided to the user.
▪ These can be increased or decreased as per the need from time to time, so a sudden increase in traffic can be taken care of.
Compute as a service
There are many CaaS providers; a few diverse instances of Compute as a Service are:
1. Amazon’s Elastic Compute Cloud, EC2 (this session)
2. HP CloudSystem Matrix
3. HP Labs research prototype: Cells as a Service
Compute as a service
IaaS model (diagram): CaaS + StaaS + virtual network
• Compute as a Service, where computing is made possible by providing virtual resources, needs storage to keep the results of computing and make them persistent/available.
• It also needs a virtual network for communication with the compute instances.
• Thus together CaaS, StaaS and the virtual network make up Infrastructure as a Service (IaaS).
IaaS model
• This is highly reliable, scalable, available and
fast storage in the cloud for storing and
retrieving data using simple web services.
• There are three ways of accessing S3:
• the AWS (Amazon Web Services) console
• RESTful APIs, with HTTP operations like GET, PUT, DELETE and HEAD
• libraries and SDKs that abstract these operations
• There are several S3 browsers available to access the storage and use it as though it were a local directory/folder.
Amazon Simple Storage Service S3
Let’s consider that a user wants to back up (upload) some data for later need.
1. Sign up for S3 at http://aws.amazon.com/s3/ to get AWS access and secret keys, similar to a user-id and password. (Note: these keys are for the complete Amazon solution, not just S3.)
2. Use these credentials to sign in to the AWS Management Console: http://console.aws.amazon.com/s3/home
3. Create a bucket, giving it a name and a geographical location. (Buckets can store objects/files.)
Using Amazon S3
4. Press the upload button and follow the instructions to upload the file/object.
5. Now the file is backed up and is available for use/sharing.
This could also be achieved programmatically, if necessary, by including these steps at the appropriate place(s) in the code.
Using Amazon S3
Getting Started with Amazon Simple Storage Service (S3)
▪ Files are objects in S3. Objects are referred to with keys: an optional directory-like path name followed by the object name. Objects are replicated across geographical locations in multiple places to protect against failures, but consistency is not guaranteed unless versioning is enabled.
▪ Objects can be up to 5 terabytes in size and a bucket can have an unlimited number of objects.
▪ Objects have to be stored in buckets, which have unique names and a location (region) associated with them. There can be 100 buckets per account.
Buckets, objects and keys
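The flat key namespace above can be illustrated directly: a bucket is just a mapping from keys to objects, and "folders" are merely key prefixes that clients filter on. A sketch with hypothetical helper names (this is a model, not the S3 API):

```python
# A bucket modeled as a flat dict: keys like "proj1/file1" are plain
# labels; the apparent directory tree is reconstructed purely by
# prefix-filtering on the client side, as S3 browsers do.
bucket = {
    "proj1/file1": b"alpha",
    "proj1/file2": b"beta",
    "proj2/file1": b"gamma",
    "readme.txt":  b"hello",
}

def list_keys(bucket: dict, prefix: str = "") -> list:
    """Emulate a prefix listing: no directories exist, only key labels."""
    return sorted(k for k in bucket if k.startswith(prefix))

print(list_keys(bucket, "proj1/"))   # ['proj1/file1', 'proj1/file2']
print(list_keys(bucket))             # all four keys
```

Deleting "proj1/file1" and "proj1/file2" would make the "proj1/" folder vanish entirely, because the folder never existed as an object in the first place.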
▪ Each object can be accessed by its key via the corresponding URL path:
http://<bucketname>.s3.amazonaws.com/<key>
or
http://s3.amazonaws.com/<bucketname>/<key>
• Note that a key can be “proj1/file1”, but that is just a label, not a directory structure or hierarchy. There is no hierarchy in S3.
• Also, anyone can access the object if it is public.
Accessing objects in S3
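The two URL forms above differ only in where the bucket name goes. The sketch below builds both for the same object (string construction only; no request is made, and the example bucket name is invented):

```python
# Build the two S3 object URL styles shown above for the same object:
# virtual-hosted style (bucket in hostname) and path style (bucket in path).
def virtual_hosted_url(bucket: str, key: str) -> str:
    return f"http://{bucket}.s3.amazonaws.com/{key}"

def path_style_url(bucket: str, key: str) -> str:
    return f"http://s3.amazonaws.com/{bucket}/{key}"

# "proj1/file1" is a single flat key; the '/' is part of the label.
print(virtual_hosted_url("mybucket", "proj1/file1"))
# http://mybucket.s3.amazonaws.com/proj1/file1
print(path_style_url("mybucket", "proj1/file1"))
# http://s3.amazonaws.com/mybucket/proj1/file1
```

The virtual-hosted form is why bucket names must be globally unique and DNS-compatible: the bucket name becomes part of a hostname.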
▪ Users can set permissions for others by right-clicking the object in the AWS console and granting anonymous read permissions, for example for a static web site.
▪ Alternately, they can select the object, go to the object menu and click on the “Make Public” option.
▪ They can give permission to specific users to read/modify the object by clicking on the “Properties” option and then mentioning the email ids of those who are allowed to access/read/write.
Accessing private objects in S3
• A user can allow others to add objects to, or pick up objects from, their buckets. This is especially useful when clients want some document to get modified.
• Clients can put the doc/object in a bucket for modification and, after it is modified, collect it back from the same or another bucket. If the object/doc is put back in the same bucket, the key is changed to differentiate the modified doc/object from the earlier one.
S3 access security contd
▪ There is yet another way to ensure the security of S3 objects: the user can turn logging on for a bucket at the time of its creation, or do it from the AWS management console.
▪ This creates detailed access logs which allow one to see who accessed which objects, at what time, from which IP address, and what operations were performed.
S3 access security contd
• One way of insuring against loss of data is to create replicas across multiple storage devices; this tolerates even two replica failures. This is the default mechanism.
• The user can request RRS (reduced redundancy storage) for non-critical data, under which only two replicas are created.
• S3 does not guarantee consistency of data across replicas. Versioning, when enabled, can take care of inadvertent data loss and also makes it possible to revert to a previous version.
Data protection
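Versioning, as described above, can be modeled as keeping every write rather than overwriting in place. A toy sketch (this is not the S3 API; the class and method names are invented for illustration):

```python
# Toy versioned object store: each PUT to a key appends a new version,
# so an inadvertent overwrite can be undone by reading an older one.
class VersionedBucket:
    def __init__(self):
        self._versions = {}          # key -> list of values, oldest first

    def put(self, key: str, value: bytes) -> None:
        self._versions.setdefault(key, []).append(value)

    def get(self, key: str, version: int = -1) -> bytes:
        # Default (-1) returns the latest version, like a normal GET.
        return self._versions[key][version]

b = VersionedBucket()
b.put("doc", b"v1")
b.put("doc", b"v2 (accidental overwrite)")
print(b.get("doc"))        # latest: the accidental overwrite
print(b.get("doc", 0))     # the earlier version is still recoverable
```

The cost of this safety is that every version consumes storage, which is why versioning is opt-in rather than the default.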
• S3 objects can be up to 5 terabytes, which is more than the size of an uncompressed 1080p HD movie.
• In case the need for still larger storage arises, the user will have to split the data into smaller chunks, store them separately and re-compose them at the application level.
• Uploading large objects takes time in spite of large bandwidth; moreover, if a failure occurs, the whole process has to be repeated.
Large objects
• To get over this difficulty, multi-part upload is used. This elegant solution not only splits the object into multiple parts (up to 10,000 parts per object in S3) to upload independently, but also uses the network bandwidth optimally by parallelizing the uploads. A very efficient solution.
• Since the uploads of the parts are independent, a failure in any one part can be rectified by repeating only that part’s upload, thereby a tremendous saving of time!
Uploading large objects
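Splitting an object under the 10,000-part limit mentioned above is simple arithmetic. The sketch below picks a part size for a given object size; the 5 MB minimum part size used as the default is an additional S3 rule not stated in the text:

```python
import math

MAX_PARTS = 10_000       # S3's per-object part limit, as noted above
MIN_PART = 5 * 1024**2   # S3's minimum part size (5 MB, except the last part)

def plan_multipart(object_size: int, part_size: int = MIN_PART):
    """Return (part_size, n_parts): grow part_size when the requested
    size would need more than MAX_PARTS parts."""
    part_size = max(part_size, math.ceil(object_size / MAX_PARTS))
    return part_size, math.ceil(object_size / part_size)

# A 5 TB object (the maximum) cannot use 5 MB parts: that would need
# over a million parts, so the part size is grown to fit within 10,000.
size = 5 * 1024**4
part, n = plan_multipart(size)
print(part, n)   # parts of roughly 550 MB each, 10,000 parts in total
```

Each part can then be uploaded (and, on failure, retried) independently, which is exactly the saving the slide describes.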
Amazon Elastic Compute Cloud (EC2) is a service which allows an enterprise/user to have virtual servers with virtual storage and virtual networking to satisfy diverse needs:
• The needs of the enterprise vary between high storage and/or high-end computing at different times for different applications.
• Networking/clustering needs, as well as environment needs, also vary depending on the work context.
Compute as a service – EC2
Just as with S3, EC2 can also be accessed via the Amazon Web Services (AWS) console:
• The EC2 console dashboard can be used to create an instance (a compute resource), check its status and also terminate the instance.
• Clicking on Launch Instance will take you to a list of supported OS images, the Amazon Machine Images (AMIs), from which one can choose.
• Once you choose an OS, a wizard pops up to take your choice of version, whether you want it monitored, etc.
Amazon EC2
• Next, a user has to create a key pair to securely connect to the instance once it’s operative.
• Create a key pair and save it to a file in a safe place. The user can reuse the same key pair for multiple instances.
• Now security groups for the instance need to be set so that certain ports can be kept open or blocked depending on the context.
• When the instance is launched you get the DNS name of the server, which can be used for remote login as if it were on the same network.
Amazon EC2 contd
• Use the key pair to log in via the AWS console; get the Windows admin password from the AWS instance screen to remotely connect to the instance/compute resource.
• For a Linux machine, from the directory where the key file is saved, give the following command:
ssh -i <keyfile> ec2-67-202-62-112.compute-1.amazonaws.com
Follow a few confirmation screens and one is logged into the compute resource remotely.
Accessing EC2
• It’s possible to access EC2 and get compute resources using command-line utilities as well, for which you need to:
1. download the zip, unpack it, and set the environment variables;
2. set up the security environment by getting an X.509 certificate and private key, and copying them into the appropriate directory;
3. set the region for the virtual resources; the list of regions can be seen and a selection made. Pricing depends upon this selection.
Accessing EC2 contd
• EC2 computing resources are requested in terms of EC2 Compute Units (CU) for computing power, like we use bytes for memory.
• One EC2 CU is roughly a 1.0–1.2 GHz Xeon processor.
• There are some standard instance families with configurations suitable for certain needs, hence recommended by Amazon.
• Also available are high-memory instances, high-CPU instances, and cluster compute instances for high-performance or graphics processing.
EC2 computing resources request
• After getting the resources of the required CU, one needs to configure the OS by selecting from the available images, the Amazon Machine Images (AMIs).
• In case some other software is needed, it can be installed on top of the OS image and the result stored as another AMI; alternately, a VMware image can be imported as an AMI.
Configuration of EC2 instance
• EC2 also has regions (like S3) which need to be set (the list of regions is available to select from).
• There are multiple isolated virtual data centers, called availability zones, corresponding to each region, for protection against failures.
• One can place instances in two availability zones of the required region to ensure availability and tolerance against failure in any one zone.
Region and availability
▪ Load balancing and scalability
The Elastic Load Balancer is a service the EC2 cloud offers which distributes the load among the instances.
It can be further configured to route requests from the same client to the same server, using timer- and application-controlled sessions.
It can sense failover and spawn a new server if the load is high on the other servers.
The load balancer also scales the number of servers up or down based on the number of requests (hence the name Elastic).
Some more Configuration of EC2 instance
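The elastic behaviour described above, distributing requests while scaling the server pool with load, can be sketched as a toy model. The thresholds and scaling rule here are purely illustrative, not how the AWS Elastic Load Balancer is actually configured:

```python
# Toy elastic load balancer: round-robin request distribution plus a
# scaling rule that grows/shrinks the server pool with the request rate.
class ElasticLB:
    def __init__(self, per_server=100, min_servers=1, max_servers=10):
        self.per_server = per_server      # target requests per server
        self.min_servers = min_servers
        self.max_servers = max_servers
        self.servers = min_servers
        self._next = 0

    def scale(self, requests: int) -> int:
        """Resize the pool so each server sees ~per_server requests."""
        want = -(-requests // self.per_server)          # ceiling division
        self.servers = max(self.min_servers,
                           min(self.max_servers, want))
        return self.servers

    def route(self) -> int:
        """Round-robin: index of the server handling the next request."""
        s = self._next % self.servers
        self._next += 1
        return s

lb = ElasticLB()
print(lb.scale(450))                   # 5 servers for 450 req (100 each)
print([lb.route() for _ in range(6)])  # [0, 1, 2, 3, 4, 0]
print(lb.scale(80))                    # back down to 1 server
```

A real load balancer would also track per-server health (the failover sensing mentioned above) and use session affinity rather than pure round-robin when sticky sessions are configured.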
There are two types of block storage available for EC2 that appear as disk storage:
• Elastic Block Storage (EBS) exists independent of any instance. Its size can be configured and it can be attached to an EC2 instance. The data is persistent.
• Instance storage is configured for EC2 and can be attached to one and only one instance. It’s not persistent; it ceases to exist when the instance is terminated. So if you need persistence, use EBS or back the data up to S3.
EC2 storage resources
• Networking between EC2 instances, and also with the outside world via gateways/firewalls, has to happen.
• EC2 instances therefore need both public and private addresses.
• Private addresses are used for communication within EC2, like an intranet, for any communication between EC2 instances, since these addresses can be resolved quickly using NAT (Network Address Translation).
EC2 Networking resources
• Public addresses can be resolved with a DNS
server and are used for communication with
addresses outside the cloud, routed via a
gateway.
• Similarly, inward communication from outside the
cloud can be received on the public address, of
course after passing the firewall and then the
gateway, which routes it appropriately.
EC2 N/W resources contd
72
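The private/public address split rests on a translation table at the gateway. A minimal sketch of the NAT idea (all addresses are documentation examples, not real EC2 addresses):

```python
# Toy NAT: the gateway maps each instance's private address to its
# public one, rewriting addresses as traffic crosses the cloud boundary.

nat_table = {
    "10.0.0.5": "203.0.113.10",   # private -> public
    "10.0.0.6": "203.0.113.11",
}

def outbound(private_ip):
    """Gateway rewrites the source address on the way out."""
    return nat_table[private_ip]

def inbound(public_ip):
    """Gateway routes incoming traffic back to the private address."""
    reverse = {pub: priv for priv, pub in nat_table.items()}
    return reverse[public_ip]

assert outbound("10.0.0.5") == "203.0.113.10"
assert inbound("203.0.113.10") == "10.0.0.5"
```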
• Elastic IP addresses – These are network
addresses available to an account (up to 5 per
account) which can be dynamically associated
with any instance, so that the address becomes that
instance’s public address and the earlier assigned
public address gets de-assigned.
• These are especially useful when one EC2
instance fails. Its elastic IP address can be
reassigned to another EC2 instance dynamically,
so that requests get routed to the other
instance immediately.
Elastic IP address
73
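The failover pattern above is simply a movable pointer. A sketch with invented instance names:

```python
# Toy model of elastic-IP failover: the address is a movable mapping;
# on failure it is re-associated with a healthy instance, so clients
# keep using the same public address throughout.

elastic_ip = "198.51.100.7"
association = {elastic_ip: "i-primary"}

def reassociate(ip, new_instance):
    """Point the elastic IP at another instance; return the old one."""
    old = association[ip]
    association[ip] = new_instance   # previous association is dropped
    return old

reassociate(elastic_ip, "i-standby")   # the primary instance failed
assert association[elastic_ip] == "i-standby"
```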
• Amazon Glacier is an online file storage web service that provides storage
for data archiving and backup.
• Glacier is part of the Amazon Web Services suite of cloud computing services,
and is designed for long-term storage of data that is infrequently accessed and
for which retrieval latency times of 3 to 5 hours are acceptable.
• Storage costs are a consistent $0.01 per gigabyte per month, which is
substantially cheaper than Amazon's own Simple Storage Service (S3).
• In Amazon Glacier, a vault is a container for storing archives. When you
create a vault, you specify a vault name and the region in which you want to
create it.
• An archive is any object, such as a photo, video, or document, that you store
in a vault; it is the base unit of storage in Amazon Glacier. Each archive has
a unique ID and an optional description. When you upload an archive, Amazon
Glacier returns a response that includes an archive ID. This archive ID is
unique within the region in which the archive is stored.
• An AWS account has full permission to perform all actions on the vaults in the
account. However, the AWS Identity and Access Management (IAM) users don't
have any permissions by default.
Amazon Glacier
74
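The vault/archive vocabulary can be captured in a toy model. Here a `uuid` stands in for the opaque archive ID that Glacier actually generates; the class and names are illustrative, not the Glacier API:

```python
# Toy model of Glacier's data model: a vault is a named, region-scoped
# container; uploading an archive returns a unique archive ID.
import uuid

class Vault:
    def __init__(self, name, region):
        self.name, self.region = name, region
        self.archives = {}

    def upload(self, payload, description=""):
        archive_id = uuid.uuid4().hex   # stand-in for the real opaque ID
        self.archives[archive_id] = (payload, description)
        return archive_id

vault = Vault("photos", "us-east-1")
aid = vault.upload(b"...jpeg bytes...", "holiday photo")
assert aid in vault.archives
```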
• Amazon Elastic Block Store (Amazon EBS) provides block level storage
volumes for use with EC2 instances.
• EBS volumes are highly available and reliable storage volumes that can
be attached to any running instance that is in the same Availability
Zone.
• EBS volumes that are attached to an EC2 instance are exposed as
storage volumes that persist independently from the life of the instance.
• With Amazon EBS, you pay only for what you use.
• Amazon EBS is recommended when data changes frequently and
requires long-term persistence.
• EBS volumes are particularly well-suited for use as the primary storage
for file systems, databases, or for any applications that require fine
granular updates and access to raw, unformatted, block-level storage.
• Amazon EBS is particularly helpful for database-style applications that
frequently encounter many random reads and writes across the data
set.
Amazon Elastic Block Store -
(Amazon EBS)
75
• For simplified data encryption, you can launch your EBS volumes as
encrypted volumes.
• Amazon EBS encryption offers you a simple encryption solution for your
EBS volumes without the need for you to build, manage, and secure your
own key management infrastructure.
• When you create an encrypted EBS volume and attach it to a supported
instance type, data stored at rest on the volume, disk I/O, and snapshots
created from the volume are all encrypted.
• The encryption occurs on the servers that host EC2 instances, providing
encryption of data in transit from EC2 instances to EBS storage.
• You can back up the data on your EBS volumes to Amazon S3 by taking
point-in-time snapshots. Snapshots are incremental backups, which means
that only the blocks on the device that have changed after your most recent
snapshot are saved. When you delete a snapshot, only the data exclusive to
that snapshot is removed.
• Active snapshots contain all of the information needed to restore your data
(from the time the snapshot was taken) to a new EBS volume.
• If you are dealing with snapshots of sensitive data, you should consider
encrypting your data manually before taking the snapshot or storing the data
on a volume that is enabled with Amazon EBS encryption.
Amazon Elastic Block Store -
(Amazon EBS) – cont’d
76
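The "incremental" property of snapshots described above can be made concrete with a small sketch: each snapshot stores only blocks that changed since the last one, yet replaying the chain restores the full volume. Block names and values are invented.

```python
# Toy model of incremental EBS snapshots.

def take_snapshot(volume, previous_blocks):
    """Store only the blocks that differ from the last snapshot state."""
    return {k: v for k, v in volume.items() if previous_blocks.get(k) != v}

def restore(snapshots):
    """Replay the chain of incremental snapshots into a full volume."""
    volume = {}
    for snap in snapshots:
        volume.update(snap)
    return volume

v = {"b0": "A", "b1": "B"}
s1 = take_snapshot(v, {})           # first snapshot: all blocks
v["b1"] = "B2"                      # one block changes
s2 = take_snapshot(v, restore([s1]))
assert s2 == {"b1": "B2"}           # only the changed block is saved
assert restore([s1, s2]) == v       # full data is still recoverable
```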
AWS Import/Export
• AWS Import/Export supports data upload and download from Amazon
S3 buckets, and data upload to Amazon Elastic Block Store (Amazon
EBS) snapshots and Amazon Glacier vaults. The AWS Import/Export
Getting Started steps assume you already use Amazon S3, Amazon
EBS, or Amazon Glacier.
• To upload or download data from Amazon S3, you need to have an
existing Amazon S3 account and be familiar with Amazon S3.
• To upload data to an Amazon EBS snapshot for Amazon EC2, you need
to have an existing Amazon EC2 instance to associate with the data,
and an Amazon S3 bucket to store AWS Import/Export log files.
• To upload data to an Amazon Glacier vault, you need to have an
Amazon Glacier vault to associate with the data, and an Amazon S3
bucket to store AWS Import/Export log files.
• CloudFront is a web service that speeds up distribution of your static and
dynamic web content, for example, .html, .css, .php, and image files, to end
users.
• CloudFront delivers your content through a worldwide network of data centers
called edge locations.
• When a user requests content that you're serving with CloudFront, the user is
routed to the edge location that provides the lowest latency (time delay), so
content is delivered with the best possible performance.
• If the content is already in the edge location with the lowest latency, CloudFront
delivers it immediately.
• If the content is not currently in that edge location, CloudFront retrieves it from an
Amazon S3 bucket or an HTTP server (for example, a web server) that you have
identified as the source for the definitive version of your content.
Amazon CloudFront
78
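The edge-selection and cache-miss behaviour above can be sketched as follows. Edge names, latencies, and the origin contents are all made up for illustration:

```python
# Conceptual sketch of CloudFront routing: serve from the lowest-latency
# edge location; on a cache miss, fetch once from the origin (an S3
# bucket or web server) and cache the object at that edge.

edges = {
    "edge-dallas":  {"latency_ms": 8,  "cache": {}},
    "edge-seattle": {"latency_ms": 45, "cache": {}},
}
origin = {"/globe_west_540.png": b"<png bytes>"}

def serve(path):
    edge = min(edges.values(), key=lambda e: e["latency_ms"])
    if path not in edge["cache"]:        # miss: go to the origin once
        edge["cache"][path] = origin[path]
    return edge["cache"][path]           # subsequent requests are hits

assert serve("/globe_west_540.png") == b"<png bytes>"
# The object is now cached at the closest (lowest-latency) edge.
assert "/globe_west_540.png" in edges["edge-dallas"]["cache"]
```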
This concept is best illustrated by an example.
Suppose you're serving the following image from a
traditional web server, not from CloudFront:
You're serving the image using the
URL http://example.com/globe_west_540.png.
Your users can easily navigate to this URL and see the image, but they
probably don't know that their request was routed from one network to
another—through the complex collection of interconnected networks that
comprise the Internet—until the image was found.
Amazon CloudFront -
Example 1
79
• Further suppose that the web server from which you're
serving the image is in Seattle, Washington, USA, and
that a user in Austin, Texas, USA requests the image.
The traceroute list below shows one way that this
request could be routed.
Amazon CloudFront -
Example 2
80
• In this example, the request was routed 10 times within the United States before the
image was retrieved,
• which is not an unusually large number of hops. If your user were in Europe, the
request would be routed through even more networks to reach your server in Seattle.
• The number of networks and the distance that the request and the image must travel
have a significant impact on the performance, reliability, and availability of the image.
• CloudFront speeds up the distribution of your content by routing each user request to
the edge location that can best serve your content.
• Typically, this is the CloudFront edge location that provides the lowest latency. This
dramatically reduces the number of networks that your users' requests must pass
through, which improves performance.
• Users get lower latency—the time it takes to load the first byte of the object—and
higher data transfer rates.
• You also get increased reliability and availability because copies of your objects are
now held in multiple edge locations around the world.
Amazon CloudFront -
Example 2 – cont’d
81
• Amazon Web Services provides fully managed relational
and NoSQL database services, as well as fully managed
in-memory caching as a service and a fully managed
petabyte-scale data-warehouse service. Or, you can
operate your own database in the cloud on Amazon EC2
and Amazon EBS.
AWS Database Services
82
• This is a simple NoSQL data store interface of
key-value pairs, which allows storage and
retrieval of attributes based on a key: a simple
alternative to a relational database.
• SDB is organized into domains. Each item in a
domain must have a unique key, provided at the
time of creation. An item can have up to 256
attributes in the form of name-value pairs, similar
to a row with a primary key in an RDBMS. But in SDB
an attribute can be multi-valued, with all values
stored together against the same attribute name.
Amazon SimpleDB (SDB)
83
• SDB has many features which increase its
reliability and availability
• Automatic resource addition proportional to
the request rate
• Automatic indexing of the attributes for quick
retrieval
• Automatic replication across different
locations (availability)
• Fields can be added to the dataset at any time
since SDB is schema-less; that makes it
scalable
SDB admin
84
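The SDB data shape described on the previous slides — unique item keys, multi-valued attributes, and schema-less fields — can be modelled in a few lines. This is only a toy, not the SimpleDB API:

```python
# Toy model of a SimpleDB domain: items keyed by a unique ID, where
# each attribute name can hold multiple values and new attribute names
# can appear at any time (schema-less).
from collections import defaultdict

domain = {}

def put_attributes(item_key, **attrs):
    item = domain.setdefault(item_key, defaultdict(list))
    for name, value in attrs.items():
        item[name].append(value)       # attributes are multi-valued

put_attributes("song-1", title="Imagine", tag="rock")
put_attributes("song-1", tag="classic")     # second value, same name
put_attributes("song-2", artist="Queen")    # new field, no schema change

assert domain["song-1"]["tag"] == ["rock", "classic"]
assert "artist" in domain["song-2"]
```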
• RDB is a traditional DB abstraction in the cloud:
a MySQL instance
• An RDB instance can be created using the tab in the
AWS Management Console
• The AWS console allows the user to manage the RDB
• How often backups should happen, how long
backup data should be retained, etc. can be
configured
• Snapshots of the DB can be taken from time to time
• Using Amazon APIs, a user can build a custom tool
to manage the data if needed
Amazon Relational DB (RDB)
85
• Amazon Relational Database Service (Amazon RDS) is a web
service that makes it easier to set up, operate, and scale a
relational database in the cloud. It provides cost-efficient,
resizeable capacity for an industry-standard relational
database and manages common database administration
tasks.
Why would you want a managed relational database service?
• Because Amazon RDS takes over many of the difficult or
tedious management tasks of a relational database.
Amazon Relational Database Service -
(Amazon RDS)
86
• Amazon RDS manages backups, software patching, automatic
failure detection, and recovery.
• You can use the database products you are already familiar
with: MySQL, PostgreSQL, Oracle, Microsoft SQL Server, and
the new, MySQL-compatible Amazon Aurora DB engine.
• In addition to the security in your database package, you can
help control who can access your RDS databases by using
AWS IAM to define users and permissions. You can also help
protect your databases by putting them in a virtual private
cloud.
Amazon RDS - Cont’d
87
DB Instances
Regions and Availability Zones
Security Groups
DB Parameter Groups
DB Option Groups
Amazon RDS Components
88
DB Instances
• The basic building block of Amazon RDS is the DB instance.
• A DB instance is an isolated database environment in the cloud.
• A DB instance can contain multiple user-created databases, and you
can access it by using the same tools and applications that you use
with a stand-alone database instance.
• You can create and modify a DB instance by using the Amazon RDS
command line interface, the Amazon RDS API, or the AWS
Management Console.
• Each DB instance runs a DB engine. Amazon RDS currently supports
the MySQL, PostgreSQL, Oracle, and Microsoft SQL Server DB
engines.
Amazon RDS Components
- Cont’d
89
• For each DB instance, you can select from 5 GB to 3 TB of
associated storage capacity.
• DB instance storage comes in three types: Magnetic, General
Purpose (SSD), and Provisioned IOPS (SSD). They differ in
performance characteristics and price, allowing you to tailor your
storage performance and cost to the needs of your database.
• You can run a DB instance on a virtual private cloud using Amazon's
Virtual Private Cloud (VPC) service. When you use a virtual private
cloud, you have control over your virtual networking environment:
you can select your own IP address range, create subnets, and
configure routing and access control lists.
• The basic functionality of Amazon RDS is the same whether it is
running in a VPC or not.
Amazon RDS Components
- Cont’d
90
Regions and Availability Zones
• Amazon cloud computing resources are housed in highly available data center facilities in
different areas of the world (for example, North America, Europe, or Asia). Each data center
location is called a region.
• Each region contains multiple distinct locations called Availability Zones, or AZs. Each Availability
Zone is engineered to be isolated from failures in other Availability Zones, and to provide
inexpensive, low-latency network connectivity to other Availability Zones in the same region.
• By launching instances in separate Availability Zones, you can protect your applications from the
failure of a single location.
• You can run your DB instance in several Availability Zones, an option called a Multi-AZ
deployment. When you select this option, Amazon automatically provisions and maintains a
synchronous standby replica of your DB instance in a different Availability Zone.
Amazon RDS Components
- Cont’d
91
Security Groups
• A security group controls the access to a DB instance. It does so by
allowing access to IP address ranges or Amazon EC2 instances that
you specify.
• Amazon RDS uses DB security groups, VPC security groups, and
EC2 security groups.
DB Parameter Groups
• You manage the configuration of a DB engine by using a DB
parameter group.
• A DB parameter group contains engine configuration values that can
be applied to one or more DB instances of the same instance type.
• Amazon RDS applies a default DB parameter group if you don’t
specify a DB parameter group when you create a DB instance.
• The default group contains defaults for the specific database engine
and instance class of the DB instance.
Amazon RDS Components
- Cont’d
92
DB Option Groups
• Some DB engines offer tools that simplify managing your
databases and making the best use of your data.
• Amazon RDS makes such tools available through option
groups.
• Currently, option groups are available for Oracle,
Microsoft SQL Server, and MySQL 5.6 DB instances.
Amazon RDS Components
- Cont’d
93
There are several ways that you can interact with Amazon RDS. Available RDS
Interfaces:
Amazon RDS Console
Command Line Interface
Programmatic Interfaces
Amazon RDS Console: The Amazon RDS console is a simple web-based user
interface. From the console, you can perform almost all the tasks you need
with no programming required. To access the Amazon RDS console, sign in to
the AWS Management Console and open the Amazon RDS console at
https://console.aws.amazon.com/rds/.
Command Line Interface: Amazon RDS provides a Java-based command line
interface that gives you access to much of the functionality that is available in
the Amazon RDS API. For more information, see the Amazon RDS Command
Line Toolkit.
Programmatic Interfaces: There are a number of resources that you can use to
access Amazon RDS programmatically.
Amazon RDS - Interaction
94
How You Are Charged for Amazon RDS
When you use Amazon RDS, you pay only for
what you use, and there are no minimum or
setup fees. You are billed according to the
following criteria.
– Instance class
– Running time
– Storage
– I/O requests per month
– Backup storage
– Data transfer
Amazon RDS - charges
95
• DynamoDB is a fully managed NoSQL database service that provides fast
and predictable performance with seamless scalability.
• If you are a developer, you can use DynamoDB to create a database table
that can store and retrieve any amount of data, and serve any level of
request traffic.
• DynamoDB automatically spreads the data and traffic for the table over a
sufficient number of servers to handle the request capacity specified by the
customer and the amount of data stored, while maintaining consistent and
fast performance.
• All data items are stored on solid state disks (SSDs) and are automatically
replicated across multiple Availability Zones in a Region to provide built-in
high availability and data durability.
• If you are a database administrator, you can create a new DynamoDB
database table, scale up or down your request capacity for the table without
downtime or performance degradation, and gain visibility into resource
utilization and performance metrics, all through the AWS Management
Console.
• With DynamoDB, you can offload the administrative burdens of operating and
scaling distributed databases to AWS, so you don't have to worry about
hardware provisioning, setup and configuration, replication, software
patching, or cluster scaling.
Amazon DynamoDB
96
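DynamoDB's "spread the data and traffic over a sufficient number of servers" idea rests on partitioning items by a hash of their key. A purely illustrative sketch, not the DynamoDB API:

```python
# Toy model of hash partitioning: items are assigned to a partition by
# hashing their key, so data and request traffic spread across servers.

def partition_for(key, n_partitions):
    return hash(key) % n_partitions

class Table:
    def __init__(self, partitions=2):
        self.partitions = [dict() for _ in range(partitions)]

    def put(self, key, item):
        self.partitions[partition_for(key, len(self.partitions))][key] = item

    def get(self, key):
        return self.partitions[partition_for(key, len(self.partitions))][key]

t = Table(partitions=4)
t.put("user#1", {"name": "Ada"})
# The same hash routes the read back to the right partition.
assert t.get("user#1") == {"name": "Ada"}
```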
• AWS provides a variety of computing and networking services
to meet the needs of your applications.
• You can provision virtual servers, set up a firewall, configure
Internet access, allocate and route IP addresses, and scale
your infrastructure to meet increasing demand.
• You can use the compute and networking services with the
storage, database, and application services to provide a
complete solution for computing, query processing, and
storage across a wide range of applications.
Compute and Networking Services for AWS
97
The following are concepts that should be understood
before using the compute and networking services.
Instances and AMIs
VPCs and Subnets
Security Groups
Amazon Route 53 Hosted Zones
Auto Scaling Groups
Load Balancer
Compute and Networking Services for AWS –
Key Concepts
98
Instances and AMIs
• Amazon Elastic Compute Cloud (Amazon EC2) provides resizeable
computing capacity—literally, servers in Amazon's data centers—
that you use to build and host your software systems.
• An Amazon Machine Image (AMI) is a template that contains a
software configuration (for example, an operating system, an
application server, and applications).
• From an AMI, you launch an instance, which is a copy of the AMI
running as a virtual server on a host computer in Amazon's data
center.
• You can launch multiple instances from an AMI, as shown in the
following figure.
Compute and Networking Services for AWS -
Architecture
99
• When you launch an instance, you select an instance type, which determines the
hardware capabilities (such as memory, CPU, and storage) of the host computer for
the instance. You can access your instance using its assigned public DNS name or
public IP address.
• Your instances keep running until you stop or terminate them, or until they fail. If an
instance fails, you can launch a new one from the AMI.
• You start from an existing AMI that most closely meets your needs, log on to the
instance, and then customize the instance with additional software and settings. You
can save this customized configuration as a new AMI, which you can then use to
launch new instances whenever you need them.
Compute and Networking Services for AWS -
Architecture
- cont’d
100
VPCs and Subnets
• A virtual private cloud (VPC) is a virtual network dedicated to your AWS
account. It is logically isolated from other virtual networks in the AWS
cloud, providing security and robust networking functionality for your
compute resources.
• A VPC closely resembles a traditional network that you'd operate in your
own data center, with the benefits of using the scalable infrastructure of
AWS.
• A subnet is a segment of a VPC's IP address range that you can launch
instances into. Subnets enable you to group instances based on your
security and operational needs.
• To enable instances in a subnet to reach the Internet and AWS services,
you must add an Internet gateway to the VPC and a route table with a
route to the Internet to the subnet.
• It is recommended that you launch your EC2 instances into a VPC.
• Note that if you created your AWS account after 2013-12-04, you have a
default VPC and you must launch EC2 instances into a default or a
nondefault VPC.
101
Compute and Networking Services for AWS -
Architecture
- cont’d
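The rule that a subnet reaches the Internet only via an Internet gateway and a matching route can be sketched directly. VPC, gateway, and subnet names are invented:

```python
# Toy model: an instance in a subnet can reach the Internet only if the
# VPC has an Internet gateway AND the subnet's route table has a
# default route pointing at it.

vpc = {
    "internet_gateway": "igw-1",
    "route_tables": {
        "public-subnet":  {"0.0.0.0/0": "igw-1"},  # default route -> IGW
        "private-subnet": {},                       # no Internet route
    },
}

def can_reach_internet(subnet):
    routes = vpc["route_tables"][subnet]
    return routes.get("0.0.0.0/0") == vpc["internet_gateway"]

assert can_reach_internet("public-subnet")
assert not can_reach_internet("private-subnet")
```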
Security Groups
• A security group acts as a virtual firewall for your instance to control inbound and outbound traffic.
• You can specify one or more security groups when you launch your instance. When you create a
security group, you add rules that control the inbound traffic that's allowed, and a separate set of
rules that control the outbound traffic. All other traffic is discarded.
• You can modify the rules for a security group at any time and the new rules are automatically
enforced.
102
Compute and Networking Services for AWS -
Architecture
- cont’d
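Security-group semantics — allow rules only, with everything unmatched discarded — can be sketched as follows. Port numbers and CIDR checks are simplified to exact matches for illustration:

```python
# Toy model of security-group evaluation: traffic matching any allow
# rule passes; everything else is discarded (default deny).

inbound_rules = [
    {"protocol": "tcp", "port": 22, "source": "203.0.113.0/24"},
    {"protocol": "tcp", "port": 80, "source": "0.0.0.0/0"},
]

def allowed(protocol, port, source):
    for rule in inbound_rules:
        if (rule["protocol"] == protocol and rule["port"] == port
                and rule["source"] in ("0.0.0.0/0", source)):
            return True
    return False                       # no rule matched: discard

assert allowed("tcp", 80, "198.51.100.9")        # web traffic, anywhere
assert not allowed("tcp", 3306, "198.51.100.9")  # unmatched -> discarded
```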
Amazon Route 53 Hosted Zones
• Amazon Route 53 is a highly available and scalable cloud Domain Name
System (DNS) web service.
• It is designed as an extremely reliable and cost-effective way to route visitors
to websites by translating domain names (such as www.example.com) into
the numeric IP addresses (such as 192.0.2.1) that computers use to connect
to each other.
• AWS assigns URLs to your AWS resources, such as your EC2 instances.
• However, you might want a URL that is easy for your users to remember. For
example, you can map your domain name to your AWS resource. If you don't
have a domain name, you can search for available domains and register
them using Amazon Route 53.
• If you have an existing domain name, you can transfer it to Amazon
Route 53.
• Amazon Route 53 enables you to organize your DNS records using hosted
zones.
• When you create a hosted zone, you receive four name servers to help
ensure a high level of availability.
103
Compute and Networking Services for AWS -
Architecture
- cont’d
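A hosted zone is, at its simplest, a set of records plus the four name servers mentioned above. A toy sketch with invented name-server hostnames:

```python
# Toy model of a Route 53 hosted zone: records map domain names to
# values, and creating a zone hands back four name servers.

def create_hosted_zone(domain):
    return {
        "domain": domain,
        "name_servers": [f"ns-{i}.example-dns.net" for i in range(1, 5)],
        "records": {},
    }

zone = create_hosted_zone("example.com")
zone["records"]["www.example.com"] = "192.0.2.1"   # an A record

assert len(zone["name_servers"]) == 4
assert zone["records"]["www.example.com"] == "192.0.2.1"
```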
104
Compute and Networking Services for AWS -
Architecture
- cont’d
Auto Scaling Groups
• Auto Scaling supports groups of virtual servers: an Auto
Scaling group can grow or shrink on demand.
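The grow/shrink-on-demand behaviour can be sketched with a desired-capacity counter bounded by a minimum and maximum. The thresholds are illustrative, not Auto Scaling defaults:

```python
# Toy model of an Auto Scaling group: desired capacity tracks load
# between a min and max bound.

class AutoScalingGroup:
    def __init__(self, min_size=1, max_size=5):
        self.min_size, self.max_size = min_size, max_size
        self.desired = min_size

    def on_load(self, requests_per_instance):
        if requests_per_instance > 80 and self.desired < self.max_size:
            self.desired += 1          # grow on demand
        elif requests_per_instance < 20 and self.desired > self.min_size:
            self.desired -= 1          # shrink when idle
        return self.desired

asg = AutoScalingGroup()
assert asg.on_load(90) == 2    # load spike -> scale out
assert asg.on_load(10) == 1    # quiet period -> scale in
```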
105
Compute and Networking Services for AWS -
Architecture
- cont’d
Load Balancer
• A load balancer distributes traffic
to multiple instances.
• You can achieve even higher
levels of fault tolerance by using
your load balancer with instances
in multiple Availability Zones.
• As instances are launched and
terminated, the load balancer
automatically directs traffic to the
running instances.
• Elastic Load Balancing also
performs health checks on each
instance.
• If an instance is not responding,
the load balancer can
automatically redirect traffic to the
healthy instances.
106
Compute and Networking Services for AWS -
Architecture
- cont’d
Architecture
• The following diagram shows an example architecture for your compute
and networking services.
• There are EC2 instances in public and private subnets.
• Access to the instances in the public subnets over protocols like SSH or
RDP is controlled by one or more security groups.
• Security groups also control whether the instances can talk to each
other.
• The Auto Scaling group maintains a fleet of EC2 instances that can
scale to handle the current load. This Auto Scaling group spans multiple
Availability Zones to protect against the potential failure of a single
Availability Zone.
• The load balancer distributes traffic evenly among the EC2 instances in
the Auto Scaling group.
• When the Auto Scaling group launches or terminates instances based
on load, the load balancer automatically adjusts accordingly. Amazon
Route 53 provides secure and reliable routing of your domain name to
your infrastructure hosted on AWS.
107
Compute and Networking Services for AWS -
Architecture
- cont’d
Thanks!!!
Queries?
108
 
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning era
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food Manufacturing
 
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptxVulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
 
The transition to renewables in India.pdf
The transition to renewables in India.pdfThe transition to renewables in India.pdf
The transition to renewables in India.pdf
 
Azure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAzure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & Application
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxMaking_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
 

Cloud Computing genral for all concepts.pptx

  • 1. Introduction to Cloud Computing, services and deployment models • Introduction to Cloud Computing – Origins and Motivation • 3-4-5 rule of Cloud Computing • Types of Clouds and Services • Cloud Infrastructure and Deployment
  • 2. Motivation Powerful multi-core processors General purpose graphic processors Superior software methodologies Virtualization leveraging the powerful hardware Wider bandwidth for communication Large number of devices Explosion of domain applications 1. Web Scale Problems 2. Web 2.0 and Social Networking 3. Information Explosion 4. Mobile Web
  • 3. Evolution of Web Explosive growth in applications: biomedical informatics, space exploration, business analytics, web 2.0 social networking: YouTube, Facebook Extreme scale content generation: e-science and e-business data flood Extraordinary rate of digital content consumption: digital gluttony: Apple iPhone, iPad, Amazon Kindle, Android, Windows Phone Exponential growth in compute capabilities: multi-core, storage, bandwidth, virtual machines (virtualization) Very short obsolescence cycles in technologies: Windows 8, Ubuntu, Mac; Java versions; C → C#; Python Newer architectures: web services, persistence models, distributed file systems/repositories (Google, Hadoop), multi-core, wireless and mobile Diverse knowledge and skill levels of the workforce
  • 4. Technology Advances 64-bit processor Multi-core architectures Virtualization: bare metal, hypervisor, … VM0 VM1 VMn Web-services, SOA, WS standards Services interface Cloud applications: data-intensive, compute-intensive, storage-intensive Storage Models: S3, BigTable, BlobStore, ... Bandwidth WS
  • 5. What is Cloud Computing? Cloud Computing is a general term used to describe a new class of network-based computing that takes place over the Internet:  basically a step on from Utility Computing;  a collection/group of integrated and networked hardware, software and Internet infrastructure (called a platform);  using the Internet for communication and transport, it provides hardware, software and networking services to clients. These platforms hide the complexity and details of the underlying infrastructure from users and applications by providing a very simple graphical interface or API (Application Programming Interface).
  • 6. What is Cloud Computing cont.… In addition, the platform provides on-demand services that are always on, anywhere, anytime and any place. Pay per use and as needed; elastic:  scale up and down in capacity and functionality. The hardware and software services are available to  the general public, enterprises, corporations and business markets
  • 7. Drivers for the new Platform http://blogs.technet.com/b/yungchou/archive/2011/03/03/chou-s-theories-of-cloud-computing-the-5-3-2-principle.aspx
  • 8. Cloud Summary • Shared pool of configurable computing resources • On-demand network access • Provisioned by the Service Provider
  • 9. Cloud Summary… Cloud computing is an umbrella term used to refer to Internet-based development and services A number of characteristics define cloud data, applications, services and infrastructure: Remotely hosted: Services or data are hosted on remote infrastructure. Ubiquitous: Services or data are available from anywhere. Commodity model: The result is a utility computing model similar to that of traditional utilities, like gas and electricity - you pay for what you use!
  • 10. Cloud Computing: Definition The US National Institute of Standards and Technology (NIST) defines cloud computing as follows: Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
  • 11. NIST specifies 3-4-5 rule of Cloud Computing 3 cloud service models or service types for any cloud platform 4 deployment models 5 essential characteristics of cloud computing infrastructure 3-4-5 rule of Cloud Computing
  • 12. Characteristics of Cloud Computing  On demand self- service  Broad network access  Resource pooling  Rapid elasticity  Measured service
  • 13. 4 Deployment Models 1. Public Cloud Mega-scale cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
  • 14. 4 Deployment Models 2. Private Cloud The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.
  • 15. 4 Deployment Models 3. Hybrid Cloud The cloud infrastructure is a composition of two or more clouds (private or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability
  • 16. 4 Deployment Models 4. Community Cloud According to NIST, in a community cloud the ‘infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise’. A community cloud is a cloud service shared between multiple organizations with a common tie/goal/objective. E.g. OpenCirrus
  • 17. 3 Cloud Service Models Software as a Service (SaaS) Platform as a Service (PaaS) Infrastructure as a Service (IaaS) Google App Engine SalesForce CRM LotusLive
  • 18. Software as a service features a complete application offered as a service on demand. A single instance of the software runs on the cloud and services multiple end users or client organizations. E.g. salesforce.com, Google Apps Software as a Service (SaaS)
  • 19. Platform as a service encapsulates a layer of software and provides it as a service that can be used to build higher-level services. Two perspectives on PaaS: 1. Producer: someone producing PaaS might produce a platform by integrating an OS, middleware, application software, and even a development environment that is then provided to a customer as a service. 2. Consumer: someone using PaaS would see an encapsulated service that is presented to them through an API. The customer interacts with the platform through the API, and the platform does what is necessary to manage and scale itself to provide a given level of service. Platform as a Service
  • 20. Infrastructure as a service delivers basic storage and computing capabilities as standardized services over the network. Servers, storage systems, switches, routers, and other systems are pooled and made available to handle workloads that range from application components to high-performance computing applications. Infrastructure as a Service
  • 21. Service Models Summary Cloud Software as a Service (SaaS) The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure and accessible from various client devices through a thin client interface such as a Web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Cloud Platform as a Service (PaaS) The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created applications using programming languages and tools supported by the provider (e.g., Java, Python, .Net). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, or storage, but the consumer has control over the deployed applications and possibly application hosting environment configurations. Cloud Infrastructure as a Service (IaaS) The capability provided to the consumer is to rent processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly select networking components (e.g., firewalls, load balancers).
  • 22. Key Technology is Virtualization Cloud Infrastructures Hardware Operating System App App App Traditional Stack Hardware OS App App App Hypervisor OS OS Virtualized Stack Virtualization plays an important role as an enabling technology for datacentre implementation by abstracting compute, network, and storage service platforms from the underlying physical hardware
  • 23. Distributed Management of Virtual Machines Reservation-Based Provisioning of Virtualized Resources Provisioning to Meet SLA Commitments Management of Virtualized Resources
  • 24. Distributed Management of Virtual Machines Managing VMs involves setting up custom software environments for VMs, setting up and managing networking for interrelated VMs, and reducing the various overheads involved in using VMs. Thus, VI managers must be able to efficiently orchestrate all these different tasks. Reservation-Based Provisioning of Virtualized Resources The demand for resources is known beforehand, so the computational resources must be available at exactly that time to process the data produced by the equipment. Provisioning to Meet SLA Commitments IaaS clouds can be used to deploy services that will be consumed by users other than the one that deployed the services. Cloud providers are typically not directly exposed to the service semantics or the SLAs that service owners may contract with their end users. The capacity requirements are, thus, less predictable and more elastic.
  • 25. The key component of an IaaS cloud architecture is the cloud OS, which manages the physical and virtual infrastructures and controls the provisioning of virtual resources according to the needs of the user services A cloud OS’s role is to efficiently manage datacenter resources to deliver a flexible, secure, and isolated multitenant execution environment for user services that abstracts the underlying physical infrastructure and offers different interfaces and APIs for interacting with the cloud While local users and administrators can interact with the cloud using local interfaces and administrative tools that offer rich functionality for managing, controlling, and monitoring the virtual and physical infrastructure, remote cloud users employ public cloud interfaces that usually provide more limited functionality Cloud Infrastructure Anatomy
  • 26. The cloud operating system is responsible for: 1. managing the physical and virtual infrastructure, 2. orchestrating and commanding service provisioning and deployment, 3. providing federation capabilities for accessing and deploying virtual resources in remote cloud infrastructures
  • 27. • To provide an abstraction of the underlying infrastructure technology, the cloud OS can use adapters or drivers to interact with a variety of virtualization technologies. These include hypervisor, network, storage, and information drivers. • The core cloud OS components, including the virtual machine (VM) manager, network manager, storage manager, and information manager, rely on these infrastructure drivers to deploy, manage, and monitor the virtualized infrastructures. • In addition to the infrastructure drivers, the cloud OS can include different cloud drivers to enable access to remote providers. • OpenNebula (http://opennebula.org) is an example of an open cloud OS platform focused on datacenter virtualization that fits with the architecture as seen above. Other open cloud managers, such as OpenStack (http://openstack.org) and Eucalyptus (www.eucalyptus.com), primarily focus on public cloud features. Infrastructure and Cloud Drivers
  • 28. BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956 • Key Technology is Virtualization Cloud OS – Cont’d Hardware Operating System App App App Traditional Stack Hardware OS App App App Hypervisor OS OS Virtualized Stack • Virtualization plays an important role as an enabling technology for datacentre implementation by abstracting compute, network, and storage service platforms from the underlying physical hardware • Virtualization is simulating a hardware platform, operating system, storage device or network resources
  • 29. • Cloud can exist without Virtualization, although it will be difficult and inefficient. • Cloud builds on the notions of “pay for what you use” and “infinite availability: use as much as you want”. • These notions are practical only if we have – a lot of flexibility – efficiency in the back end. • This efficiency is readily available in Virtualized Environments and Machines Importance of Virtualization in Cloud Computing 29
  • 30. • Virtualization allows multiple operating system instances to run concurrently on a single computer • It is a means of separating hardware from a single operating system. • Each “guest” OS is managed by a Virtual Machine Monitor (VMM), also known as a hypervisor • Because the virtualization system sits between the guest and the hardware, it can control the guests’ use of CPU, memory, and storage, even allowing a guest OS to migrate from one machine to another • Instead of purchasing and maintaining an entire computer for one application, each application can be given its own operating system, and all those operating systems can reside on a single piece of hardware • Virtualization allows an operator to control a guest operating system’s use of CPU, memory, storage, and other resources, so each guest receives only the resources that it needs Introduction to Virtualization 30
  • 32. Introduction to Virtualization – cont’d Before Virtualization • Single OS image per machine • Software and hardware tightly coupled • Running multiple applications on same machine often creates conflict • Underutilized resources • Inflexible and costly infrastructure After Virtualization • Hardware-independence of operating system and applications • Virtual machines can be provisioned to any system • Can manage OS and application as a single unit by encapsulating them into virtual machines 32
  • 33. • OS assumes complete control of the underlying hardware. • Virtualization architecture provides this illusion through a hypervisor/VMM. • Hypervisor/VMM is a software layer which: • Allows multiple Guest OS (Virtual Machines) to run simultaneously on a single physical host • Provides hardware abstraction to the running Guest OSs and efficiently multiplexes underlying hardware resources Virtualization Architecture 33
  • 34. A cloud OS defines the VM as the basic execution unit and the virtualized services (group of VMs for executing a multitier service) as the basic management entity This concept helps create scalable applications because the user can either add VMs as needed (horizontal scaling) or resize a VM (if supported by the underlying hypervisor technology) to satisfy a VM workload increase (vertical scaling) Individual multitier applications are isolated from each other, but individual VMs in the same applications are not, as they all can share a communication network and services when needed A VM consists of a set of parameters and attributes, including the OS kernel, VM image, memory and CPU capacity, network interfaces etc Virtual Machine Manager
  • 35. The VM manager is responsible for managing a VM’s entire life cycle and performing different VM actions — deploy, migrate, suspend, resume, shut down — according to user commands or scheduling strategies To perform these actions, the VM manager relies on the hypervisor drivers, which expose the basic functionality of underlying hypervisors such as Xen, KVM, and VMware to avoid limiting the cloud OS to a specific virtualization technology The VM manager is also responsible for preserving the service-level agreements contracted with the users, which are usually expressed in terms of VM availability in infrastructure clouds To guarantee this availability, the VM manager should include different mechanisms for detecting VM crashes and automatically restarting the VM in case of failure Virtual Machine Manager – cont’d
  • 36. A thin layer of software that generally provides virtual partitioning capabilities which runs directly on hardware, but underneath higher-level virtualization services Sometimes referred to as a “bare metal” approach Hypervisor / VMM 36
  • 37. • Partitioning Kernel ▪ “Partition” is isolation boundary ▪ Few virtualization functions; relies on virtualization stack • Very thin layer of software ▪ Microkernel ▪ Highly reliable ▪ Basis for smaller Trusted Computing Base (TCB) • No device drivers ▪ Drivers run in a partition • Well-defined interface ▪ Allow others to create support for their OSes as guests Hypervisor 37
  • 38. Types of Virtualization Two kinds of virtualization approaches are available, according to where the VMM runs: • Hosted: the VMM runs within an operating system • Bare-metal: the VMM runs on top of the hardware directly; fairly complex to implement but good in performance Types of Virtualization 38
  • 41. Virtual infrastructure (VI) management—the management of virtual machines distributed across a pool of physical resources Virtual machines require a fair amount of configuration, including preparation of the machine’s software environment and network configuration This configuration must be done on the fly, with as little time as possible between when the VMs are requested and when they are available to the user, e.g., an application requiring a Web server and a database server The virtual infrastructure manager must be capable of allocating resources efficiently, taking into account an organization’s goals (such as minimizing power consumption and other operational costs) and reacting to changes in the physical infrastructure. VI should provide best-effort provisioning and advance reservations to guarantee quality of service (QoS) for applications that require resources at specific times Resource Management using Virtualization Technologies
  • 42. • An important cloud service model called “Infrastructure as a Service” (IaaS) enables computing, storage and network resources to be delivered as a service. • Here users will be able to load any operating system and other software they need and execute most of the existing enterprise services without many changes. • However, the burden of maintaining the installed operating system and any middleware continues to fall on the user/customer. Ensuring the availability of the application is also the user’s job, since IaaS vendors only provide virtual hardware resources. • Storage as a Service (StaaS) is examined through a detailed look at key Amazon storage services. • Amazon Web Services (AWS), from Amazon.com, has a suite of cloud service products - S3, EC2, CloudWatch. Introduction to IaaS
  • 43. ▪ Storage as a Service is one of the two major services offered by IaaS. ▪ It includes ▪ a simple storage service which consists of highly reliable and available storage. Example - Amazon S3 ▪ Simple & relational database services. Example – Amazon SimpleDB and RDS (Relational Database Service), which provides a MySQL instance in the cloud. Storage as a Service StaaS 43
  • 44. ▪ Data storage requirements are ever increasing in the enterprise/industry. Both ▪ Structured data, like relational databases, which are vital for e-commerce businesses. ▪ Unstructured data in various documents like plans, strategy etc., which require huge storage even in a small company. ▪ Enterprises may also have to store objects for their customers, e.g. an online photo album ▪ Need to protect the data - both security and availability are to be provided as per demand in spite of various HW, network and SW failures Data Storage Needs 44
  • 45. ▪ This is the second of the two major services provided by IaaS. ▪ It makes extensive use of virtualization techniques to provide the computing resources requested by the user ▪ Typically one or more virtual computers (networked together) are provided to the user ▪ These could be increased or decreased as per need from time to time ▪ Sudden increases in traffic can be taken care of Compute as a service 45
  • 46. There are many CaaS providers; a few of the diverse instances of Compute as a Service are - 1. Amazon’s Elastic Compute Cloud – EC2 (this session) 2. HP’s CloudSystem Matrix 3. HP Labs research prototype – Cells as a Service 46 Compute as a service
  • 48. • Compute as a Service, where computing is made possible by providing the virtual resources, • would need Storage to keep the results of computing, to make them persistent/available. • It would also need a virtual network for communication with the compute instances. • Thus together CaaS, StaaS and the virtual network make up Infrastructure as a Service - IaaS 48 IaaS model
  • 49. • This is highly reliable, scalable, available and fast storage in the cloud for storing and retrieving data using simple web services. • There are three ways of accessing S3 • AWS (Amazon Web Services) console • RESTful APIs with HTTP operations like GET, PUT, DELETE and HEAD • Libraries and SDKs that abstract these operations • There are several S3 browsers available to access the storage and use it as though it were a local directory/folder Amazon Simple Storage Service S3 49
  • 50. Let’s consider that a user wants to back up (upload) some data for later need. 1. Sign up for S3 at http://aws.amazon.com/s3/ to get AWS access and secret keys, similar to a user ID and password (note these keys are for the complete Amazon solution, not just S3) 2. Use these credentials to sign in to the AWS Management console http://console.aws.amazon.com/s3/home 3. Create a bucket, giving a name and geographical location. (Buckets can store objects/files) Using Amazon S3 50
  • 51. 4. Press upload button and follow instructions to upload the file/object. 5. Now the file is backed up and is available for use/sharing. This could also be achieved programmatically if necessary by including these steps at appropriate place(s) in the code. 51 Using Amazon S3
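The slide notes that the backup could also be achieved programmatically. A minimal sketch of that idea using the boto3 SDK (the bucket name, key prefix and helper names here are illustrative assumptions, not part of the deck; the S3 client is passed in so the logic can be exercised without real AWS credentials):

```python
import os

def make_backup_key(path, prefix="backups"):
    # Derive an object key from the local file name (illustrative naming scheme)
    return f"{prefix}/{os.path.basename(path)}"

def backup_file(s3_client, bucket, path, prefix="backups"):
    """Upload a local file to an S3 bucket and return its object key.

    `s3_client` is any object exposing put_object(Bucket=..., Key=..., Body=...),
    normally boto3.client("s3"); injecting it keeps the function testable.
    """
    key = make_backup_key(path, prefix)
    with open(path, "rb") as f:
        s3_client.put_object(Bucket=bucket, Key=key, Body=f)
    return key

# Typical use (requires configured AWS credentials and an existing bucket):
#   import boto3
#   backup_file(boto3.client("s3"), "my-backup-bucket", "report.pdf")
```

The same `put_object` call can be dropped into application code wherever a file needs to be backed up, matching step 5 on the slide.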
  • 52. Getting Started with Amazon Simple Storage Service (S3) – Amazon S3 52
  • 53. ▪ Files are objects in S3. Objects are referred to with keys – an optional directory path name followed by the object name. Objects are replicated across geographical locations in multiple places to protect against failures, but consistency is not guaranteed unless versioning is enabled. ▪ Objects can be up to 5 terabytes in size and a bucket can hold an unlimited number of objects. ▪ Objects have to be stored in buckets, which have unique names and a location (region) associated with them. There can be 100 buckets per account Buckets, objects and keys 53
  • 54. ▪ Each object can be accessed by its key via the corresponding URL path http://<bucket name>.S3.amazonaws.com/<key> Or http://S3.amazonaws.com/<bucketname>/<key> • Note that a key can be “proj1/file1”, but that is just a label, not a directory structure or hierarchy. There is no hierarchy in S3. • Also, anyone can access the object if it is Public. Accessing objects in S3 54
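The two addressing styles above can be sketched as a small helper that builds both URL forms for a bucket and key (a sketch only; real deployments may also use region-specific endpoints such as `s3.<region>.amazonaws.com`):

```python
def s3_urls(bucket: str, key: str) -> tuple[str, str]:
    """Return (virtual-hosted-style, path-style) URLs for an S3 object."""
    virtual_hosted = f"https://{bucket}.s3.amazonaws.com/{key}"
    path_style = f"https://s3.amazonaws.com/{bucket}/{key}"
    return virtual_hosted, path_style

# The key may contain slashes ("proj1/file1"), but that is just a label,
# not a real directory hierarchy.
```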
  • 55. ▪ Users can set permissions for others by right-clicking the object in the AWS console and granting anonymous read permissions, for example static read access for a web site. ▪ Alternately they can select the object > go to the object menu and click on the “Make Public” option. ▪ They can give permission to specific users to read/modify the object, by clicking on the “properties” option and then mentioning the email ids of those who are allowed to access/read/write. Accessing private objects in S3 55
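The console's "Make Public" action corresponds to S3's `PutObjectAcl` operation with a canned ACL. A hedged boto3 sketch (the bucket and key are placeholders; the client is injected so the call can be tested without AWS access, and the bucket must permit ACLs):

```python
def make_public(s3_client, bucket, key):
    """Grant anonymous read access to one object via a canned ACL.

    `s3_client` is any object exposing put_object_acl(...), normally
    boto3.client("s3"). "public-read" is S3's canned ACL for anonymous reads.
    """
    s3_client.put_object_acl(Bucket=bucket, Key=key, ACL="public-read")

# Typical use (requires credentials and ACLs enabled on the bucket):
#   import boto3
#   make_public(boto3.client("s3"), "my-bucket", "site/index.html")
```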
  • 56. • Users can allow others to add/pick up objects to/from their buckets. This is especially useful when clients want some document to get modified. • Clients can put the doc/object in a bucket for modification and, after it is modified, collect it back from the same or another bucket. If the object/doc is put in the same bucket, then the key is changed to differentiate the modified doc/object from the earlier one. S3 access security contd 56
  • 57. ▪ There is yet another way to ensure the security of S3 objects. Users can turn logging on for a bucket at the time of its creation or do it from the AWS management console. ▪ This creates detailed access logs which allow one to see who accessed which objects, at what time, from which IP address, and what operations were performed. S3 access security contd 57
  • 58. • One way of guarding against loss of data is to create replicas across multiple storage devices; the default mechanism can survive the failure of two replicas. • The user can instead request RRS – Reduced Redundancy Storage – for non-critical data, under which only two replicas are created. • S3 does not guarantee consistency of data across replicas. Versioning, when enabled, protects against inadvertent data loss and also makes it possible to revert to a previous version. Data protection 58
  • 59. • S3 objects can be up to 5 terabytes, which is more than the size of an uncompressed 1080p HD movie. • If still larger storage is needed, the user has to split the data into smaller chunks, store them separately and re-compose them at the application level. • Uploading large objects takes time in spite of large bandwidth; moreover, if a failure occurs, the whole upload has to be repeated. Large objects 59
  • 60. • To get over this difficulty, multipart upload is used. This elegant solution not only splits the object into multiple parts (up to 10,000 parts per object in S3) that upload independently, but also uses the network bandwidth optimally by parallelizing the uploads. • Since the part uploads are independent, a failure in any one part can be rectified by repeating only that part’s upload – a tremendous saving of time. Uploading large objects 60
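The part-size arithmetic behind multipart upload can be sketched as follows. The 8 MB preferred part size is an illustrative assumption; the 10,000-part ceiling and 5 MB minimum part size are S3 limits.

```python
import math

# Plan a multipart upload: choose a part size so the part count stays
# within S3's 10,000-parts-per-object limit. S3 also enforces a 5 MB
# minimum part size for every part except the last.

MAX_PARTS = 10_000
MIN_PART_SIZE = 5 * 1024 * 1024          # 5 MB

def plan_parts(object_size, preferred=8 * 1024 * 1024):
    """Return (part_size, part_count) with part_count <= MAX_PARTS."""
    part_size = max(preferred, MIN_PART_SIZE,
                    math.ceil(object_size / MAX_PARTS))
    part_count = math.ceil(object_size / part_size)
    return part_size, part_count

# A 100 MB object uploads comfortably in 8 MB parts...
assert plan_parts(100 * 1024 * 1024) == (8 * 1024 * 1024, 13)
# ...while a 5 TB object forces a larger part size to fit under 10,000 parts.
_, count = plan_parts(5 * 1024**4)
assert count <= MAX_PARTS
```

Because each part is independent, a client retries only the failed part, not the whole object.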
  • 61. Amazon Elastic Compute Cloud (EC2) is a service that allows an enterprise or user to provision virtual servers with virtual storage and virtual networking to satisfy diverse needs – • An enterprise’s needs vary between high storage and/or high-end computing at different times for different applications • Networking/clustering needs, as well as environment needs, also vary depending on the work context Compute as a service – EC2 61
  • 62. Just as with S3, EC2 can be accessed via the AWS console • The EC2 console dashboard can be used to create an instance (compute resource), check its status and terminate it • Clicking “Launch instance” takes you to a list of supported OS images – Amazon Machine Images (AMIs) – from which you can choose • Once you choose an OS, a wizard pops up to take your choice of version, whether you want the instance monitored, etc. Amazon EC2 62
  • 63. • Next, the user has to create a key pair to securely connect to the instance once it’s operative • Create the key pair and save the private key file in a safe place; the same key pair can be reused for multiple instances • Now security groups for the instance need to be set so that certain ports can be kept open or blocked depending on the context • When the instance is launched you get the DNS name of the server, which can be used for remote login as if it were on the same network Amazon EC2 contd 63
  • 64. • Use the key pair to log in via the AWS console; get the Windows admin password from the AWS instance screen to connect remotely to the instance/compute resource • For a Linux machine, from the directory where the private key file is saved, give a command such as ssh -i &lt;keyfile&gt; ec2-user@ec2-67-202-62-112.compute-1.amazonaws.com then follow a few confirmation screens and one is logged into the compute resource remotely Accessing EC2 64
  • 65. • It’s also possible to access EC2 using command-line utilities, for which you need to 1. Download the zip, unpack it, and set the environment variables 2. Set up the security environment by getting an X.509 certificate and private key, and copy them into the appropriate directory 3. Set the region for the virtual resources; the list of regions can be viewed and a selection made. Pricing depends on this selection. Accessing EC2 contd 65
  • 66. • EC2 computing resources are requested in terms of EC2 Compute Units (CUs) for computing power, just as we use bytes for memory • One EC2 CU provides roughly the capacity of a 1.0–1.2 GHz 2007 Opteron or Xeon processor • There are Standard Instance families with configurations suitable for common needs, hence recommended by Amazon • Also available are High-Memory instances, High-CPU instances, and Cluster Compute instances for high-performance or graphics processing EC2 computing resources request 66
  • 67. • After getting resources of the required CU, one needs to configure the OS by selecting from the available images – AMIs (Amazon Machine Images) • If other software is needed, it can be installed on top of the OS image and the result stored as another AMI; alternatively, a VMware image can be imported as an AMI. Configuration of EC2 instance 67
  • 68. • EC2 also has regions (like S3) which need to be set (the list of regions is available to select from) • For each region there are multiple isolated virtual data centers called availability zones, for protection against failures • One can place instances in two availability zones of the required region to ensure availability and tolerance against failure in any one zone Region and availability 68
  • 69. ▪ Load balancing and scalability Elastic Load Balancer is a service the EC2 cloud offers which distributes load among instances It can be further configured to route requests from the same client to the same server, using timer- and application-controlled sessions It can sense failure of an instance and spawn a new server when the load on the remaining servers is high The load balancer also scales the number of servers up or down based on the number of requests (hence the name Elastic) Some more Configuration of EC2 instance 69
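The "elastic" scaling rule above can be sketched as a simple function. The target of 100 requests per server and the min/max bounds are illustrative assumptions, not EC2's actual policy.

```python
import math

# Minimal sketch of elastic scaling: choose a server count proportional
# to the request rate, clamped to configured bounds.

def desired_servers(requests, target_per_server=100,
                    min_servers=1, max_servers=20):
    needed = math.ceil(requests / target_per_server)
    return max(min_servers, min(max_servers, needed))

assert desired_servers(450) == 5   # scale up under load
assert desired_servers(90) == 1    # scale back down when idle
```

A real autoscaler adds cooldown periods and smoothing so it does not thrash on momentary spikes.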
  • 70. There are two types of block storage available for EC2 that appear as disk storage • Elastic Block Storage (EBS) exists independent of any instance. Its size can be configured and it can be attached to an EC2 instance. Its data is persistent. • Instance storage is configured for EC2 and can be attached to one and only one instance. It is not persistent – it ceases to exist when the instance is terminated. So if you need persistence, use EBS or save the data to S3. EC2 storage resources 70
  • 71. • Networking between EC2 instances, and with the outside world via gateways/firewalls, has to happen. • EC2 instances therefore need both public and private addresses. • Private addresses are used for communication within EC2, like an intranet – for any communication between EC2 instances – since these addresses can be resolved quickly. Public addresses are mapped to private ones using NAT (Network Address Translation). EC2 Networking resources 71
  • 72. • Public addresses are resolved via DNS and used for communication with addresses outside the cloud, routed via a gateway. • Similarly, inward communication from outside the cloud is received at the public address – after passing the firewall – and the gateway then routes it appropriately. EC2 N/W resources contd 72
  • 73. • Elastic IP addresses – these are network addresses available to an account (up to 5 per account) which can be dynamically associated with any instance, so that the address becomes that instance’s public address and the earlier assigned public address is de-assigned. • They are especially useful when one EC2 instance fails: its Elastic IP address can be reassigned to another EC2 instance dynamically so that requests are routed to the other instance immediately. Elastic IP address 73
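The failover semantics above amount to remapping an address in a table. This toy model (with made-up address and instance IDs) shows the re-association behavior; the real operation is an EC2 API call.

```python
# Toy model of Elastic IP semantics: an account-level address that can be
# re-associated with a different instance; the previous mapping is dropped.

class ElasticIPPool:
    def __init__(self):
        self.assoc = {}  # elastic_ip -> instance_id

    def associate(self, eip, instance_id):
        # Re-associating moves the address; the old instance loses it.
        self.assoc[eip] = instance_id

    def instance_for(self, eip):
        return self.assoc.get(eip)

pool = ElasticIPPool()
pool.associate("203.0.113.10", "i-aaaa")
pool.associate("203.0.113.10", "i-bbbb")   # failover: remap to new instance
assert pool.instance_for("203.0.113.10") == "i-bbbb"
```

Because clients keep connecting to the same address, the remap redirects traffic without any DNS change.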
  • 74. • Amazon Glacier is an online file storage web service that provides storage for data archiving and backup. • Glacier is part of the Amazon Web Services suite of cloud computing services, and is designed for long-term storage of data that is infrequently accessed and for which retrieval latency times of 3 to 5 hours are acceptable. • Storage costs a flat $0.01 per gigabyte per month, which is substantially cheaper than Amazon's own Simple Storage Service (S3). • A vault is a container for storing archives. When you create a vault, you specify a vault name and the region in which to create it. • An archive is any object, such as a photo, video, or document, that you store in a vault; it is the base unit of storage in Amazon Glacier. Each archive has a unique ID and an optional description. When you upload an archive, Amazon Glacier returns a response that includes an archive ID, which is unique in the region in which the archive is stored. • An AWS account has full permission to perform all actions on the vaults in the account; however, AWS Identity and Access Management (IAM) users have no permissions by default. Amazon Glacier 74
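The vault/archive structure can be modeled in a few lines. This is a toy sketch of the data model only (vault name, region, and archive content are made up); real uploads go through the Glacier API and return service-generated IDs.

```python
import uuid

# Toy model of Glacier's structure: a vault is a named container in a
# region; uploading an archive returns a unique archive ID.

class Vault:
    def __init__(self, name, region):
        self.name, self.region = name, region
        self.archives = {}  # archive_id -> (data, description)

    def upload_archive(self, data, description=""):
        archive_id = uuid.uuid4().hex      # stand-in for the service ID
        self.archives[archive_id] = (data, description)
        return archive_id

vault = Vault("photos-backup", "us-east-1")
aid = vault.upload_archive(b"...jpeg bytes...", "holiday photo")
assert aid in vault.archives
```

The caller must keep the returned archive ID; Glacier retrievals are addressed by ID, not by filename.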
  • 75. • Amazon Elastic Block Store (Amazon EBS) provides block level storage volumes for use with EC2 instances. • EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone. • EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance. • With Amazon EBS, you pay only for what you use. • Amazon EBS is recommended when data changes frequently and requires long-term persistence. • EBS volumes are particularly well-suited for use as the primary storage for file systems, databases, or for any applications that require fine granular updates and access to raw, unformatted, block-level storage. • Amazon EBS is particularly helpful for database-style applications that frequently encounter many random reads and writes across the data set. Amazon Elastic Block Store - (Amazon EBS) 75
  • 76. • For simplified data encryption, you can launch your EBS volumes as encrypted volumes. • Amazon EBS encryption offers a simple encryption solution for your EBS volumes without the need to build, manage, and secure your own key-management infrastructure. • When you create an encrypted EBS volume and attach it to a supported instance type, data stored at rest on the volume, disk I/O, and snapshots created from the volume are all encrypted. • The encryption occurs on the servers that host EC2 instances, providing encryption of data in transit from EC2 instances to EBS storage. • You can back up the data on your EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed since your most recent snapshot are saved. When you delete a snapshot, only the data exclusive to that snapshot is removed. • Active snapshots contain all of the information needed to restore your data (from the time the snapshot was taken) to a new EBS volume. • If you are dealing with snapshots of sensitive data, consider encrypting your data manually before taking the snapshot, or storing the data on a volume that has Amazon EBS encryption enabled. Amazon Elastic Block Store (Amazon EBS) – cont’d 76
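The incremental-snapshot idea above can be sketched as follows: each snapshot stores only the blocks changed since the previous one, yet any snapshot can rebuild the full volume by walking the chain. This is a simplified model of the mechanism, not EBS's implementation.

```python
# Sketch of incremental snapshots: a volume tracks dirty blocks; each
# snapshot records only those, and restore replays the snapshot chain.

class Volume:
    def __init__(self, nblocks):
        self.blocks = {i: b"\x00" for i in range(nblocks)}
        self.snapshots = []   # each entry: dict of blocks changed since last
        self._dirty = {}

    def write(self, i, data):
        self.blocks[i] = data
        self._dirty[i] = data

    def snapshot(self):
        self.snapshots.append(dict(self._dirty))
        self._dirty = {}

    def restore(self, snap_index):
        state = {i: b"\x00" for i in self.blocks}
        for snap in self.snapshots[:snap_index + 1]:
            state.update(snap)
        return state

v = Volume(4)
v.write(0, b"a"); v.snapshot()      # first snapshot: block 0
v.write(1, b"b"); v.snapshot()      # incremental: only block 1 stored
assert len(v.snapshots[1]) == 1     # second snapshot saved just 1 block
assert v.restore(1)[0] == b"a"      # ...yet restores the full volume
```

This also shows why deleting one snapshot removes only the data exclusive to it: blocks shared with other snapshots remain referenced.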
  • 77. AWS Import/Export • AWS Import/Export supports data upload and download from Amazon S3 buckets, and data upload to Amazon Elastic Block Store (Amazon EBS) snapshots and Amazon Glacier vaults. The AWS Import/Export Getting Started steps assume you already use Amazon S3, Amazon EBS, or Amazon Glacier. • To upload or download data from Amazon S3, you need to have an existing Amazon S3 account and be familiar with Amazon S3. • To upload data to an Amazon EBS snapshot for Amazon EC2, you need to have an existing Amazon EC2 instance to associate with the data, and an Amazon S3 bucket to store AWS Import/Export log files. • To upload data to an Amazon Glacier vault, you need to have an Amazon Glacier vault to associate with the data, and an Amazon S3 bucket to store AWS Import/Export log files.
  • 78. • CloudFront is a web service that speeds up distribution of your static and dynamic web content, for example, .html, .css, .php, and image files, to end users. • CloudFront delivers your content through a worldwide network of data centers called edge locations. • When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so content is delivered with the best possible performance. • If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately. • If the content is not currently in that edge location, CloudFront retrieves it from an Amazon S3 bucket or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content. Amazon CloudFront 78
  • 79. This concept is best illustrated by an example. Suppose you're serving the following image from a traditional web server, not from CloudFront: You're serving the image using the URL http://example.com/globe_west_540.png. Your users can easily navigate to this URL and see the image, but they probably don't know that their request was routed from one network to another—through the complex collection of interconnected networks that comprise the Internet—until the image was found. Amazon CloudFront - Example 1 79
  • 80. • Further suppose that the web server from which you're serving the image is in Seattle, Washington, USA, and that a user in Austin, Texas, USA requests the image. The traceroute list below shows one way that this request could be routed. Amazon CloudFront - Example 2 80
  • 81. • In this example, the request was routed 10 times within the United States before the image was retrieved, which is not an unusually large number of hops. If your user were in Europe, the request would be routed through even more networks to reach your server in Seattle. • The number of networks and the distance that the request and the image must travel have a significant impact on the performance, reliability, and availability of the image. • CloudFront speeds up the distribution of your content by routing each user request to the edge location that can best serve your content. • Typically, this is the CloudFront edge location that provides the lowest latency. This dramatically reduces the number of networks that your users' requests must pass through, which improves performance. • Users get lower latency—the time it takes to load the first byte of the object—and higher data transfer rates. • You also get increased reliability and availability because copies of your objects are now held in multiple edge locations around the world. Amazon CloudFront - Example 2 – cont’d 81
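The routing decision above reduces, at its simplest, to picking the edge location with the lowest measured latency. The edge cities and latency numbers below are made up for illustration.

```python
# Sketch of CloudFront's routing idea: send each request to the edge
# location with the lowest latency to the user.

def best_edge(latencies_ms):
    """Pick the edge location with the lowest latency (in ms)."""
    return min(latencies_ms, key=latencies_ms.get)

# Hypothetical latencies from a user in Austin, Texas:
latencies = {"Seattle": 48, "Dallas": 9, "Ashburn": 41}
assert best_edge(latencies) == "Dallas"
```

If the chosen edge already caches the object it is served immediately; otherwise the edge fetches it once from the origin and caches it for subsequent users.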
  • 82. • Amazon Web Services provides fully managed relational and NoSQL database services, as well as fully managed in-memory caching as a service and a fully managed petabyte-scale data-warehouse service. Or, you can operate your own database in the cloud on Amazon EC2 and Amazon EBS. AWS Database Services 82
  • 83. • This is a simple NoSQL data store with a key-value interface, which allows storage and retrieval of attributes based on a key – a simple alternative to a relational database. • SDB is organized into domains. Each item in a domain must have a unique key, provided at the time of creation, and can have up to 256 attributes in the form of name-value pairs, similar to a row with a primary key in an RDBMS. But in SDB an attribute can be multi-valued, with all the values stored together against the same attribute name. Amazon SimpleDB (SDB) 83
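The domain/item/attribute model above, including multi-valued attributes, can be sketched with nested dictionaries. This is a toy model of the data shape only, not the SimpleDB API.

```python
from collections import defaultdict

# Toy model of SimpleDB's data model: a domain holds items keyed by a
# unique name; each item maps attribute names to *sets* of values
# (multi-valued attributes), unlike a single-valued RDBMS column.

class Domain:
    def __init__(self, name):
        self.name = name
        self.items = defaultdict(lambda: defaultdict(set))

    def put_attribute(self, item_key, attr, value):
        self.items[item_key][attr].add(value)

    def get_attributes(self, item_key):
        return self.items[item_key]

d = Domain("products")
d.put_attribute("item1", "color", "red")
d.put_attribute("item1", "color", "blue")   # same attribute, second value
assert d.get_attributes("item1")["color"] == {"red", "blue"}
```

Being schema-less, a new attribute name can appear on any item at any time, which is what makes the store easy to scale out.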
  • 84. • SDB has many features which increase its reliability and availability • Automatic resource addition proportional to the request rate • Automatic indexing of the attributes for quick retrieval • Automatic replication across different locations (availability) • Fields can be added to the dataset anytime since SDB is schema-less, which makes it scalable SDB admin 84
  • 85. • RDB is a traditional relational DB abstraction in the cloud, offered as a MySQL instance • An RDB instance can be created using the corresponding tab in the AWS management console • The AWS console allows the user to manage the RDB • How often backups should happen, how long backup data should be retained, etc. can be configured • Snapshots of the DB can be taken from time to time • Using Amazon APIs, the user can build a custom tool to manage the data if needed Amazon Relational DB (RDB) 85
  • 86. • Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizeable capacity for an industry-standard relational database and manages common database administration tasks. Why would you want a managed relational database service? • Because Amazon RDS takes over many of the difficult or tedious management tasks of a relational database. Amazon Relational Database Service - (Amazon RDS) 86
  • 87. • Amazon RDS manages backups, software patching, automatic failure detection, and recovery. • You can use the database products you are already familiar with: MySQL, PostgreSQL, Oracle, Microsoft SQL Server, and the new, MySQL-compatible Amazon Aurora DB engine. • In addition to the security in your database package, you can help control who can access your RDS databases by using AWS IAM to define users and permissions. You can also help protect your databases by putting them in a virtual private cloud. Amazon RDS - Cont’d 87
  • 88. DB Instances Regions and Availability Zones Security Groups DB Parameter Groups DB Option Groups Amazon RDS Components 88
  • 89. DB Instances • The basic building block of Amazon RDS is the DB instance. • A DB instance is an isolated database environment in the cloud. • A DB instance can contain multiple user-created databases, and you can access it by using the same tools and applications that you use with a stand-alone database instance. • You can create and modify a DB instance by using the Amazon RDS command line interface, the Amazon RDS API, or the AWS Management Console. • Each DB instance runs a DB engine. Amazon RDS currently supports the MySQL, PostgreSQL, Oracle, and Microsoft SQL Server DB engines. Amazon RDS Components - Cont’d 89
  • 90. • For each DB instance, you can select from 5 GB to 3 TB of associated storage capacity. • DB instance storage comes in three types: Magnetic, General Purpose (SSD), and Provisioned IOPS (SSD). They differ in performance characteristics and price, allowing you to tailor your storage performance and cost to the needs of your database. • You can run a DB instance on a virtual private cloud using Amazon's Virtual Private Cloud (VPC) service. When you use a virtual private cloud, you have control over your virtual networking environment: you can select your own IP address range, create subnets, and configure routing and access control lists. • The basic functionality of Amazon RDS is the same whether it is running in a VPC or not. Amazon RDS Components - Cont’d 90
  • 91. Regions and Availability Zones • Amazon cloud computing resources are housed in highly available data center facilities in different areas of the world (for example, North America, Europe, or Asia). Each data center location is called a region. • Each region contains multiple distinct locations called Availability Zones, or AZs. Each Availability Zone is engineered to be isolated from failures in other Availability Zones, and to provide inexpensive, low-latency network connectivity to other Availability Zones in the same region. • By launching instances in separate Availability Zones, you can protect your applications from the failure of a single location. • You can run your DB instance in several Availability Zones, an option called a Multi-AZ deployment. When you select this option, Amazon automatically provisions and maintains a synchronous standby replica of your DB instance in a different Availability Zone. Amazon RDS Components - Cont’d 91
  • 92. Security Groups • A security group controls the access to a DB instance. It does so by allowing access to IP address ranges or Amazon EC2 instances that you specify. • Amazon RDS uses DB security groups, VPC security groups, and EC2 security groups. DB Parameter Groups • You manage the configuration of a DB engine by using a DB parameter group. • A DB parameter group contains engine configuration values that can be applied to one or more DB instances of the same instance type. • Amazon RDS applies a default DB parameter group if you don’t specify a DB parameter group when you create a DB instance. • The default group contains defaults for the specific database engine and instance class of the DB instance. Amazon RDS Components - Cont’d 92
  • 93. DB Option Groups • Some DB engines offer tools that simplify managing your databases and making the best use of your data. • Amazon RDS makes such tools available through option groups. • Currently, option groups are available for Oracle, Microsoft SQL Server, and MySQL 5.6 DB instances. Amazon RDS Components - Cont’d 93
  • 94. There are several ways that you can interact with Amazon RDS. Available RDS interfaces: Amazon RDS Console Command Line Interface Programmatic Interfaces Amazon RDS Console: The Amazon RDS console is a simple web-based user interface. From the console, you can perform almost all the tasks you need, with no programming required. To access the Amazon RDS console, sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/. Command Line Interface: Amazon RDS provides a Java-based command line interface that gives you access to much of the functionality available in the Amazon RDS API. For more information, see the Amazon RDS Command Line Toolkit. Programmatic Interfaces: There is a list of resources that you can use to access Amazon RDS programmatically. Amazon RDS - Interaction 94
  • 95. How You Are Charged for Amazon RDS When you use Amazon RDS, you pay only for what you use, and there are no minimum or setup fees. You are billed according to the following criteria. – Instance class – Running time – Storage – I/O requests per month – Backup storage – Data transfer Amazon RDS - charges 95
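The billing dimensions listed above can be combined into a back-of-the-envelope estimate. All rates below are invented placeholders, not real AWS prices.

```python
# Hypothetical monthly RDS bill built from the billing criteria above:
# instance class (hourly rate), running time, storage, I/O, backup
# storage, and data transfer. Rates are illustrative only.

RATES = {
    "instance_hour": 0.10,    # per running hour for the instance class
    "storage_gb_month": 0.12,
    "io_per_million": 0.10,
    "backup_gb_month": 0.095,
    "transfer_out_gb": 0.09,
}

def monthly_bill(hours, storage_gb, io_requests, backup_gb, transfer_gb):
    return round(
        hours * RATES["instance_hour"]
        + storage_gb * RATES["storage_gb_month"]
        + (io_requests / 1e6) * RATES["io_per_million"]
        + backup_gb * RATES["backup_gb_month"]
        + transfer_gb * RATES["transfer_out_gb"], 2)

bill = monthly_bill(hours=720, storage_gb=100, io_requests=2_000_000,
                    backup_gb=20, transfer_gb=50)
```

With no minimum or setup fee, a stopped instance simply accrues zero instance-hours.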
  • 96. • DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. • If you are a developer, you can use DynamoDB to create a database table that can store and retrieve any amount of data, and serve any level of request traffic. • DynamoDB automatically spreads the data and traffic for the table over a sufficient number of servers to handle the request capacity specified by the customer and the amount of data stored, while maintaining consistent and fast performance. • All data items are stored on solid state disks (SSDs) and are automatically replicated across multiple Availability Zones in a Region to provide built-in high availability and data durability. • If you are a database administrator, you can create a new DynamoDB database table, scale up or down your request capacity for the table without downtime or performance degradation, and gain visibility into resource utilization and performance metrics, all through the AWS Management Console. • With DynamoDB, you can offload the administrative burdens of operating and scaling distributed databases to AWS, so you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. Amazon DynamoDB 96
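The automatic spreading of data over servers described above rests on hashing the item's key to pick a partition. This is a conceptual sketch; DynamoDB's actual partitioning scheme is internal to the service, and the partition count here is illustrative.

```python
import hashlib

# Sketch of hash partitioning, the idea behind how a DynamoDB-style
# store spreads items and traffic across many servers.

def partition_for(key, num_partitions=8):
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

# The same key always lands on the same partition...
assert partition_for("user#42") == partition_for("user#42")
# ...and every assignment is a valid partition index.
assert all(0 <= partition_for(f"user#{i}") < 8 for i in range(100))
```

Because each key maps deterministically to one partition, reads and writes for different keys hit different servers, which is what makes throughput scale with the number of partitions.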
  • 97. • AWS provides a variety of computing and networking services to meet the needs of your applications. • You can provision virtual servers, set up a firewall, configure Internet access, allocate and route IP addresses, and scale your infrastructure to meet increasing demand. • You can use the compute and networking services with the storage, database, and application services to provide a complete solution for computing, query processing, and storage across a wide range of applications. Compute and Networking Services for AWS 97
  • 98. The following are concepts that should be understood before using the compute and networking services. Instances and AMIs VPCs and Subnets Security Groups Amazon Route 53 Hosted Zones Auto Scaling Groups Load Balancer Compute and Networking Services for AWS – Key Concepts 98
  • 99. Instances and AMIs • Amazon Elastic Compute Cloud (Amazon EC2) provides resizeable computing capacity—literally, servers in Amazon's data centers—that you use to build and host your software systems. • An Amazon Machine Image (AMI) is a template that contains a software configuration (for example, an operating system, an application server, and applications). • From an AMI, you launch an instance, which is a copy of the AMI running as a virtual server on a host computer in Amazon's data center. • You can launch multiple instances from an AMI, as shown in the following figure. Compute and Networking Services for AWS - Architecture 99
  • 100. • When you launch an instance, you select an instance type, which determines the hardware capabilities (such as memory, CPU, and storage) of the host computer for the instance. You can access your instance using its assigned public DNS name or public IP address. • Your instances keep running until you stop or terminate them, or until they fail. If an instance fails, you can launch a new one from the AMI. • You start from an existing AMI that most closely meets your needs, log on to the instance, and then customize the instance with additional software and settings. You can save this customized configuration as a new AMI, which you can then use to launch new instances whenever you need them. Compute and Networking Services for AWS - Architecture - cont’d 100
  • 101. VPCs and Subnets • A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS cloud, providing security and robust networking functionality for your compute resources. • A VPC closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS. • A subnet is a segment of a VPC's IP address range that you can launch instances into. Subnets enable you to group instances based on your security and operational needs. • To enable instances in a subnet to reach the Internet and AWS services, you must add an Internet gateway to the VPC and a route table with a route to the Internet to the subnet. • It is recommended that you launch your EC2 instances into a VPC. • Note that if you created your AWS account after 2013-12-04, you have a default VPC and you must launch EC2 instances into a default or a nondefault VPC. 101 Compute and Networking Services for AWS - Architecture - cont’d
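The VPC/subnet carving above can be checked with the standard-library ipaddress module. The CIDR ranges are example values.

```python
import ipaddress

# Sketch of a VPC address range carved into two subnets, with a
# membership check for a hypothetical instance's private IP.

vpc = ipaddress.ip_network("10.0.0.0/16")
public_subnet = ipaddress.ip_network("10.0.1.0/24")
private_subnet = ipaddress.ip_network("10.0.2.0/24")

# Both subnets are segments of the VPC's IP address range.
assert public_subnet.subnet_of(vpc) and private_subnet.subnet_of(vpc)

# An instance launched into the public subnet gets an address from it.
instance_ip = ipaddress.ip_address("10.0.1.25")
assert instance_ip in public_subnet
assert instance_ip not in private_subnet
```

Only the public subnet would then get a route-table entry pointing at the Internet gateway; the private subnet's instances stay unreachable from outside.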
  • 102. Security Groups • A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. • You can specify one or more security groups when you launch your instance. When you create a security group, you add rules that control the inbound traffic that's allowed, and a separate set of rules that control the outbound traffic. All other traffic is discarded. • You can modify the rules for a security group at any time and the new rules are automatically enforced. 102 Compute and Networking Services for AWS - Architecture - cont’d
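The allow-list behavior described above can be sketched as a rule check: traffic matching an inbound rule passes, everything else is discarded. The rules and addresses below are illustrative.

```python
import ipaddress

# Minimal security-group sketch: inbound rules are (protocol, port,
# source CIDR) triples; anything not explicitly allowed is dropped.

INBOUND_RULES = [
    ("tcp", 22, "203.0.113.0/24"),   # SSH from the office range only
    ("tcp", 80, "0.0.0.0/0"),        # HTTP from anywhere
]

def allows(protocol, port, source_ip):
    src = ipaddress.ip_address(source_ip)
    return any(
        protocol == p and port == prt and src in ipaddress.ip_network(cidr)
        for p, prt, cidr in INBOUND_RULES
    )

assert allows("tcp", 80, "198.51.100.7")        # web traffic: allowed
assert allows("tcp", 22, "203.0.113.5")         # SSH from office: allowed
assert not allows("tcp", 22, "198.51.100.7")    # SSH elsewhere: dropped
```

Editing INBOUND_RULES at runtime mirrors how rule changes on a real security group take effect immediately.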
  • 103. Amazon Route 53 Hosted Zones • Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. • It is designed as an extremely reliable and cost-effective way to route visitors to websites by translating domain names (such as www.example.com) into the numeric IP addresses (such as 192.0.2.1) that computers use to connect to each other. • AWS assigns URLs to your AWS resources, such as your EC2 instances. • However, you might want a URL that is easy for your users to remember. For example, you can map your domain name to your AWS resource. If you don't have a domain name, you can search for available domains and register them using Amazon Route 53. • If you have an existing domain name, you can transfer it to Amazon Route 53. • Amazon Route 53 enables you to organize your DNS records using hosted zones. • When you create a hosted zone, you receive four name servers to help ensure a high level of availability. 103 Compute and Networking Services for AWS - Architecture - cont’d
  • 104. 104 Compute and Networking Services for AWS - Architecture - cont’d
  • 105. Auto Scaling Groups • Auto Scaling supports groups of virtual servers (Auto Scaling groups) that can grow or shrink on demand. 105 Compute and Networking Services for AWS - Architecture - cont’d
  • 106. Load Balancer • A load balancer distributes traffic to multiple instances. • You can achieve even higher levels of fault tolerance by using your load balancer with instances in multiple Availability Zones. • As instances are launched and terminated, the load balancer automatically directs traffic to the running instances. • Elastic Load Balancing also performs health checks on each instance. • If an instance is not responding, the load balancer can automatically redirect traffic to the healthy instances. 106 Compute and Networking Services for AWS - Architecture - cont’d
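The health-check behavior above can be sketched as round-robin distribution that skips instances whose health check failed. Instance IDs are made up; a real load balancer probes instances over the network rather than being told their state.

```python
import itertools

# Sketch of health-aware load balancing: cycle through instances in
# round-robin order, skipping any marked unhealthy.

class LoadBalancer:
    def __init__(self, instances):
        self.health = {i: True for i in instances}
        self._cycle = itertools.cycle(instances)

    def mark_unhealthy(self, instance):
        self.health[instance] = False

    def route(self):
        # Assumes at least one instance remains healthy.
        for inst in self._cycle:
            if self.health[inst]:
                return inst

lb = LoadBalancer(["i-a", "i-b", "i-c"])
lb.mark_unhealthy("i-b")                 # failed its health check
targets = [lb.route() for _ in range(4)]
assert "i-b" not in targets              # no traffic to the failed instance
```

When the failed instance passes its health check again, flipping its entry back to True returns it to the rotation.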
  • 107. Architecture • The following diagram shows an example architecture for your compute and networking services. • There are EC2 instances in public and private subnets. • Access to the instances in the public subnets over protocols like SSH or RDP is controlled by one or more security groups. • Security groups also control whether the instances can talk to each other. • The Auto Scaling group maintains a fleet of EC2 instances that can scale to handle the current load. This Auto Scaling group spans multiple Availability Zones to protect against the potential failure of a single Availability Zone. • The load balancer distributes traffic evenly among the EC2 instances in the Auto Scaling group. • When the Auto Scaling group launches or terminates instances based on load, the load balancer automatically adjusts accordingly. • Amazon Route 53 provides secure and reliable routing of your domain name to your infrastructure hosted on AWS. 107 Compute and Networking Services for AWS - Architecture - cont’d

Editor's notes

  1. XaaS is Anything as a service [http://www.definethecloud.net/cloud-types/]
  2. Virtualization is simulating a hardware platform, operating system, storage device or network resources
  3. Meaning computing and storage resources (such as servers and storage) available as a service.
  4. A RESTful API is an application program interface (API) that uses HTTP requests to GET, PUT, POST and DELETE data. A RESTful API -- also referred to as a RESTful web service -- is based on representational state transfer (REST) technology, an architectural style and approach to communications often used in web services development.
  5. http://docs.aws.amazon.com/AmazonS3/latest/gsg/AmazonS3Basics.html
  6. http://docs.aws.amazon.com/amazonglacier/latest/dev/introduction.html
  7. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html
  8. http://docs.aws.amazon.com/AWSImportExport/latest/DG/CHAP_GettingSetUp.html
  9. http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
  10. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html
  11. IAM – Identity Access Mgmt.
  12. http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html A NoSQL (often interpreted as Not only SQL) database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases. Motivations for this approach include simplicity of design, horizontal scaling, and finer control over availability. The data structures used by NoSQL databases (e.g. key-value, graph, or document) differ from those used in relational databases, making some operations faster in NoSQL and others faster in relational databases. The particular suitability of a given NoSQL database depends on the problem it must solve.
  13. http://docs.aws.amazon.com/gettingstarted/latest/awsgsg-intro/gsg-aws-compute-network.html#compute-network-concepts
  14. For more information on VPC refer : http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html
  15. For more information on Amazon Route 53 Hosted Zones refer: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/HowDoesRoute53Work.html
  16. Secure Shell, or SSH, is a cryptographic (encrypted) network protocol operating at layer 7 of the OSI Model to allow remote login and other network services to operate securely over an unsecured network.