Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 750 instance types and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload. AWS was the first major cloud provider to support Intel, AMD, and Arm processors, is the only cloud with on-demand EC2 Mac instances, and is the only cloud with 400 Gbps Ethernet networking. It offers the best price performance for machine learning training, as well as the lowest-cost-per-inference instances in the cloud. More SAP, high performance computing (HPC), ML, and Windows workloads run on AWS than on any other cloud.
Instance Types
General Purpose
General purpose instances provide a balance of compute, memory, and networking resources, and can be used for a wide variety of workloads. These instances are ideal for applications that use these resources in roughly equal proportions, such as web servers and code repositories.
Compute Optimized
Compute Optimized instances are ideal for compute-bound applications that benefit from high-performance processors. Instances in this category are well suited for batch processing workloads, media transcoding, high-performance web servers, high performance computing (HPC), scientific modeling, dedicated gaming servers and ad serving engines, machine learning inference, and other compute-intensive applications.
Memory Optimized
Memory optimized instances are designed to deliver fast performance for workloads that process large data sets in memory.
Accelerated Computing
Accelerated computing instances use hardware accelerators, or co-processors, to perform functions, such as floating point number calculations, graphics processing, or data pattern matching, more efficiently than is possible in software running on CPUs.
Storage Optimized
Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.
5. Amazon Global Infrastructure
• AWS is a cloud computing platform that is globally available.
• The global infrastructure is the set of physical locations around the world from which AWS delivers its high-level IT services.
The following are the components that make up the AWS infrastructure:
• Availability Zones
• Region
• Edge locations
• Regional Edge Caches
6. Availability Zone as a Data Center
• An availability zone is a facility that can be located somewhere in a country or a city. Inside this facility, i.e., the data center, there can be multiple servers, switches, load balancers, and firewalls. The things that interact with the cloud sit inside the data centers.
• An availability zone can comprise several data centers, but if they are close together, they are counted as one availability zone.
7. Region
• A region is a geographical area. Each region consists of two or more availability zones.
• A region is a collection of data centers that is completely isolated from other regions.
• The availability zones within a region are connected to each other through links.
8. Edge Locations
• Edge locations are AWS endpoints used for caching content.
• Edge locations host Amazon CloudFront, Amazon's Content Delivery Network (CDN).
• There are many more edge locations than regions; currently, there are over 150 edge locations.
• An edge location is not a region but a small site that AWS operates, used for caching content.
• Edge locations are mainly located in major cities to distribute content to end users with reduced latency.
• For example, if a user accesses your website from Singapore, the request is redirected to the edge location closest to Singapore, where the cached data can be read.
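The routing decision described above can be sketched as a nearest-edge lookup. This is a simplified illustration, not how CloudFront actually routes requests (which uses DNS and measured network conditions); the city names and latency figures are made up:

```python
def nearest_edge(latency_ms: dict) -> str:
    """Return the edge location with the lowest latency to the user."""
    return min(latency_ms, key=latency_ms.get)

# Hypothetical latencies from a user in Singapore to nearby edge locations:
latencies = {"Singapore": 4, "Kuala Lumpur": 12, "Tokyo": 68, "Sydney": 92}
print(nearest_edge(latencies))  # Singapore
```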
9. Regional Edge Cache
• AWS announced a new type of edge location in November 2016, known as a
Regional Edge Cache.
• Regional Edge cache lies between CloudFront Origin servers and the edge locations.
• A regional edge cache has a larger cache than an individual edge location.
• Data is removed from the cache at the edge location while it is retained at the regional edge caches.
• When a user requests data that is no longer available at the edge location, the edge location retrieves the cached data from the regional edge cache instead of from the origin servers, which have higher latency.
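The lookup order described above (edge first, then regional edge cache, and the origin only as a last resort) can be sketched as a tiered cache. This is a toy model with plain dictionaries standing in for the caches, not CloudFront's actual implementation:

```python
def fetch(key, edge_cache, regional_cache, origin):
    """Look up key at the edge first, then at the regional edge cache,
    and only fall back to the (high-latency) origin as a last resort.
    Lower tiers are refilled from wherever the object is found."""
    if key in edge_cache:
        return edge_cache[key], "edge"
    if key in regional_cache:
        edge_cache[key] = regional_cache[key]   # refill the edge
        return edge_cache[key], "regional"
    value = origin[key]                          # slowest path
    regional_cache[key] = value
    edge_cache[key] = value
    return value, "origin"

edge, regional = {}, {}
origin = {"/logo.png": "<image bytes>"}
print(fetch("/logo.png", edge, regional, origin)[1])  # origin (first request)
print(fetch("/logo.png", edge, regional, origin)[1])  # edge (now cached)
```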
10. Concepts of Zone, Region and Multi-Region
• Regions are independent areas that consist of zones. They affect the pricing, reliability, networking, and performance of zonal resources such as VMs.
• A zone is a deployment area for Google Cloud resources within a region. A zone should be considered a single failure domain within a region. To deploy fault-tolerant applications with high availability and help protect against unexpected failures, deploy your applications across multiple zones in a region.
• Multi-region services are designed to be able to function following the loss of a single region. Multi-regional resources include Cloud Storage, BigQuery, Bigtable, etc.
11. Key Points
• If a single region fails, only customers in that region are impacted. Customers who use multi-region products are not impacted.
• Multi-region services are designed to be able to function following the loss of a single region. Multi-regional resources include Cloud Storage, BigQuery, Bigtable, etc.
• A region name ends with a number, while a zone name ends with a letter.
• The fully qualified name for a zone is made up of <region>-<zone>; for example, zone a in region us-central1 is us-central1-a.
• There are 29 regions and 88 zones in the world, and each region contains three zones except Iowa.
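The naming convention above means a fully qualified zone name can be split mechanically: everything up to the last hyphen is the region, and the final letter is the zone. A minimal sketch:

```python
def split_zone(zone: str):
    """Split a fully qualified zone name like 'us-central1-a' into its
    region ('us-central1') and zone letter ('a')."""
    region, _, letter = zone.rpartition("-")
    return region, letter

print(split_zone("us-central1-a"))  # ('us-central1', 'a')
```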
12. Amazon Elastic Compute Cloud
• Amazon EC2 presents a true virtual computing environment, allowing clients to
use a web-based interface to obtain and manage services needed to launch one or
more instances of a variety of operating systems (OSs).
• Clients can load the OS environments with their customized applications. They can
manage their network’s access permissions and run as many or as few systems as
needed.
• In order to use Amazon EC2, clients first need to create an Amazon Machine
Image (AMI). This image contains the applications, libraries, data, and associated
configuration settings used in the virtual computing environment.
13. Amazon EC2 (cont.)
• Amazon EC2 offers the use of preconfigured images built with templates to get up
and running immediately.
• Once users have defined and configured their AMI, they use the Amazon EC2 tools
provided for storing the AMI by uploading the AMI into Amazon S3.
• Amazon S3 is a repository that provides safe, reliable, and fast access to a client
AMI.
• Before clients can use the AMI, they must use the Amazon EC2 web service to
configure security and network access.
14. Amazon Elastic Compute Cloud (EC2)
• Amazon EC2 provides scalable computing capacity in the AWS cloud.
• Using Amazon EC2 eliminates your need to invest in hardware and support for different OSs.
• You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage.
• Amazon EC2 enables you to scale instances up or down.
• Instances can be launched in one or more regions and availability zones.
• Preconfigured templates, known as Amazon Machine Images (AMIs), are available.
• By default, when you create an account with Amazon, your account is limited to a maximum of 20 instances per EC2 region, with two default high I/O instances.
15. Types of EC2 Instances
• General Purpose: balanced memory and CPU
• Compute Optimized: more CPU than RAM
• Memory Optimized: more RAM
• Storage Optimized: low latency
• Accelerated Computing / GPU: graphics optimized
• High Memory: high RAM, Nitro System
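The table above can be captured as a simple lookup from workload profile to instance family. This is a hypothetical helper, not an AWS API; the family prefixes (m, c, r, i, g, u) follow common EC2 naming but are an assumption of this sketch:

```python
# Hypothetical mapping from workload profile to EC2 instance-family prefix.
FAMILY_BY_WORKLOAD = {
    "general":     "m",  # General Purpose: balanced memory and CPU
    "compute":     "c",  # Compute Optimized: more CPU than RAM
    "memory":      "r",  # Memory Optimized: more RAM
    "storage":     "i",  # Storage Optimized: low latency
    "accelerated": "g",  # Accelerated Computing / GPU
    "high_memory": "u",  # High Memory, Nitro System
}

def suggest_family(workload: str) -> str:
    """Return the instance-family prefix for a workload profile."""
    try:
        return FAMILY_BY_WORKLOAD[workload]
    except KeyError:
        raise ValueError(f"unknown workload profile: {workload!r}")

print(suggest_family("compute"))  # c
```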
16. Using Amazon EC2 to Run Instances
• During configuration, users choose which instance type(s) and operating system they want to use.
• Available instance types come in two distinct categories: Standard and High-CPU instances. Most applications are best suited for Standard instances, which come in small, large, and extra-large instance platforms.
• High-CPU instances have proportionally more CPU resources than random-access memory (RAM) and are
well-suited for compute-intensive applications.
• After determining which instance to use, clients can start, terminate, and monitor as many instances of their
AMI as needed by using web service Application Programming Interfaces (APIs) or a wide variety of other
management tools that are provided with the service.
• Users are able to choose whether they want to run in multiple locations and they pay only for resources
actually consumed.
• They can also choose from a library of globally available AMIs that provide useful instances. For example, if
all that is needed is a basic Linux server, clients can choose one of the standard Linux distribution AMIs.
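Starting instances via the web service API amounts to sending a RunInstances request with an AMI, an instance type, and a count. A real call (e.g. through boto3's `ec2.run_instances(**params)`) needs AWS credentials, so this sketch only builds the request parameters; the AMI ID below is a placeholder:

```python
def run_instances_params(ami_id, instance_type="t2.micro", count=1):
    """Build the keyword arguments for an EC2 RunInstances call as exposed
    by boto3's EC2 client. This sketch constructs the request only; actually
    sending it requires AWS credentials."""
    return {
        "ImageId": ami_id,            # the AMI to launch
        "InstanceType": instance_type,
        "MinCount": count,            # launch exactly `count` instances
        "MaxCount": count,
    }

params = run_instances_params("ami-12345678")  # placeholder AMI ID
print(params["InstanceType"])  # t2.micro
```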
17. Amazon EC2 Service Characteristics
There are quite a few characteristics of the EC2 service that provide significant benefits to an enterprise.
• First of all, Amazon EC2 provides financial benefits. Because of Amazon’s massive scale and large
customer base, it is an inexpensive alternative to many other possible solutions. The costs incurred to
set up and run an operation are shared over many customers, making the overall cost to any single
customer much lower than almost any other alternative. Customers pay a very low rate for the compute
capacity they actually consume.
• Security is also provided through Amazon EC2 web service interfaces. These allow users to configure
firewall settings that control network access to and between groups of instances. Amazon EC2 offers a
highly reliable environment where replacement instances can be rapidly provisioned.
• The EC2 service runs within Amazon’s proven, secure, and reliable network infrastructure and data
center locations.
18. 1. Dynamic Scalability
Amazon EC2 enables users to increase or decrease capacity in a few minutes.
Users can invoke a single instance, hundreds of instances, or even thousands of
instances simultaneously. Of course, because this is all controlled with web
service APIs, an application can automatically scale itself up or down
depending on its needs. This type of dynamic scalability is very attractive to
enterprise customers because it allows them to meet their customers’ demands
without having to overbuild their infrastructure.
19. 2.Full Control of Instances
Users have complete control of their instances. They have root access to each
instance and can interact with them as one would with any machine. Instances
can be rebooted remotely using web service APIs. Users also have access to the
console output of their instances. Once users have set up their account and
uploaded their AMI to the Amazon S3 service, they just need to boot that
instance. It is possible to start an AMI on any number (or any type) of
instances by calling the RunInstances API that is provided by Amazon.
20. 3. Configuration Flexibility
Configuration settings can vary widely among users. They have the choice of
multiple instance types, operating systems, and software packages. Amazon
EC2 allows them to select a configuration of memory, CPU, and instance
storage that is optimal for their choice of operating system and application. For
example, a user’s choice of operating systems may include numerous Linux
distributions, Microsoft Windows Server, and even an OpenSolaris
environment, all running on virtual servers.
21. Integration with Other Amazon Web Services
• Amazon EC2 works in conjunction with a variety of other Amazon web services. For
example, Amazon Simple Storage Service (Amazon S3), Amazon SimpleDB, Amazon
Simple Queue Service (Amazon SQS), and Amazon CloudFront are all integrated to provide
a complete solution for computing, query processing, and storage across a wide range of
applications.
• Amazon S3 provides a web services interface that allows users to store and retrieve any
amount of data from the Internet at any time, anywhere.
• It gives developers direct access to the same highly scalable, reliable, fast, inexpensive data
storage infrastructure Amazon uses to run its own global network of websites. The S3
service aims to maximize the benefits of scale and to pass those benefits on to developers.
22. Amazon Simple Queue Service(Amazon SQS)
Amazon SQS is a reliable, scalable, hosted queue for storing messages as they
pass between computers. Using Amazon SQS, developers can move data
between distributed components of applications that perform different tasks
without losing messages or requiring 100% availability for each component.
Any computer connected to the Internet can add or read messages without
requiring any installed software or special firewall configuration.
Components of applications using Amazon SQS can run independently and
do not need to be on the same network, developed with the same
technologies, or running at the same time.
23. Amazon SQS
• A message queue service offers reliable and scalable hosted queues
for storing messages as they travel between servers.
• It is a web service that gives you access to message queues that store
messages waiting to be processed.
• Using SQS, you no longer need a highly available message cluster or
the burden of running it.
• You can delete all the messages in an SQS queue without deleting
the SQS queue itself.
24. Why Amazon SQS?
• By using Amazon SQS, you can move data between distributed components
of your applications that perform different tasks without losing messages or
requiring each component to be always available.
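The decoupling described above can be illustrated with a toy in-memory stand-in for an SQS queue: the producer and consumer never interact directly, so neither needs to be available when the other runs. This is a simulation for illustration only, not the SQS API (the real service is accessed over HTTPS, e.g. via boto3's SQS client):

```python
from collections import deque

class ToyQueue:
    """A toy in-memory stand-in for an SQS queue."""
    def __init__(self):
        self._messages = deque()

    def send_message(self, body: str) -> None:
        self._messages.append(body)

    def receive_message(self):
        """Return the oldest message, or None if the queue is empty."""
        return self._messages.popleft() if self._messages else None

    def purge(self) -> None:
        """Delete all messages without deleting the queue itself."""
        self._messages.clear()

q = ToyQueue()
q.send_message("resize image-001.png")   # producer runs now...
msg = q.receive_message()                # ...consumer can run much later
print(msg)  # resize image-001.png
```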
26. Amazon CloudFront
Amazon CloudFront is a web service for content delivery. It integrates with other
Amazon web services to distribute content to end users with low latency and high data
transfer speeds. Amazon CloudFront delivers content using a global network of edge
locations. Requests for objects are automatically routed to the nearest edge server, so
content is delivered with the best possible performance. An edge server receives a
request from the user’s computer and makes a connection to another computer called
the origin server, where the application resides. When the origin server fulfills the
request, it sends the application’s data back to the edge server, which, in turn, forwards
the data to the client computer that made the request.
30. What is the price of Amazon CloudFront?
Amazon CloudFront charges are based on actual usage of the service in three areas:
• Data Transfer
• You will be charged for the volume of data transferred out of the Amazon CloudFront edge locations,
measured in GB.
• HTTP/HTTPS Requests
• You will be charged for the number of HTTP/HTTPS requests made to Amazon CloudFront for your
content.
• Invalidation Requests
• You may invalidate up to 1,000 files each month from Amazon CloudFront at no additional charge.
Beyond the first 1,000 files, you will be charged per file for each file listed in your invalidation requests.
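The three billing dimensions above can be combined into a simple estimator. The free tier of 1,000 invalidated files per month comes from the text; all the per-unit rates are hypothetical inputs, not published AWS prices:

```python
FREE_INVALIDATIONS = 1000  # first 1,000 invalidated files per month are free

def cloudfront_charge(gb_out, requests, invalidations,
                      gb_rate, request_rate, invalidation_rate):
    """Estimate a CloudFront bill from the three usage dimensions:
    data transfer out (GB), HTTP/HTTPS requests, and invalidation requests.
    All rates are hypothetical, not published AWS prices."""
    billable_invalidations = max(0, invalidations - FREE_INVALIDATIONS)
    return (gb_out * gb_rate
            + requests * request_rate
            + billable_invalidations * invalidation_rate)

# 500 invalidations stay within the free tier, so only transfer and
# requests are billed here (made-up rates):
print(cloudfront_charge(100, 1_000_000, 500,
                        gb_rate=0.085, request_rate=0.0000075,
                        invalidation_rate=0.005))
```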
31. Amazon Elastic Block Store (EBS)
• Elastic Block Store
• Persistent, network-attached virtual drives
• EBS volumes behave like raw, unformatted external block storage devices that you can attach
to your EC2 instances.
• EBS volumes are block storage devices suitable for database-style data that requires frequent
reads and writes.
• EBS volumes are attached to your EC2 instances through the AWS network, like virtual hard
drives.
• An EBS volume can be attached to only one EC2 instance at a time.
• Both the EBS volume and the EC2 instance must be in the same Availability Zone.
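The two attachment rules above (one instance at a time, same Availability Zone) can be modeled in a few lines. This is a toy model, not the EC2 API; the instance IDs and AZ names are made up:

```python
class Volume:
    """Toy model of the EBS attachment rules stated above."""
    def __init__(self, az: str):
        self.az = az
        self.attached_to = None

    def attach(self, instance_id: str, instance_az: str) -> None:
        if self.attached_to is not None:
            raise RuntimeError("volume is already attached to an instance")
        if instance_az != self.az:
            raise ValueError("volume and instance must be in the same AZ")
        self.attached_to = instance_id

    def detach(self) -> None:
        self.attached_to = None

vol = Volume("us-east-1a")
vol.attach("i-abc123", "us-east-1a")   # ok: same Availability Zone
print(vol.attached_to)  # i-abc123
```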
32. Instance store backed EC2
• Basically the virtual hard drive on the host allocated to the EC2 instance.
• Limited to 10 GB per device.
• Non-persistent storage.
• The EC2 instance can't be stopped; it can only be rebooted or terminated.
Termination deletes the data.
33. Amazon SimpleDB
Amazon SimpleDB is another web-based service, designed for running queries on structured data stored with the
Amazon Simple Storage Service (Amazon S3) in real-time. This service works in conjunction with the Amazon Elastic
Compute Cloud (Amazon EC2) to provide users the capability to store, process, and query data sets within the cloud
environment. Amazon SimpleDB is a highly available NoSQL data store that offloads the work of database
administration. Developers simply store and query data items via web service requests and Amazon SimpleDB does the
rest.
Unbound by the strict requirements of a relational database, Amazon SimpleDB is optimized to provide high
availability and flexibility, with little or no administrative burden. Behind the scenes, Amazon SimpleDB automatically
creates and manages multiple geographically distributed replicas of your data to enable high availability. The service
charges you only for the resources actually consumed in storing your data and serving your requests. You can change
your data model on the fly, and data is automatically indexed for you. With Amazon SimpleDB, you can focus on
application development without worrying about infrastructure provisioning, high availability, software maintenance,
schema and index management, etc.
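The data model behind this flexibility (domains holding items, each item a bag of attributes with no predefined schema) can be sketched as a toy in-memory store. This is an illustration, not the SimpleDB API; the item names and attributes are made up:

```python
from collections import defaultdict

class Domain:
    """Toy in-memory sketch of SimpleDB's schemaless data model."""
    def __init__(self):
        self._items = defaultdict(dict)

    def put_attributes(self, item_name: str, **attributes) -> None:
        """Add or update attributes on an item; no schema required."""
        self._items[item_name].update(attributes)

    def get_attributes(self, item_name: str) -> dict:
        return dict(self._items[item_name])

    def select(self, attr: str, value) -> list:
        """Return names of items whose attribute `attr` equals `value`."""
        return [name for name, attrs in self._items.items()
                if attrs.get(attr) == value]

products = Domain()
products.put_attributes("item1", category="book", price="9.99")
products.put_attributes("item1", author="Gibson")  # new attribute, no schema change
print(products.select("category", "book"))  # ['item1']
```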
35. Features
Amazon SimpleDB offers a range of features that make it a powerful and flexible data storage solution:
Efficiency: SimpleDB provides us with simple and fast data retrieval and storage.
Flexibility: With SimpleDB, we can easily add new attributes without worrying about predefined data formats. As your business
changes or application evolves, you can easily reflect these changes in Amazon SimpleDB without worrying about breaking a rigid
schema or needing to refactor code – simply add another attribute to your Amazon SimpleDB data set when needed.
Budget-friendly: SimpleDB's economic model allows for payment only for the specific resources utilized, including machine
utilization, structured data storage, and data transfer.
Low touch: Amazon SimpleDB automatically manages infrastructure provisioning, hardware and software maintenance,
replication and indexing of data items, etc.
High Availability: Amazon SimpleDB automatically creates multiple geographically distributed copies of each data item you store.
This provides high availability.
Secure: Amazon SimpleDB provides an HTTPS endpoint to ensure secure, encrypted communication between your application or
client and your domain.
Smooth integration: We can smoothly integrate SimpleDB with other Amazon Web Services such as EC2 and S3.
36. Benefits
• Eliminates operational complexity: We don't need to worry about provisioning servers or managing their
infrastructure, as AWS handles everything for us. This saves our time and energy so that we can work on
other essential tasks.
• No schema required for data storage: We can store data in SimpleDB without defining a schema
beforehand. This makes adding new data to our database easy without modifying its structure.
• Reduces administrative burden: Since SimpleDB is a managed service, we don't need to perform
maintenance tasks like backup and recovery or software upgrades. With AWS, our team can reduce their
administrative workload as the platform takes care of these tasks on our behalf.
• Simple API for accessing and storing data: The SimpleDB API is easy to use, allowing us to quickly
access and store data without needing to learn complex query languages or database management systems.
• Data is automatically indexed: When we store data in SimpleDB, the service indexes it for faster querying
and retrieval. This saves our time and effort, as we don't need to configure indexes manually for our
database.
37. Drawbacks
• Storage limitations: SimpleDB limits the amount of data we can store in a
single domain and limits the size of individual attributes and the number of
attributes per item. This can be a challenge for applications with large or
complex data requirements, requiring careful planning and management.
• Weaker forms of consistency: SimpleDB's eventual consistency model
means that updates to data may take time to propagate across all nodes in the
system, leading to potential data inconsistency. This can be a drawback for
applications requiring strong consistency guarantees.
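The stale-read behavior of eventual consistency can be illustrated with a minimal replication model: a write lands on one replica immediately and reaches the others only after replication runs, so a read from another replica can briefly return old data. This is a teaching sketch, not SimpleDB's actual replication protocol:

```python
class Replicated:
    """Minimal illustration of eventual consistency across replicas."""
    def __init__(self, n_replicas: int):
        self.replicas = [{} for _ in range(n_replicas)]
        self._pending = []

    def write(self, key, value):
        self.replicas[0][key] = value   # primary replica sees it at once
        self._pending.append((key, value))

    def read(self, replica: int, key):
        return self.replicas[replica].get(key)

    def replicate(self):
        """Propagate pending writes to every replica (convergence)."""
        for key, value in self._pending:
            for r in self.replicas:
                r[key] = value
        self._pending.clear()

db = Replicated(3)
db.write("color", "blue")
print(db.read(1, "color"))  # None  (stale read before replication)
db.replicate()
print(db.read(1, "color"))  # blue  (all replicas have converged)
```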