16. What is Amazon CloudFront?
CloudFront is the AWS content delivery network (CDN).
It securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds.
CloudFront is integrated with AWS; its physical locations are directly connected to the AWS Global Cloud Infrastructure and other AWS services.
It features a global network of more than 200 points of presence (PoPs).
40. Anatomy of a Lambda function
Import sdk
Import http-lib
Import ham-sandwich
Pre-handler-secret-getter()
Pre-handler-db-connect()
Function myhandler(event, context) {
  if <event handling logic> {
    result = SubfunctionA()
  } else {
    result = SubfunctionB()
  }
  return result;
}
Your handler
41. Anatomy of a Lambda function
Import sdk
Import http-lib
Import ham-sandwich
Pre-handler-secret-getter()
Pre-handler-db-connect()
Function myhandler(event, context) {
  if <event handling logic> {
    result = SubfunctionA()
  } else {
    result = SubfunctionB()
  }
  return result;
}
Your handler
Dependencies, configuration information, common helper functions
42. Anatomy of a Lambda function
Import sdk
Import http-lib
Import ham-sandwich
Pre-handler-secret-getter()
Pre-handler-db-connect()
Function myhandler(event, context) {
  if <event handling logic> {
    result = SubfunctionA()
  } else {
    result = SubfunctionB()
  }
  return result;
}
Function Pre-handler-secret-getter() {
}
Function Pre-handler-db-connect() {
}
Your handler
Dependencies, configuration information, common helper functions
43. Anatomy of a Lambda function
Import sdk
Import http-lib
Import ham-sandwich
Pre-handler-secret-getter()
Pre-handler-db-connect()
Function myhandler(event, context) {
  if <event handling logic> {
    result = SubfunctionA()
  } else {
    result = SubfunctionB()
  }
  return result;
}
Function Pre-handler-secret-getter() {
}
Function Pre-handler-db-connect() {
}
Function SubfunctionA(thing) {
  ## logic here
}
Function SubfunctionB(thing) {
  ## logic here
}
Business logic sub-functions
Your handler
Dependencies, configuration information, common helper functions
Common helper functions
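The structure above can be sketched as a runnable Python Lambda handler. This is an illustrative sketch, not a specific AWS API: the function names, the placeholder secrets and connection, and the event's `type` field are all made up. The key point is that module-level code runs once per execution environment (cold start), while the handler runs on every invocation.

```python
import json  # stands in for "Import sdk / http-lib / ham-sandwich"

# "Pre-handler" work: fetch secrets, open connections, etc. These run at
# module load time, once per execution environment.
def pre_handler_secret_getter():
    return {"api_key": "dummy"}          # placeholder secret

def pre_handler_db_connect():
    return object()                      # placeholder connection

SECRETS = pre_handler_secret_getter()
DB = pre_handler_db_connect()

# Business-logic sub-functions
def sub_function_a(event):
    return {"route": "A", "body": json.dumps(event)}

def sub_function_b(event):
    return {"route": "B"}

# Your handler: invoked once per event
def myhandler(event, context):
    if event.get("type") == "a":         # <event handling logic>
        result = sub_function_a(event)
    else:
        result = sub_function_b(event)
    return result

print(myhandler({"type": "a"}, None))
```

Because the pre-handler work is reused across invocations in the same environment, expensive setup (secrets, database connections) is usually kept outside the handler, exactly as the pseudocode shows.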
49. Challenge
They experienced service admin challenges with their original provider and wanted to scale the business to the next level.
Solution
They moved from self-managed MySQL to Amazon Aurora MySQL. They use Aurora as the primary transactional database, Amazon DynamoDB for personalized search, and Amazon ElastiCache as an in-memory store for sub-millisecond site rendering.
Result
“Initially, the appeal of AWS was the ease of managing and customizing the stack. It was great to be able to ramp up more servers without having to contact anyone and without having minimum usage commitments. AWS is the easy answer for any Internet business that wants to scale to the next level.”
—Nathan Blecharczyk, Cofounder and CTO of Airbnb
MOVE TO MANAGED → Amazon Aurora, Amazon ElastiCache, Amazon DynamoDB
1/ First, it all starts with our foundation. As you look at the Gartner IaaS MQ, Gartner calls out the breadth of our offering and the strength of our infrastructure, including the unmatched reliability and availability we provide.
3/ The AWS Cloud spans 69 Availability Zones within 22 geographic Regions around the world, with announced plans for 9 more Availability Zones and three more Regions in Cape Town, Jakarta, and Milan. Our global network includes 191 Points of Presence (180 Edge Locations and 11 Regional Edge Caches) in 73 cities across 33 countries.
4/ Amazon CloudFront uses a global network of 187 Points of Presence (176 Edge Locations and 11 Regional Edge Caches) in 69 cities across 30 countries
5/ Our AWS geographic Regions are composed of Availability Zones (AZs): sets of data centers that are isolated from one another's failures and connected with low-latency links, natively providing high availability.
6/ All of this is supported by the AWS global network, which connects all of our Regions. It's a network that's been built specifically for the cloud, and we continue to iterate on it.
When you configure a VPC, you select an IP address range to use for your virtual network. For IPv4, customers typically use a private address range, as described in RFC 1918. These CIDRs can be as large as /16 (65,536 IPs) or as small as /28 (16 IPs). You then subnet the VPC CIDR for each of the subnets you define.
Here we’ve configured 172.31.0.0/16 as the VPC CIDR and created two public subnets (172.31.0.0/24, 172.31.1.0/24) and two private subnets (172.31.128.0/24, 172.31.129.0/24).
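The layout above can be checked with Python's standard `ipaddress` module. The subnet names are illustrative labels; only the CIDRs come from the example.

```python
import ipaddress

# VPC CIDR and the four subnets from the example above
vpc = ipaddress.ip_network("172.31.0.0/16")
subnets = {
    "public-a":  ipaddress.ip_network("172.31.0.0/24"),
    "public-b":  ipaddress.ip_network("172.31.1.0/24"),
    "private-a": ipaddress.ip_network("172.31.128.0/24"),
    "private-b": ipaddress.ip_network("172.31.129.0/24"),
}
for name, net in subnets.items():
    # every subnet must fall inside the VPC CIDR
    assert net.subnet_of(vpc)
    print(name, net, net.num_addresses)  # each /24 holds 256 addresses
```

A /16 VPC holds 65,536 addresses, so carving out a handful of /24s (256 addresses each) leaves plenty of room for additional subnets later.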
If you're ready to continue learning, we offer free digital courses on Networking and Content Delivery, including a two-hour deep dive into AWS Transit Gateway.
Check your knowledge and skills with the one-day Exam Readiness class for the AWS Certified Advanced Networking – Specialty, available online or in person.
Then, validate your experience with an industry-recognized certification: AWS Certified Advanced Networking – Specialty.
So we talked about ECS, Fargate, and Lambda, and the serverless operations model looks like this:
1/ You can start at the very bottom with EC2 and have access to all the knobs you want to manage, or you can go completely serverless with Lambda and Fargate, where you're focusing just on your application.
2/ The layers of abstraction available to you with AWS are empowering because your teams can pick the layer they're most comfortable with, and we provide the tools, services, and APIs necessary to help you build your application.
As I mentioned earlier, EC2 stands for Elastic Compute Cloud.
We have racks of EC2 servers deployed across all of our Regions, with each AWS Region consisting of multiple Availability Zones, or AZs as we call them, and each AZ is typically multiple data centers.
Within these racks, we sometimes have dozens of servers that each contain processors, memory, networking, and sometimes local storage. As part of the EC2 stack, we have a hypervisor that partitions these resources into virtual machines, or guests, which we call EC2 instances.
Physical disks, local to the physical host running your instance
Non-persistent: the data only exists for the life of the instance; when you stop or terminate, it's gone, though it will survive a reboot
Data is not replicated by default, although you can do that on top of it if you want
No snapshot support for backups; that's also DIY
EBS is a distributed system.
Your EBS volume is a logical volume composed of MANY PHYSICAL DEVICES.
Because the service is distributed across many physical devices, EBS can deliver better performance and durability than if we simply mapped volume to disk.
gp2: General Purpose SSD
io1: Provisioned IOPS SSD
st1: Throughput Optimized HDD
sc1: Cold HDD
Snapshots
The first time you take a snapshot, every modified block is copied to S3
Subsequent snapshots are incremental and only changed blocks are backed up
Deleting a snapshot only removes data exclusive to that snapshot
Point-in-time backup of modified volume blocks
Stored in S3, accessed via EBS APIs
Subsequent snapshots are incremental
Deleting a snapshot will only remove data exclusive to that snapshot
Crash-consistent
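The incremental and delete semantics above can be illustrated with a toy model. This is not the AWS implementation, just a sketch of the idea: each snapshot records which block versions it references, a block version is stored once, and deleting a snapshot frees only block versions no other snapshot still references.

```python
# Toy model of EBS incremental snapshots (illustrative only).
class SnapshotStore:
    def __init__(self):
        self.snapshots = {}  # snapshot id -> {block id: version}

    def take(self, snap_id, volume_blocks):
        # volume_blocks maps block id -> version; re-referencing unchanged
        # versions is what makes later snapshots incremental.
        self.snapshots[snap_id] = dict(volume_blocks)

    def delete(self, snap_id):
        removed = self.snapshots.pop(snap_id)
        still_referenced = {
            (b, v) for s in self.snapshots.values() for b, v in s.items()
        }
        # Only data exclusive to the deleted snapshot is actually freed.
        return {(b, v) for b, v in removed.items()} - still_referenced

store = SnapshotStore()
store.take("snap-1", {"b0": 1, "b1": 1})
store.take("snap-2", {"b0": 1, "b1": 2})  # only block b1 changed
freed = store.delete("snap-1")            # b0 v1 is still used by snap-2
print(freed)  # only ("b1", 1) is freed
```

Deleting snap-1 frees just block b1's old version, because snap-2 still references b0; that is why deleting an early snapshot does not invalidate later incremental ones.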
Back when we launched EC2 in 2006, we offered only one instance size. It came with 1 vCPU and 1.7 GB of system memory. Besides giving customers the ability to easily provision compute resources via a web service, there were a few other core tenets that really changed the way developers provisioned and consumed compute resources. We allowed customers to pay only for what they used and to scale up and down quickly as needed. Until EC2, the only option was for customers to build their own data centers and procure and manage hardware with long-term commitments. What EC2 offered back in 2006 was considered pretty revolutionary.
The M1 instance that we started offering in 2006 was a good general-purpose instance and addressed the needs of a lot of workloads. Over time, as more customers started using EC2, we got feedback that their particular workloads needed a different combination of compute resources than what M1 offered.
As you might have heard, more than 90% of our product roadmap is influenced by direct customer feedback. Based on this feedback, we have innovated to provide the broadest selection of compute resources in the market.
Show an ec2 instance
D – NVMe SSD Storage
5/ Let's start with making it easier to choose the right resources for your workload. With 270+ instances, the #1 question we hear from customers is how to know which instance to select: which instance type, what size, and what attributes you need to power your workload most efficiently.
6/ To help address that, I am very excited to announce Mettle.
1/ Previously, you had to reference multiple data sources and test multiple instance types before selecting the best instance type for your workload. You had to repeat this selection process as workloads evolved and new EC2 instance types and features were released.
2/ Now you have a single source of truth for the latest instance types, attributes, regional and zonal offerings, and pricing.
3/ You can get started by defining your hardware requirements and reviewing the set of instance types that meet them. You can further compare the hardware attributes, pricing, and availability of each instance type if needed. Then you can select and launch an instance, alias it by creating an SSM parameter, or save it in a launch template to be launched later or referenced in existing automation.
4/ This new experience makes it quicker and easier for you to find and compare different instance types, project costs, and select an instance type that you are confident will deliver the performance you need within budget.
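The "define requirements, review matches, compare price" flow can be sketched in a few lines. The catalog below is made-up illustrative data, not a real EC2 API or real pricing: filter a catalog by minimum vCPU and memory, then sort the matches by price to compare candidates.

```python
# Hypothetical instance catalog (types are real-looking but prices are
# illustrative, not quoted from AWS).
CATALOG = [
    {"type": "m5.large",   "vcpu": 2, "mem_gib": 8,  "usd_hr": 0.096},
    {"type": "c5.xlarge",  "vcpu": 4, "mem_gib": 8,  "usd_hr": 0.170},
    {"type": "r5.xlarge",  "vcpu": 4, "mem_gib": 32, "usd_hr": 0.252},
    {"type": "m5.2xlarge", "vcpu": 8, "mem_gib": 32, "usd_hr": 0.384},
]

def matching_instances(min_vcpu, min_mem_gib):
    # keep only instances meeting the hardware floor, cheapest first
    matches = [i for i in CATALOG
               if i["vcpu"] >= min_vcpu and i["mem_gib"] >= min_mem_gib]
    return sorted(matches, key=lambda i: i["usd_hr"])

for inst in matching_instances(min_vcpu=4, min_mem_gib=16):
    print(inst["type"], inst["usd_hr"])
```

With a 4 vCPU / 16 GiB floor, only the 32 GiB instances qualify, and sorting surfaces the cheaper r5.xlarge first; the real experience works over the full instance-type feed rather than a hand-written list.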
1/ And this is what your layers of management end up looking like. You have this completely managed orchestration or container-management layer, but you also have these software-management layers just to run your application.
2/ And all you really want here is to run your containers. Fargate enables you to do just that. Notice there is no management of instances; your infrastructure is ready to scale as your application is.
3/ There are no longer two levels of scaling to manage. You only define the requirements of your application in terms of a task: how the service should scale, what metrics you care about, and how many more such containers or tasks you want Fargate to launch.
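Defining the application in terms of a task looks roughly like this. The field names follow the ECS task-definition schema, but the family name, image, and sizes are made-up example values, shown here as a Python dict rather than a real API call.

```python
import json

# Illustrative Fargate task definition: you declare task-level CPU and
# memory instead of choosing and managing EC2 hosts.
task_definition = {
    "family": "web-app",                     # hypothetical task family
    "requiresCompatibilities": ["FARGATE"],  # run on Fargate, not EC2
    "networkMode": "awsvpc",                 # required for Fargate tasks
    "cpu": "256",                            # 0.25 vCPU for the whole task
    "memory": "512",                         # MiB for the whole task
    "containerDefinitions": [
        {
            "name": "web",
            "image": "example/web:latest",   # made-up container image
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}
print(json.dumps(task_definition, indent=2))
```

Scaling then happens at the task level: a service scales the desired task count on the metrics you choose, and Fargate launches the capacity.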
1/ AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).
2/ Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
3/ Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning and paying for additional servers.
4/ Fargate runs each task or pod in its own kernel, giving tasks and pods their own isolated compute environment. This gives your application workload isolation and improved security by design. This is why customers such as Vanguard, Accenture, Foursquare, and Ancestry have chosen to run their mission-critical applications on Fargate.
If you're ready to continue learning, check out our library of free digital courses, including introductory primers on a range of services.
You can also take classroom training to get hands-on practice and learn directly from an instructor.
Visit the learning library for the full list of courses.
Databases
For customers running legacy databases on premises, provisioning, operating, scaling, and managing databases is tedious, time-consuming, and expensive. Customers want to spend time innovating and building new applications, not managing infrastructure.
With AWS services, you don't need to worry about administration tasks such as server provisioning, patching, setup, configuration, backups, or recovery. AWS continuously monitors your clusters to keep your workloads up and running with self-healing storage and automated scaling. You focus on higher-value application development tasks such as schema design and query construction and optimization, leaving AWS to take care of operational tasks on your behalf.
You never have to over- or under-provision infrastructure to accommodate application growth, intermittent spikes, and performance requirements, or incur fixed capital costs such as software licensing and support, hardware refreshes, and resources to maintain hardware. AWS does it all for you, so you can spend time innovating and building new applications, not managing infrastructure.
Here's an example of a customer who's all-in on AWS. Airbnb moved away from self-managed databases to fully managed AWS databases such as Aurora, DynamoDB, and ElastiCache.
https://aws.amazon.com/solutions/case-studies/airbnb/
Image source: free stock image from Pexels.com (no license fee)
AWS offers the broadest set of databases and analytics services for customers to lift and shift their database and analytics workloads to the cloud. And customers are doing this at record levels across many different areas:
1/ relational databases – For customers wanting to move away from self-managing Oracle, SQL Server, MySQL, PostgreSQL, and MariaDB databases, AWS offers Amazon RDS and Amazon Aurora.
2/ non-relational databases – For customers wanting to move away from self-managed non-relational document- and key-value stores such as MongoDB, Redis, and Memcached, AWS offers DynamoDB, DocumentDB and ElastiCache.
3/ Data Warehouses – customers want to move from their expensive, proprietary Teradata, Oracle and SQL Server Data Warehouses to Amazon Redshift.
4/ Hadoop and Spark – customers want to move their on-premises Hadoop and Spark deployments to Amazon EMR for cost savings and a managed service.
5/ operational analytics – customers want to move their on-premises Elasticsearch, Logstash, and Kibana (ELK) deployments to Amazon Elasticsearch Service for cost savings and a managed service.
6/ real-time analytics – customers want to move their Apache Kafka deployments to Amazon Managed Streaming for Apache Kafka.
If you’re ready to continue learning, we offer free digital courses for database services.
The DATABASE learning path tells you how to get started
Then, validate your experience with an industry-recognized certification in Databases.
Customers that are running commercial databases such as Oracle and SQL Server on premises often choose to first migrate to Amazon RDS, a fully managed relational database service that you can use to run your choice of database engines, including open-source engines as well as Oracle and SQL Server. Amazon RDS improves database scale and performance and automates time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups.
Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database built for the cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open-source databases.
Amazon Aurora is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases. It provides the security, availability, and reliability of commercial databases at 1/10th the cost. Amazon Aurora is fully managed by Amazon Relational Database Service (RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups.
Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 64 TB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across three Availability Zones (AZs).
The minimum storage is 10 GB. Based on your database usage, your Amazon Aurora storage will automatically grow, up to 64 TB, in 10 GB increments with no impact to database performance. There is no need to provision storage in advance.
[AWS is successful in large part due to your input, ideas, and feedback.]
[Throughout the year, we deliver new or improved capabilities that directly address your input, covering cost-efficiency, higher availability, integrations across our services, and performance, to name a few.]
Based on years of your input and our innovation, AWS has the broadest portfolio of file system services available today.
And our file system services complement our leadership in both block and object storage.
[Let's review a few of the new innovations and capabilities we delivered since last re:Invent.]
Fast, durable, highly available key-based access to objects
[Our first file system, launched in 2016, was Amazon Elastic File System (EFS).
It was designed to provide a cloud-scale file system for the vast majority of Linux-based workloads. Today EFS serves hundreds of thousands of customers in 19 AWS Regions.]
We built EFS to be cloud-scale (elastic), simple (set and forget), cost-effective, and performant.