In short, it is the on-demand access to IT resources with a pay-as-you-go model. You only pay for resources you use, much like consuming electricity.
TALKING POINTS
Customers have selected AWS for eight years because we have proven ourselves committed to customer success.
We believe we stand apart in the market because of six factors: Experience, Service Breadth and Depth, Pace of Innovation, Global Footprint, Pricing Philosophy, and Partner Ecosystem.
The AWS Partner Network (APN) includes tens of thousands of consulting/systems integrator and technology/ISV partners, and has grown over 50% over the past 12 months.
Our leadership in compute and cloud is grounded in customer obsession. We innovate on behalf of you and your needs. Today, 90% of our product roadmap is based on customer feedback – what you’ve told us that you need. It’s our customers who are pushing the boundaries of what is possible today and driving us to enable new scenarios and capabilities.
In 2016, we launched 1,017 new services and features. We launched 1,430 new features and services in 2017. In 2018, we launched 1,957 new features and services.
Talk Track:
Flywheel around innovation and our roadmap. As we continue to innovate, more customers leverage the services, driving more of our roadmap, which drives more innovation, and continues to attract more customers.
As a leading Life Sciences organization, you need a partner that can keep up with your pace of innovation, your changing business climate, and tomorrow’s requirements.
How is AWS able to continue to deliver innovation at such a staggering rate? Two pizza teams: enabling small groups to work with customers and understand their needs so they can quickly build solutions that address customer requirements.
An ecosystem of partners can help you make the transition to the cloud – consultants, ISVs, and MSPs – and no other ecosystem of partners is comparable to AWS’s.
Marketplace stats as of March 2019
Yes, it is a bit of an eye chart, but it provides insight into the number of services AWS provides.
Today we will focus on the Core Services in the lower left – Network, Compute, and Storage, along with a high level overview of Security.
What are we trying to articulate on this slide:
This slide gives the customer an overview of our scale and reach, and ability to consistently serve them as they move into different markets.
Talking points:
1 million customers in 190 countries
To support global businesses, we maintain 19 physical regions across the globe (as of 11/2018), in the U.S., Brazil, Europe, Japan, Singapore, Australia, and China. Additional regions in Sweden, Milan, Bahrain, and Hong Kong are expected to come online over the next 12 – 18 months.
Each region is made up of clusters of datacenters known as Availability Zones.
This reach is important because:
You can replicate your infrastructure in any region in the world in minutes
Data stays in the region you place it in; we do not move your data, which is important in countries like Germany where data locality is essential to doing business
You can architect your applications to span availability zones and regions to take advantage of both availability and low-latency performance for your users across the globe
Relevant customer examples:
This is an opportunity to pull in several customer names from across our customer base.
Conversation topics:
Where are your customers located?
What is your current infrastructure footprint?
Other tips:
Not all customers have a global need. For these customers, focus on the ability to build applications across AZs and discuss edge locations; tailoring this slide to the customer type will be key.
You can choose to deploy and run your applications in multiple physical locations within the AWS cloud.
Our data center footprint is global, spanning 5 continents with highly redundant clusters of data centers in each region.
Amazon Web Services are available in geographic Regions that are kept as independent and separate as possible for data sovereignty, while offering the same services as much as possible.
When you use AWS, you can specify the Region in which your data will be stored, instances run, queues started, and databases instantiated.
Within each Region are Availability Zones (AZs).
Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive, low latency network connectivity to other Availability Zones in the same Region. By launching instances in separate Availability Zones, you can protect your applications from a failure (unlikely as it might be) that affects an entire zone. Regions consist of one or more Availability Zones, are geographically dispersed, and are in separate geographic areas or countries. The Amazon EC2 service level agreement commitment is 99.99% availability for each Amazon EC2 Region.
4 9’s = 52 minutes/yr
5 9’s = 5.2 minutes/yr
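The arithmetic behind these downtime figures is straightforward; a quick sketch:

```python
# Convert an availability percentage into expected downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability_pct):
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

print(round(downtime_minutes_per_year(99.99), 1))   # four nines: ~52.6 min/yr
print(round(downtime_minutes_per_year(99.999), 1))  # five nines: ~5.3 min/yr
```

The slide rounds four nines down to 52 minutes; the exact figure is about 52.6.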
Our footprint is expanding continuously as we increase capacity, redundancy and add locations to meet the needs of our customers around the world.
AWS maintains Regions, which are major geographic areas, and Availability Zones (AZs), which are the individual data centers or clusters of data centers that make up a Region. Regions are independent and separate, with as much isolation as possible for data sovereignty, while offering the same services as much as possible.
Today, AWS operates 19 Regions around the world. Each Region has a minimum of 2 AZs (separate power, flood plains, etc.) to allow customers to set up high-availability architectures and data redundancy. An AZ is an abstraction of a data center with fault isolation, yet close enough to the other AZs in its Region to build high-availability architectures.
In addition to Regions, AWS maintains edge locations that support Route 53 DNS and Amazon CloudFront (CDN) points of presence.
In many of these edge locations we also have our Direct Connect locations, allowing for private connectivity from your on-premises locations – data centers, branch offices, remote sites – back to an AWS region. Being in an edge location, these Direct Connect locations have access to the AWS backbone for redundant capacity back to the AWS Region.
1 “Local Region” – Osaka with a single AZ
It is our approach that is different, not that they don’t have availability zones. Historically, we only deploy Regions when we have two or more Availability Zones per Region; this is our philosophy, so that high availability can be achieved. Microsoft has long had a practice of deploying regions even when they were a single data center. They are attempting to match our Availability Zone and high-availability concepts, and they are beginning to deploy a similar footprint in certain regions, but certainly not all. AWS will continue to innovate while they attempt to catch up.
This different approach also speaks to how we design our services. From the beginning, our services take advantage of our highly-available footprint because we don’t have any regions designed without the multiple availability zone concept.
Any specific questions or topics folks want to learn about today?
We’ve helped customers like Netflix, which built their entire infrastructure on AWS; General Electric, which moved 9,000 applications from on-premises to AWS; and Pinterest, which started on AWS and continues to build on top of it today. That is why AWS storage is so broad and deep – it’s feedback from over a decade of use at scale that drives 90–95% of our roadmap. And we’ve learned a lot. I want to share some observations of the adoption patterns we see for how customers have taken this cloud journey with storage.
Let’s talk about re-host first. This is typically “lift and shift,” where you take your workloads, typically delivered on virtual machines, and migrate them as-is onto AWS. It’s also a good way to get some quick wins. GE didn’t start moving to the cloud by saying “we’re going to move 9,000 applications.” GE started their cloud journey with an aggressive top-down goal of moving 50 applications in 30 days. When you take a strong leadership position like that, your organization will find a way to get those quick wins, whether it’s rehosting or any of the other patterns that I’m going to talk about.
When Lionsgate Entertainment decided to move their SharePoint and SAP deployments to AWS, they were able to reduce their time-to-deployment from weeks to days or hours. This quicker turnaround has been a win for their IT department and opened the doors to additional workloads moving from test and development to production. Using AWS has allowed them to avoid investing in a new data center, saving $1M+ in three years. They estimate that AWS will save them 50% over a traditional hosting facility.
Re-platforming is lifting an existing application to the cloud and adopting bits and pieces of the new AWS platform to change your application. In the database world, you could spin up EC2 instances, install MySQL, and then migrate a database. However, you could re-platform by using our RDS service and let AWS manage that database for you. In the storage world, that might mean taking an existing application that depends on an underlying shared file system, that an on-premises Network-Attached Storage system might provide, and spinning up a new Amazon EFS file system that can be a drop-in replacement for the application’s storage layer. A fully managed file system like EFS removes all of the provisioning and administration overhead of an on-premises product and lets you focus on migrating the application itself to the AWS Cloud. This allowed the BBC to migrate their Red Button service to the AWS Cloud without having to spend the time or money to re-write core components that depend on a POSIX-compliant file system, and for less cost than running their on-prem NFS server. This approach also allows you to get those quick wins. You’re still using similar technologies but have moved over to managed implementations like EFS. It’s an easy optimization.
And finally re-architecting, which is building your application to take full advantage of AWS cloud services. This also applies to any new application that you are writing today. There is no reason you shouldn’t be writing your new applications in the cloud. You get the advantages from the start. But it’s also game changing for your existing business critical applications. Many times this is used as a way to modernize applications you have been running for years. FINRA is the regulatory body of the US stock exchanges. Five years ago, when FINRA was first moving to the cloud, they looked at different options for their cloud journey. They could have started with any of the options you see here, but decided to go with re-architecting their core application to detect fraud within 24 hours of market close. They did that because it was too important not to. FINRA believed that they needed the cloud for the core of their mission which was to be the watchdog of the consumer investor, and the only way to do that was to achieve the maximum agility that they could possibly get. And now FINRA is doing 500 billion validation checks on trades daily, all running on AWS.
And it’s fairly common to use one or more of these patterns together when charting your course and journey to the cloud!
To fit all of these use cases, we’ve developed the broadest portfolio of storage services in the cloud, spanning file, block, and object storage services, as well as hybrid cloud and data transfer solutions. And we’re constantly adding new services, like:
Amazon FSx for Windows File Server to provide you a fully-managed, native windows file system
Amazon FSx for Lustre to provide you a high-performance file system for major data processing workflows
Or the new AWS Backup service that centrally manages and automates backup processes for AWS resources like EBS volumes, RDS databases, DynamoDB tables, EFS file systems, and AWS Storage Gateway volumes.
…but we don’t just provide you the largest number of services. Much of our innovation has been poured into the depth of features and functionality in those storage services…
Like a new storage class for Amazon EFS that saves you 85% on infrequently accessed data
Like a new Deep Archive storage class for S3 Glacier that lets you store data for about a dollar per terabyte per month.
Or Like a new S3 storage class that automatically moves your data between two storage tiers to save you money when your storage access patterns change.
Assuming one provisions a little above (10% over) the anticipated peak
72 possible squares, 44 used
120 possible squares, 61 used
120 possible squares, 24 reserved + 30 on demand
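Reading these build notes as used vs. provisioned capacity squares (an interpretation of the slide, not stated in the notes), the implied utilization is easy to compute:

```python
# Utilization implied by the slide's capacity-square counts.
def utilization(used, possible):
    return used / possible

print(f"{utilization(44, 72):.0%}")        # 44 of 72 squares: 61%
print(f"{utilization(61, 120):.0%}")       # 61 of 120 squares: 51%
print(f"{utilization(24 + 30, 120):.0%}")  # reserved + on-demand, 54 of 120: 45%
```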
Accelerate compute: Run parallel tasks on a multitude of instance types, maximizing performance, meeting business SLAs, and reducing time to market
Further Reduce Costs: Access instances at up to a 90% discount vs. On-Demand pricing, enabling workloads at the lowest possible cost
Build for Scale: Quickly ramp up short-lived but massive data jobs on unused EC2 compute capacity at a low cost
Innovation requires change, and the easiest kind of change to drive is lots of incremental innovation, and lots of experiments. You can learn faster by making lots of little changes, rather than waiting to take one big leap.
That’s where microservices become so valuable – you can iterate on each microservice to deliver whatever pace of innovation you choose.
And you can also structure teams around those services. So each engineer has both the knowledge and the authority to make decisions about their service. You end up with the ability to make decisions really quickly – because their decisions impact only their individual service.
----
Finer-grained architectures: do one thing, do it well
Componentization: small, easily identifiable pieces with well-defined interfaces
Represent the real world (service boundaries = business boundaries)
Related behavior sits together (high cohesion)
Cohesive system made of many small parts that work together
Monoliths
Single monolithic app | Must deploy entire app | One database for entire app | Organized around technology layers | State in each runtime instance | One technology stack for entire app | In-process calls locally, SOAP externally
When you're working with a monolithic app, you have many developers all pushing changes through a shared release pipeline, which causes friction at many points in the lifecycle.
Upfront during development, engineers need to coordinate their changes to make sure they're not making changes that will break someone else's code.
If you want to upgrade a shared library to take advantage of a new feature, you need to convince everyone else to upgrade at the same time – good luck with that!
And if you want to quickly push an important fix for your feature, you still need to merge it in with everyone else's in-process changes. This leads to "merge Fridays", or worse yet "merge weeks", where all the developers have to combine their changes and resolve any conflicts for the next release.
Even after development, you also face overhead when you're pushing the changes through the delivery pipeline
You need to re-build the entire app, run all of the test suites to make sure there are no regressions, and re-deploy the entire app
To give you an idea of this overhead, Amazon had a central team whose sole job was to deploy this monolithic app into production
Even if you're just making a one-line change in a tiny piece of code you own, you still need to go through this heavyweight process and wait to catch the next train leaving the station
For a fast-growth company trying to innovate and compete, this overhead and sluggishness is unacceptable
Microservices are minimal function services that are deployed separately but can interact together to achieve a broader use case.
Many smaller minimal function microservices | Can deploy each independently | Each has its own datastore | Organized around business capabilities | State is externalized | Choice of technology for each microservice | REST calls over HTTP, messaging
When our monolith became too big to scale efficiently, we made a couple of big changes: one was architectural, and the other was organizational
The teams were decoupled and they had the tools necessary to efficiently release on their own
Teams independently architect, develop, deploy and maintain each microservice
Ownership is key - every service has an owning team. Owners architect, owners implement, owners support in production, owners can fix things, and owners care
Each microservice often has its own datastore, and each microservice is fully decentralized – no ESBs, no single database, no top-down anything
Any given instance of a microservice is stateless – state, config and data pushed externally
Microservices support polyglot – each microservice team is free to pick the best technology
DevOps principles – automated setup and developers owning production support
Use of containers, which allow for simple app packaging and fast startup time
Use of cloud for elasticity, platform and software services
There are lots of ways to build microservices, but there are three key patterns that work well, especially in a serverless way: API-driven request/response patterns, event-driven patterns, and data streaming patterns.
---
Getting integration right is critically important. Do it well, and your microservices retain their autonomy, allowing you to change and release them independent of the whole.
Avoid breaking changes: The whole point of a microservice is being able to make a change to one service and deploy it, without needing to change any other part of the system.
Keep APIs technology-agnostic: Services expose an application programming interface (API), and collaborating services communicate via those APIs. Avoid integration technology that dictates which technology stacks we can use to implement our microservices. Picking a small number of defined interface technologies makes it easier to integrate new consumers
All communication between the services themselves is via network calls, to enforce separation between the services and avoid the perils of tight coupling.
Hide internal implementation details: changing a microservice’s internals must not affect its consumers. Otherwise we end up avoiding necessary changes just to spare consumers an upgrade, which creates technical debt.
And you can integrate with existing systems easily.
APIs and decoupled communications enable automation and improve reliability
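As an illustrative sketch of hiding implementation detail behind a technology-agnostic contract (the `InventoryService` name and its API are hypothetical, not an AWS service): consumers see only JSON in and JSON out, so the internal data model can change freely.

```python
import json

class InventoryService:
    """Hypothetical microservice: consumers only ever see the JSON contract."""

    def __init__(self):
        # Internal representation -- free to change without breaking consumers.
        self._stock = {"sku-123": 7}

    def handle(self, request_json):
        """Technology-agnostic entry point: JSON string in, JSON string out."""
        request = json.loads(request_json)
        qty = self._stock.get(request["sku"], 0)
        return json.dumps({"sku": request["sku"], "in_stock": qty > 0})

svc = InventoryService()
print(svc.handle(json.dumps({"sku": "sku-123"})))  # {"sku": "sku-123", "in_stock": true}
```

Because the wire format is plain JSON over a network call, a consumer written in any language can integrate without sharing the service's technology stack.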
OLD
So how do we communicate now that we are distributed?
Sharing datastores across microservices introduces coupling, which we don’t want, because it creates synchronous communication (where the sender waits for a response). That creates dependencies between microservices, which in turn create performance and availability problems.
Instead, we aim for all data exchange between microservices to go through an API layer or messaging
Messaging can really help us achieve scalability, fault tolerance, high availability, consistency and distributed transaction management by providing a mechanism for:
Reliable, durable, fault-tolerant delivery of messages
Creating unidirectional, non-blocking operations
Decreasing the dependencies microservices have on each other
Enabling focus on logical decomposition of systems and increasing the autonomy of our components
You probably want to use messaging services to do this
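A minimal sketch of the idea using Python's standard-library queue (standing in here for a managed message service such as SQS): the producer hands off a message and moves on, and the consumer drains the queue on its own schedule, so neither side blocks on, or even knows about, the other.

```python
import queue

# The queue stands in for a durable message broker between two microservices.
orders = queue.Queue()

def place_order(order_id):
    # Producer side: fire-and-forget, unidirectional, non-blocking.
    orders.put({"order_id": order_id})

def process_orders():
    # Consumer side: drains messages at its own pace, independent of the producer.
    processed = []
    while not orders.empty():
        processed.append(orders.get()["order_id"])
    return processed

place_order(1)
place_order(2)
print(process_orders())  # [1, 2]
```

The producer never waits for the consumer, which is exactly the decoupling the bullets above describe.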
What does this mean for software you buy from key suppliers and vendors? You want to seek vendors that embrace APIs.
We build A LOT of software at AWS; our services consist of a lot of source code, and even the installation and update routines are managed as code. We use APIs to deploy and manage our software wherever possible, and we drive our service teams very hard to build, then leverage, APIs whenever changes are needed or when you need information from the runtime environment.
How can you effectively automate if the installation requires command-line access, typically very privileged access, or relies on a graphical interface where you click next, next, next, next, finish? You can’t.
As use of the cloud has dramatically increased, the industry has seen a lot of vendors integrate with the APIs of cloud providers hosting the solution. It’s important that these solutions have their own APIs so you can continue to automate not only the resources offered by the underlying cloud provider, but also programmatically get at the data from the solution itself.
Need more RAM
This is a traditional constraint commonly encountered as application performance decreases with increasing demand, and the traditional solution is to throw more (expensive) RAM into the application server. The constraint is hardly limited to RAM: processor upgrades, disk upgrades (we need 15k RPM SCSI!) and expensive network interconnects are traditional examples of scaling vertically, often at a very high cost.
Re-think this costly architectural constraint by distributing load across a number of commodity instances. Rather than upgrading one server from 8GB to 64GB of RAM, run your application on four instances with 4GB of RAM and evaluate performance. This approach allows you to scale horizontally: simply add more 4GB nodes as demand requires rather than figuring out (and paying for) that next big RAM upgrade and install.
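A sketch of the contrast: rather than one big box, spread requests round-robin across several small, interchangeable nodes. (Node names here are hypothetical.)

```python
import itertools

# Four commodity 4 GB nodes instead of one big 64 GB server (names hypothetical).
nodes = ["node-a", "node-b", "node-c", "node-d"]
round_robin = itertools.cycle(nodes)

def route(request_id):
    # Each incoming request lands on the next node in rotation.
    return next(round_robin)

print([route(i) for i in range(6)])
# ['node-a', 'node-b', 'node-c', 'node-d', 'node-a', 'node-b']
```

Adding capacity then means adding another node to the pool rather than sizing, buying, and installing a bigger one.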
Need better IOPS for database
When you encounter constraints with database performance, there are a few traditional approaches to remediate: 1) schema surgery, where you painstakingly rework a relational schema to squeeze out a few extra IOPS by optimizing a join, for example; and 2) vertical scaling, e.g., adding more RAM, upgrading disks, etc (just as described above).
The preferred solution to this type of constraint is similar to the “Need more RAM” constraint: scale horizontally by spreading the load around. Consider creating a read replica for your database and modifying your application to separate database reads from writes. Isolating read/write traffic often provides considerable performance gains. Also consider sharding.
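The read/write split in application code can be sketched like this (the `DatabaseRouter` class and endpoint names are hypothetical): writes go to the primary and reads go to a replica, so read traffic no longer competes with writes for the primary's IOPS.

```python
class DatabaseRouter:
    """Hypothetical router: writes to the primary, reads to a read replica."""

    def __init__(self, primary, replica):
        self.primary = primary
        self.replica = replica

    def execute(self, sql):
        # Naive routing: SELECTs go to the replica, everything else to the primary.
        target = self.replica if sql.lstrip().upper().startswith("SELECT") else self.primary
        return target, sql

router = DatabaseRouter(primary="db-primary", replica="db-replica")
print(router.execute("SELECT * FROM orders")[0])           # db-replica
print(router.execute("INSERT INTO orders VALUES (1)")[0])  # db-primary
```

A real implementation would also account for replication lag on read-after-write paths; this sketch only shows the routing decision itself.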
Hardware failed or config got corrupted.
Things fail. Hardware fails. Even virtual hardware, like EBS volumes, fails. Rather than wasting valuable time and resources diagnosing problems and replacing components, favor a “rip and replace” approach: simply decommission the entire component and spin up a fully functional replacement. This could be an AMI with the app stack baked in, etc.
Any specific questions or topics folks want to learn about today?