This summary covers the key AWS updates over the one-month period from March 5th to April 8th, 2020, across compute, data/storage, analytics, and machine learning services:
- AWS App Mesh launched support for end-to-end encryption. Applications using Amazon SNS to send SMS can now be hosted in the Asia Pacific (Mumbai) and Europe (Frankfurt) regions.
- Amazon Connect added phone numbers in twelve new countries. Amazon Personalize Optimizer was introduced using Amazon Pinpoint events.
- AWS Batch now supports FSx for Lustre file systems. Bottlerocket, a new open-source Linux OS purpose-built for containers, was announced.
- Athena added work
6. Application Integration
•AWS App Mesh launches support for end-to-end encryption
•Applications using Amazon SNS to send SMS can now be hosted in the Asia Pacific (Mumbai) and Europe (Frankfurt) regions
Customer Engagement
•Amazon Connect Adds Phone Numbers in Twelve New Countries
•Introducing Amazon Personalize Optimizer Using Amazon Pinpoint Events
Management & Governance
•Execute Chef recipes on Linux with AWS Systems Manager
•Amazon VPC Flow Logs Now Support Resource Tagging and Tag-on-Create
•Amazon VPC NAT Gateway Now Supports Tag-on-Create
•Introducing Customizations for AWS Control Tower solution
•AWS AppConfig announces integration with Amazon S3
•AWS CloudFormation Drift Detection and Resource Import now available in seven additional AWS regions
Media Services
•AV1 Encoding Now Available with AWS Elemental MediaConvert
•HDR to SDR Tone Mapping Now Available with AWS Elemental MediaConvert
Mobile
•API Gateway offers private integrations with AWS ELB and AWS Cloud Map as part of HTTP APIs GA release
Security, Identity, & Compliance
•Amazon GuardDuty Price Reduction
•AWS Security Hub adds new fields and resources to the AWS Security Finding Format
Training & Certifications
•New AWS Certification validates expertise in AWS databases
•Announcing the AWS Game Tech Learning Path
Things I’m not going to cover 👉
7. That leaves us with 4 areas:
Compute
Storage & Data
Analytics
Machine Learning
11. Autoscaling groups
“create new EC2s to distribute load”
Placement groups
“put these EC2s in the same AZ so they run extra fast”
[Diagram: a data center, a.k.a. an “availability zone”, divided into Room #1 through Room #5]
Amazon ECS now supports, in preview, updating the placement strategy and constraints for existing ECS services
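As a sketch of what that update looks like with boto3, the snippet below builds the request for `ecs.update_service` with new placement settings. The cluster and service names are made up for illustration, and the actual AWS call is left commented out since it needs real resources:

```python
# Hypothetical cluster/service names; only the request is built here,
# no AWS call is made.
params = {
    "cluster": "demo-cluster",
    "service": "demo-service",
    # Spread tasks across Availability Zones, then binpack on memory
    "placementStrategy": [
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        {"type": "binpack", "field": "memory"},
    ],
    # Restrict tasks to a family of instance types
    "placementConstraints": [
        {"type": "memberOf", "expression": "attribute:ecs.instance-type =~ c5.*"},
    ],
}

# import boto3
# ecs = boto3.client("ecs")
# ecs.update_service(**params)  # applies the new placement settings in place
```

Previously, changing a service's placement strategy meant recreating the service; this preview lets you apply it as an in-place update.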
19. When you realize your application’s PostgreSQL database AND Redis cache are both globally replicated in AWS
(ElastiCache for Redis + Amazon Aurora with PostgreSQL)
Last year I wrote about Amazon FSx for Lustre and told you how our customers can use it to create pebibyte-scale, highly parallel POSIX-compliant file systems that serve thousands of simultaneous clients driving millions of IOPS (Input/Output Operations per Second) with sub-millisecond latency.
As a managed service, Amazon FSx for Lustre makes it easy for you to launch and run the world’s most popular high-performance file system. Our customers use this service for workloads where speed matters, including machine learning, high performance computing (HPC), and financial modeling.
Now, this works w/ AWS Batch
A Clustered Placement Group is where you would want all of your instances within one of those rooms, giving the lowest network latency and highest throughput possible between your instances, which is essential for high-performance computing (HPC). However, if something goes wrong in AWS, more of your instances may be impacted concurrently. In traditional virtualisation like VMware, we would call these "Affinity Groups".
A Spread Placement Group is the opposite: each instance is placed in a different room, making the group more resilient to failures. The connections between instances still offer single-digit-millisecond latency and Gbps of throughput, just not to the same extreme degree. Again, these might otherwise be known as "Anti-Affinity Groups". There are also Partition Placement Groups, which are a mix of both of these.
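To make the three flavours concrete, here is a sketch of the requests you would pass to `ec2.create_placement_group` in boto3. The group names are placeholders, and the client calls are commented out so nothing is actually created:

```python
# Request bodies for the three placement group strategies (names are made up).
# No AWS call is made here; the client lines are shown commented out.
requests = {
    "cluster":   {"GroupName": "hpc-tight", "Strategy": "cluster"},  # all in one "room"
    "spread":    {"GroupName": "resilient", "Strategy": "spread"},   # one per "room"
    "partition": {"GroupName": "mixed", "Strategy": "partition",
                  "PartitionCount": 3},  # instances grouped into 3 partitions
}

# import boto3
# ec2 = boto3.client("ec2")
# for req in requests.values():
#     ec2.create_placement_group(**req)
```

You then reference the group name when launching instances, so EC2 knows which "rooms" to place them in.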
Today, Amazon Web Services (AWS) announced the public preview of Bottlerocket, a new open source Linux-based Operating System (OS) that is purpose-built to run containers. Bottlerocket comes with a single-step update mechanism and includes only the essential software to run containers. These properties enable customers to use container orchestrators to manage OS updates with minimal disruptions, enabling better uptime for containerized applications and lower operational cost. Currently, Bottlerocket is supported for use with Amazon EKS. Amazon ECS will also be supported soon.
Most containers today are run on general-purpose OSes, which are built to support applications packaged in a variety of formats, including containers. Updates to these general-purpose OSes are applied on a package-by-package basis. The complex dependencies among their packages can result in errors, making the OS update process challenging to automate. By contrast, updates to Bottlerocket can be applied and rolled back in a single step which makes them easy to automate, reducing management overhead and improving uptime for containerized applications.
You can get started with Bottlerocket by launching Amazon EC2 instances with the Bottlerocket AMI, and joining them to an Amazon EKS cluster by following the instructions here. Bottlerocket is developed as an open source project on GitHub. AWS-provided builds of Bottlerocket are covered under AWS Support plans. To learn more, visit the Bottlerocket page.
Kubernetes is rapidly evolving, with frequent feature releases and bug fixes. The Kubernetes 1.15 release focuses on stability and maturity of the core feature set. Additional 1.15 highlights include support for configuring TLS termination on NLB load balancers, improved support for CustomResourceDefinitions, as well as NodeLocal DNSCache graduating to beta. Learn more about Kubernetes version 1.15 in the Kubernetes project release notes.
As of today, Kubernetes version 1.12 is deprecated in EKS and will no longer be supported as of May 11th, 2020. On that day, you will no longer be able to create new 1.12 clusters, and all EKS clusters running Kubernetes version 1.12 will be updated to the latest available platform version of Kubernetes version 1.13.
AWS Local Zones are a new type of AWS infrastructure deployment that places AWS compute, storage, database, and other select services closer to large population, industry, and IT centers where no AWS Region exists today. With AWS Local Zones, you can easily run latency-sensitive portions of applications local to end-users and resources in a specific geography, delivering single-digit millisecond latency for use cases such as media & entertainment content creation, real-time gaming, reservoir simulations, electronic design automation, and machine learning.
Each AWS Local Zone location is an extension of an AWS Region where you can run your latency-sensitive applications using AWS services such as Amazon Elastic Compute Cloud, Amazon Virtual Private Cloud, Amazon Elastic Block Store, Amazon FSx, and Amazon Elastic Load Balancing in geographic proximity to end-users. AWS Local Zones provide a high-bandwidth, secure connection between local workloads and those running in the AWS Region, allowing you to seamlessly connect back to your other workloads running in AWS and to the full range of in-region services through the same APIs and tool sets.
Amazon ElastiCache for Redis announces Global Datastore
Amazon Aurora with PostgreSQL Compatibility supports Amazon Aurora Global Database
Cassandra, or Kassandra, was a woman in Greek mythology cursed to utter true prophecies, but never to be believed. In modern usage her name is employed as a rhetorical device to indicate someone whose accurate prophecies are not believed. Cassandra was reputed to be a daughter of King Priam and Queen Hecuba of Troy.
When using Amazon S3 Batch Operations you can now assign tags to jobs to label and manage access to create and edit permissions. S3 Batch Operations is an S3 feature that lets you perform repetitive or bulk actions like copying objects or running AWS Lambda functions across millions of objects with a single request. You provide the list of objects, and S3 Batch Operations handles the repetitive work, including managing retries and displaying progress.
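As a hedged sketch of the new tagging capability, the snippet below assembles the `Tags` parameter for `s3control.create_job`. The account ID, role ARN, and tag names are placeholders, and the call itself is commented out because it needs real AWS resources plus a manifest:

```python
# Placeholder tags for cost allocation and access control
job_tags = [
    {"Key": "department", "Value": "analytics"},
    {"Key": "job-type",   "Value": "object-copy"},
]

# Partial request for s3control.create_job (Operation/Manifest/Report omitted);
# all identifiers below are made up.
create_job_kwargs = {
    "AccountId": "111122223333",
    "Priority": 10,
    "RoleArn": "arn:aws:iam::111122223333:role/batch-ops-role",
    "Tags": job_tags,  # the new job-tagging feature
}

# import boto3
# s3control = boto3.client("s3control")
# s3control.create_job(**create_job_kwargs, Operation=..., Manifest=..., Report=...)
```

With tags attached at creation time, IAM policies can then scope who may create or edit jobs carrying those tags.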
Amazon SageMaker Ground Truth now supports multi-label image and text classification. Ground Truth helps you build highly accurate training datasets by using your own or third-party human labelers. It provides labelers with built-in workflows and user interfaces for common labeling tasks. Built-in workflows are provided for image and text classification, for assigning class labels to an image or a text selection, along with workflows for other computer vision (CV) and natural language processing (NLP) tasks.
You can now assign tags to Amazon Lex bots, aliases and channel associations. Tags allow you to categorize your resources in different ways, such as by cost center or owner, which simplifies cost allocation in your organization. You can also use tags to control creation, modification or deletion of tagged resources.
Tags are key value pairs that can be used to manage, search, and filter resources. IAM policies support tag-based conditions, enabling you to constrain IAM permissions based on specific tags or tag values. For example, you can tag your bots to ensure only selected groups of users have access based on those tags.
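To illustrate a tag-based condition, here is a minimal sketch of an IAM policy that only allows access to Lex bots carrying a specific tag. The tag key and value are invented for the example:

```python
import json

# Minimal IAM policy sketch: allow reading/updating Lex bots only when the
# resource carries the (hypothetical) tag team=support-bots.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["lex:GetBot", "lex:PutBot"],
        "Resource": "*",
        "Condition": {
            "StringEquals": {"aws:ResourceTag/team": "support-bots"},
        },
    }],
}

print(json.dumps(policy, indent=2))  # ready to paste into an IAM policy document
```

Attached to a group of users, a policy like this constrains them to the tagged subset of bots rather than every Lex resource in the account.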
The AWS Deep Learning AMIs are available on Ubuntu 18.04, Ubuntu 16.04, Amazon Linux 2, and Amazon Linux with TensorFlow (1.15.2 & 2.1.0), PyTorch 1.4.0, and MXNet 1.6.0. The PyTorch EI environment has been updated to 1.3.1.
AWS Deep Learning AMIs also support other interfaces such as Keras, Chainer, and Gluon — pre-installed and fully-configured for you to start developing your deep learning models in minutes while taking advantage of the computation power and flexibility of Amazon EC2 instances. When you activate a Conda environment, the Deep Learning AMIs automatically deploy higher-performance builds of frameworks, optimized for the EC2 instance of your choice. For a complete list of frameworks and versions supported by the AWS Deep Learning AMI, see release notes.
Reduce ML inference costs on PyTorch with Amazon Elastic Inference
PyTorch models in SageMaker/EC2/ECS -> already available in the Deep Learning containers & AMIs
Use the inference API to access just the right amount of compute