10. Machine Learning at Amazon.com
RETAIL
Demand Forecasting
Vendor Lead Time Prediction
Pricing
Packaging
Substitute Prediction
CUSTOMERS
Recommendation
Product Search
Product Ads
Shopping Advice
Customer Problem Detection
SELLERS
Fraud Detection
Predictive Help
Seller Search & Crawling
CATALOGUE
Browse-Node Classification
Meta-data Validation
Review Analysis
Product Matching
TEXT
In-Book Search
Named-entity Extraction
Summarization/X-ray
Plagiarism Detection
IMAGES
Visual Search
Product Image Enhancement
Brand Tracking
32. 137 Language Pairs
• English
• Spanish
• Portuguese
• German
• French
• Arabic
• Simplified Chinese
• Japanese
• Russian
• Italian
• Traditional Chinese
• Turkish
• Czech
Coming soon: Danish, Dutch, Finnish, Hebrew, Polish, and Swedish
53. Julien Simon
Principal Evangelist, Artificial Intelligence & Machine Learning
@julsimon
https://ml.aws
https://aws.amazon.com/blogs/machine-learning
https://medium.com/@julsimon
https://youtube.com/juliensimonfr
Editor's notes
18 Regions, 55 AZs
5 Regions coming: Bahrain, Cape Town, Hong Kong, Stockholm, and a second GovCloud Region in the US.
Helping recommend what might interest you, by learning what other customers who have purchased this item have also liked.
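The "customers who bought this also liked" idea can be sketched as simple item-to-item co-occurrence counting. This is a toy illustration (invented orders and item names), not Amazon's actual recommender:

```python
from collections import defaultdict
from itertools import combinations

def co_purchase_counts(orders):
    """Count how often each pair of items appears in the same order."""
    counts = defaultdict(int)
    for order in orders:
        for a, b in combinations(sorted(set(order)), 2):
            counts[(a, b)] += 1
    return counts

def also_liked(item, orders, top_n=3):
    """Rank the items most often bought together with `item`."""
    scores = defaultdict(int)
    for (a, b), n in co_purchase_counts(orders).items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Real systems add normalization, scale, and freshness, but the core signal is the same: items that co-occur in many baskets get recommended together.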
Amazon Echo is a hands-free speaker you control with your voice. Echo connects to the Alexa Voice Service to play music, make calls, send and receive messages, provide information, news, sports scores, weather, and more—instantly. All you have to do is ask.
Amazon Robotics was founded in 2003 on the notion that in order to meet consumer demands in eCommerce, a better approach to order fulfillment solutions was necessary. Amazon Robotics empowers a smarter, faster, more consistent customer experience through automation. It automates fulfillment center operations using various methods of robotic technology, including autonomous mobile robots, sophisticated control software, language perception, power management, computer vision, depth sensing, machine learning, object recognition, and semantic understanding of commands.
Amazon Prime Air is a service that will deliver packages up to 2.5 kg in 30 minutes or less using small drones and relies extensively on visual object recognition.
We have Prime Air development centers in the United States, the United Kingdom, Austria, France and Israel.
Amazon Go is a new kind of store with no checkout required. We created the world’s most advanced shopping technology so you never have to wait in line. With our Just Walk Out Shopping experience, simply use the Amazon Go app to enter the store, take the products you want, and go! No lines, no checkout. (No, seriously.)
No lines, no checkout
Our checkout-free shopping experience is made possible by the same types of technologies used in self-driving cars: computer vision, sensor fusion, and deep learning. Our Just Walk Out Technology automatically detects when products are taken from or returned to the shelves and keeps track of them in a virtual cart. When you’re done shopping, you can just leave the store. Shortly after, we’ll charge your Amazon account and send you a receipt.
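The virtual-cart bookkeeping described above (take/return events updating a per-shopper cart, then a charge at exit) can be sketched as follows. The class and prices are invented for illustration; the hard part in the real store is the computer vision, not this bookkeeping:

```python
class VirtualCart:
    """Toy model of Just Walk Out tracking: shelf events update a shopper's cart."""

    def __init__(self):
        self.items = {}  # product -> quantity

    def take(self, product):
        """A product was detected being taken from a shelf."""
        self.items[product] = self.items.get(product, 0) + 1

    def put_back(self, product):
        """A product was detected being returned to a shelf."""
        if self.items.get(product, 0) > 0:
            self.items[product] -= 1
            if self.items[product] == 0:
                del self.items[product]

    def receipt_total(self, prices):
        """Charge for whatever is in the cart when the shopper leaves."""
        return sum(prices[p] * n for p, n in self.items.items())
```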
This is just a sample of the range of AI-related services that we use across Amazon.com to help build better experiences for our customers. Many of these you don't ever *see* as a customer. Our order fulfillment services, how we pack our trucks, and all of the logistics from the time you place your order until it shows up on your doorstep are completely directed by our AI advancements.
Up to 100 faces
Recognizing clients
User Generated Content
You can use the ‘MinConfidence’ parameter in your API requests to balance detection of content (recall) vs the accuracy of detection (precision).
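Conceptually, raising MinConfidence keeps only high-confidence detections: better precision, at the cost of recall. A toy filter over label data shaped like a Rekognition response (the labels and scores here are invented):

```python
def filter_labels(labels, min_confidence):
    """Keep only labels at or above the threshold, mirroring MinConfidence."""
    return [l["Name"] for l in labels if l["Confidence"] >= min_confidence]

# Response-shaped sample data, not real API output.
labels = [
    {"Name": "Dog", "Confidence": 97.2},
    {"Name": "Couch", "Confidence": 60.1},
]
```

With a threshold of 80 only "Dog" survives; lowering it to 50 recalls "Couch" too, along with any false positives near that score.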
Polly also supports Speech Synthesis Markup Language (SSML) Version 1.0.
The Voice Browser Working Group has sought to develop standards to enable access to the Web using spoken interaction.
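For example, a minimal SSML document might look like the one below. The `<break>` and `<prosody>` tags are standard SSML 1.0 elements; the helper only checks that the markup is well-formed XML rooted at `<speak>`, which is a useful sanity check before sending it to a synthesis API:

```python
import xml.etree.ElementTree as ET

def is_valid_ssml(ssml):
    """Return True if the string is well-formed XML with a <speak> root."""
    try:
        root = ET.fromstring(ssml)
    except ET.ParseError:
        return False
    return root.tag == "speak"

ssml = (
    "<speak>"
    'Hello. <break time="500ms"/> '
    '<prosody rate="slow">This part is spoken slowly.</prosody>'
    "</speak>"
)
```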
…Amazon Comprehend, a Natural Language Processing service that enables customers to discover insights from text.
1/ Without provisioning a server, Comprehend can understand documents, social network posts, articles, and any other data in AWS
2/ Simply provide text stored in your S3 data lake via the Comprehend API, and Comprehend uses NLP to give you highly accurate information about what it contains, in 4 categories:
a/ entities (people, places, dates, brands, quantities)
b/ key phrases that provide significance to the text
c/ language being used
d/ sentiment
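A sketch of working with an entity-detection result: the `sample` dict below mimics the shape of a Comprehend DetectEntities response (a list of entities with `Text`, `Type`, and `Score`), with invented values:

```python
def entities_by_type(response, min_score=0.5):
    """Group detected entities by type, keeping only confident ones."""
    grouped = {}
    for entity in response["Entities"]:
        if entity["Score"] >= min_score:
            grouped.setdefault(entity["Type"], []).append(entity["Text"])
    return grouped

# Response-shaped sample data, not real API output.
sample = {"Entities": [
    {"Text": "Amazon", "Type": "ORGANIZATION", "Score": 0.99},
    {"Text": "Seattle", "Type": "LOCATION", "Score": 0.97},
    {"Text": "maybe", "Type": "QUANTITY", "Score": 0.21},
]}
```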
First, you need to collect and prepare your training data to discover which elements of your data set are important. Then, you need to select which algorithm and framework you’ll use. After deciding on your approach, you need to teach the model how to make predictions by training, which requires a lot of compute. Then, you need to tune the model so it delivers the best possible predictions, which is often a tedious and manual effort. After you’ve developed a fully trained model, you need to integrate the model with your application and deploy this application on infrastructure that will scale. All of this takes a lot of specialized expertise, access to large amounts of compute and storage, and a lot of time to experiment and optimize every part of the process. In the end, it's not a surprise that the whole thing feels out of reach for most developers.
SageMaker makes it easy to build ML models and get them ready for training by providing everything you need to quickly connect to your training data, and to select and optimize the best algorithm and framework for your application. Amazon SageMaker includes hosted Jupyter notebooks that make it easy to explore and visualize your training data stored in Amazon S3. You can connect directly to data in S3, or use AWS Glue to move data from Amazon RDS, Amazon DynamoDB, and Amazon Redshift into S3 for analysis in your notebook.
To help you select your algorithm, Amazon SageMaker includes the 10 most common machine learning algorithms which have been pre-installed and optimized to deliver up to 10 times the performance you’ll find running these algorithms anywhere else. Amazon SageMaker also comes pre-configured to run TensorFlow and Apache MXNet, two of the most popular open source frameworks, or you have the option of using your own framework.
You can begin training your model with a single click in the Amazon SageMaker console. The service manages all of the underlying infrastructure for you and can easily scale to train models at petabyte scale. To make the training process even faster and easier, Amazon SageMaker can automatically tune your model to achieve the highest possible accuracy.
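Conceptually, automatic tuning searches hyperparameter candidates and keeps the best-scoring one. The sketch below is a much-simplified stand-in (exhaustive search over a toy grid and an invented objective, not the service's actual search strategy):

```python
def tune(objective, candidates):
    """Score each hyperparameter candidate and return the best one."""
    best_params, best_score = None, float("-inf")
    for params in candidates:
        score = objective(params)  # e.g. validation accuracy
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

In practice the search space is too large to enumerate, which is why managed tuning uses smarter strategies than brute force, but the contract is the same: candidates in, best-scoring configuration out.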
Once your model is trained and tuned, SageMaker makes it easy to deploy in production so you can start generating predictions on new data (a process called inference). Amazon SageMaker deploys your model on an auto-scaling cluster of Amazon EC2 instances that are spread across multiple availability zones to deliver both high performance and high availability. It also includes built-in A/B testing capabilities to help you test your model and experiment with different versions to achieve the best results.
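The A/B testing idea, splitting traffic across model variants by weight, can be sketched as deterministic bucketed routing. This is illustrative only (the variant names and routing scheme are invented), not how SageMaker routes internally:

```python
def route(request_id, variants):
    """Assign a request to a variant using a stable 0-99 bucket.

    `variants` is a list of (name, weight) pairs whose weights sum to 1.0.
    The same request_id always lands on the same variant.
    """
    bucket = request_id % 100
    cumulative = 0
    for name, weight in variants:
        cumulative += int(round(weight * 100))
        if bucket < cumulative:
            return name
    return variants[-1][0]  # guard against rounding gaps
```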
For maximum versatility, we designed Amazon SageMaker in three modules – Build, Train, and Deploy – that can be used together or independently as part of any existing ML workflow you might already have in place.
Assume a guest, Jessica Yu, already has a reservation. Prior to her arrival, she gets a pre-arrival notification with opportunities for her to upgrade her room and/or select amenities she might like. The data on her reservation and her broader profile info is in the CRM – Revinate in this case. Room rates come from Duetto, the Revenue Management System. This integration is already live, but one place where in the future it can become even more powerful is through targeted upgrades. Leveraging machine learning, we can predict which room upgrades and which amenities are most likely to resonate with her. This makes life better for her because she doesn’t have to sort through what at some fancier hotels and resorts might be dozens of options. And it’s also great for the hotel because revenue is optimized through both higher conversion (based on showing Jessica the right thing) and better rate (dynamic based on season, availability, and many other possible factors).
SageMaker is going to make it much easier for everyday developers to build machine-learning models. But, people and developers are still really interested in learning more about how they can use machine learning. They want to do it, so they're reading all kinds of literature, and there are some code samples they can play around with. But, for any of us who've had to learn something new that has any kind of complexity, there's no substitute for hands-on training and application.
And so we thought about: What can we do that would allow our builders and our developers to get this hands-on training? Our teams worked on this problem and developed AWS DeepLens, which is the world's first wireless deep-learning-enabled video-camera for developers.
AWS DeepLens is a high-definition camera with on-board compute that is optimized for deep learning. It comes with computer-vision models that we've already built that you can use right on the camera, or you can build your own in SageMaker and import them over the air via the console with a few clicks to DeepLens.
It has Greengrass in it. So in addition to writing the models, you can program Greengrass to run various Lambda triggers.
There are lots of tutorials and prebuilt models for you, so you can get started right away. In fact, we believe that you'll be able to get started running your first deep-learning computer-vision model within 10 minutes of unboxing the camera. You can program this thing to do almost anything you can imagine. So for instance, you could imagine programming the camera with computer-vision models where, if it recognizes a license plate coming into your driveway, it will open the garage door. Or you could program it to send you an alert when your dog gets on the couch.
Really, you can do almost anything. And it's going to give you an opportunity to get learning very quickly in a way that you haven't been able to do before.
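The "dog on the couch" alert could be structured as a small decision function running in an on-device Lambda function, with the actual notification sent separately. Everything here (label names, confidence threshold, the detection format) is invented for illustration:

```python
def should_alert(detections, target="dog", zone="couch", min_confidence=0.6):
    """Decide whether a frame's detections warrant an alert.

    `detections` is a list of dicts like {"label": str, "confidence": float},
    the kind of output a per-frame object-detection model might produce.
    Alert only when both the target object and the zone object are seen
    with sufficient confidence in the same frame.
    """
    confident = {d["label"] for d in detections if d["confidence"] >= min_confidence}
    return target in confident and zone in confident
```

Keeping the decision logic separate from the notification side effect also makes it easy to test without a camera or a cloud connection.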
Each of these is available. Today we will dive into object detection as a way to get you started. After this workshop you can then explore the other samples, create custom functionality, or start your own project from scratch.
Learn the basics of machine learning through hands-on examples and sample projects.
Sample projects of varying difficulty are available for use: object detection, artistic style transfer, face recognition, hot dog/not hot dog, cat vs. dog, license plate detection.
Use the existing sample projects as-is, extend them with your own custom functionality (for example, detect when your dog is sitting on the couch and send an SMS), or create your own project.
Go deeper through integrations with SageMaker, Greengrass, and other AWS services.
So far, we've discussed the bottom and middle layers of the machine learning stack – first we talked about the frameworks and the deep learning AMI for expert practitioners. Then, SageMaker and DeepLens in the middle layer to bring ML capabilities to all developers. Now, at the top of the stack, we serve developers and companies who want to add solution-oriented intelligence to their applications through an API call rather than developing and training their own models. These are services that exhibit artificial intelligence that emulates a human’s cognitive skills. Last year, we announced three services in this area: Amazon Rekognition (image analysis), Amazon Polly (text-to-speech), and Amazon Lex (conversational applications).