In this workshop, we step through the process of deploying and hosting machine learning (ML) models with AWS Lambda to serve on-demand inferences. Given a demonstration dataset, we build and train a simple ML classification model with Amazon SageMaker. We then host this model in an AWS Lambda function and expose an inference endpoint through Amazon API Gateway. Finally, we build a pipeline that automates model deployment to Lambda using AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline.
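As a rough sketch of the Lambda hosting step, a handler behind an API Gateway proxy integration might look like the following. The payload shape (`features`), response format, and the stub `_predict` function are assumptions for illustration; in the workshop the real SageMaker-trained model artifact would be loaded once per container (e.g. with `joblib.load`) rather than stubbed out.

```python
import json

def _predict(features):
    # Placeholder decision rule standing in for the trained classifier.
    # In the actual deployment, the deserialized model's predict() would
    # be called here instead.
    return int(sum(features) > 0)

def lambda_handler(event, context):
    """Entry point invoked by API Gateway (Lambda proxy integration)."""
    body = json.loads(event.get("body") or "{}")
    features = body.get("features", [])
    label = _predict(features)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"label": label}),
    }
```

Invoking the endpoint with a JSON body such as `{"features": [1.0, 2.0]}` would return a `200` response whose body carries the predicted class label.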