OPEN TALK (AI): Serverless for Machine Learning Pipelines
One of the main challenges in ML and DL deployment is finding the right way to train and operationalize a model within a company. A serverless approach to deep learning offers a simple, scalable, affordable, yet reliable architecture. The challenge of this approach is working within CPU, GPU, and RAM limitations while organizing training and inference for your model. This presentation will show how to use services like Amazon SageMaker, AWS Batch, AWS Fargate, AWS Lambda, AWS Step Functions, and SageMaker Pipelines to organize deep learning workflows. The talk will be beneficial for machine learning engineers and platform engineers.
Rustem Feyzkhanov is a machine learning engineer at Instrumental, where he works on analytical models for the manufacturing industry, and an AWS Machine Learning Hero. Rustem is passionate about serverless infrastructure (and AI deployments on it) and is the author of the course and book "Serverless Deep Learning with TensorFlow and AWS Lambda" as well as "Practical Deep Learning on the Cloud". He is also the main contributor to an open-source repository of serverless packages: https://github.com/ryfeus/lambda-packs.