Wednesday, October 28, 2020

PRO SESSION (AI): Serving Very Large Numbers of Low Latency ML Models
Manoj Agarwal
Salesforce, Distributed Systems Architect and a Continuous Learner

Serving machine learning models at scale is a challenge for many companies. Most applications need only a small number of models (often fewer than 100) to serve predictions. Cloud platforms that support model serving can host hundreds of thousands of models, but they typically provision separate hardware for each customer. Salesforce faces a challenge that few companies share: to remain cost effective, it must run hundreds of thousands of models on infrastructure shared across multiple tenants. In this talk we explain how Salesforce hosts hundreds of thousands of models on a multi-tenant infrastructure while supporting low-latency predictions.
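One common building block for serving far more models than fit in memory is a bounded in-process cache keyed by tenant and model, with models loaded on demand from shared storage. The sketch below is purely illustrative (the class, its `loader` callback, and the eviction policy are assumptions for this example, not Salesforce's actual design):

```python
from collections import OrderedDict

class ModelCache:
    """Illustrative LRU cache that keeps a bounded number of loaded models
    in memory while many more live in shared storage (hypothetical sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._models = OrderedDict()  # (tenant_id, model_id) -> loaded model

    def get(self, tenant_id, model_id, loader):
        key = (tenant_id, model_id)
        if key in self._models:
            self._models.move_to_end(key)  # mark as most recently used
            return self._models[key]
        model = loader(tenant_id, model_id)  # e.g. fetch from blob storage
        self._models[key] = model
        if len(self._models) > self.capacity:
            self._models.popitem(last=False)  # evict least recently used
        return model
```

With a cache like this, hot models for active tenants stay resident and serve predictions at low latency, while cold models incur a one-time load cost on first use.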