DeveloperWeek Global 2020

ML/AI Service Mesh Made Easy With API Management

Hopin 17

Rakesh Talanki
Google, Principal Architect

Rakesh Talanki is an accomplished technology architect with extensive experience in strategy, architecture, and managing large-scale, high-volume, high-throughput critical solutions. Rakesh works for Google Cloud and is considered an expert in API management.

Kaz Sato
Google, Staff Developer Advocate

Kaz Sato is a Staff Developer Advocate on the Google Cloud team at Google, focusing on machine learning and data analytics products such as TensorFlow, Cloud ML, and BigQuery. He has been invited to major events including Google Cloud Next SF, Google I/O, and Strata NYC, has authored many GCP blog posts, and has supported developer communities for Google Cloud for over 8 years. He is also interested in hardware and IoT, and has hosted FPGA meetups since 2013.

The digital transformation of the next decade will be empowered by what we call the "ML/AI Service Mesh". Although many companies now generate features from raw data and extract business insights with ML models, the challenge has been sharing these valuable assets for internal and external consumption at scale. In most ML/AI projects, each project or department in an enterprise works in a silo: building features from raw data, training ML models, extracting embeddings, building prediction microservices, and using them internally. There is no standardized way to share these valuable assets and microservices with cross-functional groups and divisions.

API management is the missing link for building this service mesh quickly. By introducing a standardized, established way of securing services and enabling service discovery and observability, operations teams don't have to spend significant resources exposing these assets to enable the ML/AI Service Mesh across the enterprise. This approach democratizes ML assets for faster, scalable, enterprise-wide consumption.
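To make the idea concrete, here is a minimal sketch of two things an API management layer adds in front of ML prediction microservices: API-key authentication and service discovery. All names here (keys, service names, URLs) are hypothetical illustrations, not part of any real deployment.

```python
# Hypothetical issued API keys mapped to the consumer they belong to.
API_KEYS = {"key-123": "marketing-team"}

# A simple service registry: discoverable ML microservices behind the mesh.
SERVICE_REGISTRY = {
    "churn-predictor": "https://ml.internal/churn/v1/predict",
    "embeddings": "https://ml.internal/embed/v2/predict",
}

def route_request(api_key: str, service: str) -> str:
    """Authenticate the caller, then resolve the backend service URL."""
    if api_key not in API_KEYS:
        raise PermissionError("unknown or revoked API key")
    if service not in SERVICE_REGISTRY:
        raise LookupError(f"no such ML service: {service}")
    return SERVICE_REGISTRY[service]

print(route_request("key-123", "churn-predictor"))
```

In a real deployment a product like Apigee performs these checks at the proxy layer, so individual ML teams never implement key validation or routing themselves.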

Solution: AI Platform + Apigee Edge
In this session, we will take an ML model built in Cloud Machine Learning Engine and look at how to consume this model from both an internal and an external consumer's perspective. We will use Apigee's API management solution to expose the models. We will also touch on how to build an "ML/AI Service Mesh" in which enterprises can build a collection of microservices that expose these features.
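As a small illustration of the consumer side, the sketch below builds the kind of JSON body a proxy might forward to an AI Platform / Cloud ML Engine online-prediction endpoint, which expects an "instances" list. The feature names are hypothetical.

```python
import json

def build_predict_body(rows):
    """Wrap feature rows in the {"instances": [...]} envelope that
    AI Platform's online prediction API expects."""
    return json.dumps({"instances": rows})

# One hypothetical feature row for a prediction request.
body = build_predict_body([{"age": 42, "plan": "premium"}])
print(body)
```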

The demo will provide:
- Serving predictions with scalability, performance, and availability in mind
- Authentication and authorization services depending on who the user is
- Managing the life cycle of API keys
- Granting access to your ML APIs with an approval process
- Rolling out new model versions as models are updated
- Self-service consumption using Portal without any DevOps involved
- Monitoring and analyzing usage with API analytics
- Monetizing the ML Models
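One of the steps above, rolling out new model versions behind a stable API, can be sketched as a simple canary split: a deterministic hash of the caller's API key sends a fixed fraction of callers to the candidate version, so each caller consistently sees the same version. The version names and percentage are hypothetical.

```python
import hashlib

VERSIONS = {"stable": "v3", "candidate": "v4"}
CANARY_PERCENT = 10  # percent of callers routed to the candidate version

def pick_version(api_key: str) -> str:
    """Stable per-caller routing: the same key always gets the same version."""
    bucket = int(hashlib.sha256(api_key.encode()).hexdigest(), 16) % 100
    return VERSIONS["candidate"] if bucket < CANARY_PERCENT else VERSIONS["stable"]

print(pick_version("key-123"))
```

Because routing is derived from the key rather than a random draw, a consumer is never flip-flopped between model versions mid-session, which makes regressions in the candidate version easier to attribute.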