AI OPEN Talks

Wednesday, October 26, 2022

- PDT
OPEN TALK (AI): Lessons Learned Building Natural Language Systems in Healthcare
David Talby
John Snow Labs, CTO

This session reviews case studies from real-world projects that built AI systems that use Natural Language Processing (NLP) in healthcare. These case studies cover projects that deployed automated patient risk prediction, automated diagnosis, clinical guidelines, and revenue cycle optimization.

We will cover why and how NLP was used, what deep learning models and libraries were used, how transfer learning enables tuning accurate models from small datasets, and what was productized and achieved. Key takeaways for attendees will include applicable best practices for NLP projects, including how to build domain-specific healthcare models and how to use NLP as part of larger machine learning and deep learning pipelines. 

- PDT
OPEN TALK (AI): Deep Dive on Creating a Photorealistic Talking Avatar
Sebastiano Galazzo
Synapsia.ai, Artificial intelligence researcher

Creating a photorealistic avatar that speaks any sentence, starting from written input text.

Focusing on autoencoders, we will take a journey from the beginning of the speaker's experience, through the mistakes made and tips learned along the way.
The talk will showcase:

- Intro: the timeline from the beginning to today
- Why this is NOT a deepfake
- Audio processing techniques: STFT (Short-Time Fourier Transform), mel spectrograms and custom solutions
- Deep learning models and architectures
- The technique, inspired by inpainting, used to animate the mouth
- Masks and convolutions
- Landmark extraction
- A morphing animation technique based on autoencoder features
- Microsoft Azure Speech services used to support audio and animation processing
- Putting it all together 
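
The audio-processing bullet mentions the STFT and the mel scale. As a rough, self-contained illustration of those two ideas (not the speaker's actual pipeline), a minimal NumPy sketch:

```python
import numpy as np

def stft(signal, frame_len=512, hop=128):
    """Short-Time Fourier Transform: window the signal, FFT each frame."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)      # shape: (n_frames, frame_len//2 + 1)

def hz_to_mel(f):
    """Map frequency in Hz to the perceptual mel scale."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

# One second of a 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)

spec = np.abs(stft(tone))                   # magnitude spectrogram
peak_bin = spec.mean(axis=0).argmax()       # strongest frequency bin
peak_hz = peak_bin * sr / 512               # bin index -> Hz
print(peak_hz)                              # close to 440 (within one FFT bin)
```

The mel mapping is what turns such a spectrogram into the mel-spectrogram features the talk refers to.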

- PDT
OPEN TALK (AI): How To Build An AI Based Knowledge Graph for Customers in Fintech
Gautam Gupta
Intuit, Technology leader

In this session, we'll go through our journey to build an AI-based customer knowledge graph. We'll share the insights and know-how required to create this scalable, polyglot data platform. Join us to learn the design patterns and best practices that we have developed over time to create an intelligent solution based on AI and graph technologies for an ever-increasing list of product lines and customers. 

- PDT
OPEN TALK (AI): Patenting Artificial Intelligence– How AI Companies Can Identify and Protect AI Inventions
Steve Bachmann
Bachmann Law Group PC, President, Silicon Valley Patent Attorney

Artificial intelligence is becoming one of the most widespread and useful technologies in use today. From data collection to model training, language processing to predictive models, deep networks to AI frameworks, there are many categories and implementations of AI, all with protectable features and important business applications. Protecting cutting-edge AI technology helps companies achieve business goals and supports their AI innovation.
This presentation will lay out key strategies for identifying which aspects of AI are patentable and which are not. The discussed strategies will be supplemented with practical real-world examples of patenting different areas of the AI process, from data collection to model training and model implementation to output applications, as well as distinct types of AI systems.
Attendees will also learn about AI patent trends and the most common use cases in which different AI companies build valuable patent portfolios around their AI technology. 

- PDT
OPEN TALK (AI): Scalable, Explainable and Unsupervised Anomaly Detection for Telecom
Ivan Caramello de Andrade
Encora Brazil Division, Innovation Leader and Tech Lead

In developing and operating a telecommunications network, one of the most pressing challenges these companies face is anomalies occurring within the network, signaling that something strange (usually an attack, a fraud, or an error) is happening. Detecting these anomalies is difficult because they may appear in different places and formats, and because telling regular behavior from anomalous behavior requires observing multiple metrics over hundreds of thousands of events. Ivan Caramello de Andrade will explain how detecting these anomalies with higher accuracy is possible with today's technology and machine learning capabilities.

In his technical session, Ivan will explain how he and his team were able to customize and adapt a Robust Random Cut Forest model to identify and explain anomalies in an unsupervised and scalable way. He will walk through the process behind creating this solution, as well as the challenges overcome in development, such as extracting behaviors from individual events. He will also explain the benefits of this model to the user, which include:

• The user does not need to understand which behaviors are regular or anomalous nor which features are relevant to describe and identify them
• The model provides accountability, because the user can identify and understand which factors lead to an event being identified as an anomaly
• Scalability: the model can be implemented on many different scales, with a highly distributable structure and configurable levels of detail 
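
The random-cut idea behind such models can be illustrated with a deliberately simplified isolation-style scorer (this is not the team's actual Robust Random Cut Forest, which additionally handles streaming updates and uses a displacement-based score): points that are cut off from the rest of the data in very few random axis-aligned splits are likely anomalies, with no labels or feature engineering required.

```python
import random

def cut_depth(points, point, depth=0, max_depth=10):
    """Depth at which `point` becomes isolated under random axis-aligned cuts."""
    if len(points) <= 1 or depth >= max_depth:
        return depth
    dim = random.randrange(len(point))
    lo = min(p[dim] for p in points)
    hi = max(p[dim] for p in points)
    if lo == hi:
        return depth
    cut = random.uniform(lo, hi)
    # Keep only the points that fall on the same side of the cut as `point`.
    side = [p for p in points if (p[dim] <= cut) == (point[dim] <= cut)]
    return cut_depth(side, point, depth + 1, max_depth)

def anomaly_score(points, point, n_trees=100):
    """Average isolation depth; anomalies are cut off earlier (lower depth)."""
    return sum(cut_depth(points, point) for _ in range(n_trees)) / n_trees

random.seed(7)
normal = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
outlier = (12.0, 12.0)
data = normal + [outlier]

# The outlier is isolated in far fewer cuts than a typical inlier.
print(anomaly_score(data, outlier) < anomaly_score(data, normal[0]))  # True
```

Because no cut ever looks at a label, the scoring is fully unsupervised, and the sequence of cuts that isolated a point doubles as an explanation of why it was flagged.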

- PDT
OPEN TALK (AI): Pushing Deepfakes to the Limit - Fake Video Calls with AI
Martin Förtsch
TNG Technology Consulting GmbH, Principal Consultant
Thomas Endres
TNG Technology Consulting GmbH, Partner
Jonas Mayer
TNG Technology Consulting GmbH, Senior Consultant

Today's real-time Deepfake technology makes it possible to create indistinguishable doppelgängers of a person and let them participate in video calls. Since 2019, the TNG Innovation Hacking Team has intensively researched and continuously developed the AI around real-time Deepfakes. The final result and the individual steps towards photorealism will be presented in this talk.

Since their first appearance in 2017, Deepfakes have evolved enormously from an AI gimmick into a powerful tool. In the meantime, various media outlets such as "Leschs Kosmos", Galileo and other television formats have used TNG Deepfakes.

In this talk we will show the different evolutionary steps of the Deepfake technology, starting with the first Deepfakes and ending with real-time Deepfakes of the entire head in high resolution. Several live demos will shed light on individual components of the software. In particular, we focus on various new technologies that improve Deepfake generation, such as TensorFlow 2 and MediaPipe, and on the differences from our previous implementations. 

- PDT
OPEN TALK (AI): Democratizing Deep Learning with Vector Similarity Search
Nava Levy
Redis, AI/ML Developer Advocate

Deep learning is responsible for most of the breakthroughs we have seen in AI/ML in recent years, yet most companies' models in production use classic or traditional ML. In this talk we will explore how deep learning is being democratized today, thanks to the rising use and availability of vector embeddings from giant pre-trained neural networks. We will see how these embeddings can be combined with vector similarity search to address different use cases, covering any modality and any type of object. Finally, we will discuss the many opportunities this presents, as well as the tools required to successfully deploy these applications into production. 
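
The core operation the talk builds on can be sketched as brute-force cosine similarity over stored embedding vectors (production systems such as Redis use approximate indexes instead; the embeddings below are made up for illustration):

```python
import numpy as np

def cosine_search(index, query, k=2):
    """Return indices of the k stored vectors most similar to the query."""
    index_n = index / np.linalg.norm(index, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    sims = index_n @ query_n          # one dot product per stored vector
    return np.argsort(-sims)[:k]      # highest cosine similarity first

# Toy "embeddings": imagine each row produced by a pre-trained encoder.
embeddings = np.array([
    [0.9, 0.1, 0.0],   # 0: "cat"
    [0.8, 0.2, 0.1],   # 1: "kitten"
    [0.0, 0.1, 0.9],   # 2: "car"
])
query = np.array([0.85, 0.15, 0.05])  # embedding of "feline"

print(cosine_search(embeddings, query))  # the two cat-like rows rank first
```

Because the same search works on any vector, the one mechanism serves text, images, audio, or any other object an encoder can embed.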

Thursday, October 27, 2022

- PDT
OPEN TALK (AI): Shift Left Strategy to Enable Autonomous Data Science
Manish Modh
Andromeda 360 AI, Founder & CEO

Data science is hard; achieving ROI from your AI projects is even harder. Data scientists spend more time wrangling data and handing models off to software and DevOps engineers than they do developing and analyzing their ML models. The solution is a culture shift similar to the DevOps movement, where developers manage software quality in production: data scientists should manage ML model performance in production environments. Dedicated ML engineers are helping to bridge this transition, but they struggle with the tools and automations required to enable scale with autonomy.

Join Manish Modh, Founder & CEO of Andromeda 360 AI, on this journey to envision a world of autonomous data science, in which data scientists and ML engineers are empowered to own the development, deployment, operations, and performance of their machine learning use cases. Experience the challenges data science teams face today and why most AI projects fail. Learn the art of the possible, leveraging the wisdom gathered over 20 years of technology evolution across Big Data, Cloud, DevSecOps, AI/ML, and Edge computing. 

- PDT
OPEN TALK (AI): Operationalizing AI with a Shift from Research to Product Orientation
Yotam Oren
Mona, CEO & Cofounder

Many AI programs fail to deliver sustained value despite great research, due to insufficient operational tools, processes and practices. These days, more and more data science teams are going through a major shift from research orientation to product orientation. Key factors in successfully transitioning to a product-oriented approach to AI include empowering data scientists to take end-to-end accountability for model performance, and going beyond the model to gain a granular understanding of the behavior of the entire AI-driven process. In this talk, Yotam will discuss the importance of empowering data science teams to successfully make the transition from research-oriented to product-oriented. 

- PDT
OPEN TALK (AI): Scaling AIaaS: from DALL-E to Uber
Daniel Siryakov
Comet, Senior Product Manager

As companies begin to embrace AI in key parts of their businesses, they want to explore and scale AI at minimal cost. However, developing in-house AI-based solutions for every problem is a complex process and requires huge capital investment. The industry is now embracing AI as a service, wherein third-party tools can fill in the gaps. In this talk, Daniel will walk through the current landscape, trends, and technical challenges. He will also feature a few customer stories and a proposed modular solution to help your team jump-start this journey. 

- PDT
OPEN TALK (AI): Conversational AI Solutions for the Metaverse of Work
Samuel Eniojukan
VoiceWorx.ai, Chief Technology Officer

Is your enterprise ready to engage its customers and employees in new immersive experiences powered by web3 and the Metaverse? With Facebook's Horizons and Microsoft's Teams making significant product investments into creating underlying Metaverse platforms for enterprises to launch both employee- and customer-facing experiences, organizations will need tailored conversational strategies and specialized tools to drive effective engagement on these evolving Metaverse platforms. This session will explore the critical role of Conversational AI technologies in creating effective Metaverse solutions and experiences, and will also address the key considerations for conversational AI in applications of Metaverse technologies for improving work productivity, deploying interactive learning environments, and powering e-commerce. 

- PDT
OPEN TALK (AI): Level Up Your Data Lake - to ML and Beyond
Vinodhini SD
Treeverse, Developer Advocate

A data lake is primarily two things: an object store and the objects being stored. Even with the most basic setup, data lakes are capable of supporting BI, Machine Learning, and operational analytics use cases. This flexibility speaks to the strength of object stores, particularly their flexibility in integrating with a diverse set of data processing engines.

As data lakes exploded in adoption, a number of improvements were made to the first architectures. The first and most obvious improvement was to file formats, which led to the development of analytics-optimized formats like Parquet and, eventually, modern table formats.

An even newer improvement has been the emergence of data source control tools that bring new levels of manageability across an entire lake! In this talk, we'll cover how to incorporate these technologies into your data lake, and how they simplify workflows critical to ML experimentation, deployment of datasets, and more! 

- PDT
OPEN TALK (AI): Reducing Latency and Resource Consumption for Offline Feature Generation
Dhaval Patel
Netflix, Machine Learning Infrastructure

Personalization is one of the key pillars of Netflix as it enables each member to experience the vast collection of content tailored to their interests. Our personalization system is powered by various machine learning models. We constantly innovate by adding new features to our personalization models and running A/B tests to improve recommendations for our members. We also continue to see that providing larger training sets to our models helps make better predictions. Our ML fact store has enabled us to provide larger training sets where the training set spans over a long time window. While a great success, the ML fact store architecture has its limitations. For example, features computed while generating recommendations must be recomputed by offline feature generation pipelines. This talk is about those limitations and how we enhanced our architecture to run optimized offline feature generation pipelines. 
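
The fact-store idea the talk describes, logging feature values computed at recommendation time so that offline training pipelines can read them back instead of recomputing them, can be sketched roughly as follows (all names here are illustrative, not Netflix's actual API):

```python
from collections import defaultdict

class FactStore:
    """Toy fact store: log feature values computed at serving time,
    keyed by (member_id, feature_name), with the serving timestamp."""
    def __init__(self):
        self._log = defaultdict(list)   # (member, feature) -> [(ts, value)]

    def log(self, member, feature, ts, value):
        self._log[(member, feature)].append((ts, value))

    def as_of(self, member, feature, ts):
        """Latest logged value at or before ts -- read, don't recompute."""
        rows = [(t, v) for t, v in self._log[(member, feature)] if t <= ts]
        return max(rows)[1] if rows else None

store = FactStore()
store.log("member_1", "watch_minutes_7d", ts=100, value=340)
store.log("member_1", "watch_minutes_7d", ts=200, value=415)

# Offline training-set generation reads the logged fact for the historical
# point in time instead of re-running the feature pipeline.
print(store.as_of("member_1", "watch_minutes_7d", ts=150))  # 340
```

The point-in-time lookup is what keeps offline training sets consistent with what the model actually saw at serving time, while saving the cost of recomputation.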

- PDT
OPEN TALK (AI): Bringing Life and Motion to AI Explainability
Joao Nogueira
Optum, Senior AI Engineer
Pietro Mascolo
Optum Ireland, Data Scientist

SHAP is a great tool for helping developers and users understand black-box models. To push it to the next level, we will show how to leverage Dash, SHAP, gifs, and auto-encoders to generate interactive dashboards with animations and visual representations that show how different AI models learn and change their minds as they are progressively trained with growing amounts of data.

Animations will help developers understand how frequently AI models tweak their population-level and local importance factors during training and how they compare across competing AI models, adding an extra layer to AI safety. Auto-encoders and LSTMs will be used to generate 2-dimensional embedding representations of explainability paths at the individual level, allowing developers to interactively detect algorithmic decision-making similarity across time and to visually debug mislabeled AI predictions at each point in time.
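
The notion of an "explainability path" can be sketched on synthetic data by recomputing global importances as the training set grows; for brevity this sketch uses least-squares coefficients as a stand-in for the SHAP values the talk actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature 0 drives the target, feature 1 is pure noise.
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=500)

def importances(X, y):
    """Global importance proxy: |least-squares coefficients|.
    (A SHAP summary plays this role for black-box models.)"""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.abs(coef)

# "Explainability path": recompute importances as the training set grows,
# then render the frames as an animation (e.g. in Dash or a gif) to watch
# the model's view of each feature settle.
sizes = (25, 50, 100, 250, 500)
path = [importances(X[:n], y[:n]) for n in sizes]
for n, imp in zip(sizes, path):
    print(n, imp.round(2))
```

Watching the early frames jump around while the later ones stabilize is exactly the kind of signal the talk proposes animating.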

We will show this application in the context of Chronic Kidney Disease prediction and broader Healthcare AI. 

Wednesday, November 2, 2022

- PDT
[#VIRTUAL] OPEN TALK (AI): Lessons Learned Building Natural Language Systems in Healthcare
David Talby
John Snow Labs, CTO

This session reviews case studies from real-world projects that built AI systems that use Natural Language Processing (NLP) in healthcare. These case studies cover projects that deployed automated patient risk prediction, automated diagnosis, clinical guidelines, and revenue cycle optimization.

We will cover why and how NLP was used, what deep learning models and libraries were used, how transfer learning enables tuning accurate models from small datasets, and what was productized and achieved. Key takeaways for attendees will include applicable best practices for NLP projects, including how to build domain-specific healthcare models and how to use NLP as part of larger machine learning and deep learning pipelines. 

- PDT
[#VIRTUAL] OPEN TALK (AI): How To Build An AI Based Knowledge Graph for Customers in Fintech
Gautam Gupta
Intuit, Technology leader

In this session, we'll go through our journey to build an AI-based customer knowledge graph. We'll share the insights and know-how required to create this scalable, polyglot data platform. Join us to learn the design patterns and best practices that we have developed over time to create an intelligent solution based on AI and graph technologies for an ever-increasing list of product lines and customers. 

- PDT
[#VIRTUAL] OPEN TALK (AI): It’s an AI Product Manager’s Job to Help an Organization Succeed with Predictive Machine Learning
Paul Ortchanian
Bain Public, Founder, CEO, Head of Product, Data and Strategy

In short, AI is a lifecycle that requires the integration of data, machine learning models, and the software around it. It covers everything from scoping and designing to building and testing all the way through to deployment — and eventually requires frequent monitoring. Product managers need to ensure that data scientists are delivering results in efficient ways so business counterparts can understand, interpret, and use it to learn from. This includes everything from the definition of the problem, the coverage and quality of the data set and its analysis, to the presentation of results and the follow-up. 

- PDT
[#VIRTUAL] OPEN TALK (AI): Patenting Artificial Intelligence– How AI Companies Can Identify and Protect AI Inventions
Steve Bachmann
Bachmann Law Group PC, President, Silicon Valley Patent Attorney

Artificial intelligence is becoming one of the most widespread and useful technologies in use today. From data collection to model training, language processing to predictive models, deep networks to AI frameworks, there are many categories and implementations of AI, all with protectable features and important business applications. Protecting cutting-edge AI technology helps companies achieve business goals and supports their AI innovation.
This presentation will lay out key strategies for identifying which aspects of AI are patentable and which are not. The discussed strategies will be supplemented with practical real-world examples of patenting different areas of the AI process, from data collection to model training and model implementation to output applications, as well as distinct types of AI systems.
Attendees will also learn about AI patent trends and the most common use cases in which different AI companies build valuable patent portfolios around their AI technology. 

- PDT
[#VIRTUAL] OPEN TALK (AI): Scalable, Explainable and Unsupervised Anomaly Detection for Telecom
Ivan Caramello de Andrade
Encora Brazil Division, Innovation Leader and Tech Lead

In developing and operating a telecommunications network, one of the most pressing challenges these companies face is anomalies occurring within the network, signaling that something strange (usually an attack, a fraud, or an error) is happening. Detecting these anomalies is difficult because they may appear in different places and formats, and because telling regular behavior from anomalous behavior requires observing multiple metrics over hundreds of thousands of events. Ivan Caramello de Andrade will explain how detecting these anomalies with higher accuracy is possible with today's technology and machine learning capabilities.

In his technical session, Ivan will explain how he and his team were able to customize and adapt a Robust Random Cut Forest model to identify and explain anomalies in an unsupervised and scalable way. He will walk through the process behind creating this solution, as well as the challenges overcome in development, such as extracting behaviors from individual events. He will also explain the benefits of this model to the user, which include:

• The user does not need to understand which behaviors are regular or anomalous nor which features are relevant to describe and identify them
• The model provides accountability, because the user can identify and understand which factors lead to an event being identified as an anomaly
• Scalability: the model can be implemented on many different scales, with a highly distributable structure and configurable levels of detail 

- PDT
[#VIRTUAL] OPEN TALK (AI): Pushing Deepfakes to the Limit - Fake Video Calls with AI
Thomas Endres
TNG Technology Consulting GmbH, Partner
Martin Förtsch
TNG Technology Consulting GmbH, Principal Consultant
Jonas Mayer
TNG Technology Consulting GmbH, Senior Consultant

Today's real-time Deepfake technology makes it possible to create indistinguishable doppelgängers of a person and let them participate in video calls. Since 2019, the TNG Innovation Hacking Team has intensively researched and continuously developed the AI around real-time Deepfakes. The final result and the individual steps towards photorealism will be presented in this talk.

Since their first appearance in 2017, Deepfakes have evolved enormously from an AI gimmick into a powerful tool. In the meantime, various media outlets such as "Leschs Kosmos", Galileo and other television formats have used TNG Deepfakes.

In this talk we will show the different evolutionary steps of the Deepfake technology, starting with the first Deepfakes and ending with real-time Deepfakes of the entire head in high resolution. Several live demos will shed light on individual components of the software. In particular, we focus on various new technologies that improve Deepfake generation, such as TensorFlow 2 and MediaPipe, and on the differences from our previous implementations. 

Thursday, November 3, 2022

- PDT
[#VIRTUAL] OPEN TALK (AI): Deep Dive on Creating a Photorealistic Talking Avatar
Sebastiano Galazzo
Synapsia.ai, Artificial intelligence researcher

Creating a photorealistic avatar that speaks any sentence, starting from written input text.

Focusing on autoencoders, we will take a journey from the beginning of the speaker's experience, through the mistakes made and tips learned along the way.
The talk will showcase:

- Intro: the timeline from the beginning to today
- Why this is NOT a deepfake
- Audio processing techniques: STFT (Short-Time Fourier Transform), mel spectrograms and custom solutions
- Deep learning models and architectures
- The technique, inspired by inpainting, used to animate the mouth
- Masks and convolutions
- Landmark extraction
- A morphing animation technique based on autoencoder features
- Microsoft Azure Speech services used to support audio and animation processing
- Putting it all together 

- PDT
[#VIRTUAL] OPEN TALK (AI): Shift Left Strategy to Enable Autonomous Data Science
Manish Modh
Andromeda 360 AI, Founder & CEO

Data science is hard; achieving ROI from your AI projects is even harder. Data scientists spend more time wrangling data and handing models off to software and DevOps engineers than they do developing and analyzing their ML models. The solution is a culture shift similar to the DevOps movement, where developers manage software quality in production: data scientists should manage ML model performance in production environments. Dedicated ML engineers are helping to bridge this transition, but they struggle with the tools and automations required to enable scale with autonomy.

Join Manish Modh, Founder & CEO of Andromeda 360 AI, on this journey to envision a world of autonomous data science, in which data scientists and ML engineers are empowered to own the development, deployment, operations, and performance of their machine learning use cases. Experience the challenges data science teams face today and why most AI projects fail. Learn the art of the possible, leveraging the wisdom gathered over 20 years of technology evolution across Big Data, Cloud, DevSecOps, AI/ML, and Edge computing. 

- PDT
[#VIRTUAL] OPEN TALK (AI): Scaling AIaaS: from DALL-E to Uber
Daniel Siryakov
Comet, Senior Product Manager

As companies begin to embrace AI in key parts of their businesses, they want to explore and scale AI at minimal cost. However, developing in-house AI-based solutions for every problem is a complex process and requires huge capital investment. The industry is now embracing AI as a service, wherein third-party tools can fill in the gaps. In this talk, Daniel will walk through the current landscape, trends, and technical challenges. He will also feature a few customer stories and a proposed modular solution to help your team jump-start this journey. 

- PDT
[#VIRTUAL] OPEN TALK (AI): Conversational AI Solutions for the Metaverse of Work
Samuel Eniojukan
VoiceWorx.ai, Chief Technology Officer

Is your enterprise ready to engage its customers and employees in new immersive experiences powered by web3 and the Metaverse? With Facebook's Horizons and Microsoft's Teams making significant product investments into creating underlying Metaverse platforms for enterprises to launch both employee- and customer-facing experiences, organizations will need tailored conversational strategies and specialized tools to drive effective engagement on these evolving Metaverse platforms. This session will explore the critical role of Conversational AI technologies in creating effective Metaverse solutions and experiences, and will also address the key considerations for conversational AI in applications of Metaverse technologies for improving work productivity, deploying interactive learning environments, and powering e-commerce. 

- PDT
[#VIRTUAL] OPEN TALK (AI): Level Up Your Data Lake - to ML and Beyond
Vinodhini SD
Treeverse, Developer Advocate

A data lake is primarily two things: an object store and the objects being stored. Even with the most basic setup, data lakes are capable of supporting BI, Machine Learning, and operational analytics use cases. This flexibility speaks to the strength of object stores, particularly their flexibility in integrating with a diverse set of data processing engines.

As data lakes exploded in adoption, a number of improvements were made to the first architectures. The first and most obvious improvement was to file formats, which led to the development of analytics-optimized formats like Parquet and, eventually, modern table formats.

An even newer improvement has been the emergence of data source control tools that bring new levels of manageability across an entire lake! In this talk, we'll cover how to incorporate these technologies into your data lake, and how they simplify workflows critical to ML experimentation, deployment of datasets, and more! 

- PDT
[#VIRTUAL] OPEN TALK (AI): Reducing Latency and Resource Consumption for Offline Feature Generation
Dhaval Patel
Netflix, Machine Learning Infrastructure

Personalization is one of the key pillars of Netflix as it enables each member to experience the vast collection of content tailored to their interests. Our personalization system is powered by various machine learning models. We constantly innovate by adding new features to our personalization models and running A/B tests to improve recommendations for our members. We also continue to see that providing larger training sets to our models helps make better predictions. Our ML fact store has enabled us to provide larger training sets where the training set spans over a long time window. While a great success, the ML fact store architecture has its limitations. For example, features computed while generating recommendations must be recomputed by offline feature generation pipelines. This talk is about those limitations and how we enhanced our architecture to run optimized offline feature generation pipelines.