Artificial Intelligence Dev Conference
Monday, February 7, 2022
This is the story of using Virtual Assistants like Alexa, Google Assistant, or Bixby alongside Voice and Video AI on mobile and web devices for good!
Hear the story of building Project Enabled Play - a platform built in .NET that enables users to turn their voice into a gaming controller on any platform they have access to. Come learn about scaling applications in .NET to over a dozen different platforms and channels while building for accessibility to level the playing field. Gain an understanding of voice and conversational platforms, real-time communication technology, and best practices for sharing code and going from PoC to product.
Try your shot at landing a win in Call of Duty, Fall Guys, Minecraft, and more using your voice, then leave with a working knowledge of other ways you can use .NET and the tools you're familiar with.
PRO WORKSHOP: Develop, Deploy and Govern AI: Building a Hugging Face End-to-End Sentiment AI Solution
Learn how to develop, fine-tune, and deploy an end-to-end AI application. The workshop focuses on an NLP solution architecture that incorporates the latest advancements in NLP from Hugging Face, as well as optimized TensorFlow and PyTorch containers from Intel and NVIDIA, into a robust automated pipeline capable of accounting for data drift and model drift while providing inference APIs that support an interactive application and real-time Kafka inferencing on live Twitter streams. Using cnvrg.io Metacloud, every aspect of this hybrid solution can be developed collaboratively in a single control plane that manages execution across all of your on-premise investments while also allowing you to dynamically leverage cloud-based compute resources.
This workshop will give you an end-to-end example to help you solve your next NLP problem, along with strategies for maintaining your model in production.
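The workshop mentions a pipeline that accounts for data drift. One common way to quantify drift between a training distribution and a live stream is the Population Stability Index (PSI); the sketch below is an illustrative assumption in plain Python, not the cnvrg.io Metacloud implementation:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.
    A common rule of thumb: PSI < 0.1 means no significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # floor each bin fraction to avoid log(0)
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]        # training distribution
live = [i / 100 for i in range(100)]            # identical stream
assert psi(baseline, live) < 0.1                # no drift flagged
shifted = [0.5 + i / 200 for i in range(100)]   # shifted stream
assert psi(baseline, shifted) > 0.1             # drift flagged
```

A production pipeline would compute this per feature on a schedule and trigger retraining when the index crosses a threshold.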
You can find talks demonstrating how individual security tools work in isolation, but what about a closer-to-life scenario showing how to introduce security throughout development, deployment, and runtime? This is the demo that will finally fill that gap! Attendees will take back knowledge and get a head start on introducing security everywhere in their SDLC. We'll see a hands-on demonstration of how to use a variety of CNCF tools to dramatically enhance the security of any environment:
- in-toto will help us ensure the integrity of our software from development to deployment
- Kyverno will allow us to define policies in our environment to guarantee compliance
- We'll use Notary to sign our Docker images
- Finally, Falco will notify us if any threats are identified at runtime in our Kubernetes cluster
We can easily trick a classifier into making embarrassingly false predictions. When this is done systematically and intentionally, it is called an adversarial attack. Specifically, this kind of attack is called an evasion attack. In this session, we will examine an evasion use case and briefly explain other forms of attacks. Then, we explain two defense methods: spatial smoothing preprocessing and adversarial training. Lastly, we will demonstrate one robustness evaluation method and one certification method to ascertain that the model can withstand such attacks.
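One of the defenses named above, spatial smoothing preprocessing, can be sketched in a few lines: a median filter suppresses the high-frequency pixel perturbations that evasion attacks typically rely on. This toy plain-Python version (real pipelines would use an image library) illustrates the idea:

```python
def spatial_smooth(img, k=3):
    """Median-filter each pixel over a k x k window (edge pixels use
    only the in-bounds part of the window). Median smoothing removes
    isolated pixel perturbations before the classifier sees them."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[j][i]
                      for j in range(max(0, y - r), min(h, y + r + 1))
                      for i in range(max(0, x - r), min(w, x + r + 1))]
            window.sort()
            out[y][x] = window[len(window) // 2]  # median of the window
    return out

# A flat image with one adversarially perturbed pixel:
img = [[0.5] * 5 for _ in range(5)]
img[2][2] = 0.99                       # the perturbation
clean = spatial_smooth(img)
assert clean[2][2] == 0.5              # the spike is filtered out
```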
Tuesday, February 8, 2022
Conversation Intelligence (CI) APIs enable developers to build applications that go beyond basic speech-to-text, creating a new array of sophisticated AI-driven experiences and functionalities. Basic speech recognition is designed to recognize or respond to explicit words and phrases, while conversation intelligence is capable of contextual comprehension of human conversations, effectively extracting key insights, identifying user intent, surfacing actionable insights, detecting sentiment, and more.
Conversation Intelligence has given rise to a new generation of AI-driven applications and platforms across verticals such as revenue intelligence, telehealth, call centers and customer support, collaboration and productivity platforms, and more.
Need to harness the power of AI but not a data scientist? No problem. In this presentation, we’ll show you how to consume prebuilt and custom AI models even if you don’t have data science expertise. We’ll also introduce Oracle’s take on machine learning and AI—and how we’ve rearchitected the AI experience to be more streamlined, efficient, and developer-friendly. Come ready to see demos that span capabilities such as language understanding, computer vision, speech, and many more. No data science background required!
Join us as Pau Labarta Bajo, a data scientist and ML engineer with over eight years of experience, shows us how to break multi-million-dollar computer vision models using adversarial examples. Computer vision models based on neural networks have become so good in the last 10 years that they now serve as the "eyes" behind many mission-critical systems, like self-driving cars, automatic video surveillance, and face recognition systems in airports. What you probably do not know is that there are easy methods to fool them, forcing them to produce wrong predictions. These methods are theoretically simple and computationally feasible, and they open the door to potentially critical security issues.
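The session doesn't name its attack methods, but the canonical "easy method" in the literature is the Fast Gradient Sign Method (FGSM): nudge every input feature a tiny, bounded step in the direction that increases the loss. A minimal sketch on a toy linear classifier (the weights and inputs are made up for illustration):

```python
def predict(w, b, x):
    """Toy linear classifier: positive score -> class 1."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm(w, x, eps):
    """Fast Gradient Sign Method. For a linear score w.x + b the
    gradient with respect to x is just w, so to push the score down
    we step each feature by eps against sign(w)."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w, b = [0.6, -0.4, 0.8], -0.5
x = [1.0, 0.2, 0.3]                    # scored as class 1
assert predict(w, b, x) > 0
x_adv = fgsm(w, x, eps=0.2)            # small, bounded perturbation
assert predict(w, b, x_adv) < 0        # now misclassified
```

Against a deep network the same recipe applies, with the gradient obtained by backpropagation instead of read off the weights.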
OPEN TALK: Low Latency and High Throughput Chat Moderation on a CPU
Transformer-based models have been dominant in the NLP landscape due to their state-of-the-art performance on a wide variety of benchmarks and tasks. However, deploying such large models at scale can be quite difficult and costly. Learn about the techniques that we've utilized at Stream to overcome these challenges and moderate real-time chat messages efficiently on relatively inexpensive hardware. While this talk will focus on BERT and its offshoots, many of these techniques can also be applied to other models.
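The talk doesn't list its techniques, but one widely used trick for cheap CPU inference with BERT-class models is int8 quantization. The core of it, affine (scale/zero-point) quantization of a weight tensor, can be sketched in plain Python; this is an illustrative assumption, not Stream's actual stack:

```python
def quantize(weights):
    """Affine int8 quantization: map floats in [min, max] to [-128, 127].
    Storing int8 instead of float32 cuts weight memory roughly 4x, and
    int8 matrix multiplies are much faster on commodity CPUs."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the int8 encoding."""
    return [(qi - zero_point) * scale for qi in q]

w = [-0.51, 0.03, 0.42, 0.99, -1.0]
q, s, z = quantize(w)
restored = dequantize(q, s, z)
assert all(-128 <= qi <= 127 for qi in q)
# quantization error is bounded by half a quantization step
assert all(abs(a - b) <= s / 2 + 1e-9 for a, b in zip(w, restored))
```

Frameworks apply this per layer (often per channel) and pair it with distillation and shorter sequence lengths for further speedups.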
Almost all AI problems worth solving are made difficult by the challenge of a "long tail," where the frequency of data is sparse yet critical. In this talk, Scale AI's Head of Nucleus, Russell Kaplan, will discuss why performance on the long tail is make-or-break for AI systems, share proactive strategies for identifying long-tail scenarios, and show how machine learning practitioners can target their experiments to "tame" the long tail, achieve strong performance on rare edge cases, and improve model performance.
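A first step in any long-tail strategy is simply finding the tail. One common heuristic, sketched here as an assumption rather than Scale's method, is to split classes by cumulative coverage: the few "head" classes that account for most examples versus everything else:

```python
from collections import Counter

def long_tail(labels, head_fraction=0.8):
    """Split classes into 'head' and 'tail': the head is the smallest
    set of classes covering `head_fraction` of all examples; the rest
    is the long tail worth targeting with extra data collection."""
    counts = Counter(labels)
    total = len(labels)
    head, covered = [], 0
    for cls, n in counts.most_common():
        if covered / total >= head_fraction:
            break
        head.append(cls)
        covered += n
    tail = [c for c in counts if c not in head]
    return head, tail

# Hypothetical detection labels from a driving dataset:
labels = ["car"] * 70 + ["pedestrian"] * 20 + ["cyclist"] * 7 + ["deer"] * 3
head, tail = long_tail(labels)
assert head == ["car", "pedestrian"]
assert sorted(tail) == ["cyclist", "deer"]
```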
Cricket, a bat-and-ball game, is one of the most popular sports in the world and is played in several formats. It's a game of numbers, with each match generating a plethora of data about players and the match itself. This data is used by analysts and data scientists to uncover meaningful insights and forecast match outcomes and player performance. In this session, I'll perform some analytics and prediction on cricket data using Microsoft's ML.NET framework and C#.
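The session itself uses ML.NET and C#; as a language-neutral sketch of the "forecast player performance" idea, here is a tiny least-squares trend forecast in Python. The scores and the method are illustrative assumptions, not the speaker's pipeline:

```python
def forecast_runs(innings):
    """Fit a least-squares trend line through a batter's recent scores
    and extrapolate one match ahead - a much simpler stand-in for an
    ML.NET regression pipeline."""
    n = len(innings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(innings) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, innings))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * n       # predicted score for the next match

scores = [30, 42, 38, 55, 61]          # hypothetical last five innings
pred = forecast_runs(scores)
assert 50 < pred < 80                  # the trend points upward
```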
Wednesday, February 9, 2022
OPEN TALK: Fake Your Data: Mimicking Production to Maximize Testing, Shorten Sprints, and Release 5x Faster
Raise your hand if you’ve ever written a script or built a tool to generate test data for your staging environment. Keep your hand up if it was fun. And easy. And still works. If your hand (and shoulders and morale) fell, rest assured you’re not alone. Now for the good news: help is here.
With the increasing complexity of today’s data ecosystems and the expanding reach of privacy regulations, generating useful, safe test data has become more difficult and riskier than ever. An effective test data solution must work across a variety of database types and de-identify production in a way that ensures privacy. Challenging? Yes. Attainable? That, too.
Technologies now exist that integrate directly into your data ecosystem to create test data that looks, acts, and behaves just like your production data. By hydrating QA and staging with useful, safe, fake data, dev teams are upleveling testing, catching bugs faster, and shortening their development cycles by as much as 60%. Data mimicking sets a new standard of quality test data generation that combines the best aspects of anonymization, synthesis, and subsetting.
Explore these technologies in a live demo and discover how to use them to:
- Maintain consistency in your test data across tables and across databases
- Subset your data from PB down to GB without breaking referential integrity
- Achieve mathematical guarantees of data privacy
- Increase your team’s efficiency by 50%
- Realize 5x more releases per day
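The first bullet above, consistency across tables, comes down to generating foreign keys that always point at rows which exist. A minimal sketch in plain Python (the schema and column names are hypothetical, not any vendor's):

```python
import random

def fake_tables(n_users=3, orders_per_user=2, seed=42):
    """Generate linked fake tables: every orders.user_id references an
    existing users.id, mimicking production-style referential integrity.
    Seeding the RNG makes the fake data reproducible across test runs."""
    rng = random.Random(seed)
    users = [{"id": i, "name": f"user_{i}",
              "email": f"user_{i}@example.com"} for i in range(n_users)]
    orders = [{"id": n_users * j + u["id"],   # unique order ids
               "user_id": u["id"],            # FK into users
               "total": round(rng.uniform(5, 500), 2)}
              for u in users for j in range(orders_per_user)]
    return users, orders

users, orders = fake_tables()
user_ids = {u["id"] for u in users}
assert all(o["user_id"] in user_ids for o in orders)   # no dangling FKs
assert len(orders) == 6
```

Real mimicking tools additionally learn the column distributions from production so the fake values are statistically realistic, not just referentially valid.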
As data drives new and evolving IoT opportunities across all segments of the market, the role of the developer becomes increasingly important in utilizing existing tools to create Edge AI solutions in new ways. However, solving for Edge AI can be a complex design and development process, as it requires selecting the right sensors, hardware, and deep learning frameworks, and deciding how to deploy each unique use case.
By democratizing access to AI and simplifying development, organizations can enable their developers to quickly experiment with different algorithms, processors and optimization techniques or prototype and customize without having to spend weeks obtaining and setting up development boards. In this session, Bill will discuss how organizations can achieve this and empower their developers to build innovative Edge AI solutions – solutions that will improve lives and transform industries.
Ryan McMichael, Sr. Manager of Sensors and Systems Engineering for Advanced Hardware, walks through the various sensors available for autonomous driving and evaluates the pros and cons of each to enable the optimal field of vision for autonomous vehicles.
PRO TALK (CloudWorld): How an AI Driven Approach Reduces Cloud Cost and Makes Your Kubernetes Infrastructure Autonomous
Measuring and controlling costs in cloud environments is often complex. But it does not need to be. In this session, we will discuss how an AI-driven approach renders your cloud-native applications on Kubernetes fully autonomous and rightsizes your cluster's cloud compute resources at sub-minute intervals. We will walk through an experiment deploying an application and applying autonomous techniques that fiercely control and optimize the cluster.
We will discuss how to control and optimize the cost of your AWS EKS, Google GKE, and Azure AKS applications in minutes. You will learn about powerful, yet simple, strategies to rightsize your clusters: automatically scaling your nodes and pods up and down (to zero), smart selection of VM shapes, and automated use of spot instances.
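The session doesn't describe its rightsizing algorithm, but the shape of such a loop is easy to sketch: recommend a pod CPU request from a high percentile of recently observed usage plus headroom, recomputed continuously. The percentile heuristic below is an illustrative assumption, not the vendor's method:

```python
def rightsize(cpu_samples, target_percentile=0.95, headroom=1.15):
    """Recommend a pod CPU request (in cores) from observed usage:
    take a high percentile of recent samples and add headroom, so the
    request tracks real demand instead of the worst momentary spike."""
    ranked = sorted(cpu_samples)
    idx = min(int(len(ranked) * target_percentile), len(ranked) - 1)
    return round(ranked[idx] * headroom, 3)

# One minute of per-second CPU usage for one pod, with two rare spikes:
samples = [0.20] * 50 + [0.35] * 8 + [0.90] * 2
request = rightsize(samples)
assert 0.3 < request < 0.9    # sized for the p95, not the 0.90 spikes
```

An autonomous controller would feed recommendations like this back into the cluster (and into node scaling decisions) on a sub-minute cadence.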