Tuesday, October 26, 2021
Kubernetes has become the de facto tool for orchestrating containerized workloads, and AI workloads are no different. Because containers provide isolated environments and simplify reproducibility and portability, Kubernetes is an obvious choice for data science, and an ecosystem of data science tools has been built around containers and K8s. But can an orchestrator built for services meet the needs of research experimentation? Can IT easily incorporate K8s into its workflows? Join Guy Salton of Run:AI for a crash course in Kubernetes for AI. Learn what’s working, what’s not, and some fixes for supporting research environments with K8s.
PRO WORKSHOP (AI): Evolution of Conversational AI: From Rules to Transformers Such as BERT and GPT-3
Conversational AI has been transforming various industries such as Automation, Contact Center, Assistants, and eCommerce. It has undergone several phases of research and development. Prior to the 1990s, most systems were purely based on rules. Then came machine-learning-based systems; however, it was still hard to manage multiple domains and scenarios. To address these issues, "Skills-based" and "Domain-Intent-Slot" based systems were proposed. Post 2013, transfer learning and deep-learning-based systems further enhanced performance substantially, scaling systems to millions of users across a variety of applications. Despite significant progress in the past decade, most systems rely on large amounts of data annotation for Language Understanding, configurations for Dialog Management, and templates for Language Generation. Within the last two years, Transformer-based models such as BERT and GPT-3 have demonstrated the power of unsupervised learning and generative systems across all aspects of Conversational AI: Speech Recognition, Language Understanding, Dialog Management, and Language Generation. In this talk, I will showcase how Conversational AI has evolved from rules to unsupervised and generative systems and what we can expect in the short- and long-term future.
While AI has plenty of potential, some of the earliest AI-based consumer experiences gave the technology a less-than-stellar reputation, and rightfully so. Too often, we see AI that provides depersonalized experiences, harms people due to bias, and lacks the human touch that powers good business, let alone good outcomes for humankind.
But, when done right, AI has the potential not only to transform the customer experience and the enterprise, but also to create positive change and make life easier for millions of people.
In this session, Alex Spinelli will offer a compelling case for how AI can make the future more human.
A MORE HUMAN BUSINESS. As he explores why and how companies must capitalize on today’s unprecedented opportunity to be connected to customers in new ways, Alex will delve into why companies that focus on empathetic conversational AI experiences will own the future of commerce, leveraging case studies from household name brands. For example, when brides-to-be faced a myriad of pandemic-era wedding challenges - including dresses trapped in closed storefronts, cancelled appointments, and postponed event dates - David’s Bridal leveraged AI-driven messaging to transition customers from in-store associates to fully-online conversations that drove the bridal experience. AI-driven messaging, seamlessly orchestrated by their virtual assistant Zoey, accomplished everything from answering questions and making recommendations to facilitating the buying experience online, and the brand’s e-commerce revenue skyrocketed.
A MORE HUMAN WORLD. Alex will also explore why fighting bias in AI is more critical than ever, arguing that it isn’t enough for AI to help us be smarter, faster and more productive – it also needs to be a force for good in the world. Given AI’s growing role in high-stakes decision-making, companies need to expand their use of tools and technologies capable of fighting AI bias, going further than just standards and talk. Every company will soon have its own conversational AI to create more human connections with their customers, rather than relying on the Alexas, Siris, and Cortanas of the world that exist to keep them within the walls of Big Tech.
PRO WORKSHOP (AI): Making Apps Listen and React with LUIS (Language Understanding Intelligent Service)
Language Understanding Intelligent Service (LUIS) is part of Azure's Cognitive Services. It's built on the interactive machine learning and language understanding research from Microsoft Research. LUIS provides the capability to understand a person’s natural language and respond with actions specified by application code. In this session we'll examine how this powerful feature can be integrated into applications, offering a more natural interaction with a device.
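As a minimal illustration of the "respond with actions" flow the session will cover, here is a sketch of routing an utterance based on a LUIS-style prediction response. The JSON shape is a hypothetical example modeled on the public v3 prediction API, and the intent names and confidence threshold are invented for illustration, not output from a live LUIS app.

```python
def route_utterance(prediction_response, threshold=0.7):
    """Return the action name for the app to run, or None if confidence is low."""
    prediction = prediction_response["prediction"]
    top_intent = prediction["topIntent"]
    score = prediction["intents"][top_intent]["score"]
    if score < threshold:
        return None  # fall back to a clarifying question
    return top_intent

# Hypothetical response for "turn on the living room lights"
sample = {
    "query": "turn on the living room lights",
    "prediction": {
        "topIntent": "HomeAutomation.TurnOn",
        "intents": {"HomeAutomation.TurnOn": {"score": 0.97}},
        "entities": {"room": ["living room"]},
    },
}
print(route_utterance(sample))  # HomeAutomation.TurnOn
```

The application code would then map the returned intent name to a device action, which is the integration pattern the session examines.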
Wednesday, October 27, 2021
FEATURED TALK: (AI): Responsible AI into Practice - Deliver Trust in Artificial Intelligence Solution
AI has been a key driver of innovation in every industry, and organizations have ramped up their efforts to leverage AI to gain a competitive advantage. However, AI solutions come with their own challenges and risks, particularly in regulated industries, and there have been numerous instances in which AI introduced bias. Organizations must use a balanced approach to accelerating the adoption of AI and prioritize AI governance to ensure trust in the AI system. While the AI regulation landscape is still evolving, now is the time for organizations to start taking steps to understand and mitigate AI risks. A Responsible AI framework provides guidelines around AI governance for building fair, transparent, ethical, and accountable AI solutions. In this session you will learn how organizations can follow Responsible AI guidelines and operationalize trust in AI solutions by incorporating AI governance throughout the AI/ML life cycle.
How does a machine classify different species of animals just by looking at an image? Computer vision is the branch of machine learning that does the magic, and deep learning helps in achieving it. In this session, I will cover an introduction to computer vision and deep neural networks, and show how to build a serverless image classification application using Microsoft Azure Functions and the ML.NET framework. The implementation will be in C#.
A hands-on deep dive on using Apache Kafka, Kafka Streams, and Apache NiFi + Edge Flow Manager + MiNiFi agents with Apache MXNet, OpenVINO, TensorFlow Lite, and other deep learning libraries on actual edge devices, including a Raspberry Pi with Movidius 2, Google Coral TPU, and NVIDIA Jetson Nano. We run deep learning models on the edge devices, send images, and capture real-time GPS and sensor data, with our low-code IoT applications providing easy edge routing, transformation, data acquisition, and alerting before we decide what data to stream in real time to our data space. These edge applications classify images and sensor readings in real time at the edge, then send the deep learning results to Kafka Streams and Apache NiFi for transformation, parsing, enrichment, querying, filtering, and merging into various Apache data stores, including Apache Kudu and Apache HBase. https://www.datainmotion.dev/2019/08/updating-machine-learning-models-at.html
Enterprises with AI experience face an uphill struggle, as research demonstrates that only 53% of AI projects make it from prototype to production. This issue can largely be attributed to difficulties navigating the cumbersome deep learning lifecycle, given that new features and use cases are stymied by limited hardware availability, slow and ineffective models, wasted time during development cycles, and financial barriers. AI developers need better tools that examine and address the algorithms themselves; otherwise, they will keep getting stuck. However, no single tool on the market gives developers production-grade performance while remaining flexible and user-friendly. In this talk, Yonatan presents an innovative and unique solution to this problem: using AI to craft the next generation of AI. Yonatan developed the Automated Neural Architecture Construction engine (AutoNAC), the first commercially viable Neural Architecture Search (NAS) technology, set to unlock a whole range of AI opportunities for cloud, on-prem, edge deployments, and more. His engine is capable of crafting state-of-the-art deep neural networks that can outperform the top open-source neural nets currently available.
Application of AI within image processing for defect detection and classification, which brings a significant reduction in time and resources for the manufacturer: a use case in semiconductor mask manufacturing
This presentation will cover options for running TensorFlow model inference on WebAssembly. We will start with the unique challenges of deploying AI inference models in production and how Rust + WebAssembly could help. Using a MobileNet image classification task as an example, we will discuss the pros and cons of the plain JS approach, TensorFlow.js, pure Rust crates for TensorFlow compiled to Wasm, and WASI-like TensorFlow Wasm extensions that run on specialized inference chips. We will walk through the journey of a 60,000x performance gain across different WebAssembly approaches. We will also discuss the future of WebAssembly-based AI on the edge cloud.
Through an innovative project that reduces CO2 emissions and all other mobility-induced air pollution in cities by 30%, by deploying a solution for real-time, automatic, emission-based road traffic micro-regulation, we managed to use the best of AI technologies. Indeed, AI is the key enabler for addressing the complexity of real-time analysis of mobility at crossroads and of local air pollution, producing trend predictions that lead to recommendations on how to regulate road traffic to decrease air pollution, and applying these recommendations directly to traffic lights. Using embedded AI at the local camera level was instrumental in detecting the different road users (vehicles, public transportation, pedestrians, cyclists…) in real time, while respecting privacy and GDPR, in order to apply mobility strategies for optimal mobility management with minimum pollution impact. This last part combines two AI engines with 5 models. This project, [AI]Roads, is a European award-winning project, and the outcome is being tested in some major cities in the EU. Beyond the technical challenge, we will share some key advantages of combining AI and embedded AI, which might become mainstream for some applications, and how we offered a scalable solution to a complex problem: the automatic and best trade-off between air pollution and mobility.
As AI continues to transform our world, people are becoming more accustomed to conversing with voice assistants and chatbots to accomplish an increasing number of tasks - naturally, that includes search. Think about how fast people speak versus how they type. 41% of adults, and 55% of teens, use voice search daily.
Because intelligent assistants are able to decipher natural language, people are using voice search far more conversationally than typed search. Today’s natural language processing technologies are enabling rapid and continuous improvement in the speed and accuracy with which intelligent assistants process user queries and deliver results, making voice search a better user experience.
This advancement will reshape customer service and support as well as information search over the next five years.
In this talk, Alex Farr will explain how easy it is for any business user to build and deploy an intelligent chatbot including highlighting several use cases where it has improved customer experience and accessibility, including:
PRO TALK (AI): Social Media Data for AI Applications: Unprecedented Opportunities and Ethical Considerations
Social media platforms like Twitter, Instagram, and Facebook are a hotbed of activity and therefore provide unprecedented opportunities for the gathering of big data. While there are certainly ethical and privacy considerations, the availability of these large quantities of data, in both imagery and text forms, allow for the use of artificial intelligence approaches for analytics. Natural language processing techniques can yield insights into sentiment analysis around important topics such as breaking news and disease diagnoses. Computer vision-based methods can harness the wide range of photos posted on social networks to inform disaster relief pipelines in crisis situations and analyze the change in urban areas over time. In any case, the unique quality of social media as vehicles for the widespread and easily facilitated dissemination of data means that there will continue to be exciting AI-driven innovations in the future. These innovations will help to save lives, optimize energy and create a more sustainable society, and enable solutions that tackle mental health issues. Of course, the challenges associated with this work include the privacy of the data posted on the platforms and legal issues regarding web scraping. Another challenge is deploying certain social media analytics technologies into the real world for use by everyday folks. Making sure that the technology is accessible and interpretable is key, especially when talking about typical "black box" models like convolutional neural networks (CNNs). Regardless, social media's applications in the field of AI are exciting, and we discuss current and future ramifications in this session.
Starting with ML tutorials seems easy. But how do you scale your ML models from detecting cats and dogs to a full-scale business ML model?
The quest for a practical AI solution for automated ECG diagnostics is motivated by the desire to reduce the human and financial resources required for patient monitoring and to enable more ubiquitous remote outpatient monitoring. Today, Deep Neural Networks (DNNs) are considered the building blocks of all AI solutions. Yet DNNs are not widely adopted in hospitals for automatic diagnostics, for the following reasons. First, doctors do not have the time nor the desire to be “mechanical Turks” who label, row by row, millions of ECG patient records for model training. Second, doctors do not trust black-box diagnostic predictions, as humans need reasons to support deliberate actions. Third, the “right-to-know” regulation included in GDPR requires organizations to provide stakeholders with explanations for any automatic decision making. We overcome these challenges with an innovative, patent-pending variant of neural networks: the LNN, developed by Trendalyze in cooperation with La Trobe University (Australia) and St. Ekaterina University Hospital (Bulgaria). It achieved the best performance of 100% within-patient accuracy in recognizing atrial fibrillation in 12-lead ECG recordings and showed robustness with respect to the wide variation of ECG patterns among different patients.
Agile metrics are important for tracking the health of your projects, and they help in tracking project progress. There are other advanced metrics that are equally important, like Customer Satisfaction, Employee Satisfaction, and Innovation. Tracking these metrics is often not easy or straightforward. Did you ever think of applying AI (Artificial Intelligence) to measure them and come up with actionable evidence? AI powered by NLP (Natural Language Processing) and statistical models not only helps in getting good project insight, it can also help with course corrections and increase the rate of project success. It can help companies understand their core strengths and weaknesses and how to position themselves in the market. Rohit will talk about and demonstrate how you can digitally transform your Agile program management with AI and NLP, and how it enables organizations to take proactive measures that not only make projects successful but also help companies stay competitive and thrive in the market.
In recent years interest in sparse neural networks has steadily increased, accelerated by NVIDIA’s inclusion of dedicated hardware support in their recent Ampere GPUs. Sparse networks feature both limited interconnections between the neurons and restrictions on the number of neurons that are permitted to become active. By introducing this weight and activation sparsity, significant simplification of the computations required to both train and use the network is achieved. These sparse networks can achieve equivalent accuracy to their traditional ‘dense’ counterparts but have the potential to outperform the dense networks by an order of magnitude or more. In this presentation we start by discussing the opportunity associated with sparse networks and provide an overview of the state-of-the-art techniques used to create them. We conclude by presenting new software algorithms that unlock the full potential of sparsity on current hardware platforms, highlighting 100X speedups on FPGAs and 20X on CPUs and GPUs.
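As a concrete illustration of the weight sparsity described above, here is a minimal magnitude-pruning sketch: zero out the fraction of weights with the smallest absolute values, so their multiply-accumulates can be skipped. The weights and sparsity level are invented for illustration; production systems typically use structured sparsity patterns matched to the target hardware.

```python
def prune_by_magnitude(weights, sparsity):
    """Return a copy of `weights` with the smallest-|w| fraction set to 0."""
    n_prune = int(len(weights) * sparsity)
    # Threshold is the magnitude of the n_prune-th smallest weight.
    # Ties at the threshold may prune slightly more than requested.
    threshold = sorted(abs(w) for w in weights)[n_prune - 1] if n_prune else -1.0
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002, 0.3, -0.08]
pruned = prune_by_magnitude(w, 0.5)
print(pruned)  # half the weights zeroed, large-magnitude weights kept
```

Activation sparsity works analogously at inference time, restricting how many neurons may fire; together the two are what enable the order-of-magnitude speedups the session discusses.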
AI is a term that has been thrown around in the cybersecurity industry for quite some time. The common components typically referenced when talking about AI are machine learning and deep learning, but what are the differences? When it comes to cybersecurity, AI can be a huge leap forward in combating cyberattacks, but not all solutions are the same. If AI could be the silver bullet, why are today's AI solutions not working? Many of the traditional machine learning cybersecurity solutions currently available are causing massive operational challenges, as they are not adequately combating the ever-evolving and sophisticated threats. Detection-and-response-based solutions (EDR) are insufficient because they typically take 10 minutes or more to identify a threat detected in the environment. It takes under 3 seconds to infect and start encrypting a system; that is why time is of the essence. You have to prevent the infection and the damage it can inflict before it takes root, executes, and spreads. One important item of note is the emerging trend of adversarial machine learning being leveraged by cybercriminals; how can this be combated? Executives and security leaders need to start adopting a preventative approach to cybersecurity utilizing the latest in cutting-edge security solutions, which is only made possible through the use of AI and, more importantly, the use of deep learning. The great news is that AI technologies are advancing. Deep learning is proving to be the most effective preventative cybersecurity solution to date, resulting in unmatched prevention rates with the proven lowest false-positive rates. As organizations evaluate new technologies, a firm understanding of the differences, challenges, and benefits of all AI solutions is a must. Therefore, educational advancements in machine learning and deep learning are well warranted.
From search engine results to social media feeds, the applications powered by AI are ubiquitous in our day to day lives. However, there are many dangers of using AI, from amplifying historical biases to making decisions that we cannot interpret. With the rise of AI-based solutions, the need for us to understand the motivation behind these black-box models is imperative. In this session, we explore real scenarios that show the perils of using AI in the wild and understand why simply optimizing for accuracy or performance is not enough. Learn how these risks can be addressed through the use of various techniques throughout the model development and deployment process.
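One widely used technique in this space, shown here as an illustrative sketch rather than any specific vendor's method, is permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops, revealing which inputs a black-box model actually relies on. The toy "model" and data below are invented for the example; real use would wrap your trained model's predict function.

```python
import random

def permutation_importance(predict, X, y, feature_idx, seed=0):
    """Accuracy drop when feature_idx is shuffled; bigger drop = more important."""
    rng = random.Random(seed)
    base_acc = sum(predict(row) == label for row, label in zip(X, y)) / len(y)
    shuffled_col = [row[feature_idx] for row in X]
    rng.shuffle(shuffled_col)
    X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, shuffled_col)]
    perm_acc = sum(predict(row) == label for row, label in zip(X_perm, y)) / len(y)
    return base_acc - perm_acc

# Toy "model": predicts 1 when feature 0 is positive; feature 1 is pure noise.
predict = lambda row: 1 if row[0] > 0 else 0
X = [[1, 5], [-2, 3], [3, 9], [-1, 2], [2, 7], [-3, 1]]
y = [1, 0, 1, 0, 1, 0]
print(permutation_importance(predict, X, y, 0))  # feature 0 matters
print(permutation_importance(predict, X, y, 1))  # 0.0: noise feature
```

Checks like this, applied during development and again in deployment, are one way the session's point about "not just optimizing for accuracy" becomes actionable.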
One of the main issues with ML and DL deployment is finding the right way to train and operationalize the model within the company. A serverless approach to deep learning provides a simple, scalable, affordable, yet reliable architecture. The challenge of this approach is to keep in mind certain limitations in CPU, GPU, and RAM, and to organize the training and inference of your model. My presentation will show how to utilize services like Amazon SageMaker, AWS Batch, AWS Fargate, AWS Lambda, AWS Step Functions, and SageMaker Pipelines to organize deep learning workflows. My talk will be beneficial for machine learning engineers and platform engineers.
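As a rough sketch of the AWS Lambda piece of such a workflow: the handler shape below follows the AWS Python runtime convention, but the "model" is a trivial threshold stand-in, since a real deployment would load trained weights once at cold start (e.g. from S3 or a Lambda layer) and keep them in a module-level variable within the RAM limits mentioned above.

```python
import json

MODEL_THRESHOLD = 0.5  # stand-in for a real model loaded at cold start

def predict(features):
    """Stand-in for real inference: average the features, threshold the score."""
    score = sum(features) / len(features)
    return {"label": "positive" if score > MODEL_THRESHOLD else "negative",
            "score": score}

def lambda_handler(event, context):
    # API Gateway proxy integration delivers the request body as a JSON string.
    body = json.loads(event["body"])
    result = predict(body["features"])
    return {"statusCode": 200, "body": json.dumps(result)}

# Local smoke test, mimicking the call Lambda would make:
event = {"body": json.dumps({"features": [0.9, 0.8, 0.7]})}
print(lambda_handler(event, None))
```

Step Functions or SageMaker Pipelines would then sit above handlers like this to orchestrate batch training, evaluation, and deployment stages.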
Thursday, October 28, 2021
NLP is a key component in many data science systems that must understand or reason about text. This hands-on tutorial uses the open-source Spark NLP library to explore advanced NLP in Python. Spark NLP provides state-of-the-art accuracy, speed, and scalability for language understanding by delivering production-grade implementations of some of the most recent research in applied deep learning. It's the most widely used NLP library in the enterprise today. You'll edit and extend a set of executable Python notebooks by implementing these common NLP tasks: named entity recognition, sentiment analysis, spell checking and correction, document classification, and multilingual and multi-domain support. The discussion of each NLP task includes the latest advances in deep learning used to tackle it, including the prebuilt use of BERT embeddings within Spark NLP, using tuned embeddings, and 'post-BERT' research results like XLNet, ALBERT, and RoBERTa. Spark NLP builds on the Apache Spark and TensorFlow ecosystems, and as such it's the only open-source NLP library that can natively scale to use any Spark cluster, as well as take advantage of the latest processors from Intel and NVIDIA. You'll run the notebooks locally on your laptop, but we'll explain and show a complete case study and benchmarks on how to scale an NLP pipeline for both training and inference.
Natural Language Processing (NLP) is an interesting and challenging field. It becomes even more interesting and challenging when we take into consideration more than one human language. When we perform NLP on a single language, there is a possibility that interesting insights from another human language might be missed. The interesting and valuable information may be available in other human languages such as Spanish, Chinese, French, Hindi, and other major languages of the world. Also, the information may be available in various formats such as text, images, audio, and video.
In this talk, I will discuss techniques and methods that help perform NLP tasks on multi-source and multilingual information. The talk begins with an introduction to natural language processing and its concepts. Then it addresses the challenges with respect to multilingual and multi-source NLP. Next, I will discuss various techniques and tools to extract information from audio, video, images, and other types of files using the PyScreenshot, SpeechRecognition, Beautiful Soup, and PIL packages, as well as extracting information from web pages and source code using pytesseract. Next, I will discuss concepts such as translation and transliteration that help bring the information into a common language format; once the information is in a common language format, it becomes easy to perform NLP tasks. Finally, I will explain, with the help of a code walkthrough, how to generate a summary from multi-source and multilingual information in a specific language using the spacy and stanza packages.
1. Introduction to NLP and concepts (05 Minutes)
2. Challenges in multi-source, multilingual NLP (02 Minutes)
3. Tools for extracting information from various file formats (04 Minutes)
4. Extract information from web pages and source code (04 Minutes)
5. Methods to convert information into common language format (05 Minutes)
6. Code walkthrough for multi-source and multilingual summary generation (10 Minutes)
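The summarization step in the walkthrough above uses spacy and stanza; as a self-contained stand-in for the idea, here is a minimal frequency-based extractive summarizer, assuming all sources have already been translated into a common language. The scoring heuristic and example text are invented for illustration.

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Pick the sentences with the highest average word frequency."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))
    def score(sent):
        tokens = re.findall(r"\w+", sent.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)
    ranked = sorted(sentences, key=score, reverse=True)
    chosen = set(ranked[:n_sentences])
    # Keep the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in chosen)

doc = ("Solar power adoption is rising. Solar panels are cheaper every year. "
       "My cat likes windows.")
print(summarize(doc, 1))  # Solar power adoption is rising.
```

Real multilingual pipelines replace the regex tokenization and frequency scoring with the language-aware models that spacy and stanza provide.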
We will begin with key stats from Gartner and then ask the panel/co-moderators a series of questions to initiate the conversation. During the panel, we will also use online polls to engage the attendees, and we will try to answer attendees' questions as well. The COVID-19 pandemic has put a lot of strain on the helpdesk, because the majority of organizations had to start working remotely even if they were not ready for it. We will discuss how conversational AI is assisting helpdesks in navigating these challenges.
Too often, “AI-capable” refers to marketing claims instead of practical value add. For this reason, developers tend to be skeptical about AI-driven development. Slapdash application of AI ends up diminishing developers’ creativity and effectiveness. When implemented in inventive, unique ways, AI dramatically improves the productivity of developers and opens up new opportunities for creativity – especially when applied to cloud app development. Beyond the initial development process, AI has the potential to completely transform the entire application lifecycle by eliminating guesswork and repetitive tasks. AI ensures teams are better equipped to manage application dependencies and ensure that regardless of what changes are made, applications never break and are able to seamlessly adapt to inevitable change. AI-supported development democratizes access to advanced tech, making it possible for any IT team – even the lean, mean ones – to build serious apps. Essentially, AI in the DevOps cycle enables developers to shift-left quality assurance in a more guided and automated way by assisting them at critical phases in the application building process. Instead of finding problems in production, developers are able to identify them in the midst of the development lifecycle, so they can remain focused on innovating the best solution rather than the intricacies of hand-coding. Pairing AI with visual, model-driven development allows guidance to be both more powerful and less obtrusive and can compress CI/CD pipelines into days or even hours, instead of weeks. As the Head of AI at OutSystems, António has seen firsthand how quickly developers can change their minds after experiencing the speed and creativity AI enables as a complement to traditional development.
In this session, he will provide insight on the three most fundamental design decisions regarding integrating AI into an application platform based on OutSystems’ experience analyzing models based on tens of millions of application graphs and flows and explore the implications for improving cloud development productivity by 100x. OutSystems serves enterprise customers like Randstad, which built an ML algorithm to link job applicants to positions, and Deloitte, which developed a voice-to-text tool with deep analysis integrated to capture more accurate notes between advisors and their clients.
This session will focus on defining machine learning (ML) operational models and how enterprises can leverage them, through a framework of governance and model risk management, to unlock value. Operationalization is essential to realizing the business value of ML models. We will also overlay the paradigm of DevOps on ML lifecycle management, including infusing automated model validation, removing bias, and measurement using KPIs. An example framework and architecture of an ML operational model in action will be showcased, including a starter toolkit.
KEYNOTE (AI): Modzy -- Crossing the AI Valley of Death: Deploying and Monitoring Models in Production at Scale
It’s happened again. You built another AI model that will never see the light of day because it won’t make it past the AI “valley of death” – the crossover of model development to model deployment across your enterprise. The handoff between data science and engineering teams is fraught with friction, outstanding questions around governance and accountability, and who is responsible for different parts of the pipeline and process. Even worse? The patchwork approach when building an AI pipeline leaves many organizations open to risks because of the lack of a holistic approach to security and monitoring. Join us to learn about approaches and solutions for configuring an MLOps pipeline that’s right for your organization. You’ll discover why it’s never too early to plan for operationalization of models, regardless of whether your organization has 1, 10, 100, or 1,000 models in production. The discussion will also reveal the merits of an open container specification that allows you to easily package and deploy models in production from anywhere. Finally, new approaches for monitoring model drift and explainability will be revealed that will help manage expectations with business leaders, all through a centralized AI software platform called Modzy®.
Anyone building enterprise-level machine learning pipelines understands how challenging managing dependencies can be, and that's exactly why Conda works its magic. However, these dependencies can come with security vulnerabilities that are increasingly exploited with malware as hackers target popular open-source libraries. In this session, we'll cover the most common next-generation cyber attacks, like the crypto-mining typosquatting attack on Matplotlib, as well as what tools and best practices you can put into place to protect your MLOps pipelines from cybersecurity attacks.
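One simple defense along these lines, sketched here with a tiny invented allow-list rather than a real registry, is flagging requested dependency names that sit suspiciously close to popular packages before they ever reach your MLOps pipeline:

```python
import difflib

# Hypothetical sample of known-good names; a real check would use a curated
# registry of your organization's approved packages.
POPULAR = ["numpy", "pandas", "matplotlib", "scikit-learn", "requests"]

def flag_typosquats(requested, known=POPULAR, cutoff=0.85):
    """Return (package, lookalike) pairs for names close to known-good ones."""
    flags = []
    for name in requested:
        if name in known:
            continue  # exact match on a known-good name is fine
        close = difflib.get_close_matches(name, known, n=1, cutoff=cutoff)
        if close:
            flags.append((name, close[0]))
    return flags

print(flag_typosquats(["numpy", "matplotlab", "requestes"]))  # flags both lookalikes
```

A check like this is cheap to run in CI against `environment.yml` or `requirements.txt` before any install step, complementing the scanning tools the session covers.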
You know the AI models deployed in production will need to be monitored and updated. It probably does not surprise you that not everyone does so, and that some large bank with thousands of production models doesn’t quite know where all its AI models are, let alone monitor them. But MLOps goes beyond monitoring models, extending to data engineering and to driving business objectives. In this session, we will see how Big Tech cloud and AI players like Azure and AWS enable MLOps today, and what more we can expect to see.
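As one concrete example of the model monitoring discussed in this session (a generic technique, not specific to Azure or AWS), the Population Stability Index compares a feature's training-time histogram to its live histogram over pre-agreed bins; a common rule of thumb treats PSI above 0.2 as drift worth investigating, though that threshold is a convention, not a standard. The histograms below are invented for illustration.

```python
import math

def psi(expected_counts, actual_counts, eps=1e-6):
    """Population Stability Index between two histograms over the same bins."""
    e_total = sum(expected_counts)
    a_total = sum(actual_counts)
    total = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = max(e / e_total, eps)  # avoid log(0) on empty bins
        a_pct = max(a / a_total, eps)
        total += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return total

train_bins = [100, 300, 400, 200]    # feature histogram at training time
stable_bins = [105, 290, 410, 195]   # live traffic, similar shape
shifted_bins = [400, 300, 200, 100]  # live traffic after a real shift

print(round(psi(train_bins, stable_bins), 4))   # near 0: no action needed
print(round(psi(train_bins, shifted_bins), 4))  # well above 0.2: investigate
```

Managed monitors in the cloud platforms wrap statistics like this behind alerting and dashboards; knowing the underlying computation helps when evaluating what those services actually measure.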