Tuesday, October 26, 2021
Kubernetes has become the de facto tool for orchestrating containerized workloads, and AI workloads are no different. Because containers provide isolated environments and simplify reproducibility and portability, Kubernetes is an obvious choice for data science, and an ecosystem of data science tools has been built around containers and K8s. But can an orchestrator built for services meet the needs of research experimentation? Can IT easily incorporate K8s into their workflows? Join Guy Salton of Run:AI for a crash course in Kubernetes for AI. Learn what’s working, what’s not, and some fixes for supporting research environments with K8s.
PRO WORKSHOP (AI): Evolution of Conversational AI: From Rules to Transformers Such as BERT and GPT-3
Conversational AI has been transforming industries such as Automation, Contact Centers, Assistants, and eCommerce, and it has undergone several phases of research and development. Prior to the 1990s, most systems were purely rule-based. Machine-learning-based systems came next; however, it was still hard to manage multiple domains and scenarios. To address these issues, "Skills-based" and "Domain-Intent-Slot" based systems were proposed. Post 2013, transfer learning and deep learning based systems further enhanced performance substantially, scaling to millions of users across a variety of applications. Despite significant progress in the past decade, most systems rely on large amounts of data annotation for Language Understanding, configurations for Dialog Management, and templates for Language Generation. Within the last two years, Transformer-based models such as BERT and GPT-3 have demonstrated the power of unsupervised learning and generative systems across all aspects of Conversational AI: Speech Recognition, Language Understanding, Dialog Management, and Language Generation. In this talk, I will showcase how Conversational AI has evolved from rules to unsupervised and generative systems and what we can expect in the short- and long-term future.
While AI has plenty of potential, some of the earliest AI-based consumer experiences gave the technology a less-than-stellar reputation, and rightfully so. Too often, we see AI that provides de-personalized experiences, harms people due to bias, and lacks the human touch that powers good business, let alone good outcomes for humankind.
But, when done right, AI has the potential to not only transform the customer experience and enterprise, but to create positive change and make life easier for millions of people.
In this session, Joe Bradley will offer a compelling case for how AI can make the future more human.
MORE HUMAN BUSINESS. As he explores why and how companies must capitalize on today’s unprecedented opportunity to be connected to customers in new ways, Joe will delve into why companies that focus on empathetic conversational AI experiences will own the future of commerce, leveraging case studies from household name brands. For example, when brides-to-be faced a myriad of pandemic-era wedding challenges - including dresses trapped in closed storefronts, cancelled appointments, and postponed event dates - David’s Bridal leveraged AI-driven messaging to transition customers from in-store associates to fully-online conversations that drove the bridal experience. AI-driven messaging, seamlessly orchestrated by their virtual assistant Zoey, accomplished everything from answering questions and making recommendations to facilitating the buying experience online, and the brand’s e-commerce revenue skyrocketed.
A MORE HUMAN WORLD. Joe will also explore why fighting bias in AI is more critical than ever, arguing that it isn’t enough for AI to help us be smarter, faster and more productive – it also needs to be a force for good in the world. Given AI’s growing role in high-stakes decision-making, companies need to expand their use of tools and technologies capable of fighting AI bias, going further than just standards and talk. Every company will soon have its own conversational AI to create more human connections with its customers, rather than relying on the Alexas, Siris, and Cortanas of the world, which exist to keep customers within the walls of Big Tech.
PRO TALK (AI): How Artificial Intelligence Will Redefine Leadership in the Software Management Industry
I believe that today’s business leaders will need to radically change how they lead and manage teams within their organizations as AI affects all business sectors. Successful leadership will no longer be driven by the same fundamental ‘human’ traits and characteristics we see in influential, successful leaders today. Leaders in the Artificial Intelligence age need to be more open and willing to learn, and to seek input and knowledge from everyone within the hierarchy of the organization, regardless of role. Effective and wise leaders in the age of Artificial Intelligence already recognize that some of the most valuable contributions and ideas for AI implementation may come from employees with much less experience than themselves. Leaders also need to create and foster a strong culture of innovation within their teams and be ready to respond to technological opportunities and threats as they appear. The ability to communicate team members’ views effectively to relevant stakeholders, and the flexibility to adapt quickly in this new, as-yet-unwritten era of commerce, should be seen as key strengths that will improve commercial decision making. In this session, I will engage the audience in new ideas about the human traits and characteristics of successful leadership in the era of AI and machine learning.
PRO WORKSHOP (AI): Making Apps Listen and React with LUIS (Language Understanding Intelligent Service)
Language Understanding Intelligent Service (LUIS) is part of Azure's Cognitive Services. It's built on the interactive machine learning and language understanding research from Microsoft Research. LUIS provides the capability to understand a person’s natural language and respond with actions specified by application code. In this session we'll examine how this powerful feature can be integrated into applications, offering a more natural interaction with a device.
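As a rough illustration of the integration pattern the session describes, the sketch below builds a LUIS v3 prediction request and pulls the top-scoring intent out of a response. The endpoint, app ID, key, and intent name are placeholders, not real values, and the response shape is abridged.

```python
# Minimal sketch of calling the LUIS v3 prediction REST endpoint.
# Endpoint, app ID, key, and query values below are placeholders.
import json
from urllib.parse import urlencode

def build_prediction_url(endpoint, app_id, key, query):
    """Build the LUIS v3 prediction URL for the production slot."""
    params = urlencode({"subscription-key": key, "query": query})
    return (f"{endpoint}/luis/prediction/v3.0/apps/{app_id}"
            f"/slots/production/predict?{params}")

def top_intent(response_json):
    """Extract the highest-scoring intent from a prediction response."""
    return response_json["prediction"]["topIntent"]

url = build_prediction_url("https://westus.api.cognitive.microsoft.com",
                           "00000000-0000-0000-0000-000000000000",
                           "<your-key>", "turn on the lights")

# Abridged example of the JSON a prediction call returns:
sample = json.loads('{"prediction": {"topIntent": "HomeAutomation.TurnOn"}}')
print(top_intent(sample))
```

Application code would issue an HTTP GET to `url` and branch on the returned intent, which is the "respond with actions" half of the loop.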
Wednesday, October 27, 2021
How does a machine classify different species of animals just by looking at an image? Computer Vision is the branch of machine learning that makes this possible, and deep learning helps achieve it. In this session, I will give an introduction to Computer Vision and Deep Neural Networks, and show how to build a serverless image classification application using Microsoft Azure Functions and the ML.NET framework. The implementation will be in C#.
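The classification step at the heart of any image classifier can be sketched language-agnostically (the session itself uses C# and ML.NET; Python is used here only for brevity): a model produces one raw score per class, softmax turns the scores into probabilities, and argmax picks the label. The class names and logits below are made up.

```python
# Sketch of the final step of image classification: softmax + argmax.
import math

LABELS = ["cat", "dog", "horse"]  # hypothetical class names

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, labels=LABELS):
    """Return (label, probability) for the highest-scoring class."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

label, prob = classify([0.3, 2.1, -1.0])  # made-up logits from a model
print(label, round(prob, 3))
```

In the serverless design the session builds, this step runs inside the function handler after the ML.NET model has produced its per-class scores.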
Enterprises with AI experience face an uphill struggle: research shows that only 53% of AI projects make it from prototype to production. This issue can largely be attributed to the difficulty of navigating the cumbersome deep learning lifecycle, in which new features and use cases are stymied by limited hardware availability, slow and ineffective models, wasted development time, and financial barriers. AI developers need better tools that examine and address the algorithms themselves; otherwise, they will keep getting stuck. However, no single tool on the market gives developers production-grade performance while remaining flexible and user-friendly. In this talk, Yonatan presents an innovative and unique solution to this problem: using AI to craft the next generation of AI. Yonatan developed the Automated Neural Architecture Construction engine (AutoNAC), the first commercially viable Neural Architecture Search (NAS) technology, set to unlock a whole set of AI opportunities for cloud, on-prem, and edge deployments. The engine crafts state-of-the-art deep neural networks that can outperform the best open-source neural nets currently available.
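To make the NAS idea behind engines like AutoNAC concrete, here is a toy random-search sketch: sample candidate architectures from a search space, score each, keep the best. The search space and scoring function are invented for illustration; real NAS systems score candidates by training them (or a cheap proxy) and measuring accuracy and latency, and AutoNAC's actual algorithm is not public.

```python
# Toy illustration of neural architecture search by random sampling.
# Search space and proxy score are hypothetical, for illustration only.
import random

SEARCH_SPACE = {
    "depth": [2, 4, 8],        # number of layers
    "width": [64, 128, 256],   # channels per layer
    "kernel": [3, 5],          # convolution kernel size
}

def sample_architecture(rng):
    """Draw one candidate architecture from the search space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def score(arch):
    """Hypothetical proxy: reward model capacity, penalize compute cost."""
    capacity = arch["depth"] * arch["width"]
    cost = arch["depth"] * arch["width"] * arch["kernel"] ** 2
    return capacity - 0.01 * cost

def random_search(n_trials=50, seed=0):
    """Sample n_trials candidates and return the best-scoring one."""
    rng = random.Random(seed)
    candidates = [sample_architecture(rng) for _ in range(n_trials)]
    return max(candidates, key=score)

best = random_search()
print(best)
```

Real NAS replaces the random sampler with smarter strategies (evolutionary search, gradient-based relaxation, or learned controllers), but the sample-evaluate-select loop is the same.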
This presentation will cover options for running TensorFlow model inference on WebAssembly. We will start with the unique challenges of deploying AI inference models in production, and how Rust + WebAssembly can help. Using a MobileNet image classification task as an example, we will discuss the pros and cons of the plain JS approach, TensorFlow.js, pure Rust crates for TensorFlow compiled to Wasm, and WASI-like TensorFlow Wasm extensions that run on specialized inference chips. We will walk through the journey to a 60,000x performance gain across the different WebAssembly approaches. We will also discuss the future of WebAssembly-based AI on the edge cloud.
As AI continues to transform our world, people are becoming more accustomed to conversing with voice assistants and chatbots to accomplish an increasing number of tasks - naturally, that includes search. Think about how fast people speak versus how they type. 41% of adults, and 55% of teens, use voice search daily.
Because intelligent assistants are able to decipher natural language, people are using voice search far more conversationally than typed search. Today’s natural language processing technologies are enabling rapid and continuous improvement in the speed and accuracy with which intelligent assistants process user queries and deliver results, making voice search a better user experience.
This advancement will reshape customer service and support as well as information search over the next five years.
In this talk, Alex Farr will explain how easy it is for any business user to build and deploy an intelligent chatbot, highlighting several use cases where it has improved customer experience and accessibility.
PRO TALK (AI): Social Media Data for AI Applications: Unprecedented Opportunities and Ethical Considerations
Social media platforms like Twitter, Instagram, and Facebook are a hotbed of activity and therefore provide unprecedented opportunities for the gathering of big data. While there are certainly ethical and privacy considerations, the availability of these large quantities of data, in both imagery and text forms, allow for the use of artificial intelligence approaches for analytics. Natural language processing techniques can yield insights into sentiment analysis around important topics such as breaking news and disease diagnoses. Computer vision-based methods can harness the wide range of photos posted on social networks to inform disaster relief pipelines in crisis situations and analyze the change in urban areas over time. In any case, the unique quality of social media as vehicles for the widespread and easily facilitated dissemination of data means that there will continue to be exciting AI-driven innovations in the future. These innovations will help to save lives, optimize energy and create a more sustainable society, and enable solutions that tackle mental health issues. Of course, the challenges associated with this work include the privacy of the data posted on the platforms and legal issues regarding web scraping. Another challenge is deploying certain social media analytics technologies into the real world for use by everyday folks. Making sure that the technology is accessible and interpretable is key, especially when talking about typical "black box" models like convolutional neural networks (CNNs). Regardless, social media's applications in the field of AI are exciting, and we discuss current and future ramifications in this session.
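The sentiment analysis the abstract mentions can be illustrated in its simplest form, lexicon-based scoring: count positive and negative words and normalize. The word lists here are tiny and invented; real social media pipelines use learned models (including the CNN-style "black box" models the abstract discusses), but the input/output shape is the same.

```python
# Minimal sketch of lexicon-based sentiment scoring on short texts.
# Word lists are illustrative only; real systems use trained models.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def sentiment(text):
    """Return a score in [-1, 1]: positive minus negative word share."""
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(sentiment("love this great update"))     # positive score
print(sentiment("terrible awful experience"))  # negative score
```

Even this crude scorer makes the interpretability point concrete: every score can be traced back to specific words, which is exactly what deep "black box" models give up in exchange for accuracy.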
The quest for a practical AI solution for automated ECG diagnosis is motivated by the desire to reduce the human and financial resources required for patient monitoring and to enable more ubiquitous remote outpatient monitoring. Today, Deep Neural Networks (DNNs) are considered the building blocks of all AI solutions. Yet DNNs are not widely adopted in hospitals for automatic diagnosis, for the following reasons. First, doctors have neither the time nor the desire to be “mechanical Turks” who label, row by row, millions of ECG patient records for model training. Second, doctors do not trust black-box diagnostic predictions, as humans need reasons to support deliberate actions. Third, the “right to know” regulation included in GDPR requires organizations to provide stakeholders with explanations for any automated decision making. We overcome these challenges with an innovative, patent-pending variant of neural networks: the LNN, developed by Trendalyze in cooperation with La Trobe University (Australia) and St. Ekaterina University Hospital (Bulgaria). It achieved the best performance of 100% within-patient accuracy in recognizing atrial fibrillation in 12-lead ECG recordings and showed robustness with respect to the wide variations of ECG patterns among different patients.
In recent years interest in sparse neural networks has steadily increased, accelerated by NVIDIA’s inclusion of dedicated hardware support in their recent Ampere GPUs. Sparse networks feature both limited interconnections between the neurons and restrictions on the number of neurons that are permitted to become active. By introducing this weight and activation sparsity, significant simplification of the computations required to both train and use the network is achieved. These sparse networks can achieve equivalent accuracy to their traditional ‘dense’ counterparts but have the potential to outperform the dense networks by an order of magnitude or more. In this presentation we start by discussing the opportunity associated with sparse networks and provide an overview of the state-of-the-art techniques used to create them. We conclude by presenting new software algorithms that unlock the full potential of sparsity on current hardware platforms, highlighting 100X speedups on FPGAs and 20X on CPUs and GPUs.
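The weight-sparsity speedup described above comes down to skipping zeros. A minimal sketch, using a compressed sparse row (CSR) layout: store only the nonzero weights, so the matrix-vector product in each layer touches only the surviving connections instead of every weight. (This illustrates the arithmetic saving only; the 100X/20X figures in the talk come from purpose-built kernels, not this naive loop.)

```python
# Sketch of sparse inference: CSR storage + matrix-vector product
# that performs one multiply per NONZERO weight, not per weight.
def to_csr(dense):
    """Compress a dense weight matrix to CSR (values, cols, row_ptr)."""
    values, cols, row_ptr = [], [], [0]
    for row in dense:
        for j, w in enumerate(row):
            if w != 0.0:
                values.append(w)
                cols.append(j)
        row_ptr.append(len(values))
    return values, cols, row_ptr

def csr_matvec(csr, x):
    """Multiply a CSR matrix by vector x, skipping all zero weights."""
    values, cols, row_ptr = csr
    out = []
    for r in range(len(row_ptr) - 1):
        out.append(sum(values[k] * x[cols[k]]
                       for k in range(row_ptr[r], row_ptr[r + 1])))
    return out

# A layer that is two-thirds zeros does one-third of the multiplies.
dense = [[0.0, 2.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 3.0]]
print(csr_matvec(to_csr(dense), [1.0, 1.0, 1.0]))
```

Activation sparsity adds a second saving on top of this: rows whose inputs are zero can be skipped entirely at run time.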
Microsoft Azure Cognitive Services offers various commoditized yet powerful and sophisticated services to work with visual content through Computer Vision. Currently those services can:
- detect and recognize faces
- classify images or find objects in an image
- run spatial analytics on video
- understand rich information in an image
- process whole videos and get insights from them
- recognize handwriting and ink in applications
- provide OCR (Optical Character Recognition) on documents
- automatically parse a scan of a paper form
This fast-paced session will introduce the currently available set of services and explain, through sample code and demonstrations, how to use a selection of them.
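As a flavor of the kind of sample code the session demonstrates, the sketch below assembles a request for the Computer Vision "Analyze Image" REST operation. The resource endpoint, key, image URL, and chosen API version are placeholders; consult the service documentation for the exact version and feature names available to your resource.

```python
# Minimal sketch of building an Azure Computer Vision "Analyze Image"
# REST request. Endpoint, key, and image URL below are placeholders.
import json
from urllib.parse import urlencode

def build_analyze_request(endpoint, key, image_url,
                          features=("Description", "Objects")):
    """Return (url, headers, body) for an Analyze Image call."""
    query = urlencode({"visualFeatures": ",".join(features)})
    url = f"{endpoint}/vision/v3.2/analyze?{query}"
    headers = {"Ocp-Apim-Subscription-Key": key,
               "Content-Type": "application/json"}
    body = json.dumps({"url": image_url})  # analyze an image by URL
    return url, headers, body

url, headers, body = build_analyze_request(
    "https://<your-resource>.cognitiveservices.azure.com",
    "<your-key>",
    "https://example.com/photo.jpg")
print(url)
```

An HTTP POST of `body` to `url` with those headers returns JSON describing the image (captions, tags, detected objects), which the application then parses.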