Bots & Language Processing
Tuesday, October 26, 2021
PRO WORKSHOP (AI): Evolution of Conversational AI: From Rules to Transformers Such as BERT and GPT-3
Conversational AI has been transforming industries such as Automation, Contact Centers, Assistants, and eCommerce, and it has undergone several phases of research and development. Prior to the 1990s, most systems were purely rule-based. Then came machine learning-based systems; however, it was still hard to manage multiple domains and scenarios. To address these issues, "Skills-based" and "Domain-Intent-Slot" based systems were proposed. After 2013, transfer learning and deep learning based systems further enhanced performance substantially, scaling to millions of users across a variety of applications. Despite significant progress in the past decade, most systems rely on large amounts of data annotation for Language Understanding, configurations for Dialog Management, and templates for Language Generation. Within the last two years, Transformer-based models such as BERT and GPT-3 have demonstrated the power of unsupervised learning and generative systems across all aspects of Conversational AI: Speech Recognition, Language Understanding, Dialog Management, and Language Generation. In this talk, I will showcase how Conversational AI has evolved from rules to unsupervised and generative systems, and what we can expect in the short- and long-term future.
While AI has plenty of potential, some of the earliest AI-based consumer experiences gave the technology a less-than-stellar reputation, and rightfully so. Too often, we see AI that provides depersonalized experiences, harms people through bias, and lacks the human touch that powers good business, let alone good outcomes for humankind.
But, when done right, AI has the potential to not only transform the customer experience and enterprise, but to create positive change and make life easier for millions of people.
In this session, Joe Bradley will offer a compelling case for how AI can make the future more human.
MORE HUMAN BUSINESS. As he explores why and how companies must capitalize on today’s unprecedented opportunity to be connected to customers in new ways, Joe will delve into why companies that focus on empathetic conversational AI experiences will own the future of commerce, leveraging case studies from household name brands. For example, when brides-to-be faced a myriad of pandemic-era wedding challenges - including dresses trapped in closed storefronts, cancelled appointments, and postponed event dates - David’s Bridal leveraged AI-driven messaging to transition customers from in-store associates to fully-online conversations that drove the bridal experience. AI-driven messaging, seamlessly orchestrated by their virtual assistant Zoey, accomplished everything from answering questions and making recommendations to facilitating the buying experience online, and the brand’s e-commerce revenue skyrocketed.
A MORE HUMAN WORLD. Joe will also explore why fighting bias in AI is more critical than ever, arguing that it isn’t enough for AI to help us be smarter, faster and more productive – it also needs to be a force for good in the world. Given AI’s growing role in high-stakes decision-making, companies need to expand their use of tools and technologies capable of fighting AI bias, going further than just standards and talk. Every company will soon have its own conversational AI to create more human connections with their customers, rather than rely on the Alexas, Siris, and Cortanas of the world that exist to keep them within the walls of Big Tech.
Wednesday, October 27, 2021
As AI continues to transform our world, people are becoming more accustomed to conversing with voice assistants and chatbots to accomplish an increasing number of tasks - naturally, that includes search. Think about how fast people speak versus how they type: 41% of adults and 55% of teens use voice search daily.
Because intelligent assistants are able to decipher natural language, people are using voice search far more conversationally than typed search. Today's natural language processing technologies are enabling rapid and continuous improvement in the speed and accuracy with which intelligent assistants process user queries and deliver results, making voice search a better user experience.
This advancement will reshape customer service and support as well as information search over the next five years.
In this talk, Alex Farr will explain how easy it is for any business user to build and deploy an intelligent chatbot, highlighting several use cases where it has improved customer experience and accessibility.
Thursday, October 28, 2021
NLP is a key component in many data science systems that must understand or reason about text. This hands-on tutorial uses the open-source Spark NLP library to explore advanced NLP in Python. Spark NLP provides state-of-the-art accuracy, speed, and scalability for language understanding by delivering production-grade implementations of some of the most recent research in applied deep learning. It's the most widely used NLP library in the enterprise today. You'll edit and extend a set of executable Python notebooks by implementing these common NLP tasks: named entity recognition, sentiment analysis, spell checking and correction, document classification, and multilingual and multi-domain support. The discussion of each NLP task includes the latest advances in deep learning used to tackle it, including the prebuilt use of BERT embeddings within Spark NLP, using tuned embeddings, and 'post-BERT' research results like XLNet, ALBERT, and RoBERTa. Spark NLP builds on the Apache Spark and TensorFlow ecosystems, and as such it's the only open-source NLP library that can natively scale to use any Spark cluster, as well as take advantage of the latest processors from Intel and Nvidia. You'll run the notebooks locally on your laptop, but we'll walk through a complete case study and benchmarks on scaling an NLP pipeline for both training and inference.
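Of the tasks listed above, spell checking and correction is perhaps the easiest to sketch from first principles. The snippet below is a minimal, library-free illustration of edit-distance-based correction; it is not Spark NLP's spell checker (whose production implementation is far more sophisticated), and the tiny vocabulary is invented for the example.

```python
def edits1(word):
    """All strings one edit (delete/transpose/replace/insert) away from `word`."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word, vocab):
    """Return a known word within one edit of `word`, else `word` unchanged."""
    if word in vocab:
        return word
    candidates = edits1(word) & vocab
    return min(candidates) if candidates else word  # deterministic tie-break

vocab = {"sentiment", "entity", "language", "model"}  # toy vocabulary
print(correct("langauge", vocab))  # the "au" transposition maps to "language"
```

Production spell checkers add word frequencies, context from surrounding tokens, and learned error models on top of this candidate-generation core.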
OPEN TALK (AI): Making the World Smaller with NLP: Using AI to Link Data and Make it Easier for Machines (and Humans) to Understand
Linked Data and the Semantic Web have come a long way in helping to achieve a world that is more understandable to computers, but unstructured data can still be especially challenging when trying to extract concepts and metadata into standardized concepts. In this presentation, you will learn about the background of Linked Data (JSON-LD in particular) and how natural language processing can be used to help take advantage of this increasingly important effort. From more easily enhancing the SEO of a website, to making your application more interoperable, natural language processing can make your projects better understood by humans and machines alike.
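To make the connection between NLP output and Linked Data concrete, here is a minimal sketch of the idea: metadata extracted from unstructured text is embedded in a schema.org JSON-LD block of the kind search engines read for SEO. The `@context`/`@type` keys and the Article type are standard JSON-LD and schema.org vocabulary; the naive frequency-based keyword extractor is an invented stand-in for a real NLP pipeline.

```python
import json
import re
from collections import Counter

STOPWORDS = {"the", "a", "and", "of", "to", "in", "is", "for", "also"}

def naive_keywords(text, k=3):
    """Stand-in for a real NLP pipeline: top-k frequent non-stopwords."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(k)]

def to_jsonld(headline, body):
    """Wrap extracted metadata in a schema.org Article JSON-LD block."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "keywords": naive_keywords(body),
    }, indent=2)

body = ("Linked data helps machines understand content. "
        "Linked data also makes content easier for search engines to index.")
print(to_jsonld("Why Linked Data Matters", body))
```

In practice the keyword extractor would be replaced by named entity recognition or concept linking, but the surrounding JSON-LD scaffolding stays the same.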
Natural Language Processing (NLP) is an interesting and challenging field. It becomes even more interesting and challenging when we take more than one human language into consideration. When we perform NLP on a single language, interesting insights from other human languages might be missed. Valuable information may be available in other major languages of the world, such as Spanish, Chinese, French, and Hindi. The information may also be available in various formats, such as text, images, audio, and video.
In this talk, I will discuss techniques and methods that help perform NLP tasks on multi-source and multilingual information. The talk begins with an introduction to natural language processing and its concepts, then addresses the challenges of multilingual and multi-source NLP. Next, I will discuss various techniques and tools to extract information from audio, video, images, and other types of files using the PyScreenshot, SpeechRecognition, Beautiful Soup, and PIL packages, as well as extracting information from web pages and source code using pytesseract. I will then discuss concepts such as translation and transliteration, which help bring the information into a common language format; once the information is in a common language, it becomes easy to perform NLP tasks. Finally, I will walk through code that generates a summary from multi-source and multilingual information in a specific language using the spaCy and Stanza packages.
1. Introduction to NLP and concepts (05 Minutes)
2. Challenges in multi-source, multilingual NLP (02 Minutes)
3. Tools for extracting information from various file formats (04 Minutes)
4. Extracting information from web pages and source code (04 Minutes)
5. Methods to convert information into a common language format (05 Minutes)
6. Code walkthrough for multi-source and multilingual summary generation (10 Minutes)
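The final step of the outline, summarizing merged multi-source text, can be sketched without any NLP library. The snippet below is a minimal frequency-based extractive summarizer, assuming all sources have already been translated into one language; the talk's actual walkthrough uses spaCy and Stanza, which this toy version only approximates.

```python
import re
from collections import Counter

def summarize(sources, n=2):
    """Pick the n highest-scoring sentences from texts already in one language."""
    text = " ".join(sources)                       # merge multi-source input
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(sentence):
        # a sentence scores by the corpus frequency of its words
        return sum(freq[w] for w in re.findall(r"[a-z]+", sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:n]
    return [s for s in sentences if s in top]      # keep original order

sources = [
    "NLP systems extract meaning from text. Text comes from many sources.",
    "Speech and images can be converted to text before NLP is applied.",
]
print(summarize(sources))
```

Library-backed versions would add sentence embeddings, stopword handling, and language detection, but the extract-score-select shape of the pipeline is the same.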
We will begin with key stats from Gartner and then ask the panel/co-moderators a series of questions to initiate the conversation. During the panel, we will also use online polls to engage attendees and try to answer their questions. The COVID-19 pandemic has put a lot of strain on helpdesks, because the majority of organizations had to start working remotely even if they were not ready for it. We will discuss how conversational AI is helping helpdesks navigate these challenges.