Big Data & Artificial Intelligence
Wednesday, November 11, 2020
Breakthroughs in artificial intelligence (AI), machine learning (ML) and natural language processing (NLP) have helped customers and call agents alike get more done in less time. AI draws on multiple data sources to anticipate customer and company needs, handles interactions on its own where possible, and provides in-call support where needed.
The future of AI in the contact center is one where software tools make humans more efficient and allow customers to have natural conversations with a bot via voice, web chat, social messaging apps or other channels; the bot handles requests, retrieves information and delivers answers to frequently asked questions. In short: creating the ultimate customer experience.
During this session, Noam Fine will discuss how enterprises with limited machine learning expertise can leverage communications APIs to unlock simple, secure and flexible ways to deploy AI in their contact centers, escalating issues to experienced agents when needed to ensure personalized, emotive CX. He will draw on his experience to explain how enterprises can automate their agent-based live chats and streamline their support channels and operations while still offering personalized, human-like interaction. Most importantly, he will discuss how to find the right balance between seamless, intelligent self-service and efficient human intervention using integrated AI-driven communications: applications, APIs and the best of both.
We all love the conventional uses of CI/CD platforms, from automating unit tests to multi-cloud service deployment. But most CI/CD tools are abstract code execution engines, meaning we can also leverage them for tasks unrelated to deployment. In this session, we'll explore how GitHub Actions can be used to train a machine learning model and then run predictions in response to file commits, enabling an untrained end user to predict the value of their home simply by editing a text file. As a bonus, we'll leverage Apple's CoreML framework, which normally runs only in a macOS or iOS environment, without ever requiring the developer to lay their hands on an Apple device.
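To make the commit-triggered prediction idea concrete, here is a minimal sketch of the step a workflow job might run after checkout. The session itself uses CoreML; this sketch substitutes a hand-rolled one-variable linear regression so it stays framework-free, and the file format, training numbers and function names are all illustrative assumptions, not details from the talk.

```python
# Sketch: a script a CI job could run on each commit to predict a home's
# value from a user-edited text file. Model, data and file format are
# invented for illustration; the real session uses Apple's CoreML.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Toy training data: square footage -> sale price.
sqft = [1000, 1500, 2000, 2500]
price = [200_000, 280_000, 350_000, 430_000]
a, b = fit_line(sqft, price)

def predict_from_text(contents: str) -> float:
    """Parse square footage from the committed text file and predict price."""
    return a * float(contents.strip()) + b

# In CI this string would come from reading the committed file.
print(f"Estimated value: ${predict_from_text('1800'):,.0f}")
```

In a real workflow, a job triggered on `push` would read the edited file, run a script like this, and write the prediction back to the repository or a commit comment.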
Artificial intelligence (AI) is increasingly being used in systems that affect our everyday lives. These systems are often referred to as “black box” systems because we do not know how the data is processed; we are simply given a result. The advent and widespread adoption of deep neural networks, while delivering impressive results, has made this problem even more pressing, since it is hard for a human to interpret how information is processed across thousands of neurons. In such scenarios, how can we trust the decisions that have been made? This is especially important for critical systems such as diagnostic tools for doctors, where patients' lives are at risk. So how can Open Source help us trust AI?
In this session, we will explore the many ways in which open source can create trust in AI systems. By drawing on the ideas of a community of peers, open source offers greater opportunity to innovate and to build features that users actually support. The transparency of open source helps to improve the relationship users have with the algorithms and implementations behind these systems.
We will also investigate different open source projects that help to explain “black box” models, and relate this to how explanation increases user understanding and trust. These projects help to promote responsible AI, ensuring that critical systems like those mentioned above can be trusted and applied to real-world situations.
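Open source explainability projects such as LIME and SHAP (named here as well-known examples, not necessarily the ones the session covers) explain individual predictions of a black-box model. As a toy illustration of one common technique they build on, here is a hand-rolled permutation-importance check: shuffle one feature, and the drop in accuracy reveals how much the opaque model relies on it. The model and data below are invented for illustration.

```python
import random

def black_box(row):
    # Stand-in for an opaque model: it secretly depends only on feature 0.
    return 1 if row[0] > 0.5 else 0

random.seed(0)
data = [[random.random(), random.random()] for _ in range(200)]
labels = [black_box(row) for row in data]

def accuracy(rows):
    return sum(black_box(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(feature):
    """Accuracy drop after shuffling one feature column across rows."""
    shuffled_col = [r[feature] for r in data]
    random.shuffle(shuffled_col)
    perturbed = [r[:] for r in data]
    for r, v in zip(perturbed, shuffled_col):
        r[feature] = v
    return accuracy(data) - accuracy(perturbed)

for f in (0, 1):
    print(f"feature {f}: importance {permutation_importance(f):.2f}")
```

Shuffling feature 0 causes a large accuracy drop while feature 1 causes none, exposing which input the “black box” actually uses without inspecting its internals.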
This session is intended for anyone with a keen interest in open source or AI and will give an insight into:
- How open source supports trust in AI systems.
- Open source projects that enable explanations of black-box models.
- Why we need to trust AI systems in real-world applications.