DeveloperWeek Global: Enterprise 2020

Wednesday, November 11, 2020 (PST)
How Can Open Source Help to Support User Trust in Artificial Intelligence?
Rebecca Whitworth
Red Hat, Associate Manager

Artificial intelligence (AI) is increasingly used in systems that affect our everyday lives. These systems are often called “black box” systems because we cannot see how the data is processed; we are simply given a result. The advent and widespread adoption of deep neural networks, while delivering impressive results, has made this problem even more pressing, since it is very hard for a human to interpret how information is processed across thousands of neurons. In such scenarios, how can we trust the decisions that have been made? This is especially important for critical systems, such as diagnostic tools for doctors, where patients’ lives are at risk. So how can open source help us trust AI?

In this session, we will explore the many ways in which open source can build trust in AI systems. By drawing on the ideas of a community of peers, open source creates greater opportunity to innovate and to build features that people genuinely want. The transparency of open source also improves the relationship users have with the algorithms and implementations behind these systems.

We will also investigate different open source projects that help to explain “black box” models, and relate this to how such explanations increase user understanding and trust. These projects promote responsible AI, helping to ensure that systems like those mentioned above can be trusted and applied to real-world situations.
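
The abstract does not name the specific projects covered, but as one concrete illustration of this kind of tooling, here is a minimal sketch using LIME, a widely used open source explainability library. The model, dataset, and parameters below are illustrative assumptions for this sketch, not details taken from the session:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Illustrative setup: train an opaque ensemble model on a toy dataset.
data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# Build a local surrogate explainer over the training distribution.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain a single prediction: which features pushed the model
# toward its answer, and by how much?
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Because the library is open source, users can inspect exactly how these per-feature weights are produced, which is the kind of transparency the session argues builds trust.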

This session is intended for anyone with a keen interest in open source or AI, and will give insight into:

- How open source supports trust in AI systems.
- Open source projects that enable explanations of black-box models.
- Why we need to trust AI systems in real-world applications.