DeveloperWeek Global: Enterprise 2020

How Can Open Source Help to Support User Trust in Artificial Intelligence?

Session Stage

Rebecca Whitworth
Red Hat, Associate Manager

Rebecca Whitworth is an associate manager at Red Hat and part of the TrustyAI initiative, where she is in charge of prioritising work, meeting goals, and managing the team and its vision. TrustyAI is an open source project that aims to bring explainability and trust to AI-infused systems and to support the decision-making processes of companies using these technologies. She completed a PhD at Newcastle University, where she developed a platform for scalable geospatial and temporal analysis of Twitter data. She then moved to a small startup as a Java developer, building solutions to improve the performance of a CV analyser.


Artificial intelligence (AI) is increasingly being used in systems that affect our everyday lives. These systems are often referred to as “black box” systems because we do not know how the data is processed; we are simply given a result. The advent and widespread adoption of deep neural networks, while delivering impressive results, has made this opacity even more pressing, since it is very hard for a human to interpret how information is processed across thousands of neurons. In such scenarios, how can we trust the decisions that have been made? This is especially important for critical systems, such as diagnostic tools for doctors, where patients’ lives are at risk. So how can open source help us trust AI?

In this session, we will explore the many ways in which open source can create trust in AI systems. By drawing on the ideas of a community of peers, open source offers greater opportunity to innovate and to build features that people support. The transparency of open source also improves the relationship users have with the algorithms and implementations behind these systems.

We will also investigate different open source projects that help to explain “black box” models, and relate these explanations to increased user understanding and trust. Such projects help to promote responsible AI, ensuring that systems like those mentioned above can be trusted and applied to real-world situations.
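As a rough flavour of what one such technique can look like (this sketch is not taken from the talk or from TrustyAI; it uses scikit-learn and synthetic data purely for illustration), the snippet below fits a small, human-readable “surrogate” decision tree to mimic an opaque model’s predictions, a common model-agnostic explainability approach:

```python
# Minimal sketch of a global surrogate explanation (illustrative only):
# approximate a black-box model with a shallow, inspectable decision tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data and a random forest standing in for an opaque model.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train a shallow tree on the black box's *outputs*, so its simple
# rules approximate how the black box actually behaves.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The printed rules give a rough, inspectable account of the decision
# logic, which is the kind of transparency that helps build trust.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```

Open source projects in this space, TrustyAI among them, go well beyond this toy example, for instance by producing explanations for individual decisions, but the underlying goal is the same: turning an opaque decision process into something a user can inspect and question.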

This session is intended for anyone with a keen interest in open source or AI and will give an insight into:

- How open source supports trust in AI systems.
- Open source projects that enable explanations of black-box models.
- Why we need to trust AI systems in real-world applications.