Building Trust and Confidence in AI

Things to keep in mind when baking explainability into your AI systems


M3 ONLINE

Machine learning and AI systems are often described as black boxes: you feed them data, something happens, you get a result. But with ML making its way into sensitive areas such as health and finance, “something” doesn’t quite cut it if you want people to trust your decision-making processes. After all, lives and livelihoods are potentially on the line.

For that reason it comes as no surprise that trust and explainability are common themes as best practice and legislation worldwide seek to constrain AI for the common good. The resulting guidelines are usually pitched at a very high level, though, and do not cater to the needs of practitioners. There are also many myths around AI explainability, perpetuated by the community itself, that are holding us all back: what you can do and what you should do is rarely well defined.

In episode three of the MCubed webcast series, Napier’s Chief Data Scientist Dr Janet Bastiman will therefore take a look at where the legislation is headed, how we got to this point in the first place, and what we can do to make sure our end users trust our systems. After all, end users aren’t the only ones who benefit from transparency: as practitioners we have a lot to gain as well (just think about testing and debugging!).
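
To make that debugging point a little more concrete, here is a minimal sketch of one common explainability technique, permutation importance, using scikit-learn. The choice of library, dataset, and model here is purely illustrative and not taken from the webcast; the episode may well cover different methods. The idea is simple: shuffle one input feature at a time and measure how much the model’s score drops, which gives a rough ranking of the features the model actually relies on.

    # A rough sketch, not production code: probing an opaque model with
    # permutation importance from scikit-learn.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Train a model that is hard to inspect directly.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and record how much the test score drops;
    # a big drop means the model leans heavily on that feature.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)

    # Print the five most influential features.
    ranked = sorted(zip(X.columns, result.importances_mean),
                    key=lambda pair: -pair[1])
    for name, importance in ranked[:5]:
        print(f"{name}: {importance:.3f}")

Output like this won’t satisfy a regulator on its own, but it is exactly the kind of signal that helps you spot a model keying on the wrong feature before it ships.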