AI systems have the potential to provide great value, but also the potential to cause great harm. Knowing how to build or use AI systems is simply not enough; you need to know how to build, use, and interact with these systems ethically and responsibly. Additionally, you need to understand that Trustworthy AI is a spectrum that addresses various societal, systemic, and technical concerns.
In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer give a high-level overview of the Explainable & Interpretable AI layer. Relying on black-box technology can be dangerous: without understandability we don't have trust, and to trust these systems humans want accountability and explanation. They discuss what Explainable & Interpretable AI is and why it's important for AI systems. They also cover the main elements that need to be addressed in the Explainable & Interpretable AI layer, and the considerations and questions you need to answer as you implement responsible AI.
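The episode itself is a discussion rather than a tutorial, but as a concrete illustration of what "explanation" can mean for a black-box model, here is a minimal sketch using permutation importance, one common model-agnostic explainability technique. It assumes scikit-learn is installed and uses synthetic data; it is an illustrative example, not a method from the episode.

```python
# Illustrative sketch only: permutation importance as one way to peek inside
# a black-box model. Assumes scikit-learn; data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic tabular data standing in for a real decision-making dataset.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=3, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# A random forest is a typical "black box": accurate, but not self-explaining.
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance asks: how much does performance drop when we shuffle
# one feature? Large drops flag the features driving the model's decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Because permutation importance only needs a fitted model and a scoring function, it works with any black box; richer tools in the same spirit (such as SHAP or LIME) provide per-prediction explanations.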
Show Notes:
- The Steps for a Machine Learning Project
- Trustworthy AI Group Workshop
- FREE Intro to CPMAI mini course
- CPMAI Training and Certification
- AI Today Podcast: Trustworthy AI Series: The Layers of Trustworthy AI
- AI Today Podcast: Trustworthy AI Series: Why are trustworthy, ethical and responsible AI systems necessary?