AI system transparency is about how we manage and run our AI systems. It needs to address how you plan to provide visibility into how AI systems are created and applied. In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer discuss the Transparent Layer of AI as part of Cognilytica’s Trustworthy AI Framework.
What does AI transparency mean?
It’s one thing to hear people talk about transparency, but what does it really mean in the context of AI systems? AI system transparency provides visibility into everything that went into building an AI system, so users can understand the full context of how it is built and used. After all, you can’t ask people to trust something when they have no idea how it was built or what actually went into training it. You don’t want to invest significant time, money, and resources in a solution only to have people distrust the system and refuse to use it.
What is a transparent AI system?
A core theme of this podcast episode is what needs to be addressed in the Transparent Layer of Cognilytica’s Trustworthy AI Framework. This includes AI system transparency, bias measurement and mitigation, open systems, and disclosure and consent.
Show Notes
- The Layers of Trustworthy AI
- Free Intro to Trustworthy AI
- Trustworthy AI Framework Training & Certification
- FREE Intro to CPMAI mini course
- CPMAI Training and Certification
- AI Today Podcast: Trustworthy AI Series: Responsible AI
- AI Today Podcast: Trustworthy AI Series: The Layers of Trustworthy AI
- Trustworthy AI Series: Responsible AI Concepts [AI Today Podcast]