How can AI be trustworthy?
AI systems have the potential to provide great value, but also the potential to cause great harm. Knowing how to build or use AI systems is simply not enough for organizations. You need to know how to build, use, and interact with these systems ethically and responsibly. Additionally, you need to understand that Trustworthy AI is a spectrum that addresses societal, systemic, and technical concerns.
In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer revisit, at a high level, the five layers of Trustworthy AI and what it takes to have responsible and ethical AI systems.
What are the five pillars of trust?
The five layers are: Ethical AI, Responsible AI, Transparent AI, Governed AI, and Interpretable & Explainable AI. Each layer is explained in this episode, including the various areas you need to address within it.
What are the foundations of trustworthy AI?
Artificial Intelligence is a pivotal technology reshaping industries and redefining the way we work and live. For organizations that want to use AI, understanding the foundations of trustworthy AI is paramount.
This also means ethical and responsible AI is not just a policy statement or a press release. There are real, substantial liabilities and risks to untrustworthy, unethical, and irresponsible AI. Organizations need to adopt a framework that incorporates the five layers of Trustworthy AI outlined in this podcast.