Not all algorithms are explainable. Does that mean it's okay to provide no explanation of how your AI system reached its decision if you're using one of those "black box" algorithms? The answer should obviously be no. So what do you do, then, when building Ethical and Responsible AI systems to address this challenge of explainable and interpretable AI? In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer discuss issues related to explainable and interpretable AI, especially in the context of your Ethical and Responsible AI framework development.