One of the biggest challenges with AI is making sense of the decisions made by black-box machine learning algorithms. In this episode of the AI Today podcast, hosts Kathleen Walch and Ronald Schmelzer interview Steve Eglash, Executive Director of Strategic Research Initiatives in the Computer Science Department at Stanford University, to discuss Explainable AI, Responsible AI, and how to make AI systems behave in a more testable, provable, and error-free way.
- Research Papers referenced in the podcast:
- Katz et al. “Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks”
- Koh and Liang. “Understanding Black-box Predictions via Influence Functions”
- Zou and Schiebinger. “Design AI So That It’s Fair”
- Bolukbasi et al. “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings”
- Hashimoto et al. “Fairness Without Demographics in Repeated Loss Minimization”
- Kim, Reingold, and Rothblum. “Fairness Through Computationally-Bounded Awareness”
- Ro Khanna. “Spread the Digital Wealth”