The more we involve AI in our daily lives, the more we need to be able to trust the decisions that these systems make. Right now, too much of what AI systems do is a “black box”. We have little visibility into how decisions are being made, conclusions drawn, objects identified, and more. As the consequences of mistakes become more significant, it becomes more important to have visibility into the inner workings of AI decision-making — in other words, AI Explainability.
In this podcast, we interview AI expert and parallel entrepreneur Mark van Rijmenam, who is CEO and founder of dscvr.it, CEO of Datafloq, a faculty member of the Blockchain Institute, and a PhD candidate. He’s doing research on the topics of AI Explainability and responsible AI, as well as the convergence of Big Data, Blockchain, and AI into what’s becoming known as the Distributed Autonomous Organization (DAO).
Fiverr is a marketplace for creative and digital freelance services. We use Fiverr for quite a few needs at Cognilytica, including podcast editing, transcription, and more. Use promo code ‘AITODAY’ for 15% off your first purchase on Fiverr.com. Offer valid until December 31, 2018.