The more we involve AI in our daily lives, the more we need to be able to trust the decisions that autonomous systems make. Right now, too much of what AI systems do is a “black box”. We have little visibility into how decisions are made, conclusions drawn, objects identified, and more. AI black boxes are concerning when the decisions they make impact people personally. Their lack of transparency keeps decision-making opaque, raising issues of trust and accountability.
As the consequences of mistakes and high-stakes decisions become more significant, it becomes more important to have visibility into the inner workings of AI decision-making — in other words, Explainable AI (XAI).
In this podcast, we explore the topic of XAI, the research being done on it now, and the issues enterprises and vendors need to consider as they increasingly deploy their AI systems in areas requiring oversight, trust, and predictability.
Fiverr is a marketplace for creative and digital freelance services. We use Fiverr for quite a few needs at Cognilytica, including podcast editing, transcription, and more. Use promo code ‘AITODAY’ for 15% off your first purchase on Fiverr.com. Offer valid until December 31, 2018.