simMachines Briefing Note

Document ID: CGBN106 | Abstract: The more we involve AI in our daily lives, the more we need to be able to trust the decisions that autonomous systems make. However, it’s becoming harder and harder to understand how these systems arrive at their decisions. Cognilytica believes that Explainable AI (XAI) is an absolutely necessary part of …


Kyndi Briefing Note

Document ID: CGBN107 | Last Updated: Jan. 24, 2018 | Abstract: Right now, too much of what AI systems do is a “black box”. We have little visibility into how decisions are made, conclusions drawn, objects identified, and more. An emerging area of AI called Explainable AI (XAI) aims to address the black-box decision making …


Cognilytica’s AI Predictions for 2018

“Prediction is very difficult, especially if it’s about the future.” –Niels Bohr, Nobel laureate in Physics Prediction 1: No AI Winter in 2018 The AI train continues to steam along. What, if anything, can stop it? We certainly don’t expect any slowdown in 2018. Will issues of Responsible AI rear their heads? Will new AI …


AI Today Podcast #016: Explainable AI (XAI)

The more we involve AI in our daily lives, the more we need to be able to trust the decisions that autonomous systems make. Right now, too much of what AI systems do is a “black box”. We have little visibility into how decisions are made, conclusions drawn, objects identified, and more. AI black …


AI Today Podcast #015: AI Explainability & More – Interview with Mark van Rijmenam

The more we involve AI in our daily lives, the more we need to be able to trust the decisions that these systems make. Right now, too much of what AI systems do is a “black box”. We have little visibility into how decisions are made, conclusions drawn, objects identified, and more. As the …
