Explainable AI (XAI) refers to AI systems that can provide understandable explanations of how outcomes are derived from input data. Some machine learning algorithms, such as decision trees, are inherently explainable, while others, such as neural networks, are difficult to interpret and are therefore considered “black boxes”. The goal of XAI is to provide verifiable explanations of how machine learning systems, especially the so-called black-box models, make decisions, and to keep humans in the loop.
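To illustrate why decision trees are considered inherently explainable, the sketch below walks a small hand-built tree and reads the prediction back as an explicit rule path, which is exactly the kind of human-readable explanation a black-box model cannot directly provide. The tree structure, feature names, and loan-approval scenario are hypothetical examples, not taken from any particular system.

```python
def predict_with_explanation(tree, sample):
    """Walk a decision tree, collecting the rule applied at each node.

    Internal nodes are dicts with 'feature', 'threshold', 'left', 'right';
    leaves are dicts with a 'label'. (Hypothetical representation.)
    """
    path = []
    node = tree
    while "label" not in node:
        feature, threshold = node["feature"], node["threshold"]
        value = sample[feature]
        if value <= threshold:
            path.append(f"{feature} = {value} <= {threshold}")
            node = node["left"]
        else:
            path.append(f"{feature} = {value} > {threshold}")
            node = node["right"]
    return node["label"], path

# Hypothetical loan-approval tree.
tree = {
    "feature": "income", "threshold": 50000,
    "left": {
        "feature": "debt_ratio", "threshold": 0.3,
        "left": {"label": "approve"},
        "right": {"label": "deny"},
    },
    "right": {"label": "approve"},
}

label, path = predict_with_explanation(tree, {"income": 40000, "debt_ratio": 0.5})
print(label)                  # deny
print(" AND ".join(path))     # income = 40000 <= 50000 AND debt_ratio = 0.5 > 0.3
```

The returned `path` is a complete, verifiable justification of the decision: a human reviewer can check each condition against the input, which is the property that a neural network's internal weights do not offer directly.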