The ability to provide a reasonable understanding of how an algorithm arrived at its result, even when the underlying machine learning model is not itself explainable. Interpretable systems give users insight into the cause-and-effect relationship behind algorithmic decisions.
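As a minimal sketch of this idea, the fitted weights of a linear model directly expose a cause-and-effect relationship between each input and the prediction, even without access to the model's internals at decision time. The feature names and synthetic data below are hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical data: two features with known effects on the outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

# Ordinary least squares: coef[i] estimates the effect of feature i on y,
# so each weight is a human-readable explanation of the model's decision.
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
for name, c in zip(["feature_a", "feature_b", "intercept"], coef):
    print(f"{name}: {c:+.2f}")
```

A reader can inspect the printed weights (roughly +3 and -2 here) and see which inputs drive the result and in which direction; a black-box model offers no such direct reading.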