All models are wrong, and when they are wrong they can cause financial or non-financial harm. Understanding, testing, and managing potential model failures and their unintended consequences is the key focus of model risk management, particularly for mission-critical or regulated applications. This is a challenging task for complex machine learning models, and having an explainable model is a key enabler. Machine learning explainability has become an active area of academic research and an industry in its own right. Despite all the progress that has been made, machine learning explainers remain fraught with weaknesses and complexity. In this talk, I will argue that what we need is an interpretable machine learning model: one that is self-explanatory and inherently interpretable. I will discuss how to turn sophisticated machine learning models, such as neural networks (deep learning), into self-explanatory models.
What we need is interpretable, not explainable, machine learning