What we need is interpretable and not explainable machine learning
Thu. January 28, 2021 @ 09:00 ET

About this Session
All models are wrong, and when they are wrong they create financial or non-financial harm. Understanding, testing, and managing potential model failures and their unintended consequences is the key focus of model risk management, particularly for mission-critical or regulated applications. This is a challenging task for complex machine learning models, and having an explainable model is a key enabler. Machine learning explainability has become an active area of academic research and an industry in its own right. Despite all the progress that has been made, machine learning explainers are still fraught with weaknesses and complexity. In this talk, I will argue that what we need is an interpretable machine learning model: one that is self-explanatory and inherently interpretable. I will discuss how to make sophisticated machine learning models such as neural networks (deep learning) self-explanatory.
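
The claim that a neural network can be self-explanatory has a concrete basis: a ReLU network is exactly piecewise linear, so at any given input the pattern of active neurons determines an exact local linear model whose coefficients can be read directly off the weights, with no approximate post-hoc explainer in between. Below is a minimal NumPy sketch of this idea for a one-hidden-layer network with toy random weights; it illustrates the general property, not the presenter's specific implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU network: f(x) = w2 @ relu(W1 @ x + b1) + b2
# (random weights for illustration only)
d_in, d_hidden = 4, 8
W1 = rng.normal(size=(d_hidden, d_in))
b1 = rng.normal(size=d_hidden)
w2 = rng.normal(size=d_hidden)
b2 = rng.normal()

def forward(x):
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_linear_model(x):
    """Exact local linear model at x: within the activation region
    containing x, f(z) = coef @ z + intercept holds exactly."""
    active = (W1 @ x + b1 > 0).astype(float)  # which ReLUs are on at x
    coef = W1.T @ (w2 * active)               # exact local coefficients
    intercept = (w2 * active) @ b1 + b2
    return coef, intercept

x = rng.normal(size=d_in)
coef, intercept = local_linear_model(x)

# The "explanation" reproduces the network's own output exactly
# (up to floating-point error), unlike an approximate explainer.
print(np.allclose(forward(x), coef @ x + intercept))  # True
```

The same unrolling extends to deeper ReLU networks, where each layer's activation pattern contributes a diagonal mask; interpretation then becomes a matter of inspecting exact coefficients rather than approximating the model from the outside.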
Event Sponsors

Pactera EDGE
Pactera EDGE is a global digital and technology services company. We design, build and optimize human-centric intelligent digital platforms.

SS&C Blue Prism
As the leading provider of Intelligent Automation, SS&C Blue Prism helps the intelligence community accelerate its data-centric mission.

Zorroa
Zorroa’s no-code ML integration platform makes process automation with machine learning APIs from GCP, AWS, and Azure accessible in under an hour. The platform enables media technologists to stand up rapid-cycle experiments and scale their ML projects without code, data prep, or vendor lock-in.