What we need is interpretable and not explainable machine learning

Thu. January 28, 2021 @ 09:00 ET

All models are wrong, and when they are wrong they create financial or non-financial harm. Understanding, testing, and managing potential model failures and their unintended consequences is the key focus of model risk management, particularly for mission-critical or regulated applications. This is a challenging task for complex machine learning models, and having an explainable model is a key enabler. Machine learning explainability has become an active area of academic research and an industry in its own right. Despite all the progress that has been made, machine learning explainers remain fraught with weaknesses and complexity. In this talk, I will argue that what we need is an interpretable machine learning model: one that is self-explanatory and inherently interpretable. I will discuss how to make sophisticated machine learning models, such as neural networks (deep learning), self-explanatory.
