Many AI projects fall short of expectations due to poor model performance or the unintended consequences of inaccurate AI decisions. What if there were a universal way for MLOps/AIOps teams to evaluate and monitor the performance and behavior of AI models, both before deployment and in production, regardless of the vendor or features used? In this session, we will review the pitfalls of opaque AI models and discover how to evaluate, compare, and monitor performance and behavior across AI models, building greater trust in and explainability of AI.
Databricks is the data and AI company. Thousands of organizations worldwide rely on Databricks’ open and unified platform for data engineering, machine learning and analytics. Founded by the original creators of Apache Spark™, Delta Lake and MLflow, Databricks is on a mission to solve the world’s toughest problems.
DataRobot is the leader in enterprise AI, delivering trusted AI technology and ROI enablement services to global enterprises. DataRobot’s enterprise AI platform democratizes data science with end-to-end automation for building, deploying, and managing machine learning models.