When it comes to building ML models, you want a model that is simple enough to generalize to a wide range of real-world data on the one hand, but not so simple that it overgeneralizes or underfits the available data on the other. In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer define the terms Overfitting, Underfitting, Bias, Variance, and Bias/Variance Tradeoff, and explain how they relate to AI and why it’s important to know about them.
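The tradeoff described above can be sketched with a small toy experiment (not from the episode; the sine-curve data and the polynomial degrees chosen are illustrative assumptions): an overly simple model underfits and shows high error everywhere (high bias), while an overly complex model memorizes the training points but generalizes poorly (high variance).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: noisy samples of an underlying sine curve.
x_train = np.sort(rng.uniform(0, 1, 20))
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 20)
x_test = np.sort(rng.uniform(0, 1, 20))
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, 20)

def poly_mse(degree):
    """Fit a polynomial of the given degree to the training data
    and return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

# Degree 1 tends to underfit, degree 3 is a reasonable fit,
# and degree 9 tends to overfit these 20 points.
for degree in (1, 3, 9):
    train_err, test_err = poly_mse(degree)
    print(f"degree {degree}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```

Raising the polynomial degree always drives the training error down, but past some point the test error climbs back up; that turning point is the bias/variance tradeoff the hosts discuss.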
Show Notes: