Reducing the number of variables or identified features in input training data can be important for several reasons, including making machine learning models faster to train and more accurate. But what techniques exist for doing this? In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer define the terms Feature Reduction, Principal Component Analysis (PCA), and t-SNE, explain how they relate to AI, and discuss why it's important to know about them.
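As a brief illustration of these terms (a sketch added here, not something covered in code form in the episode), the snippet below assumes scikit-learn is installed and uses its PCA and t-SNE implementations to reduce a small four-feature dataset down to two dimensions:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Load a small example dataset: 150 samples, 4 features
X, y = load_iris(return_X_y=True)

# PCA: linear projection of the 4 original features onto 2 principal components
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)
print("Explained variance ratio:", pca.explained_variance_ratio_)

# t-SNE: non-linear embedding into 2 dimensions, mainly used for visualization
tsne = TSNE(n_components=2, perplexity=30, random_state=0)
X_tsne = tsne.fit_transform(X)

print("Original shape:", X.shape)          # (150, 4)
print("PCA-reduced shape:", X_pca.shape)   # (150, 2)
print("t-SNE shape:", X_tsne.shape)        # (150, 2)
```

In this sketch, PCA produces components that can be reused to transform new data, while t-SNE produces a one-off embedding of the given samples, which is one reason PCA is more common for feature reduction before model training and t-SNE is more common for visualization.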
Show Notes:
- FREE Intro to CPMAI mini course
- CPMAI Training and Certification
- AI Glossary
- AI Glossary Series – Machine Learning, Algorithm, Model
- Glossary Series: Machine Learning Approaches: Supervised Learning, Unsupervised Learning, Reinforcement Learning
- Glossary Series: Classification & Classifier, Binary Classifier, Multiclass Classifier, Decision Boundary
- Glossary Series: Clustering, Cluster Analysis, K-Means, Gaussian Mixture Model
- Glossary Series: Dimension, Curse of Dimensionality, Dimensionality Reduction
- Glossary Series: (Artificial) Neural Networks, Node (Neuron), Layer
- Glossary Series: Feature, Feature Engineering