A method for training and validating a machine learning model by splitting a prepared data set into equal-sized chunks. In each iteration, one chunk is held out for validation while the model trains on the remaining chunks. Because every chunk serves as the validation set exactly once, the evaluation does not depend on a single lucky or unlucky split, which helps surface overfitting or underfitting that a fixed train/test split might hide. Common approaches are K-Fold and Nested Cross-Validation.

For K-Fold validation, split the full data set into k groups. For each of the k groups, hold that group out for validation and train on the remaining groups; calculate the error for that iteration, then average the errors across all k iterations to estimate the model's performance (see the sketch below).

Nested Cross-Validation additionally lets us test multiple hyperparameter settings: an outer loop trains and validates the overall algorithm, while an inner loop tests different hyperparameter values on each outer training split (see the second sketch below).
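
A minimal sketch of the K-Fold procedure, assuming Python with scikit-learn; the logistic regression model and the iris data set are illustrative choices, not part of the definition above:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = load_iris(return_X_y=True)

# Split the data set into k = 5 groups.
kf = KFold(n_splits=5, shuffle=True, random_state=0)

errors = []
for train_idx, val_idx in kf.split(X):
    # Train on the other groups, validate on the held-out group.
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_idx], y[train_idx])
    # Error for this fold = 1 - accuracy on the held-out chunk.
    errors.append(1.0 - model.score(X[val_idx], y[val_idx]))

# Average the per-fold errors to estimate overall performance.
print("Per-fold errors:", [round(e, 3) for e in errors])
print(f"Average error:   {np.mean(errors):.3f}")
```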
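
And a sketch of Nested Cross-Validation under the same assumptions, using scikit-learn's `GridSearchCV` as the inner loop (trying hyperparameter values) and `cross_val_score` as the outer loop (validating the whole tune-then-train procedure); the SVM model and its `C` grid are hypothetical examples:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Inner loop: for each outer training split, try each hyperparameter
# setting with its own 3-fold cross-validation and keep the best one.
inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=3)

# Outer loop: 5-fold cross-validation of the overall algorithm,
# including the hyperparameter search, on data the inner loop never saw.
outer_scores = cross_val_score(inner, X, y, cv=5)
print(f"Nested CV accuracy: {outer_scores.mean():.3f}")
```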