A method for evaluating a classifier model by counting the total numbers of true positives, true negatives, false positives, and false negatives and arranging those counts in a grid (table). From the confusion matrix you can derive key evaluation statistics including accuracy, precision, recall (also called sensitivity), specificity, F1 score, and other measures of model performance. It’s important to determine which measure has the most value for a particular model. Unfortunately there’s no easy way to compare algorithms on a single measure, as there are many tradeoffs!
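
For example, here is a minimal sketch in Python, assuming binary labels and scikit-learn’s `confusion_matrix`; the `y_true`/`y_pred` arrays are hypothetical sample data:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical binary labels: 1 = positive class, 0 = negative class
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

# For binary classification, ravel() unpacks the 2x2 grid as
# true negatives, false positives, false negatives, true positives.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

# Derived statistics (assumes the denominators are nonzero)
accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)            # also called sensitivity
specificity = tn / (tn + fp)
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} specificity={specificity:.2f} f1={f1:.2f}")
```

Which of these numbers matters most depends on the cost of each error type; for instance, a medical screening test may favor recall (missing a positive is costly), while a spam filter may favor precision.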