One of the core measurements of model performance, indicating how good an ML model is at predicting a certain category. Specifically, precision is the fraction of instances the model assigned to a class that actually belong to that class: true positives divided by the sum of true positives and false positives. A precision of 1 means that everything the classifier placed in the class truly belongs to it, with no false positives; it does not say how many instances of that class the model failed to find.
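As a small illustration of the formula above, the sketch below computes precision from a set of example labels and predictions. The label values and the helper function name are hypothetical, chosen only to show the true-positive / false-positive arithmetic.

```python
# Minimal sketch: precision = true positives / (true positives + false positives).
# The labels and predictions below are illustrative values, not from the text.

def precision(y_true, y_pred, positive_label=1):
    """Return precision for the given positive class label."""
    true_positives = sum(1 for t, p in zip(y_true, y_pred)
                         if p == positive_label and t == positive_label)
    predicted_positives = sum(1 for p in y_pred if p == positive_label)
    return true_positives / predicted_positives if predicted_positives else 0.0

y_true = [1, 0, 1, 1, 0, 1]   # actual classes
y_pred = [1, 1, 1, 0, 0, 1]   # classes predicted by the model

print(precision(y_true, y_pred))  # 3 TP / (3 TP + 1 FP) = 0.75
```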