Model scores don't always tell the whole story. The outputs of a machine learning model are much easier to interpret when its scores are well-calibrated probabilities, that is, when the scores it produces match the actual frequencies of the outcomes it predicts. A model with this property is said to be well-calibrated.
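As a quick illustration (not from the full post), here is a minimal sketch of how one might check calibration with scikit-learn's `calibration_curve`; the dataset and model are placeholders chosen only to make the example self-contained:

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative synthetic data and a simple classifier.
X, y = make_classification(n_samples=5000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]

# For a well-calibrated model, the observed fraction of positives in each bin
# (prob_true) tracks the mean predicted probability in that bin (prob_pred).
prob_true, prob_pred = calibration_curve(y_test, probs, n_bins=10)
for p_pred, p_true in zip(prob_pred, prob_true):
    print(f"predicted ~{p_pred:.2f}  observed {p_true:.2f}")
```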
Read More ›