Platt scaling
Platt scaling is a method for converting the raw output scores of a classifier into probability estimates. It is particularly useful for models that do not natively produce probabilities, such as support vector machines. The method fits a logistic sigmoid to the scores: P(y=1 | f) = 1 / (1 + exp(A·f + B)), where f is the model's raw score and the parameters A and B are learned by maximum likelihood, so that the transformed scores better reflect the likelihood of the positive class.
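Once A and B have been fitted, applying the calibration is a one-line sigmoid transform. A minimal sketch (the parameter values A=-1.5 and B=0.1 here are illustrative, not from any real fit):

```python
import numpy as np

def platt_probability(score, A, B):
    # Map a raw decision score f to P(y=1 | f) = 1 / (1 + exp(A*f + B)).
    # A is typically negative, so larger scores give probabilities closer to 1.
    return 1.0 / (1.0 + np.exp(A * score + B))

# Illustrative parameters (made up for this sketch):
p = platt_probability(2.0, A=-1.5, B=0.1)
```

With these values a strongly positive score maps to a probability well above 0.5, while a score of zero would map to roughly 1 / (1 + exp(B)).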
The sigmoid parameters are fitted on a held-out validation set (or via cross-validation), where the model's scores are paired with the true binary labels; fitting on the training data itself tends to yield biased, overconfident probabilities. The resulting calibrated probabilities can then be used for thresholding and decision-making, improving the interpretability and usefulness of the model's outputs in classification tasks.
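The procedure above can be sketched with scikit-learn by fitting a one-dimensional logistic regression on held-out SVM scores. Note this is a simplified illustration: scikit-learn's `LogisticRegression` applies L2 regularization by default, which differs slightly from Platt's unregularized maximum-likelihood fit, and the dataset here is synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

# Synthetic binary classification data for illustration.
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# A linear SVM produces raw decision scores, not probabilities.
svm = LinearSVC(max_iter=5000, random_state=0).fit(X_train, y_train)

# Platt scaling: fit a 1-D logistic regression mapping held-out
# scores to the true labels.
scores_val = svm.decision_function(X_val).reshape(-1, 1)
calibrator = LogisticRegression().fit(scores_val, y_val)

# Calibrated probability of the positive class for new scores.
probs = calibrator.predict_proba(scores_val)[:, 1]
```

In practice, scikit-learn packages this pattern as `CalibratedClassifierCV(..., method="sigmoid")`, which handles the cross-validated fitting internally.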