Cross-Entropy
Cross-Entropy is a measure from information theory that quantifies how well one probability distribution approximates another: it is the average number of bits (or nats) needed to encode outcomes drawn from the true distribution using a code optimized for the predicted distribution. In machine learning it is commonly used to evaluate how closely a model's predicted probability distribution aligns with the actual distribution of the data. Lower cross-entropy values indicate better performance, since they mean the predicted probabilities place more mass on the true labels.
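For discrete distributions, with p the true distribution and q the predicted distribution, the standard definition is

H(p, q) = -\sum_{x} p(x) \log q(x)

This equals the entropy of p plus the KL divergence from p to q, so it reaches its minimum exactly when q matches p; that is why lower values signal better predictions.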
In the context of classification tasks, Cross-Entropy is typically used as the loss function. For each example, the loss is the negative logarithm of the probability the model assigns to the correct class, so confident wrong predictions are penalized heavily while confident correct predictions incur almost no penalty. Minimizing this loss during training guides the optimization process and improves the model's accuracy in predicting the correct classes for given inputs.
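As a minimal sketch of how this loss is computed for a batch of predictions (using NumPy, with hypothetical example values), the per-example loss is the negative log of the predicted probability of the true class, averaged over the batch:

```python
import numpy as np

def cross_entropy_loss(probs, labels):
    """Mean cross-entropy over a batch.

    probs: (n_samples, n_classes) array of predicted probabilities (rows sum to 1).
    labels: (n_samples,) array of integer class indices.
    """
    eps = 1e-12  # avoid log(0) for zero-probability predictions
    # Probability the model assigned to the true class of each sample.
    true_class_probs = probs[np.arange(len(labels)), labels]
    return -np.mean(np.log(true_class_probs + eps))

# Hypothetical predictions for 3 samples over 3 classes.
probs = np.array([
    [0.7, 0.2, 0.1],   # confident and correct -> small penalty
    [0.1, 0.8, 0.1],   # confident and correct -> small penalty
    [0.6, 0.3, 0.1],   # wrong, true class got only 0.1 -> large penalty
])
labels = np.array([0, 1, 2])
print(cross_entropy_loss(probs, labels))  # ≈ (0.357 + 0.223 + 2.303) / 3 ≈ 0.96
```

The third example illustrates the guiding effect described above: assigning low probability to the correct class dominates the average loss, so gradient-based training is pushed hardest to fix exactly those predictions.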