Kullback-Leibler Divergence
Kullback-Leibler Divergence (often abbreviated as KL Divergence) is a statistical measure that quantifies how one probability distribution differs from a reference probability distribution. It is commonly used in fields like information theory and machine learning to assess the difference between two distributions, often denoted as P and Q. KL Divergence is always non-negative, and it equals zero exactly when the two distributions are identical.
For discrete distributions, the formula for KL Divergence is D_KL(P || Q) = sum over x of P(x) * log(P(x) / Q(x)), that is, the sum of the probabilities of P multiplied by the logarithm of the ratio of P to Q. This measure is not symmetric: the divergence from P to Q is generally not the same as the divergence from Q to P. As a result, KL Divergence is useful for tasks like model evaluation and optimization, where quantifying the difference between distributions is crucial. A short computational sketch follows.
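As an illustration of the discrete formula, here is a minimal sketch in Python. The function name kl_divergence and the example distributions p and q are illustrative choices, not taken from the text; the sketch assumes P and Q are probability vectors over the same outcomes and that Q is nonzero wherever P is nonzero.

```python
import numpy as np

def kl_divergence(p, q):
    """Discrete KL Divergence D_KL(P || Q) = sum_x P(x) * log(P(x) / Q(x)).

    Assumes p and q are probability vectors over the same outcomes,
    with q > 0 wherever p > 0.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # terms with P(x) = 0 contribute nothing to the sum
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Two example distributions over three outcomes (illustrative values)
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

print(kl_divergence(p, q))  # small positive value
print(kl_divergence(q, p))  # a different value: the measure is not symmetric
print(kl_divergence(p, p))  # 0.0: identical distributions give zero divergence
```

The three printed values mirror the properties described above: the divergence is non-negative, it is zero only when the distributions coincide, and swapping P and Q changes the result.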