Cohen's kappa
Cohen's kappa is a statistical measure used to assess the agreement between two raters or observers when classifying items into categories. It accounts for the possibility of agreement occurring by chance, providing a more accurate measure of inter-rater agreement than raw percent agreement.
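Formally, kappa is defined in terms of the observed agreement and the agreement expected by chance:

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

where $p_o$ is the proportion of items on which the two raters agree, and $p_e$ is the proportion of agreement expected by chance, computed from each rater's marginal category frequencies.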
The value of Cohen's kappa ranges from -1 to 1, where 1 indicates perfect agreement, 0 indicates no agreement beyond chance, and negative values suggest less agreement than expected by chance. This measure is commonly used in fields like psychology, medicine, and social sciences to evaluate the reliability of assessments and classifications.
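As a concrete illustration of the definition above, here is a minimal Python sketch that computes kappa from two lists of category labels. The function name cohen_kappa and the example data are illustrative, not from any particular library.

from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length sequences of category labels."""
    n = len(rater_a)
    # Observed agreement p_o: fraction of items where both raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement p_e: for each category, the product of the two
    # raters' marginal probabilities of choosing it, summed over categories.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Example: two raters classify ten items as "yes" or "no".
a = ["yes", "yes", "no", "yes", "no", "yes", "no", "no", "yes", "yes"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
print(cohen_kappa(a, b))  # p_o = 0.80, p_e = 0.52, kappa ~= 0.58

In practice, an established implementation such as sklearn.metrics.cohen_kappa_score can be used instead of hand-rolled code.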