Cohen's Kappa
Cohen's Kappa is a statistical measure used to assess the agreement between two raters or observers who classify items into categories. It accounts for the possibility of agreement occurring by chance, providing a more accurate representation of inter-rater reliability than simple percentage agreement. The value of Cohen's Kappa ranges from -1 to 1, where 1 indicates perfect agreement, 0 indicates no agreement beyond chance, and negative values suggest less agreement than expected by chance.
To calculate Cohen's Kappa, the two raters' classifications are first tabulated in a confusion matrix showing the counts for each pair of assigned categories. The statistic is then computed as kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance, derived from each rater's marginal category distribution. This correction for chance agreement makes Cohen's Kappa a valuable tool in fields like psychology, medicine, and social sciences for evaluating the consistency of subjective assessments.
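As a minimal sketch of this calculation, the following Python function computes kappa directly from two parallel lists of labels; the function name, variable names, and example data are illustrative rather than part of any standard library.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Compute Cohen's Kappa from two parallel lists of category labels."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must rate the same items")
    n = len(rater_a)

    # Observed agreement: proportion of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected chance agreement, from each rater's marginal category frequencies.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(
        (counts_a[cat] / n) * (counts_b[cat] / n)
        for cat in set(rater_a) | set(rater_b)
    )

    # Kappa = (observed - expected) / (1 - expected); undefined when p_e == 1.
    return (p_o - p_e) / (1 - p_e)

# Example: two raters classifying ten items as "yes" or "no".
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 3))  # 0.583: raw agreement of 0.8 corrected for chance
```

In this example the raters agree on 8 of 10 items (p_o = 0.8), but because both assign "yes" 60% of the time, chance alone would produce p_e = 0.52, so the chance-corrected agreement drops to about 0.58.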