L2 Regularization
L2 regularization, also known as ridge regression or weight decay, is a technique used in machine learning to prevent overfitting, which occurs when a model learns noise in the training data instead of the underlying patterns. It adds a penalty to the loss function proportional to the square of the magnitude of the model's coefficients. This encourages the model to keep the coefficients small, leading to simpler models that generalize better to new data.
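As a minimal sketch of the idea, the snippet below adds an L2 penalty to a mean-squared-error loss. The data, the candidate coefficient vector `w`, and the value of lambda are all illustrative assumptions, not from the source:

```python
import numpy as np

# Toy regression data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.array([1.4, -1.9, 0.6])  # candidate coefficients
lam = 0.1                       # regularization parameter lambda (assumed value)

mse = np.mean((X @ w - y) ** 2)      # data-fit term
l2_penalty = lam * np.sum(w ** 2)    # lambda times sum of squared coefficients
loss = mse + l2_penalty              # regularized loss the model minimizes
```

Minimizing `loss` instead of `mse` alone trades a small increase in training error for smaller coefficients.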
The penalty term in L2 regularization is calculated as the sum of the squares of all coefficients multiplied by a regularization parameter, often denoted as lambda (the intercept term is typically excluded from the penalty). By adjusting lambda, practitioners can control the strength of the regularization: lambda of zero recovers the unregularized model, while larger values shrink the coefficients more aggressively, balancing model complexity against fit to the training data.
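The effect of lambda can be seen directly in the closed-form ridge solution for linear regression, w = (XᵀX + λI)⁻¹Xᵀy; the sketch below (data and lambda values are illustrative assumptions) shows coefficients shrinking as lambda grows:

```python
import numpy as np

# Toy data (illustrative only).
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
y = X @ np.array([3.0, -2.0, 1.0, 0.5]) + rng.normal(scale=0.2, size=50)

def ridge_fit(X, y, lam):
    """Closed-form L2-regularized least squares: w = (X^T X + lam*I)^-1 X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

for lam in (0.0, 1.0, 100.0):
    w = ridge_fit(X, y, lam)
    print(f"lambda={lam:>6}: sum of squared coefficients = {np.sum(w**2):.4f}")
```

Running this prints a strictly smaller coefficient norm for each larger lambda, which is exactly the complexity/performance trade-off the parameter controls.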