L2 regularization
L2 regularization is a technique used in machine learning to prevent overfitting, which occurs when a model learns noise in the training data instead of the underlying patterns. It adds a penalty term to the loss function proportional to the sum of the squared model coefficients, scaled by a regularization strength (often written as lambda). This encourages the model to keep the coefficients small, yielding simpler models that generalize better to new data.
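As a minimal sketch of the idea, the snippet below (using hypothetical names, with NumPy) computes a mean squared error loss plus the L2 penalty described above; `lam` is the assumed regularization strength:

```python
import numpy as np

def l2_penalized_mse(X, y, w, lam):
    """Mean squared error plus an L2 penalty on the weights.

    lam scales the sum of squared coefficients; larger values
    push the learned weights toward zero.
    """
    residuals = X @ w - y
    mse = np.mean(residuals ** 2)          # data-fitting term
    penalty = lam * np.sum(w ** 2)         # L2 penalty term
    return mse + penalty

# With lam = 0 the penalty vanishes and this reduces to plain MSE.
X = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([1.0, 2.0])
w = np.array([0.5, -0.5])
loss_plain = l2_penalized_mse(X, y, w, lam=0.0)
loss_reg = l2_penalized_mse(X, y, w, lam=0.1)
```

For the same weights, the penalized loss is always at least as large as the plain loss; minimizing it therefore trades some training fit for smaller weights.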
By incorporating L2 regularization, the model balances fitting the training data against keeping the weights small. This improves performance on unseen data, making it a popular choice in algorithms such as linear regression (where it is known as ridge regression) and neural networks (where it is often called weight decay).
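To illustrate the shrinkage effect in linear regression, the sketch below (hypothetical function name, NumPy only) uses the standard closed-form ridge solution w = (XᵀX + λI)⁻¹Xᵀy and compares the weight norms with and without the penalty:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form L2-regularized (ridge) linear regression:
    solves (X^T X + lam * I) w = X^T y for w."""
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=50)

w_plain = ridge_fit(X, y, lam=0.0)   # ordinary least squares
w_reg = ridge_fit(X, y, lam=10.0)    # L2-regularized fit
# The regularized weight vector has a smaller norm: the penalty
# shrinks the coefficients toward zero.
```

The norm of the solution is non-increasing in the regularization strength, which is exactly the "smaller weights" trade-off described above.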