Bagging, short for bootstrap aggregating, is a machine learning technique used to improve the accuracy and stability of models. It works by creating multiple subsets of the training data through random sampling with replacement, each typically the same size as the original dataset. Each subset is used to train a separate model, and the final prediction is made by averaging the results (for regression) or by majority voting (for classification) across all the models.
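As a concrete illustration, here is a minimal from-scratch sketch of bagging for classification. It assumes scikit-learn's DecisionTreeClassifier as the base learner and the built-in iris dataset; the number of estimators and the random seed are arbitrary choices for the example, not prescribed values.

```python
# Minimal bagging sketch: bootstrap sampling + majority vote.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)

n_estimators = 25  # illustrative ensemble size
models = []
for _ in range(n_estimators):
    # Bootstrap sample: draw len(X) rows with replacement.
    idx = rng.integers(0, len(X), size=len(X))
    models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

# Aggregate: majority vote across the ensemble's predictions.
all_preds = np.stack([m.predict(X) for m in models])  # (n_estimators, n_samples)
votes = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, all_preds)
print("training accuracy:", (votes == y).mean())
```

For regression, the only change to the aggregation step would be averaging the models' numeric predictions instead of taking a vote.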
This method helps reduce overfitting, which occurs when a model learns noise in the training data rather than the underlying patterns; because each model sees a slightly different sample, their individual errors tend to cancel out when their predictions are aggregated. Bagging is commonly applied in ensemble methods such as Random Forests, which combine bagged decision trees with random feature selection at each split to further decorrelate the trees and improve performance and robustness.
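In practice, libraries provide bagging directly rather than requiring a hand-rolled loop. The sketch below compares scikit-learn's BaggingClassifier and RandomForestClassifier on the same dataset; the hyperparameters shown are illustrative defaults, not tuned values.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Generic bagging wrapped around any base estimator, here a decision tree.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)

# Random Forest: bagged trees plus random feature subsets at each split.
forest = RandomForestClassifier(n_estimators=100, random_state=0)

for name, model in [("bagging", bagging), ("random forest", forest)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```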