Feature Scaling
Feature scaling is a data preprocessing technique that standardizes the range of independent variables, or features, in a dataset. This matters because many machine learning algorithms, such as k-nearest neighbors and support vector machines, rely on distances or dot products between feature vectors and are therefore sensitive to the scale of the data. If features have very different scales, the feature with the largest numeric range dominates these computations, effectively drowning out the other features.
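To see why unscaled features can dominate a distance-based algorithm, here is a minimal sketch using NumPy with made-up values (age in years, income in dollars; the specific numbers are illustrative, not from the text):

```python
import numpy as np

# Two hypothetical customers: (age in years, annual income in dollars)
a = np.array([25.0, 50_000.0])
b = np.array([60.0, 52_000.0])

# Euclidean distance on the raw features is dominated by income:
# the income gap (2000) dwarfs the age gap (35), even though a
# 35-year age difference is proportionally much larger.
dist = np.linalg.norm(a - b)
print(dist)
```

The printed distance is roughly 2000.3, almost entirely determined by the income feature; a k-nearest-neighbors model on these raw values would effectively ignore age.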
There are two common methods of feature scaling: min-max scaling and standardization. Min-max scaling rescales each feature to a fixed range, usually [0, 1], while standardization transforms each feature to have a mean of 0 and a standard deviation of 1. Both methods can improve the performance and convergence speed of scale-sensitive machine learning models.
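The two methods can be sketched directly in NumPy; the sample matrix below is a hypothetical dataset chosen for illustration:

```python
import numpy as np

# Hypothetical data: rows are samples, columns are features (age, income)
X = np.array([[25.0, 50_000.0],
              [40.0, 60_000.0],
              [60.0, 52_000.0]])

# Min-max scaling: rescale each feature (column) to the range [0, 1]
X_minmax = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Standardization (z-score): each feature gets mean 0 and std dev 1
X_standard = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_minmax)    # every column now spans exactly [0, 1]
print(X_standard)  # every column now has mean 0 and std 1
```

Note that min-max scaling is sensitive to outliers (a single extreme value compresses the rest of the range), whereas standardization has no fixed bounds; libraries such as scikit-learn provide these as `MinMaxScaler` and `StandardScaler`.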