Explain L2 normalization to me
L2 normalization, also known as Euclidean normalization, is the process of rescaling a vector so that its magnitude (L2 norm) equals 1. It is commonly used in machine learning and deep learning to put input or feature vectors on a common scale: after normalization, every vector has unit length, so only its direction carries information.

The L2 norm of a vector x is defined as the square root of the sum of squares of its elements:

||x||_2 = sqrt(x1^2 + x2^2 + ... + xn^2)

L2 normalization of a vector x is performed by dividing each element of the vector by its L2 norm:

x_normalized = x / ||x||_2
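As a minimal sketch of the two formulas above in plain Python (the function names and the small epsilon guard against a zero-length vector are my own choices, not part of any standard API):

```python
import math

def l2_norm(x):
    # ||x||_2 = sqrt(x1^2 + x2^2 + ... + xn^2)
    return math.sqrt(sum(v * v for v in x))

def l2_normalize(x, eps=1e-12):
    # Divide each element by the L2 norm; eps avoids division by zero
    n = l2_norm(x)
    return [v / max(n, eps) for v in x]

v = [3.0, 4.0]
v_hat = l2_normalize(v)  # norm of [3, 4] is 5, so this gives [0.6, 0.8]
```

The normalized vector itself has L2 norm 1, which you can verify with `l2_norm(v_hat)`. In practice you would typically use a library routine such as NumPy's `np.linalg.norm` rather than hand-rolling the loop.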

This normalization removes differences in magnitude between vectors, so that comparisons depend only on direction (as in cosine similarity). It can reduce the impact of unusually large values in the input and improve the behavior of machine learning algorithms that are sensitive to feature scale, such as neural networks. It is also common in computer vision, where features extracted from images are normalized before being fed into machine learning models.