Implement Batch Normalization

Difficulty: medium

Given a two-dimensional tensor (matrix) `data` (where each row represents a sample and each column represents a feature) and epsilon, implement batch normalization without using built-in functions specifically designed for batch normalization. Normalize across the batch (i.e., across rows). Return the normalized data.

Note: By default, `torch.var()` applies Bessel's correction, dividing by N-1 instead of N; the expected outputs below assume this unbiased variance. The 'Source of Bias' section of the Wikipedia article on Bessel's correction ( https://en.wikipedia.org/wiki/Bessel%27s_correction ) gives a good worked example for building intuition.

$$X_{norm} = \frac{X - \mu}{\sqrt{\sigma^2 + \epsilon}}$$
Where:
  • $X$: Input data matrix.
  • $\mu$: Mean of the data across the batch dimension.
  • $\sigma^2$: Variance of the data across the batch dimension.
  • $\epsilon$: A small constant added for numerical stability.
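The formula above can be sketched in pure Python without any batch-norm built-ins (a minimal sketch, assuming the input is a list of lists and using the unbiased N-1 variance that `torch.var()` defaults to; the function name `batch_norm` is just a placeholder):

```python
import math

def batch_norm(data, epsilon):
    """Normalize each column (feature) across the batch (rows)."""
    n = len(data)     # number of samples (rows)
    m = len(data[0])  # number of features (columns)
    # Per-feature mean across the batch
    means = [sum(row[j] for row in data) / n for j in range(m)]
    # Per-feature unbiased variance (N-1 denominator, matching torch.var's default)
    variances = [sum((row[j] - means[j]) ** 2 for row in data) / (n - 1)
                 for j in range(m)]
    # Subtract the mean and divide by sqrt(variance + epsilon), element-wise
    return [[(row[j] - means[j]) / math.sqrt(variances[j] + epsilon)
             for j in range(m)] for row in data]
```

For instance, `batch_norm([[1.0, 2], [3, 4], [5, 6]], 0.00001)` yields values very close to `[[-1, -1], [0, 0], [1, 1]]`, in line with the first example below.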

Examples:

Example 1:
Input:
data = [[1.0, 2], [3, 4], [5, 6]]
epsilon = 0.00001
Output:
[[-1, -1], [0, 0], [1, 1]]

Example 2:
Input:
data = [[10.0, 4], [4, 2], [6, 0]]
epsilon = 0.00001
Output:
[[1.0911, 1.0000], [-0.8729, 0.0000], [-0.2182, -1.0000]]
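As a sanity check, the first feature column of the second example can be normalized by hand (a quick sketch; it uses the N-1 variance mentioned in the note above):

```python
import math

# First column of the second example: samples 10, 4, 6
col = [10.0, 4.0, 6.0]
mu = sum(col) / len(col)                                # 20/3 ≈ 6.6667
var = sum((x - mu) ** 2 for x in col) / (len(col) - 1)  # 28/3 ≈ 9.3333 (N-1 denominator)
normalized = [(x - mu) / math.sqrt(var + 0.00001) for x in col]
# ≈ [1.0911, -0.8729, -0.2182], matching the first column of the expected output
```

Dividing by N instead of N-1 would give ≈1.34 for the first entry, so the expected outputs only reproduce with the unbiased variance.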

2024 © TensorGym