Implement Batch Normalization
Given a two-dimensional tensor (matrix) `data`, where each row represents a sample and each column represents a feature, and a small constant `epsilon`, implement batch normalization without using built-in functions specifically designed for batch normalization. Normalize each feature across the batch (i.e., across rows) and return the normalized data.
Note: By default, torch.var() applies Bessel's correction, using N-1 instead of N as the denominator. The 'Source of Bias' section of the Wikipedia article on Bessel's correction (https://en.wikipedia.org/wiki/Bessel%27s_correction) gives a good example for building intuition about this.
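As a quick illustration of that default (a small snippet added here for intuition, not part of the original statement; `unbiased=False` switches torch.var to the population variance):

```python
import torch

x = torch.tensor([1.0, 3.0, 5.0])    # deviations from the mean (3.0) are -2, 0, 2
print(torch.var(x))                   # tensor(4.0000): sum of squared deviations / (N - 1) = 8 / 2
print(torch.var(x, unbiased=False))   # tensor(2.6667): sum of squared deviations / N       = 8 / 3
```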
The normalization is:

$$\hat{X} = \frac{X - \mu}{\sqrt{\sigma^2 + \epsilon}}$$

Where:
- $X$: Input data matrix.
- $\mu$: Mean of the data across the batch dimension.
- $\sigma^2$: Variance of the data across the batch dimension.
- $\epsilon$: A small constant added for numerical stability.
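A minimal sketch of one way to implement this, assuming the input is a PyTorch tensor; the function name `batch_normalize` is an illustrative choice, not prescribed by the problem:

```python
import torch

def batch_normalize(data: torch.Tensor, epsilon: float) -> torch.Tensor:
    """Normalize each feature (column) across the batch (rows)."""
    # Per-feature mean over the batch dimension (dim=0).
    mean = data.mean(dim=0, keepdim=True)
    # Per-feature variance; torch.var defaults to the unbiased (N - 1) estimate,
    # which matches the expected outputs in the examples below.
    var = data.var(dim=0, keepdim=True)
    # (X - mu) / sqrt(sigma^2 + epsilon)
    return (data - mean) / torch.sqrt(var + epsilon)
```

Calling it on the first example, `batch_normalize(torch.tensor([[1.0, 2], [3, 4], [5, 6]]), 0.00001)`, yields values that round to the expected output.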
Examples:

Input:
data = [[1.0, 2], [3, 4], [5, 6]], epsilon = 0.00001
Output:
[[-1, -1], [0, 0], [1, 1]]

Input:
data = [[10.0, 4], [4, 2], [6, 0]], epsilon = 0.00001
Output:
[[1.0911, 1.0000], [-0.8729, 0.0000], [-0.2182, -1.0000]]
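An informal sanity check on the first example (a walkthrough added here, not part of the original statement): the first feature column is $(1, 3, 5)$, so $\mu = 3$ and, with Bessel's correction, $\sigma^2 = \frac{(1-3)^2 + (3-3)^2 + (5-3)^2}{3 - 1} = 4$; dividing the deviations $(-2, 0, 2)$ by $\sqrt{4 + 0.00001} \approx 2$ gives approximately $(-1, 0, 1)$, matching the first column of the expected output.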