Implementing Custom ReLU Activation Function

Difficulty: medium

In this exercise, you are required to implement a custom ReLU (Rectified Linear Unit) activation function using PyTorch's autograd functionality. You need to define both the forward and backward passes for the ReLU operation.

The ReLU activation function outputs the input directly if it is positive; otherwise, it outputs zero. Its derivative is 1 for positive inputs and 0 elsewhere (it is also defined as 0 at x = 0, matching the PyTorch implementation). You need to implement this behavior in both the forward and backward methods of CustomReLUFunction. To read more about the ReLU gradient and its value at 0, see: https://www.quora.com/How-do-we-compute-the-gradient-of-a-ReLU-for-backpropagation
ReLU(x) = \max(0, x)
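
The gradient used in the backward pass, as described above, can be written as:

ReLU'(x) = \begin{cases} 1 & x > 0 \\ 0 & x \le 0 \end{cases}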

Examples:

Input:             [-1.0, -0.5, 1.0]
Forward output:    [ 0.0,  0.0, 1.0]
Backward gradient: [ 0.0,  0.0, 1.0]
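
One possible solution is sketched below as a minimal reference, assuming the exercise expects a subclass of torch.autograd.Function named CustomReLUFunction (the class name comes from the prompt; the usage code at the end is illustrative).

import torch

class CustomReLUFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        # Save the input so the backward pass can tell where x > 0.
        ctx.save_for_backward(input)
        return input.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        (input,) = ctx.saved_tensors
        # Pass the upstream gradient through where input > 0 and zero it
        # elsewhere, including at exactly 0, per the PyTorch convention.
        grad_input = grad_output.clone()
        grad_input[input <= 0] = 0
        return grad_input

# Usage matching the example above:
x = torch.tensor([-1.0, -0.5, 1.0], requires_grad=True)
y = CustomReLUFunction.apply(x)
y.backward(torch.ones_like(y))
print(y)       # tensor([0., 0., 1.], ...)
print(x.grad)  # tensor([0., 0., 1.])

Calling the function through CustomReLUFunction.apply (rather than instantiating the class) is what registers the custom backward with autograd.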
