mmcv.ops.fused_bias_leakyrelu
- mmcv.ops.fused_bias_leakyrelu(input: torch.Tensor, bias: torch.nn.parameter.Parameter, negative_slope: float = 0.2, scale: float = 1.4142135623730951) → torch.Tensor[source]
Fused bias leaky ReLU function.
This function was introduced in StyleGAN2: Analyzing and Improving the Image Quality of StyleGAN.
The bias term comes from the preceding convolution operation. In addition, to keep the variance of the feature map and gradients unchanged, the activation is multiplied by a scale, analogous to the gain \(\sqrt{2/(1+\alpha^2)}\) used in Kaiming initialization for leaky ReLU. Since \(\alpha^2\) is negligible for the default slope (\(0.2^2 = 0.04\)), the \(1+\alpha^2\) term is ignored and the final scale is simply \(\sqrt{2}\). Of course, you may override it with your own scale.
- Parameters
input (torch.Tensor) – Input feature map.
bias (nn.Parameter) – The bias from convolution operation.
negative_slope (float, optional) – Same as nn.LeakyReLU. Defaults to 0.2.
scale (float, optional) – A scalar to adjust the variance of the feature map. Defaults to 2**0.5.
- Returns
Feature map after non-linear activation.
- Return type
torch.Tensor
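The fused CUDA kernel performs the bias add, leaky ReLU, and variance-restoring scale in one pass. A minimal pure-PyTorch sketch of the equivalent computation (the function name and layout assumption here are illustrative, not part of mmcv's API) might look like:

```python
import torch
import torch.nn.functional as F


def fused_bias_leakyrelu_ref(input, bias, negative_slope=0.2, scale=2 ** 0.5):
    """Unfused reference of the computation; assumes channels on dim 1 (NCHW)."""
    # Broadcast the per-channel bias over the remaining spatial dimensions.
    out = input + bias.reshape(1, -1, *([1] * (input.ndim - 2)))
    # Leaky ReLU followed by the sqrt(2) scale that keeps the variance unchanged.
    return F.leaky_relu(out, negative_slope) * scale


x = torch.randn(2, 8, 4, 4)
b = torch.zeros(8)
y = fused_bias_leakyrelu_ref(x, b)
```

With a zero bias and all-positive input, the result is simply the input times \(\sqrt{2}\), which makes the scaling behavior easy to verify.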