
mmcv.ops.fused_bias_leakyrelu

mmcv.ops.fused_bias_leakyrelu(input: torch.Tensor, bias: torch.nn.parameter.Parameter, negative_slope: float = 0.2, scale: float = 1.4142135623730951) → torch.Tensor

Fused bias leaky ReLU function.

This function was introduced in StyleGAN2: Analyzing and Improving the Image Quality of StyleGAN.

The bias term comes from the preceding convolution operation. In addition, to keep the variance of the feature map and of the gradients unchanged, a scale is applied in the spirit of Kaiming initialization, whose gain for leaky ReLU is \(\sqrt{2/(1+\alpha^2)}\). Since \(\alpha^2\) is small (\(0.04\) for \(\alpha = 0.2\)), the \(1+\alpha^2\) factor is close to 1 and can be ignored, so the final scale is simply \(\sqrt{2}\). You may, of course, supply your own scale.

Parameters
  • input (torch.Tensor) – Input feature map.

  • bias (nn.Parameter) – The bias from convolution operation.

  • negative_slope (float, optional) – Same as nn.LeakyReLU. Defaults to 0.2.

  • scale (float, optional) – A scalar to adjust the variance of the feature map. Defaults to 2**0.5.

Returns

Feature map after non-linear activation.

Return type

torch.Tensor
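The fused CUDA kernel is numerically equivalent to adding the per-channel bias, applying leaky ReLU, and rescaling by \(\sqrt{2}\). The sketch below is an unofficial NumPy reference (the function name `fused_bias_leakyrelu_ref` is my own, not part of mmcv) that illustrates the computation, assuming a channels-second layout (e.g. NCHW) to match how the bias is broadcast:

```python
import numpy as np

def fused_bias_leakyrelu_ref(x, bias, negative_slope=0.2, scale=2 ** 0.5):
    """Unofficial reference for mmcv.ops.fused_bias_leakyrelu (not the CUDA op).

    x    : array of shape (N, C, ...), e.g. NCHW feature map
    bias : array of shape (C,), the bias from the convolution
    """
    # Broadcast the per-channel bias over axis 1 (the channel axis).
    x = x + bias.reshape(1, -1, *([1] * (x.ndim - 2)))
    # Leaky ReLU with the given negative slope.
    x = np.where(x >= 0, x, x * negative_slope)
    # Rescale to keep the variance of activations/gradients unchanged.
    return x * scale

# Example: a (1, 2, 1, 1) feature map with a per-channel bias.
x = np.array([[[[1.0]], [[-1.0]]]])
bias = np.array([0.5, -0.5])
out = fused_bias_leakyrelu_ref(x, bias)
# channel 0: (1 + 0.5) * sqrt(2); channel 1: (-1 - 0.5) * 0.2 * sqrt(2)
```

The real op fuses these three steps into one CUDA kernel, avoiding the extra memory traffic of materializing the intermediate tensors.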
