FusedBiasLeakyReLU

class mmcv.ops.FusedBiasLeakyReLU(num_channels: int, negative_slope: float = 0.2, scale: float = 1.4142135623730951)[source]

Fused bias leaky ReLU.

This function was introduced in StyleGAN2: Analyzing and Improving the Image Quality of StyleGAN.

The bias term comes from the convolution operation. In addition, to keep the variance of the feature map and gradients unchanged, the authors adopt a scale in the same spirit as Kaiming initialization, namely \(\sqrt{2 / (1 + \alpha^2)}\) for a leaky ReLU with negative slope \(\alpha\). Since \(\alpha^2\) is negligible (\(0.04\) for the default slope of \(0.2\)), the \(1 + \alpha^2\) factor can be ignored and the final scale is simply \(\sqrt{2}\). Of course, you may supply your own scale.
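For reference, the fused operator is numerically equivalent to the following unfused PyTorch computation. This is a minimal sketch: bias_leakyrelu_reference is an illustrative helper, not part of mmcv, and the actual CUDA kernel fuses these steps into a single pass.

    import torch
    import torch.nn.functional as F

    def bias_leakyrelu_reference(x: torch.Tensor, bias: torch.Tensor,
                                 negative_slope: float = 0.2,
                                 scale: float = 2 ** 0.5) -> torch.Tensor:
        # Broadcast the per-channel bias over every other dimension,
        # e.g. a (N, C, H, W) input gets bias reshaped to (1, C, 1, 1).
        shape = [1, -1] + [1] * (x.dim() - 2)
        out = x + bias.reshape(shape)
        # Leaky ReLU followed by the variance-preserving scale.
        return F.leaky_relu(out, negative_slope=negative_slope) * scale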

TODO: Implement the CPU version.

Parameters
  • num_channels (int) – The channel number of the feature map.

  • negative_slope (float, optional) – Same as nn.LeakyReLU. Defaults to 0.2.

  • scale (float, optional) – A scalar to adjust the variance of the feature map. Defaults to 2**0.5.
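A typical usage sketch (the op currently runs on CUDA tensors only, per the TODO above; the shapes below are illustrative):

    import torch
    from mmcv.ops import FusedBiasLeakyReLU

    act = FusedBiasLeakyReLU(num_channels=64).cuda()
    x = torch.randn(2, 64, 32, 32, device='cuda')
    # Call the module instance rather than act.forward() so that
    # registered hooks are run (see the note under forward below).
    out = act(x)  # same shape as the input: (2, 64, 32, 32)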

forward(input: torch.Tensor) → torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself rather than forward() directly, since the former takes care of running the registered hooks while the latter silently ignores them.