FFDNet#

class deepinv.models.FFDNet(n_conv_layers=15, nf=64, img_channels=1, residual_denoising=False, norm='batch_norm', orthogonal_init=True, last_conv_bias=False, pretrained=None, device='cpu')[source]#

Bases: Denoiser

FFDNet denoiser network.

The network architecture is based on the paper Zhang et al. [1] and consists of a PixelUnshuffle downsampling operation, a series of 3x3 convolutional layers (similar to DnCNN), followed by a PixelShuffle upsampling operation that restores the original shape.

The network takes the noise level of the input image into account: the noise level is encoded as an additional input channel (a noise level map).
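The overall pipeline described above can be sketched as follows. This is a minimal torch-only illustration of the shapes involved, not the actual deepinv implementation: the single untrained convolution stands in for the full stack of 3x3 convolutional layers, and `ffdnet_like_forward` is a hypothetical name.

```python
import torch
import torch.nn.functional as F

def ffdnet_like_forward(x, sigma):
    # PixelUnshuffle with factor 2: (B, C, H, W) -> (B, 4C, H/2, W/2)
    z = F.pixel_unshuffle(x, 2)
    # Encode the noise level as one extra constant channel (the noise level map)
    b, _, h, w = z.shape
    noise_map = torch.full((b, 1, h, w), float(sigma))
    z = torch.cat([z, noise_map], dim=1)
    # A single untrained 3x3 conv stands in for the full convolutional stack,
    # mapping back to 4*C channels so the upsampling restores the input shape
    out = torch.nn.Conv2d(z.shape[1], 4 * x.shape[1], 3, padding=1)(z)
    # PixelShuffle with factor 2: back to (B, C, H, W)
    return F.pixel_shuffle(out, 2)

x = torch.randn(1, 1, 64, 64)
y = ffdnet_like_forward(x, sigma=0.05)
# y has the same shape as x
```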

Parameters:
  • n_conv_layers (int) – Number of convolutional layers used. Default: 15

  • nf (int) – Number of channels per convolutional layer. Default: 64

  • img_channels (int) – Number of channels of your input image. Default: 1 (greyscale)

  • residual_denoising (bool) – Whether to use a residual connection between the input image and the network output. Default: False

  • norm (str) – Normalization to use in the convolutional layers. Choose from instance_norm, batch_norm, or None (no normalization). Default: batch_norm

  • orthogonal_init (bool) – Apply orthogonal initialization to the convolutional weights. Ignored if pretrained is not None. Default: True

  • last_conv_bias (bool) – Whether the final convolutional layer has a learnable bias. Default: False

  • pretrained (str) – Path to a checkpoint from which to load pretrained weights. Default: None

  • device (torch.device, str) – Device to put the model on.


References:
  [1] K. Zhang, W. Zuo, and L. Zhang, "FFDNet: Toward a Fast and Flexible Solution for CNN-Based Image Denoising," IEEE Transactions on Image Processing, 2018.

forward(x, sigma)[source]#

Run the denoiser on image with noise level \(\sigma\).

Parameters:
  • x (torch.Tensor) – noisy image

  • sigma (float, torch.Tensor) – noise level. If sigma is a float, the same noise level is used for every image in the batch; if it is a tensor, it must have shape (batch_size,), giving one noise level per image.
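To illustrate how a scalar or per-image sigma can be turned into the noise level map described above, here is a hedged torch-only sketch; `make_noise_map` is a hypothetical helper, not part of the deepinv API.

```python
import torch

def make_noise_map(x, sigma):
    # Broadcast a float or a (batch_size,) tensor of noise levels
    # into a (B, 1, H, W) noise level map matching the input x
    b, _, h, w = x.shape
    if not torch.is_tensor(sigma):
        # a float applies the same noise level to every image in the batch
        sigma = torch.full((b,), float(sigma))
    return sigma.view(b, 1, 1, 1).expand(b, 1, h, w)

x = torch.randn(2, 1, 8, 8)
m = make_noise_map(x, torch.tensor([0.1, 0.2]))
# m[0] is filled with 0.1, m[1] with 0.2
```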