Denoisers

Denoisers are torch.nn.Module instances that take a noisy image as input and return a denoised image. They can be used as building blocks for plug-and-play restoration, for building unrolled architectures, or as standalone denoisers. All denoisers have a forward method that takes a noisy image and a noise level (which generally corresponds to the standard deviation of the noise) as input and returns a denoised image:

>>> import torch
>>> import deepinv as dinv
>>> denoiser = dinv.models.DRUNet()
>>> sigma = 0.1
>>> image = torch.ones(1, 3, 32, 32) * .5
>>> noisy_image = image + torch.randn(1, 3, 32, 32) * sigma
>>> denoised_image = denoiser(noisy_image, sigma)

Note

Some denoisers (e.g., deepinv.models.DnCNN) do not use the noise level information; in that case, the sigma argument is simply ignored.

Classical Denoisers

deepinv.models.BM3D

BM3D denoiser.

deepinv.models.MedianFilter

Median filter.

deepinv.models.TVDenoiser

Proximal operator of the isotropic Total Variation operator.

deepinv.models.TGVDenoiser

Proximal operator of the (2nd order) Total Generalised Variation operator.

deepinv.models.WaveletDenoiser

Orthogonal Wavelet denoising with the \(\ell_1\) norm.

deepinv.models.WaveletDictDenoiser

Overcomplete Wavelet denoising with the \(\ell_1\) norm.

deepinv.models.EPLLDenoiser

Expected Patch Log Likelihood denoising method.
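
All the classical denoisers above follow the common forward(noisy_image, noise_level) interface described in the introduction, so they can be swapped for one another. The sketch below illustrates this with constructor arguments left at their defaults; note that this is an assumption, and some classical denoisers may interpret the second argument as a regularization threshold rather than a noise standard deviation (or ignore it altogether).

>>> import torch
>>> import deepinv as dinv
>>> sigma = 0.1
>>> # a constant grayscale test image corrupted by Gaussian noise
>>> noisy_image = torch.ones(1, 1, 32, 32) * .5 + torch.randn(1, 1, 32, 32) * sigma
>>> # total variation denoising (the second argument acts as the regularization strength)
>>> denoised_tv = dinv.models.TVDenoiser()(noisy_image, sigma)
>>> # median filtering (the noise level is ignored by this denoiser)
>>> denoised_median = dinv.models.MedianFilter()(noisy_image, sigma)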

Deep Denoisers

deepinv.models.AutoEncoder

Simple fully connected autoencoder network.

deepinv.models.UNet

U-Net convolutional denoiser.

deepinv.models.DnCNN

DnCNN convolutional denoiser.

deepinv.models.DRUNet

DRUNet denoiser network.

deepinv.models.SCUNet

SCUNet denoising network.

deepinv.models.GSDRUNet

Gradient Step Denoiser with DRUNet architecture.

deepinv.models.SwinIR

SwinIR denoising network.

deepinv.models.DiffUNet

Diffusion UNet model.

deepinv.models.Restormer

Restormer denoiser network.

Equivariant Denoisers

The denoisers can be turned into equivariant denoisers by wrapping them with the deepinv.models.EquivariantDenoiser class. The groups of transformations currently available are vertical/horizontal flips, 90-degree rotations, or a combination of both, yielding groups with 3, 4 or 8 elements.

The denoising can either be averaged over the group of transformations (making the denoiser exactly equivariant) or performed on a single transformation sampled uniformly at random from the group, making the denoiser a Monte Carlo estimator of the exact equivariant denoiser, as illustrated in the sketch below.
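
As a rough illustration, the snippet below wraps the DRUNet denoiser from the introductory example in the equivariant wrapper and applies it exactly like any other denoiser. The constructor arguments are left at their defaults here (which transform group is used and whether the transform is averaged or randomly sampled then follows from those defaults), so this is a sketch rather than a complete description of the class; see the deepinv.models.EquivariantDenoiser documentation for the available options.

>>> import torch
>>> import deepinv as dinv
>>> base_denoiser = dinv.models.DRUNet()
>>> # wrap the base denoiser to make it (approximately) equivariant
>>> eq_denoiser = dinv.models.EquivariantDenoiser(base_denoiser)
>>> sigma = 0.1
>>> noisy_image = torch.ones(1, 3, 32, 32) * .5 + torch.randn(1, 3, 32, 32) * sigma
>>> denoised_image = eq_denoiser(noisy_image, sigma)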

deepinv.models.EquivariantDenoiser

Turns the input denoiser into an equivariant denoiser with respect to geometric transforms.

Pretrained Weights

The following denoisers have pretrained weights available; we briefly summarize below the origin of the weights, the associated references and relevant details. All pretrained weights are hosted on Hugging Face. A sketch of how pretrained weights are typically loaded is given after the table.

Table 1: Summary of pretrained weights (each model is listed with its available weights).

deepinv.models.DnCNN()

from Learning Maximally Monotone Operators trained on noise level 2.0/255. grayscale weights, color weights.

from Learning Maximally Monotone Operators, with a Lipschitz constraint to ensure approximate firm nonexpansiveness, trained on noise level 2.0/255. grayscale weights, color weights.

deepinv.models.DRUNet()

Default: trained with deepinv (logs) on noise levels in [0, 20]/255 and on the same dataset as DPIR. grayscale weights, color weights.

from DPIR, trained on noise levels in [0, 50]/255. grayscale weights, color weights.

deepinv.models.GSDRUNet()

weights from Gradient-Step PnP, trained on noise levels in [0, 50]/255. color weights.

deepinv.models.SCUNet()

from SCUNet, trained on images degraded with synthetic realistic noise and camera artefacts. color weights.

deepinv.models.SwinIR()

from SwinIR, trained on noise levels in {15, 25, 50}/255, in color and grayscale. The weights are automatically downloaded from the authors’ project page.

deepinv.models.DiffUNet()

Default: from Ho et al. trained on FFHQ (128 hidden channels per layer). weights.

from Dhariwal and Nichol trained on ImageNet128 (256 hidden channels per layer). weights.

deepinv.models.EPLL()

Default: parameters estimated with deepinv on 50 million patches from the training/validation images of BSDS500, for grayscale and color images.

The code for generating the weights used in the example Patch priors for limited-angle computed tomography is contained within the demo.

deepinv.models.Restormer()

from Restormer: Efficient Transformer for High-Resolution Image Restoration. Pretrained parameters from the swz30 GitHub repository.

Also available on the deepinv Restormer Hugging Face Hub.
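
As a rough sketch of how these weights are typically loaded, most of the models above expose a pretrained argument in their constructor, which either downloads the default weights or loads a local checkpoint. The exact accepted values should be checked in each model's documentation; the snippet below assumes the "download" option and uses a hypothetical local path.

>>> import deepinv as dinv
>>> # download the default pretrained weights (assumed 'download' option)
>>> denoiser = dinv.models.DRUNet(pretrained="download")
>>> # or load weights from a local checkpoint file (hypothetical path)
>>> denoiser = dinv.models.DRUNet(pretrained="path/to/weights.pth")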