Denoisers#

The deepinv.models.Denoiser base class describes denoisers that take a noisy image as input and return a denoised image. They can be used as building blocks for plug-and-play restoration, unrolled architectures, and artifact removal networks, or as standalone denoisers. All denoisers have a forward method that takes a noisy image and a noise level (which generally corresponds to the standard deviation of the noise) as input and returns a denoised image:

>>> import torch
>>> import deepinv as dinv
>>> denoiser = dinv.models.DRUNet()
>>> sigma = 0.1
>>> image = torch.ones(1, 3, 32, 32) * .5
>>> noisy_image = image + torch.randn(1, 3, 32, 32) * sigma
>>> denoised_image = denoiser(noisy_image, sigma)

Note

Some denoisers (e.g., deepinv.models.DnCNN) are not noise-level aware; for these models, the noise level argument is ignored.

Deep denoisers#

We provide the following list of deep denoising architectures, which are based on CNN, Transformer or hybrid CNN-Transformer modules. See Pretrained Weights for more information on pretrained denoisers.

Table 5 Deep denoisers#

| Model | Type | Tensor Size (C, H, W) | Pretrained Weights | Noise level aware |
|-------|------|-----------------------|--------------------|-------------------|
| deepinv.models.AutoEncoder | Fully connected | Any | No | No |
| deepinv.models.UNet | CNN | Any C; H, W > 8 | No | No |
| deepinv.models.DnCNN | CNN | Any C, H, W | RGB, grayscale | No |
| deepinv.models.DRUNet | CNN-UNet | Any C; H, W > 8 | RGB, grayscale | Yes |
| deepinv.models.GSDRUNet | CNN-UNet | Any C; H, W > 8 | RGB, grayscale | Yes |
| deepinv.models.SCUNet | CNN-Transformer | Any C, H, W | No | No |
| deepinv.models.SwinIR | CNN-Transformer | Any C, H, W | RGB | No |
| deepinv.models.DiffUNet | Transformer | Any C; H, W = 64, 128, 256, … | RGB | Yes |
| deepinv.models.Restormer | CNN-Transformer | Any C, H, W | RGB, grayscale, deraining, deblurring | No |
| deepinv.models.ICNN | CNN | Any C; H, W = 128, 256, … | No | No |
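For instance, a minimal sketch of loading a pretrained color DnCNN from the table above (the pretrained="download" keyword reflects the library's usual way of fetching weights, but the exact argument may differ across versions):

>>> import torch
>>> import deepinv as dinv
>>> denoiser = dinv.models.DnCNN(in_channels=3, out_channels=3, pretrained="download")
>>> noisy_image = torch.rand(1, 3, 32, 32)
>>> denoised_image = denoiser(noisy_image, 0.1)  # DnCNN is not noise-level aware, so 0.1 is ignored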

Classical denoisers#

All denoisers in this list rely on hand-crafted priors and are non-learned, except for EPLL, which uses a learned patch prior.

Table 6 Non-Learned Denoisers Overview#

| Model | Info | Tensor Size (C, H, W) |
|-------|------|-----------------------|
| deepinv.models.BM3D | Patch-based denoiser | C=1 or C=3; any H, W |
| deepinv.models.MedianFilter | Non-learned filter | Any C, H, W |
| deepinv.models.TVDenoiser | Total variation prior | Any C, H, W |
| deepinv.models.TGVDenoiser | Total generalized variation prior | Any C, H, W |
| deepinv.models.WaveletDenoiser | Sparsity in orthogonal wavelet domain | Any C, H, W |
| deepinv.models.WaveletDictDenoiser | Sparsity in overcomplete wavelet domain | Any C, H, W |
| deepinv.models.EPLLDenoiser | Learned patch prior | C=1 or C=3; any H, W |
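Classical denoisers follow the same forward interface as the deep ones. A minimal sketch with the median filter (assuming that, like other non noise-level-aware denoisers, it accepts but ignores the noise level argument; for variational denoisers such as TVDenoiser this argument instead acts as a regularization strength):

>>> import torch
>>> import deepinv as dinv
>>> denoiser = dinv.models.MedianFilter()
>>> noisy_image = torch.rand(1, 3, 32, 32)
>>> denoised_image = denoiser(noisy_image, 0.1)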

Denoiser Utilities#

Equivariant denoisers#

Denoisers can be turned into equivariant denoisers by wrapping them with the deepinv.models.EquivariantDenoiser class, which symmetrizes the denoiser with respect to a transform from our available transforms, such as deepinv.transform.Rotate or deepinv.transform.Reflect. You retain full flexibility by passing in the transform of your choice. The denoising can either be averaged over the entire group of transformations (making the denoiser equivariant) or performed on one or n transformations sampled uniformly at random from the group, making the denoiser a Monte Carlo estimator of the exact equivariant denoiser.
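For illustration, a minimal sketch (assuming the wrapper takes the base denoiser and a transform instance via a transform keyword; the exact constructor arguments may differ across versions):

>>> import torch
>>> import deepinv as dinv
>>> denoiser = dinv.models.DRUNet()
>>> transform = dinv.transform.Rotate()  # rotations; see also dinv.transform.Reflect
>>> eq_denoiser = dinv.models.EquivariantDenoiser(denoiser, transform=transform)
>>> noisy_image = torch.rand(1, 3, 32, 32)
>>> denoised_image = eq_denoiser(noisy_image, 0.1)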

Complex denoisers#

Most denoisers in the library are designed to process real-valued images. However, some problems, e.g., phase retrieval, require processing complex-valued images. The function deepinv.models.complex.to_complex_denoiser converts any real-valued denoiser into a complex-valued denoiser, and can be called simply as complex_denoiser = to_complex_denoiser(denoiser).
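A minimal sketch (assuming the wrapped denoiser keeps the usual (image, noise level) forward signature when applied to a complex-valued tensor):

>>> import torch
>>> import deepinv as dinv
>>> from deepinv.models.complex import to_complex_denoiser
>>> denoiser = dinv.models.DRUNet()
>>> complex_denoiser = to_complex_denoiser(denoiser)
>>> complex_image = torch.ones(1, 3, 32, 32, dtype=torch.complex64) * 0.5
>>> denoised_image = complex_denoiser(complex_image, 0.1)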

Dynamic networks#

When using time-varying (i.e., dynamic) data of 5D shape (B, C, T, H, W), the reconstruction network must be adapted, e.g., using deepinv.models.TimeAveragingNet.

Alternatively, to adapt any existing network to process dynamic data as independent time slices, deepinv.models.TimeAgnosticNet creates a time-agnostic wrapper that flattens the time dimension into the batch dimension.
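A minimal sketch of wrapping a 2D backbone (the UNet constructor arguments shown here are illustrative and may differ across versions):

>>> import deepinv as dinv
>>> backbone = dinv.models.UNet(in_channels=1, out_channels=1, scales=2)
>>> model = dinv.models.TimeAgnosticNet(backbone)
>>> # The wrapped model now accepts 5D inputs of shape (B, C, T, H, W),
>>> # flattening T into the batch dimension before calling the backbone.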