Denoisers
Denoisers are torch.nn.Module objects that take a noisy image as input and return a denoised image.
They can be used as building blocks for plug-and-play restoration or unrolled architectures,
or as standalone denoisers. All denoisers have a forward
method that takes a noisy image and a noise level
(which generally corresponds to the standard deviation of the noise) as input and returns a denoised image:
>>> import torch
>>> import deepinv as dinv
>>> denoiser = dinv.models.DRUNet()
>>> sigma = 0.1
>>> image = torch.ones(1, 3, 32, 32) * .5
>>> noisy_image = image + torch.randn(1, 3, 32, 32) * sigma
>>> denoised_image = denoiser(noisy_image, sigma)
Note
Some denoisers (e.g., deepinv.models.DnCNN) are blind: they do not use the information about the noise level,
and the noise level argument is simply ignored.
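To illustrate the interface, here is a toy sketch (not a deepinv model): a "blind" denoiser that accepts the noise level argument for interface compatibility but, like deepinv.models.DnCNN, does not use it. The 3x3 box blur is an illustrative stand-in for a learned network.

```python
import numpy as np

class BoxBlurDenoiser:
    """Toy blind denoiser: 3x3 box blur; `sigma` is accepted but unused."""

    def __call__(self, x, sigma=None):
        # sigma is accepted but ignored, as in blind denoisers
        p = np.pad(x, 1, mode="edge")  # replicate borders before averaging
        h, w = x.shape
        # average each pixel's 3x3 neighbourhood
        return sum(
            p[i:i + h, j:j + w] for i in (0, 1, 2) for j in (0, 1, 2)
        ) / 9.0

rng = np.random.default_rng(0)
noisy = 0.5 + 0.1 * rng.standard_normal((32, 32))

denoiser = BoxBlurDenoiser()
out_a = denoiser(noisy, sigma=0.05)
out_b = denoiser(noisy, sigma=0.5)  # different noise level, same output
assert np.allclose(out_a, out_b)
```

Passing different values of `sigma` leaves the output unchanged, which is exactly the behaviour the note describes.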
Classical Denoisers
- BM3D denoiser.
- Median filter.
- Proximal operator of the isotropic Total Variation operator.
- Proximal operator of the (2nd order) Total Generalised Variation operator.
- Orthogonal wavelet denoising with the \(\ell_1\) norm.
- Overcomplete wavelet denoising with the \(\ell_1\) norm.
- Expected Patch Log Likelihood denoising method.
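As a concrete example of a classical denoiser that does use the noise level, the following is a minimal sketch of orthogonal wavelet denoising with the \(\ell_1\) norm (single-level Haar transform plus soft-thresholding). It is NOT the deepinv wavelet denoiser, which supports arbitrary wavelets and multiple decomposition levels; the function names and the threshold rule `k * sigma` are illustrative choices.

```python
import numpy as np

def haar2(x):
    """Single-level 2D orthonormal Haar transform of an even-sized image."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # row averages
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2 (exact, since the transform is orthonormal)."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2] = (ll + lh) / np.sqrt(2)
    a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d[:, 0::2] = (hl + hh) / np.sqrt(2)
    d[:, 1::2] = (hl - hh) / np.sqrt(2)
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(c, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def wavelet_denoise(noisy, sigma, k=3.0):
    """Keep the coarse band, soft-threshold the detail bands at k * sigma."""
    ll, lh, hl, hh = haar2(noisy)
    t = k * sigma
    return ihaar2(ll, soft(lh, t), soft(hl, t), soft(hh, t))

rng = np.random.default_rng(0)
clean = np.full((32, 32), 0.5)
noisy = clean + 0.1 * rng.standard_normal((32, 32))
denoised = wavelet_denoise(noisy, sigma=0.1)
```

Unlike a blind denoiser, this one genuinely depends on `sigma`: a larger noise level produces a larger threshold and hence stronger smoothing.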
Deep Denoisers
- Simple fully connected autoencoder network.
- U-Net convolutional denoiser.
- DnCNN convolutional denoiser.
- DRUNet denoiser network.
- SCUNet denoising network.
- Gradient Step Denoiser with DRUNet architecture.
- SwinIR denoising network.
- Diffusion UNet model.
- Restormer denoiser network.
Equivariant Denoisers
The denoisers can be turned into equivariant denoisers by wrapping them with the
deepinv.models.EquivariantDenoiser
class.
The groups of transformations currently available are vertical/horizontal flips, 90-degree rotations, or a
combination of both, yielding groups with 3, 4 or 8 elements.
The denoising can either be averaged over the group of transformations (making the denoiser equivariant) or performed on a single transformation sampled uniformly at random from the group, making the denoiser a Monte Carlo estimator of the exact equivariant denoiser.
- Turns the input denoiser into an equivariant denoiser with respect to geometric transforms.
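The group-averaging idea can be sketched as follows. This is NOT the deepinv implementation; it only illustrates averaging \(T^{-1}(D(T(x)))\) over a small group, here the 4-element group generated by horizontal and vertical flips (each flip is its own inverse, so the same function serves as both the transform and its inverse).

```python
import numpy as np

def make_equivariant(denoiser):
    """Wrap a denoiser so its output is averaged over the flip group."""
    flips = [
        lambda z: z,             # identity
        lambda z: z[::-1],       # vertical flip
        lambda z: z[:, ::-1],    # horizontal flip
        lambda z: z[::-1, ::-1], # both flips
    ]

    def wrapped(x, sigma=None):
        # average T^{-1}(D(T(x))) over the group; each flip is self-inverse
        return sum(t(denoiser(t(x), sigma)) for t in flips) / len(flips)

    return wrapped

# A deliberately non-equivariant toy denoiser: averages each pixel with
# its left neighbour (asymmetric in the horizontal direction).
def shift_blur(x, sigma=None):
    return (x + np.roll(x, 1, axis=1)) / 2

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))

eq = make_equivariant(shift_blur)
# the wrapped denoiser commutes with the group transforms, e.g. flips:
assert np.allclose(eq(x[:, ::-1]), eq(x)[:, ::-1])
assert np.allclose(eq(x[::-1]), eq(x)[::-1])
```

Sampling a single random transform per call instead of summing over all of them gives the Monte Carlo variant mentioned above, at one quarter of the cost here.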
Pretrained Weights
The following denoisers have pretrained weights available; below we briefly summarize the origin of the weights, the associated references, and relevant details. All pretrained weights are hosted on HuggingFace.
- From Learning Maximally Monotone Operators, trained on noise level 2.0/255: grayscale weights, color weights.
- From Learning Maximally Monotone Operators, with a Lipschitz constraint to ensure approximate firm nonexpansiveness, trained on noise level 2.0/255: grayscale weights, color weights.
- Default: trained with deepinv (logs) on noise levels in [0, 20]/255 and on the same dataset as DPIR: grayscale weights, color weights.
- From DPIR, trained on noise levels in [0, 50]/255: grayscale weights, color weights.
- From Gradient-Step PnP, trained on noise levels in [0, 50]/255: color weights.
- From SCUNet, trained on images degraded with synthetic realistic noise and camera artefacts: color weights.
- From SwinIR, trained on noise levels in {15, 25, 50}/255, in color and grayscale. The weights are automatically downloaded from the authors' project page.
- Default: from Ho et al., trained on FFHQ (128 hidden channels per layer): weights.
- From Dhariwal and Nichol, trained on ImageNet128 (256 hidden channels per layer): weights.
- Default: parameters estimated with deepinv on 50 million patches from the training/validation images of BSDS500, for grayscale and color images. Code for generating the weights for the example "Patch priors for limited-angle computed tomography" is contained within the demo.
- From Restormer: Efficient Transformer for High-Resolution Image Restoration; pretrained parameters from the swz30 GitHub repository. Also available on the deepinv Restormer Hugging Face Hub.