Denoisers
Denoisers are torch.nn.Module instances that take a noisy image as input and return a denoised image.
They can be used as building blocks for plug-and-play restoration, for building unrolled architectures,
or as standalone denoisers. All denoisers have a forward
method that takes a noisy image and a noise level
(which generally corresponds to the standard deviation of the noise) as input and returns a denoised image:
>>> import torch
>>> import deepinv as dinv
>>> denoiser = dinv.models.DRUNet()
>>> sigma = 0.1  # noise standard deviation
>>> image = torch.ones(1, 3, 32, 32) * .5  # constant test image
>>> noisy_image = image + torch.randn(1, 3, 32, 32) * sigma  # add Gaussian noise
>>> denoised_image = denoiser(noisy_image, sigma)  # denoise, passing the noise level
Note
Some denoisers (e.g., deepinv.models.DnCNN) do not use the noise level information;
in that case, the noise level argument is simply ignored.
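A minimal sketch of this behaviour, assuming the default deepinv.models.DnCNN constructor with its pretrained weights, and reusing noisy_image from the example above:
>>> dncnn = dinv.models.DnCNN()
>>> out_a = dncnn(noisy_image, 0.05)  # sigma is accepted for interface consistency...
>>> out_b = dncnn(noisy_image, 0.5)   # ...but not used by this architecture
>>> torch.allclose(out_a, out_b)
True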
Classical Denoisers
- BM3D denoiser.
- Median filter.
- Proximal operator of the isotropic Total Variation operator.
- Proximal operator of the (2nd order) Total Generalised Variation operator.
- Orthogonal Wavelet denoising with the \(\ell_1\) norm.
- Overcomplete Wavelet denoising with the \(\ell_1\) norm.
- Expected Patch Log Likelihood denoising method.
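Classical denoisers follow the same (noisy image, noise level) interface as the deep models. A short sketch, assuming the BM3D wrapper is exposed as deepinv.models.BM3D (it relies on the optional bm3d package), and reusing the variables from the first example:
>>> bm3d = dinv.models.BM3D()            # non-learned, patch-based denoiser
>>> denoised = bm3d(noisy_image, sigma)  # same call signature as the deep denoisers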
Deep Denoisers
- Simple fully connected autoencoder network.
- U-Net convolutional denoiser.
- DnCNN convolutional denoiser.
- DRUNet denoiser network.
- SCUNet denoising network.
- Gradient Step Denoiser with DRUNet architecture.
- SwinIR denoising network.
- Diffusion UNet model.
- Restormer denoiser network.
- Input Convex Neural Network.
Equivariant Denoisers
The denoisers can be turned into equivariant denoisers by wrapping them with the
deepinv.models.EquivariantDenoiser class, which symmetrizes the denoiser
with respect to a transform from our available transforms, such as deepinv.transform.Rotate
or deepinv.transform.Reflect.
You retain full flexibility by passing in the transform of your choice.
The denoising can either be averaged over the entire group of transformations (making the denoiser equivariant), or performed on one or n transformations sampled uniformly at random from the group, making the denoiser a Monte Carlo estimator of the exact equivariant denoiser.
- Turns the input denoiser into an equivariant denoiser with respect to geometric transforms.
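A minimal sketch of the wrapping pattern described above; the exact keyword arguments of deepinv.models.EquivariantDenoiser and of the transform (here transform= and n_trans) are assumptions to be checked against the API reference:
>>> denoiser = dinv.models.DRUNet()
>>> transform = dinv.transform.Rotate(n_trans=1)  # one rotation sampled at random per call (Monte Carlo estimate)
>>> eq_denoiser = dinv.models.EquivariantDenoiser(denoiser, transform=transform)
>>> denoised = eq_denoiser(noisy_image, sigma)    # same (x, sigma) interface as the wrapped denoiser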
Adversarial Networks
Generator and discriminator networks used in models trained with adversarial learning using adversarial losses.
- PatchGAN Discriminator model.
- ESRGAN Discriminator.
- DCGAN Generator.
- DCGAN Discriminator.
- Adapts a generator model backbone (e.g., DCGAN) for CSGM or AmbientGAN.
Complex Denoisers
Most denoisers in the library are designed to process real-valued images. However, some problems, e.g., phase retrieval, require processing complex-valued images. The function deepinv.models.complex.to_complex_denoiser
can convert any real-valued denoiser into a complex-valued denoiser; it is simply called as complex_denoiser = to_complex_denoiser(denoiser).

- Converts a denoiser with real inputs into one with complex inputs.
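A short sketch of this pattern; the complex dtype of the input and the standard (x, sigma) call on the wrapped denoiser are assumptions:
>>> from deepinv.models.complex import to_complex_denoiser
>>> denoiser = dinv.models.DRUNet()
>>> complex_denoiser = to_complex_denoiser(denoiser)
>>> x = torch.full((1, 3, 32, 32), 0.5 + 0.5j, dtype=torch.cfloat)     # complex-valued image
>>> noisy = x + torch.randn(1, 3, 32, 32, dtype=torch.cfloat) * sigma  # add complex Gaussian noise
>>> denoised = complex_denoiser(noisy, sigma)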
Pretrained Weights
The following denoisers have pretrained weights available; we briefly summarize below the origin of the weights, the associated references and other relevant details. All pretrained weights are hosted on Hugging Face.
| Model | Weights |
|---|---|
| DnCNN | From Learning Maximally Monotone Operators, trained on noise level 2.0/255: grayscale weights, color weights. |
| DnCNN | From Learning Maximally Monotone Operators, with a Lipschitz constraint to ensure approximate firm nonexpansiveness, trained on noise level 2.0/255: grayscale weights, color weights. |
| DRUNet | Default: trained with deepinv (logs), on noise levels in [0, 20]/255 and on the same dataset as DPIR: grayscale weights, color weights. |
| DRUNet | From DPIR, trained on noise levels in [0, 50]/255: grayscale weights, color weights. |
| GSDRUNet | From Gradient-Step PnP, trained on noise levels in [0, 50]/255: color weights. |
| SCUNet | From SCUNet, trained on images degraded with synthetic realistic noise and camera artefacts: color weights. |
| SwinIR | From SwinIR, trained on noise levels in {15, 25, 50}/255, in color and grayscale; the weights are automatically downloaded from the authors' project page. |
| DiffUNet | Default: from Ho et al., trained on FFHQ (128 hidden channels per layer): weights. |
| DiffUNet | From Dhariwal and Nichol, trained on ImageNet128 (256 hidden channels per layer): weights. |
| EPLL | Default: parameters estimated with deepinv on 50 million patches from the training/validation images of BSDS500, for grayscale and color images. Code for generating the weights for the example "Patch priors for limited-angle computed tomography" is contained within the demo. |
| Restormer | From Restormer: Efficient Transformer for High-Resolution Image Restoration; pretrained parameters from the swz30 GitHub repository. Also available on the deepinv Restormer Hugging Face Hub. |
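Pretrained weights are typically selected through the pretrained argument of each model constructor; the accepted values (for example "download", or "download_lipschitz" for the Lipschitz-constrained DnCNN weights) vary per model, so the following is only a sketch to be checked against each class's documentation:
>>> drunet = dinv.models.DRUNet(pretrained="download")          # default deepinv-trained weights
>>> dncnn = dinv.models.DnCNN(pretrained="download_lipschitz")  # Lipschitz-constrained DnCNN weights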