Loss
This package contains popular training losses for supervised and self-supervised learning, designed especially for inverse problems.
Introduction
All losses inherit from deepinv.loss.Loss, which is a torch.nn.Module and serves as the base class for all loss/metric functions.
>>> import torch
>>> import deepinv as dinv
>>> loss = dinv.loss.SureGaussianLoss(.1)
>>> physics = dinv.physics.Denoising()
>>> x = torch.ones(1, 3, 16, 16)
>>> y = physics(x)
>>> model = dinv.models.DnCNN()
>>> x_net = model(y)
>>> l = loss(x_net=x_net, y=y, physics=physics, model=model) # self-supervised loss, doesn't require ground truth x
Supervised Learning
Supervised losses use a dataset of pairs of signals and measurements (and possibly information about the forward operator); i.e., they can be written as \(\mathcal{L}(x,\inverse{y})\).
Standard supervised loss.
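As an illustration (not the library's actual loss class, whose API may differ), a supervised loss reduces to a distance between the reconstruction and the ground-truth signal. A minimal sketch in plain PyTorch:

```python
import torch

x = torch.ones(1, 3, 16, 16)              # ground-truth signal
x_net = x + 0.1 * torch.randn_like(x)     # reconstruction from some model

# Supervised loss L(x, x_net): here a plain mean-squared error distance.
sup_loss = torch.nn.functional.mse_loss(x_net, x)
```

Any differentiable distance (e.g., an \(\ell_1\) or perceptual distance) can play the same role.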
Self-Supervised Learning
Self-supervised losses use a dataset of measurement data alone (and possibly information about the forward operator); i.e., they can be written as \(\mathcal{L}(y,\inverse{y})\) and take into account information about the forward measurement process.
Measurement consistency loss.
Equivariant imaging self-supervised loss.
Multi-operator imaging loss.
Neighbor2Neighbor loss.
Measurement splitting loss.
SURE loss for Gaussian noise.
SURE loss for Poisson noise.
SURE loss for Poisson-Gaussian noise.
Total variation loss (\(\ell_2\) norm).
Recorrupted-to-Recorrupted (R2R) loss.
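For example, the measurement consistency loss needs no ground truth: it compares the forward operator applied to the reconstruction against the observed measurements. A minimal sketch, using a hypothetical masking operator A (an illustration, not a deepinv class):

```python
import torch

# Hypothetical forward operator A: a fixed subsampling mask (illustration only).
mask = torch.ones(1, 3, 16, 16)
mask[..., ::2] = 0
A = lambda u: mask * u

y = A(torch.ones(1, 3, 16, 16))      # measurements; no ground truth required
x_net = torch.rand(1, 3, 16, 16)     # reconstruction from some model

# Measurement consistency: penalize disagreement in measurement space only.
mc_loss = torch.mean((A(x_net) - y) ** 2)
```

Note that this loss alone cannot constrain the nullspace of A, which is why it is usually combined with other self-supervised losses such as equivariant imaging.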
Metrics
Metrics are generally used to evaluate the performance of a model. Some of them can also be used as training losses.
Peak Signal-to-Noise Ratio (PSNR) metric.
Structural Similarity Index (SSIM) metric.
Learned Perceptual Image Patch Similarity (LPIPS) metric.
Natural Image Quality Evaluator (NIQE) metric.
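As a concrete reference for the first metric, PSNR is computed from the mean-squared error as \(10 \log_{10}(\mathrm{MAX}^2 / \mathrm{MSE})\). A minimal sketch (the library's metric class may handle normalization and batching differently):

```python
import torch

def psnr(x_net, x, max_pixel=1.0):
    # PSNR = 10 * log10(MAX^2 / MSE); higher means closer to the reference.
    mse = torch.mean((x_net - x) ** 2)
    return 10 * torch.log10(max_pixel ** 2 / mse)

x = torch.zeros(1, 3, 16, 16)
x_net = x + 0.1          # constant error of 0.1 -> MSE = 0.01 -> PSNR = 20 dB
value = psnr(x_net, x)
```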
Transforms
This submodule contains transforms that can be used for data augmentation or together with the equivariant losses.
2D Rotations.
Fast integer 2D translations.
2D Scaling.
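To illustrate how a transform pairs with the equivariant imaging loss, the loss penalizes the model for failing to commute with the transform. A minimal sketch with a 90-degree rotation and an identity model standing in for a reconstruction network (illustration only, not the library's transform classes):

```python
import torch

model = lambda y: y                                # stand-in reconstruction network
T = lambda u: torch.rot90(u, k=1, dims=(-2, -1))   # group action: 90-degree rotation

y = torch.randn(1, 3, 16, 16)
# Equivariance residual: a good model should commute with the transform,
# i.e., model(T(y)) should match T(model(y)).
ei_loss = torch.mean((model(T(y)) - T(model(y))) ** 2)
```

The identity model is trivially equivariant, so the residual here is exactly zero; a real network is penalized whenever it breaks the symmetry.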
Network Regularization
These losses can be used to regularize the learned function, e.g., controlling its Lipschitz constant.
Computes the spectral norm of the Jacobian.
Computes the Firm-Nonexpansiveness Jacobian spectral norm.
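The Jacobian spectral norm can be estimated by power iteration on \(J^\top J\) using automatic differentiation. A minimal sketch (the library's implementation may differ, e.g., in its stopping criterion):

```python
import torch

def jacobian_spectral_norm(f, x, iters=20):
    # Estimate the largest singular value of the Jacobian of f at x
    # by power iteration on J^T J, with J applied via forward/reverse autodiff.
    u = torch.randn_like(x)
    u = u / u.norm()
    for _ in range(iters):
        _, ju = torch.autograd.functional.jvp(f, x, u)     # J u
        _, jtju = torch.autograd.functional.vjp(f, x, ju)  # J^T (J u)
        u = jtju / jtju.norm()
    _, ju = torch.autograd.functional.jvp(f, x, u)
    return ju.norm()

# For the linear map f(x) = 3x, the Jacobian is 3I, so the spectral norm is 3.
sigma = jacobian_spectral_norm(lambda t: 3.0 * t, torch.zeros(4))
```

Penalizing this quantity during training bounds the Lipschitz constant of the learned function.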
Utils
A set of popular distances that can be used by the supervised and self-supervised losses.
\(\ell_p\) metric for \(p>0\).
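A minimal sketch of such an \(\ell_p\) distance, \(\big(\sum_i |x_i - y_i|^p\big)^{1/p}\) (illustration only; the library's distance class may reduce over batches differently):

```python
import torch

def lp_distance(x, y, p=1.5):
    # \ell_p distance: (sum_i |x_i - y_i|^p)^(1/p), defined for p > 0.
    return torch.sum(torch.abs(x - y) ** p) ** (1.0 / p)

# For p = 2 this is the Euclidean distance: ||(3, 4)||_2 = 5.
d = lp_distance(torch.tensor([3.0, 4.0]), torch.zeros(2), p=2)
```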