Loss
This package contains popular training losses for supervised and self-supervised learning, specifically designed for inverse problems.
Introduction
All losses inherit from the base class deepinv.loss.Loss, which is a torch.nn.Module.

deepinv.loss.Loss: Base class for all loss functions.
>>> import torch
>>> import deepinv as dinv
>>> loss = dinv.loss.SureGaussianLoss(.1)
>>> physics = dinv.physics.Denoising()
>>> x = torch.ones(1, 3, 16, 16)
>>> y = physics(x)
>>> model = dinv.models.DnCNN()
>>> x_net = model(y)
>>> l = loss(x_net=x_net, y=y, physics=physics, model=model) # self-supervised loss, doesn't require ground truth x
Supervised Learning
These losses use a dataset of pairs of signals and measurements (and possibly information about the forward operator); i.e., they can be written as \(\mathcal{L}(x,\inverse{y})\).

deepinv.loss.SupLoss: Standard supervised loss.
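For example, reusing x and x_net from the introduction, the supervised loss compares the reconstruction with the ground truth (a minimal sketch; here the metric is set explicitly to a mean squared error):
>>> loss = dinv.loss.SupLoss(metric=torch.nn.MSELoss())
>>> l = loss(x=x, x_net=x_net)  # requires ground-truth x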
Self-Supervised Learning
These losses use a dataset of measurement data alone (and possibly information about the forward operator); i.e., they can be written as \(\mathcal{L}(y,\inverse{y})\) and take into account information about the forward measurement process.
deepinv.loss.MCLoss: Measurement consistency loss.
deepinv.loss.EILoss: Equivariant imaging self-supervised loss.
deepinv.loss.MOILoss: Multi-operator imaging loss.
deepinv.loss.MOEILoss: Multi-operator equivariant imaging loss.
deepinv.loss.Neighbor2Neighbor: Neighbor2Neighbor loss.
deepinv.loss.SplittingLoss: Measurement splitting loss.
deepinv.loss.Phase2PhaseLoss: Phase2Phase loss for dynamic data.
deepinv.loss.Artifact2ArtifactLoss: Artifact2Artifact loss for dynamic data.
deepinv.loss.SureGaussianLoss: SURE loss for Gaussian noise.
deepinv.loss.SurePoissonLoss: SURE loss for Poisson noise.
deepinv.loss.SurePGLoss: SURE loss for Poisson-Gaussian noise.
deepinv.loss.TVLoss: Total variation loss (\(\ell_2\) norm).
deepinv.loss.R2RLoss: Recorrupted-to-Recorrupted (R2R) loss.
deepinv.loss.ScoreLoss: Learns the score of the noise distribution.
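Self-supervised losses are often combined, e.g., measurement consistency with an equivariance prior. A minimal sketch, assuming a rotation transform from deepinv.transform; the resulting list can then be passed to the losses argument of deepinv.Trainer.
>>> losses = [
...     dinv.loss.MCLoss(),  # consistency in measurement space
...     dinv.loss.EILoss(transform=dinv.transform.Rotate()),  # invariance to rotations
... ]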
Adversarial Learning
Adversarial losses train a generator network jointly with an additional discriminator network in a minimax game.
We implement several popular (supervised and unsupervised) adversarial training frameworks below. These can be adapted to various flavours of GAN, e.g., WGAN, LSGAN. Generator and discriminator networks are provided in adversarial models.
Training is implemented using deepinv.training.AdversarialTrainer, which extends the standard deepinv.Trainer. See Imaging inverse problems with adversarial networks for example usage.
deepinv.loss.adversarial.DiscriminatorMetric: Generic GAN discriminator metric building block.
deepinv.loss.adversarial.GeneratorLoss: Base generator adversarial loss.
deepinv.loss.adversarial.DiscriminatorLoss: Base discriminator adversarial loss.
deepinv.loss.adversarial.SupAdversarialGeneratorLoss: Supervised adversarial consistency loss for the generator.
deepinv.loss.adversarial.SupAdversarialDiscriminatorLoss: Supervised adversarial consistency loss for the discriminator.
deepinv.loss.adversarial.UnsupAdversarialGeneratorLoss: Unsupervised adversarial consistency loss for the generator.
deepinv.loss.adversarial.UnsupAdversarialDiscriminatorLoss: Unsupervised adversarial consistency loss for the discriminator.
deepinv.loss.adversarial.UAIRGeneratorLoss: Reimplementation of the UAIR generator's adversarial loss.
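Generator and discriminator losses are used in pairs, one per network in the minimax game. A minimal sketch of instantiating the supervised pair follows; the losses and losses_d keyword names for AdversarialTrainer are assumptions here, so check the trainer's documentation.
>>> gen_loss = dinv.loss.adversarial.SupAdversarialGeneratorLoss()
>>> disc_loss = dinv.loss.adversarial.SupAdversarialDiscriminatorLoss()
>>> # assumed usage: pass gen_loss via losses= and disc_loss via losses_d=
>>> # trainer = dinv.training.AdversarialTrainer(..., losses=gen_loss, losses_d=disc_loss)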
Network Regularization
These losses can be used to regularize the learned function, e.g., by controlling its Lipschitz constant.
deepinv.loss.JacobianSpectralNorm: Computes the spectral norm of the Jacobian.
deepinv.loss.FNEJacobianSpectralNorm: Computes the firm-nonexpansiveness Jacobian spectral norm.
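For example, the Jacobian spectral norm of a trained network can be estimated by differentiating through a forward pass. A minimal sketch, assuming the regularizer is called as reg(y, x) with x requiring gradients; the max_iter and tol values are illustrative.
>>> reg = dinv.loss.JacobianSpectralNorm(max_iter=10, tol=1e-3)
>>> x = torch.randn(1, 3, 16, 16, requires_grad=True)
>>> y = model(x)  # model from the introduction example
>>> spec_norm = reg(y, x)  # spectral norm of the Jacobian of the model at x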
Loss schedulers
Loss schedulers can be used to control which losses are applied at which point during more advanced training.
deepinv.loss.BaseLossScheduler: Base class for loss schedulers.
deepinv.loss.RandomLossScheduler: Schedules losses at random.
deepinv.loss.InterleavedLossScheduler: Schedules losses sequentially one-by-one.
deepinv.loss.InterleavedEpochLossScheduler: Schedules losses sequentially epoch-by-epoch.
deepinv.loss.StepLossScheduler: Activates losses at a specified epoch.
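A scheduler wraps several losses and selects which are active at each training step. A minimal sketch, assuming the losses are passed as positional arguments and that the scheduler can be used wherever a loss is expected (e.g., the losses argument of deepinv.Trainer):
>>> scheduler = dinv.loss.RandomLossScheduler(
...     dinv.loss.MCLoss(),
...     dinv.loss.SureGaussianLoss(sigma=0.1),
... )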
Utils
A set of popular distances that can be used by the supervised and self-supervised losses.
deepinv.loss.metric.LpNorm: \(\ell_p\) metric for \(p>0\).
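These distances plug into the losses via their metric argument. A minimal sketch; the deepinv.loss.metric import path is an assumption based on this section, so verify where LpNorm is exposed in your version.
>>> metric = dinv.loss.metric.LpNorm(p=2)  # l2 distance
>>> loss = dinv.loss.SupLoss(metric=metric)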