DataFidelity
- class deepinv.optim.DataFidelity(d=None)[source]
Bases: Potential
Base class for the data fidelity term \(\datafid{x}{y} = \distance{\forw{x}}{y}\), where \(A\) is the forward operator, \(x\in\xset\) is a variable, \(y\in\yset\) is the data, and \(d\) is a distance function from the class deepinv.optim.Distance.
- Parameters:
d (callable) – distance function \(d(x, y)\) between a variable \(x\) and an observation \(y\). Default: None.
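A minimal usage sketch (not part of the class reference): the built-in subclass deepinv.optim.L2 implements the squared-error distance, and a custom distance can be passed through the d argument; the assumption that the callable returns one value per batch element is for illustration only.

>>> import torch
>>> import deepinv as dinv
>>> fidelity = dinv.optim.L2()  # built-in data fidelity with d(u, y) = ||u - y||^2 / 2
>>> # custom distance passed as a callable (hypothetical example):
>>> custom = dinv.optim.DataFidelity(
...     d=lambda u, y: 0.5 * ((u - y) ** 2).flatten(1).sum(-1)
... )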
- fn(x, y, physics, *args, **kwargs)[source]
Computes the data fidelity term \(\datafid{x}{y} = \distance{\forw{x}}{y}\).
- Parameters:
x (torch.Tensor) – Variable \(x\) at which the data fidelity is computed.
y (torch.Tensor) – Data \(y\).
physics (deepinv.physics.Physics) – physics model.
- Returns:
(torch.Tensor) data fidelity \(\datafid{x}{y}\).
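A short sketch of calling fn, assuming the L2 subclass and the identity forward operator deepinv.physics.Denoising:

>>> import torch
>>> import deepinv as dinv
>>> fidelity = dinv.optim.L2()
>>> physics = dinv.physics.Denoising()  # forward operator A is the identity
>>> x = torch.ones(1, 1, 4, 4)
>>> y = physics.A(x) + 0.1  # noiseless measurement, shifted for illustration
>>> cost = fidelity.fn(x, y, physics)  # evaluates d(A(x), y)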
- grad(x, y, physics, *args, **kwargs)[source]
Calculates the gradient of the data fidelity term \(\datafidname\) at \(x\).
The gradient is computed using the chain rule:
\[\nabla_x \distance{\forw{x}}{y} = \left. \frac{\partial A}{\partial x} \right|_x^\top \nabla_u \distance{u}{y},\]
where \(\left. \frac{\partial A}{\partial x} \right|_x\) is the Jacobian of \(A\) at \(x\), and \(\nabla_u \distance{u}{y}\) is computed using grad_d with \(u = \forw{x}\). The multiplication is computed using the A_vjp method of the physics.
- Parameters:
x (torch.Tensor) – Variable \(x\) at which the gradient is computed.
y (torch.Tensor) – Data \(y\).
physics (deepinv.physics.Physics) – physics model.
- Returns:
(torch.Tensor) gradient \(\nabla_x \datafid{x}{y}\), computed in \(x\).
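As a sanity check of the chain rule above (a sketch, assuming the L2 subclass with its default noise level and the identity operator deepinv.physics.Denoising, for which \(A^\top(\forw{x} - y) = x - y\)):

>>> import torch
>>> import deepinv as dinv
>>> fidelity = dinv.optim.L2()
>>> physics = dinv.physics.Denoising()  # A is the identity, so A_vjp is too
>>> x, y = torch.randn(1, 1, 4, 4), torch.randn(1, 1, 4, 4)
>>> g = fidelity.grad(x, y, physics)
>>> torch.allclose(g, x - y)  # chain rule reduces to A^T (A x - y) = x - y
True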
- grad_d(u, y, *args, **kwargs)[source]
Computes the gradient \(\nabla_u\distance{u}{y}\) of the distance function, evaluated at \(u\).
Note that this is the gradient of \(\distancename\) and not \(\datafidname\). This function directly calls deepinv.optim.Distance.grad() for the specific distance function \(\distancename\).
- Parameters:
u (torch.Tensor) – Variable \(u\) at which the gradient is computed.
y (torch.Tensor) – Data \(y\) of the same dimension as \(u\).
- Returns:
(torch.Tensor) gradient of \(d\) in \(u\), i.e. \(\nabla_u\distance{u}{y}\).
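For instance, with the L2 subclass (default noise level assumed), grad_d acts directly in measurement space and no physics operator is involved:

>>> import torch
>>> import deepinv as dinv
>>> fidelity = dinv.optim.L2()
>>> u, y = torch.randn(1, 1, 4, 4), torch.randn(1, 1, 4, 4)
>>> torch.allclose(fidelity.grad_d(u, y), u - y)  # gradient of ||u - y||^2 / 2
True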
- prox_d(u, y, *args, **kwargs)[source]
Computes the proximity operator \(\operatorname{prox}_{\gamma\distance{\cdot}{y}}(u)\), evaluated at \(u\).
Note that this is the proximity operator of \(\distancename\) and not \(\datafidname\). This function directly calls deepinv.optim.Distance.prox() for the specific distance function \(\distancename\).
- Parameters:
u (torch.Tensor) – Variable \(u\) at which the proximity operator is computed.
y (torch.Tensor) – Data \(y\) of the same dimension as \(u\).
- Returns:
(torch.Tensor) proximity operator of \(d\) at \(u\), i.e. \(\operatorname{prox}_{\gamma\distance{\cdot}{y}}(u)\).
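A small sketch, again assuming the L2 subclass with its defaults, for which the proximity operator with step size \(\gamma = 1\) has the closed form \((u + y)/2\):

>>> import torch
>>> import deepinv as dinv
>>> fidelity = dinv.optim.L2()
>>> u, y = torch.randn(1, 1, 4, 4), torch.randn(1, 1, 4, 4)
>>> p = fidelity.prox_d(u, y)  # default step size gamma = 1 assumed
>>> torch.allclose(p, (u + y) / 2)
True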
Examples using DataFidelity:
Image deblurring with custom deep explicit prior.
Random phase retrieval and reconstruction methods.
Image deblurring with Total-Variation (TV) prior
Image inpainting with wavelet prior
Plug-and-Play algorithm with Mirror Descent for Poisson noise inverse problems.
Vanilla PnP for computed tomography (CT).
DPIR method for PnP image deblurring.
Regularization by Denoising (RED) for Super-Resolution.
PnP with custom optimization algorithm (Condat-Vu Primal-Dual)
Uncertainty quantification with PnP-ULA.
Building your custom sampling algorithm.
Learned Iterative Soft-Thresholding Algorithm (LISTA) for compressed sensing
Vanilla Unfolded algorithm for super-resolution
Learned iterative custom prior
Deep Equilibrium (DEQ) algorithms for image deblurring
Learned Primal-Dual algorithm for CT scan.
Unfolded Chambolle-Pock for constrained image inpainting
Patch priors for limited-angle computed tomography
Radio interferometric imaging with deepinverse