StackedPhysicsDataFidelity#

class deepinv.optim.StackedPhysicsDataFidelity(data_fidelity_list)[source]#

Bases: DataFidelity

Stacked data fidelity term \(\datafid{x}{y} = \sum_i d_i(A_i(x), y_i)\).

Adapted to deepinv.physics.StackedPhysics physics composed of multiple physics operators.

Parameters:

data_fidelity_list (list[deepinv.optim.DataFidelity]) – list of data fidelity terms, one per physics operator.


Examples:

Define a stacked data fidelity term with two data fidelity terms \(d_1(A_1(x), y_1) + d_2(A_2(x), y_2)\):

>>> import torch
>>> import deepinv as dinv
>>> # define two observations, one with Gaussian noise and one with Poisson noise
>>> physics1 = dinv.physics.Denoising(dinv.physics.GaussianNoise(.1))
>>> physics2 = dinv.physics.Denoising(dinv.physics.PoissonNoise(.1))
>>> physics = dinv.physics.StackedLinearPhysics([physics1, physics2])
>>> fid1 = dinv.optim.L2()
>>> fid2 = dinv.optim.PoissonLikelihood()
>>> data_fidelity = dinv.optim.StackedPhysicsDataFidelity([fid1, fid2])
>>> x = torch.ones(1, 1, 3, 3) # image
>>> y = physics(x) # noisy measurements
>>> d = data_fidelity(x, y, physics)

fn(x, y, physics, *args, **kwargs)[source]#

Computes the data fidelity term \(\datafid{x}{y} = \sum_i d_i(A_i(x), y_i)\).

Parameters:
  • x (torch.Tensor) – Variable \(x\) at which the data fidelity is computed.

  • y – Stacked measurements \(y_i\), one per physics operator.

  • physics (deepinv.physics.StackedPhysics) – stacked physics model containing the operators \(A_i\).

Returns:

(torch.Tensor) data fidelity \(\datafid{x}{y}\).
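
For instance, continuing from the example above, the stacked term should agree with summing the individual data fidelity terms by hand. This is only a sketch; it assumes the stacked measurement y can be indexed entry-wise as y[0] and y[1]:

>>> total = data_fidelity.fn(x, y, physics)  # sum_i d_i(A_i(x), y_i)
>>> by_hand = fid1.fn(x, y[0], physics1) + fid2.fn(x, y[1], physics2)  # same sum, term by term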

grad(x, y, physics, *args, **kwargs)[source]#

Calculates the gradient of the data fidelity term \(\datafidname\) at \(x\).

The gradient is computed using the chain rule:

\[\nabla_x \datafid{x}{y} = \sum_i \left. \frac{\partial A_i}{\partial x} \right|_x^\top \nabla_u \distance{u}{y_i},\]

where \(\left. \frac{\partial A_i}{\partial x} \right|_x\) is the Jacobian of \(A_i\) at \(x\), and \(\nabla_u \distance{u}{y_i}\) is computed using grad_d with \(u = A_i(x)\). The vector-Jacobian product is computed using the A_vjp method of each physics.

Parameters:
  • x (torch.Tensor) – Variable \(x\) at which the gradient is computed.

  • y – Stacked measurements \(y_i\), one per physics operator.

  • physics (deepinv.physics.StackedPhysics) – stacked physics model containing the operators \(A_i\).

Returns:

(torch.Tensor) gradient \(\nabla_x \datafid{x}{y}\), computed in \(x\).
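
As an illustration, continuing from the example above, grad can drive a plain gradient descent on the stacked data fidelity. This is only a sketch: the step size, iteration count, and clamping (which keeps the iterate positive for the Poisson likelihood) are arbitrary illustrative choices:

>>> x_hat = y[0].clone()  # initialize from the Gaussian-noise measurement
>>> for _ in range(20):
...     g = data_fidelity.grad(x_hat, y, physics)  # gradient of sum_i d_i(A_i(x), y_i)
...     x_hat = (x_hat - 0.1 * g).clamp(min=1e-3)  # small step, keep iterate positive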

grad_d(u, y, *args, **kwargs)[source]#

Computes the gradient \(\nabla_u\distance{u}{y}\) of the distance function, evaluated at \(u\).

Note that this is the gradient of \(\distancename\) and not \(\datafidname\). This function directly calls deepinv.optim.Distance.grad() for each distance function \(\distancename_i\).

Parameters:
  • u (torch.Tensor) – Variable \(u\) at which the gradient is computed.

  • y (torch.Tensor) – Data \(y\) of the same dimension as \(u\).

Returns:

(torch.Tensor) gradient of \(d\) in \(u\), i.e. \(\nabla_u\distance{u}{y}\).
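
Continuing from the example above, grad_d works in the measurement domain: it takes the forward images \(u_i = A_i(x)\) and returns the gradient of each distance at the corresponding \(u_i\). This is only a sketch; it assumes the stacked physics exposes the usual A method, returning one entry per operator:

>>> u = physics.A(x)                # measurement-domain variables u_i = A_i(x)
>>> g = data_fidelity.grad_d(u, y)  # gradient of each d_i at u_i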

prox_d(u, y, *args, **kwargs)[source]#

Computes the proximity operator \(\operatorname{prox}_{\gamma\distance{\cdot}{y}}(u)\) of the distance function, evaluated at \(u\).

Note that this is the proximity operator of \(\distancename\) and not \(\datafidname\). This function directly calls deepinv.optim.Distance.prox() for each distance function \(\distancename_i\).

Parameters:
  • u (torch.Tensor) – Variable \(u\) at which the proximity operator is computed.

  • y (torch.Tensor) – Data \(y\) of the same dimension as \(u\).

Returns:

(torch.Tensor) proximity operator of \(d\) at \(u\), i.e. \(\operatorname{prox}_{\gamma\distance{\cdot}{y}}(u)\).
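
Continuing from the example above, prox_d is the natural building block for splitting algorithms that handle the distance term through its proximity operator. A minimal sketch, assuming the step size is passed as the gamma keyword argument as in deepinv.optim.Distance.prox():

>>> u = physics.A(x)                                # measurement-domain variables
>>> u_prox = data_fidelity.prox_d(u, y, gamma=0.1)  # prox of each d_i at u_i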

prox_d_conjugate(u, y, *args, **kwargs)[source]#

Computes the proximity operator of the convex conjugate of the distance function \(\distance{u}{y}\).

This function directly calls deepinv.optim.Distance.prox_conjugate() for each distance function \(\distancename_i\).
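
As background, the proximity operator of the convex conjugate can be recovered from the proximity operator of the distance itself through Moreau's decomposition: writing \(d\) for the distance and taking any \(\gamma > 0\),

\[\operatorname{prox}_{\gamma d^*(\cdot, y)}(u) = u - \gamma \operatorname{prox}_{\frac{1}{\gamma} d(\cdot, y)}\left(\frac{u}{\gamma}\right),\]

which holds whenever the distance is proper, convex and lower semicontinuous.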