NoisyDataFidelity#

class deepinv.sampling.NoisyDataFidelity(d=None, *args, **kwargs)[source]#

Bases: DataFidelity

Preconditioned data fidelity term for noisy data \(- \log p(y|x + \sigma(t) \omega)\) with \(\omega\sim\mathcal{N}(0,\mathrm{I})\).

This is the base class for the conditional data-fidelity terms approximating \(\log p_t(y|x_t)\) used in diffusion algorithms for inverse problems, such as deepinv.sampling.PosteriorDiffusion.

It provides a .grad method computing the gradient \(\nabla_{x_t} \log p_t(y|x_t)\).

By default we have

\[\nabla_{x_t} \log p(y|x + \sigma(t) \omega) = P(A(x_t')-y),\]

where \(P\) is a preconditioner and \(x_t'\) is an estimate of the image \(x\). By default, \(P = A^\top\) and \(x_t' = x_t\), in which case this class matches the deepinv.optim.DataFidelity class.
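Under these defaults (\(P = A^\top\), \(x_t' = x_t\)), the gradient reduces to \(A^\top(A(x_t) - y)\). A minimal sketch with a hypothetical matrix operator (plain torch, not the deepinv API):

```python
import torch

# Hypothetical linear forward operator A, standing in for a physics object.
torch.manual_seed(0)
A = torch.randn(3, 5)

def forw(x):            # forward operator A(x)
    return A @ x

def precond(u):         # default preconditioner P = A^T
    return A.T @ u

def diff(x, y):         # residual A(x) - y
    return forw(x) - y

def grad(x, y):         # default gradient P(A(x_t') - y) with x_t' = x_t
    return precond(diff(x, y))

x_t = torch.randn(5)    # current iterate
y = torch.randn(3)      # measurement
g = grad(x_t, y)
print(g.shape)  # torch.Size([5])
```

This is exactly the gradient of \(\frac{1}{2}\|A(x)-y\|^2\), consistent with the statement that the defaults recover deepinv.optim.DataFidelity.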

diff(x, y, physics, *args, **kwargs)[source]#

Computes the difference \(A(x) - y\) between the forward operator applied to the current iterate and the input data.

Parameters:

x (torch.Tensor) – current iterate \(x_t\).

y (torch.Tensor) – measurement \(y\).

physics (deepinv.physics.Physics) – forward operator.

Returns:

(torch.Tensor) difference between the forward operator applied to the current iterate and the input data.

Return type:

Tensor

forward(x, y, physics, *args, **kwargs)[source]#

Computes the data-fidelity term.

Parameters:

x (torch.Tensor) – current iterate \(x_t\).

y (torch.Tensor) – measurement \(y\).

physics (deepinv.physics.Physics) – forward operator.

Returns:

(torch.Tensor) loss term.

Return type:

Tensor

grad(x, y, physics, *args, **kwargs)[source]#

Computes the gradient of the data-fidelity term.

Parameters:

x (torch.Tensor) – current iterate \(x_t\).

y (torch.Tensor) – measurement \(y\).

physics (deepinv.physics.Physics) – forward operator.

Returns:

(torch.Tensor) gradient of the data-fidelity term.

Return type:

Tensor

precond(u, physics, *args, **kwargs)[source]#

The preconditioner \(P\) for the data-fidelity term. Defaults to \(A^\top\).

Parameters:

u (torch.Tensor) – input tensor.

physics (deepinv.physics.Physics) – forward operator.

Returns:

(torch.Tensor) preconditioned tensor \(P(u)\).

Return type:

Tensor
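Subclasses customize the approximation of \(\log p_t(y|x_t)\) by overriding these methods. A hedged sketch of overriding precond, where ToyPhysics, ToyNoisyDataFidelity, and PinvPrecondFidelity are hypothetical stand-ins for illustration, not deepinv classes:

```python
import torch

class ToyPhysics:
    """Toy linear physics with an explicit matrix (stand-in for a
    deepinv.physics.Physics object)."""
    def __init__(self, M):
        self.M = M
    def A(self, x):
        return self.M @ x
    def A_adjoint(self, u):
        return self.M.T @ u

class ToyNoisyDataFidelity:
    """Minimal stand-in mimicking the interface documented above."""
    def precond(self, u, physics, *args, **kwargs):
        return physics.A_adjoint(u)   # default P = A^T
    def diff(self, x, y, physics, *args, **kwargs):
        return physics.A(x) - y       # residual A(x) - y
    def grad(self, x, y, physics, *args, **kwargs):
        return self.precond(self.diff(x, y, physics), physics)

class PinvPrecondFidelity(ToyNoisyDataFidelity):
    # Override the preconditioner: use the pseudo-inverse A^+ instead of A^T.
    def precond(self, u, physics, *args, **kwargs):
        return torch.linalg.pinv(physics.M) @ u

torch.manual_seed(0)
physics = ToyPhysics(torch.randn(3, 5))
x, y = torch.randn(5), torch.randn(3)
g = PinvPrecondFidelity().grad(x, y, physics)
```

Only precond changes; diff and grad are inherited, so the gradient becomes \(A^+(A(x_t)-y)\) instead of \(A^\top(A(x_t)-y)\).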

Examples using NoisyDataFidelity:#

Building your diffusion posterior sampling method using SDEs