Diffusion and MCMC Algorithms#

This package contains posterior sampling algorithms, based on diffusion models and Markov chain Monte Carlo (MCMC) methods. These algorithms draw samples from the posterior distribution \(p(x|y,A)\), whose negative log-density satisfies

\[- \log p(x|y,A) \propto d(Ax,y) + \reg{x},\]

where \(x\) is the image to be reconstructed, \(y\) are the measurements, \(d(Ax,y) \propto - \log p(y|x,A)\) is the negative log-likelihood and \(\reg{x} \propto - \log p_{\sigma}(x)\) is the negative log-prior.
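
For instance, for Gaussian measurement noise with standard deviation \(\sigma_n\), the negative log-likelihood reduces (up to an additive constant) to the quadratic data-fidelity term

\[d(Ax,y) = \frac{1}{2\sigma_n^2}\|Ax-y\|^2.\]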

Diffusion#

We provide various state-of-the-art diffusion methods for sampling from the posterior distribution. Diffusion methods produce a posterior sample \(x\) given a measurement \(y\) via x = model(y, physics), where model is the diffusion algorithm and physics is the forward operator.
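
A minimal sketch of this interface follows; the forward operator, pretrained denoiser, and constructor arguments are illustrative choices, so check each class's API reference for the exact signatures:

```python
import torch
import deepinv as dinv

# forward operator: inpainting with a random mask (illustrative; DDRM needs
# an operator with an available SVD, which inpainting has)
physics = dinv.physics.Inpainting(tensor_size=(3, 64, 64), mask=0.5)

# pretrained denoiser used as the diffusion backbone
denoiser = dinv.models.DRUNet(pretrained="download")

# diffusion-based posterior sampler
model = dinv.sampling.DDRM(denoiser=denoiser)

x = torch.rand(1, 3, 64, 64)  # ground-truth image
y = physics(x)                # simulated measurement
x_hat = model(y, physics)     # one posterior sample
```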

Diffusion methods obtain a single sample per call. If multiple samples are required, the deepinv.sampling.DiffusionSampler can be used to convert a diffusion method into a sampler that obtains multiple samples to compute posterior statistics such as the mean or variance.
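
Continuing the sketch above, wrapping the diffusion method into a sampler might look as follows (max_iter is the assumed name for the number of samples drawn; see the API reference):

```python
# convert the one-sample diffusion method into a Monte Carlo sampler
sampler = dinv.sampling.DiffusionSampler(model, max_iter=10)

# runs the diffusion method repeatedly and aggregates the samples
mean, var = sampler(y, physics)  # posterior mean and variance estimates
```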

Table 11 Diffusion methods#

| Method | Description | Limitations |
| --- | --- | --- |
| deepinv.sampling.DDRM | Denoising Diffusion Restoration Models | Only for SVD-decomposable operators. |
| deepinv.sampling.DiffPIR | Diffusion PnP Image Restoration | Only for linear operators. |
| deepinv.sampling.DPS | Diffusion Posterior Sampling | Can be slow; requires backpropagation through the denoiser. |

Markov Chain Monte Carlo#

The negative log-likelihood \(d(Ax,y)\) can be chosen among the available noise distributions (Gaussian noise, Poisson noise, etc.). The negative log-prior can be approximated using deepinv.optim.ScorePrior with a pretrained denoiser, which leverages Tweedie's formula, i.e.,

\[- \nabla \log p_{\sigma}(x) \propto \left(x-\denoiser{x}{\sigma}\right)/\sigma^2\]

where \(p_{\sigma} = p*\mathcal{N}(0,I\sigma^2)\) is the prior convolved with a Gaussian kernel of standard deviation \(\sigma\), and \(\denoiser{\cdot}{\sigma}\) is a (trained or model-based) denoiser with noise level \(\sigma\), typically set to a small value.
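
As a small illustration, the smoothed score can be evaluated through deepinv.optim.ScorePrior; the grad call signature and denoiser choice below are assumptions to verify against the API reference:

```python
import torch
import deepinv as dinv

# pretrained denoiser plugged into the score approximation
denoiser = dinv.models.DRUNet(pretrained="download")
prior = dinv.optim.ScorePrior(denoiser=denoiser)

x = torch.rand(1, 3, 64, 64)
sigma = 0.05  # small noise level used to smooth the prior

# Tweedie-based score: approximately (x - D_sigma(x)) / sigma^2
score = prior.grad(x, sigma)
```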

Note

The approximation of the prior obtained via deepinv.optim.ScorePrior is also valid for maximum-a-posteriori (MAP) denoisers; in that case, \(p_{\sigma}(x)\) is not given by the convolution with a Gaussian kernel, but rather by the Moreau-Yosida envelope of \(p(x)\), i.e.,

\[p_{\sigma}(x)=e^{- \inf_z \left(-\log p(z) + \frac{1}{2\sigma}\|x-z\|^2 \right)}.\]

We provide MCMC methods for sampling from the posterior distribution based on the unadjusted Langevin algorithm. All MCMC methods inherit from deepinv.sampling.MonteCarlo; a usage sketch of ULA is given after the table below.

Table 12 MCMC methods#

| Method | Description |
| --- | --- |
| deepinv.sampling.ULA | Unadjusted Langevin algorithm. |
| deepinv.sampling.SKRock | Runge-Kutta-Chebyshev stochastic approximation to accelerate the standard unadjusted Langevin algorithm. |
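
A minimal end-to-end sketch of ULA, assuming the constructor arguments shown (the step size, iteration budget, and L2 data-fidelity normalization are illustrative; consult the API reference):

```python
import torch
import deepinv as dinv

sigma = 0.1  # measurement noise level (illustrative)

# denoising forward operator with additive Gaussian noise
physics = dinv.physics.Denoising(dinv.physics.GaussianNoise(sigma=sigma))

# negative log-likelihood and score-based negative log-prior
likelihood = dinv.optim.L2(sigma=sigma)
prior = dinv.optim.ScorePrior(denoiser=dinv.models.DRUNet(pretrained="download"))

# unadjusted Langevin algorithm with a burn-in phase
sampler = dinv.sampling.ULA(prior=prior, data_fidelity=likelihood,
                            step_size=1e-2, max_iter=1000, burnin_ratio=0.2)

x = torch.rand(1, 3, 64, 64)
y = physics(x)
mean, var = sampler(y, physics)  # posterior mean and variance estimates
```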