# Diffusion and MCMC Algorithms
This package contains posterior sampling algorithms, which draw samples from the posterior distribution \(p(x|y,A)\) with negative log-density

\[- \log p(x|y,A) \propto d(Ax,y) + \reg{x},\]

where \(x\) is the image to be reconstructed, \(y\) are the measurements, \(d(Ax,y) \propto - \log p(y|x,A)\) is the negative log-likelihood, and \(\reg{x} \propto - \log p_{\sigma}(x)\) is the negative log-prior.
## Diffusion
We provide various state-of-the-art diffusion methods for sampling from the posterior distribution.
Diffusion methods produce a sample from the posterior `x` given a measurement `y` as `x = model(y, physics)`, where `model` is the diffusion algorithm and `physics` is the forward operator.
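As a minimal sketch of this interface (the choice of denoiser and forward operator is illustrative, and the exact constructor arguments may differ between deepinv versions):

```python
import torch
import deepinv as dinv

device = "cpu"

# Illustrative forward operator: inpainting with Gaussian measurement noise.
# (the keyword arguments below are assumptions; check the deepinv.physics docs)
physics = dinv.physics.Inpainting(
    tensor_size=(3, 64, 64),
    mask=0.5,
    noise_model=dinv.physics.GaussianNoise(sigma=0.05),
    device=device,
)

# Pretrained denoiser acting as the diffusion prior
denoiser = dinv.models.DRUNet(pretrained="download").to(device)

# Diffusion-based posterior sampler (DDRM chosen here as an example)
model = dinv.sampling.DDRM(denoiser=denoiser)

x = torch.rand(1, 3, 64, 64, device=device)  # stand-in ground-truth image
y = physics(x)                               # simulated measurement
x_sample = model(y, physics)                 # one sample from the posterior
```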
Diffusion methods obtain a single sample per call. If multiple samples are required, `deepinv.sampling.DiffusionSampler` can be used to convert a diffusion method into a sampler that obtains multiple samples to compute posterior statistics such as the mean or variance.
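Continuing the sketch above, a hedged example of this wrapper (the `max_iter` parameter name is an assumption):

```python
# Wrap the diffusion method to draw several posterior samples and
# aggregate them into posterior statistics (mean and variance).
sampler = dinv.sampling.DiffusionSampler(model, max_iter=10)
mean, var = sampler(y, physics)
```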
| Method | Description | Limitations |
|---|---|---|
| `deepinv.sampling.DDRM` | Diffusion Denoising Restoration Models | Only for linear operators with a closed-form singular value decomposition (`deepinv.physics.DecomposablePhysics`). |
| `deepinv.sampling.DiffPIR` | Diffusion PnP Image Restoration | Only for linear operators (`deepinv.physics.LinearPhysics`). |
| `deepinv.sampling.DPS` | Diffusion Posterior Sampling | Can be slow, requires backpropagation through the denoiser. |
## Markov Chain Monte Carlo
We also provide MCMC methods for sampling from the posterior distribution, based on the unadjusted Langevin algorithm. The negative log-likelihood can be chosen from this list, which includes Gaussian noise, Poisson noise, etc. The negative log-prior can be approximated using `deepinv.optim.ScorePrior` with a pretrained denoiser, which leverages Tweedie's formula, i.e.,

\[\nabla \log p_{\sigma}(x) \approx \frac{\denoiser{x}{\sigma} - x}{\sigma^2},\]

where \(p_{\sigma} = p*\mathcal{N}(0,I\sigma^2)\) is the prior convolved with a Gaussian kernel and \(\denoiser{\cdot}{\sigma}\) is a (trained or model-based) denoiser with noise level \(\sigma\), which is typically set to a low value.
Note

The approximation of the prior obtained via `deepinv.optim.ScorePrior` is also valid for maximum-a-posteriori (MAP) denoisers, but in that case \(p_{\sigma}(x)\) is not given by the convolution with a Gaussian kernel, but rather by the Moreau-Yosida envelope of \(p(x)\), i.e.,

\[p_{\sigma}(x) \propto e^{-\inf_z \left\{ -\log p(z) + \tfrac{1}{2\sigma^2}\|x-z\|^2 \right\}}.\]
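A hedged illustration of this score-based prior follows; the exact `ScorePrior` call signature and the `grad` method arguments are assumptions and may differ between deepinv versions.

```python
import torch
import deepinv as dinv

# Pretrained denoiser used to approximate the score of the smoothed prior
denoiser = dinv.models.DRUNet(pretrained="download")
prior = dinv.optim.ScorePrior(denoiser=denoiser)

x = torch.rand(1, 3, 64, 64)
sigma = 0.01  # low noise level, as suggested above

# Tweedie-based approximation of -grad log p_sigma(x),
# i.e. (x - D(x, sigma)) / sigma**2
score = prior.grad(x, sigma)
```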
All MCMC methods inherit from `deepinv.sampling.MonteCarlo`. The following samplers are available:
| Method | Description |
|---|---|
| `deepinv.sampling.ULA` | Unadjusted Langevin algorithm. |
| `deepinv.sampling.SKRock` | Runge-Kutta-Chebyshev stochastic approximation to accelerate the standard Unadjusted Langevin Algorithm. |
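For example, a minimal sketch of Langevin sampling with `deepinv.sampling.ULA`; the forward operator, step size, noise levels, and constructor arguments below are illustrative assumptions, not prescribed settings.

```python
import torch
import deepinv as dinv

device = "cpu"

# Forward operator and noisy measurement (illustrative; see deepinv.physics)
physics = dinv.physics.Denoising(dinv.physics.GaussianNoise(sigma=0.1))
x = torch.rand(1, 3, 64, 64, device=device)
y = physics(x)

# Negative log-likelihood (Gaussian noise) and score-based prior
data_fidelity = dinv.optim.L2()
prior = dinv.optim.ScorePrior(denoiser=dinv.models.DRUNet(pretrained="download").to(device))

# Unadjusted Langevin algorithm: the chain is used to estimate posterior statistics
sampler = dinv.sampling.ULA(
    prior=prior,
    data_fidelity=data_fidelity,
    step_size=1e-4,   # assumption: small enough for a stable Langevin chain
    sigma=0.01,       # denoiser noise level used by the score prior
    max_iter=1000,
    burnin_ratio=0.2, # discard the first 20% of the chain
)
mean, var = sampler(y, physics)  # posterior mean and variance estimates
```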