.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/sampling/demo_sampling.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        New to DeepInverse? Get started with the basics with the :ref:`5 minute quickstart tutorial`.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_sampling_demo_sampling.py:

Uncertainty quantification with PnP-ULA.
====================================================================================================

This example shows how to use sampling algorithms to quantify the uncertainty of a reconstruction
from incomplete and noisy measurements.

ULA obtains samples by running the following iteration:

.. math::

    x_{k+1} = x_k + \alpha \eta \nabla \log p_{\sigma}(x_k) + \eta \nabla \log p(y|x_k) + \sqrt{2 \eta} z_k

where :math:`z_k \sim \mathcal{N}(0, I)` is a Gaussian random variable, :math:`\eta` is the step size
and :math:`\alpha` is a parameter controlling the amount of regularization.

The PnP-ULA method is described in the paper :footcite:t:`laumont2022bayesian`.

.. GENERATED FROM PYTHON SOURCE LINES 21-26

.. code-block:: Python

    import deepinv as dinv
    from deepinv.utils.plotting import plot
    import torch
    from deepinv.utils import load_example

.. GENERATED FROM PYTHON SOURCE LINES 27-31

Load image from the internet
--------------------------------------------

This example uses an image of Messi.

.. GENERATED FROM PYTHON SOURCE LINES 31-36

.. code-block:: Python

    device = dinv.utils.get_freer_gpu() if torch.cuda.is_available() else "cpu"

    x = load_example("messi.jpg", img_size=32).to(device)

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Selected GPU 0 with 4635.25 MiB free memory

.. GENERATED FROM PYTHON SOURCE LINES 37-41

Define forward operator and noise model
--------------------------------------------------------------

This example uses inpainting as the forward operator and Gaussian noise as the noise model.

.. GENERATED FROM PYTHON SOURCE LINES 41-49

.. code-block:: Python

    sigma = 0.1  # noise level

    physics = dinv.physics.Inpainting(mask=0.5, img_size=x.shape[1:], device=device)
    physics.noise_model = dinv.physics.GaussianNoise(sigma=sigma)

    # Set the global random seed from pytorch to ensure reproducibility of the example.
    torch.manual_seed(0)

.. GENERATED FROM PYTHON SOURCE LINES 50-57

Define the likelihood
--------------------------------------------------------------

Since the noise model is Gaussian, the negative log-likelihood is the L2 loss.

.. math::

    -\log p(y|x) \propto \frac{1}{2\sigma^2} \|y-Ax\|^2

.. GENERATED FROM PYTHON SOURCE LINES 57-61

.. code-block:: Python

    # load Gaussian likelihood
    likelihood = dinv.optim.data_fidelity.L2(sigma=sigma)

.. GENERATED FROM PYTHON SOURCE LINES 62-80

Define the prior
-------------------------------------------

The score of a distribution can be approximated using Tweedie's formula via the
:class:`deepinv.optim.ScorePrior` class:

.. math::

    \nabla \log p_{\sigma}(x) \approx \frac{1}{\sigma^2} \left(D(x,\sigma)-x\right)

This example uses a pretrained DnCNN model as the denoiser :math:`D`.
From a Bayesian point of view, the score plays the role of (minus) the gradient of the negative log prior.
The hyperparameter ``sigma_denoiser`` (:math:`\sigma`) controls the strength of the prior.
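To make the two formulas above concrete, here is a minimal sketch (not the deepinv implementation used
below) of how a denoiser yields a score estimate via Tweedie's formula, and how that estimate enters a
single ULA update. It assumes a generic denoiser callable ``denoiser(x, sigma)`` and a linear operator
exposing ``A``/``A_adjoint`` methods; the helper names ``tweedie_score`` and ``ula_step`` are purely
illustrative.

.. code-block:: Python

    import torch


    def tweedie_score(x, denoiser, sigma):
        # Tweedie's formula: grad log p_sigma(x) ~ (D(x, sigma) - x) / sigma**2
        return (denoiser(x, sigma) - x) / sigma**2


    def ula_step(x, y, physics, denoiser, noise_sigma, sigma_denoiser, alpha, eta):
        # One ULA update:
        # x_{k+1} = x_k + alpha * eta * grad log p_sigma(x_k)
        #                + eta * grad log p(y | x_k) + sqrt(2 * eta) * z_k
        grad_prior = tweedie_score(x, denoiser, sigma_denoiser)
        # Gaussian likelihood: grad log p(y | x) = A^T (y - A x) / noise_sigma**2
        grad_lik = physics.A_adjoint(y - physics.A(x)) / noise_sigma**2
        z = torch.randn_like(x)
        return x + alpha * eta * grad_prior + eta * grad_lik + (2 * eta) ** 0.5 * z

Because the noise model is Gaussian, the likelihood gradient is available in closed form, so the sampler
only needs the denoiser (through the score) and the forward operator. In the rest of the example this
bookkeeping is handled by :class:`deepinv.optim.ScorePrior` and :func:`deepinv.sampling.sampling_builder`.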
In this example, we use a DnCNN model pretrained with the :class:`deepinv.loss.FNEJacobianSpectralNorm`
loss, which ensures that the denoiser is firmly nonexpansive (see :footcite:t:`terris2020building`) and
helps stabilize the sampling algorithm.

.. GENERATED FROM PYTHON SOURCE LINES 80-86

.. code-block:: Python

    sigma_denoiser = 2 / 255
    prior = dinv.optim.ScorePrior(
        denoiser=dinv.models.DnCNN(pretrained="download_lipschitz")
    ).to(device)

.. GENERATED FROM PYTHON SOURCE LINES 87-95

Create the MCMC sampler
--------------------------------------------------------------

Here we use the unadjusted Langevin algorithm (ULA) to sample from the posterior; the corresponding
iteration is implemented in :class:`deepinv.sampling.ULAIterator`.
The hyperparameter ``step_size`` sets the step size :math:`\eta` of the MCMC sampler,
``regularization`` sets the regularization strength :math:`\alpha`, and ``iterations`` sets the number
of iterations of the sampler.

.. GENERATED FROM PYTHON SOURCE LINES 95-114

.. code-block:: Python

    regularization = 0.9
    step_size = 0.01 * (sigma**2)
    iterations = int(5e3) if torch.cuda.is_available() else 10
    params = {
        "step_size": step_size,
        "alpha": regularization,
        "sigma": sigma_denoiser,
    }
    f = dinv.sampling.sampling_builder(
        "ULA",
        prior=prior,
        data_fidelity=likelihood,
        max_iter=iterations,
        params_algo=params,
        thinning=1,
        verbose=True,
    )

.. GENERATED FROM PYTHON SOURCE LINES 115-118

Generate the measurement
--------------------------------------------------------------

We apply the forward model to generate the noisy measurement.

.. GENERATED FROM PYTHON SOURCE LINES 118-122

.. code-block:: Python

    y = physics(x)

.. GENERATED FROM PYTHON SOURCE LINES 123-127

Run sampling algorithm and plot results
--------------------------------------------------------------

The sampling algorithm returns the posterior mean and variance.
We compare the posterior mean with a simple linear reconstruction.

.. GENERATED FROM PYTHON SOURCE LINES 127-146

.. code-block:: Python

    mean, var = f.sample(y, physics)

    # compute linear inverse
    x_lin = physics.A_adjoint(y)

    # compute PSNR
    print(f"Linear reconstruction PSNR: {dinv.metric.PSNR()(x, x_lin).item():.2f} dB")
    print(f"Posterior mean PSNR: {dinv.metric.PSNR()(x, mean).item():.2f} dB")

    # plot results
    error = (mean - x).abs().sum(dim=1).unsqueeze(1)  # per-pixel abs. error (summed over channels)
    std = var.sum(dim=1).unsqueeze(1).sqrt()  # per-pixel standard dev. (channel variances summed)
    imgs = [x_lin, x, mean, std / std.flatten().max(), error / error.flatten().max()]
    plot(
        imgs,
        titles=["measurement", "ground truth", "post. mean", "post. std", "abs. error"],
    )

.. image-sg:: /auto_examples/sampling/images/sphx_glr_demo_sampling_001.png
   :alt: measurement, ground truth, post. mean, post. std, abs. error
   :srcset: /auto_examples/sampling/images/sphx_glr_demo_sampling_001.png
   :class: sphx-glr-single-img

.. container:: sphx-glr-download sphx-glr-download-python

    :download:`Download Python source code: demo_sampling.py <demo_sampling.py>`

.. container:: sphx-glr-download sphx-glr-download-zip

    :download:`Download zipped: demo_sampling.zip <demo_sampling.zip>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_