.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/sampling/demo_diffusion_sde.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_sampling_demo_diffusion_sde.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_sampling_demo_diffusion_sde.py:

Building your diffusion posterior sampling method using SDEs
============================================================

This demo shows you how to use :class:`deepinv.sampling.PosteriorDiffusion` to perform posterior sampling.
It can also be used to perform unconditional image generation with arbitrary denoisers, if the data fidelity term is not specified.

This method requires:

* A denoiser trained across a range of noise levels (ideally including large ones), e.g., :class:`deepinv.models.NCSNpp`.

* A (noisy) data fidelity term, e.g., :class:`deepinv.sampling.DPSDataFidelity`.

* A drift term :math:`f(x, t)` and a diffusion term :math:`g(t)` for the forward-time SDE. They can be defined through the :class:`deepinv.sampling.DiffusionSDE` class (e.g., :class:`deepinv.sampling.VarianceExplodingDiffusion`).

The :class:`deepinv.sampling.PosteriorDiffusion` class can be used to perform posterior sampling for inverse problems.
Consider the acquisition model:

.. math::
    y = \noise{\forw{x}}

where :math:`\forw{x}` is the forward operator (e.g., a convolutional operator) and :math:`\noise{\cdot}` is the noise operator (e.g., Gaussian noise).
This class defines the reverse-time SDE for the posterior distribution :math:`p(x|y)` given the data :math:`y`:

.. math::
    d\, x_t = \left( f(x_t, t) - \frac{1 + \alpha}{2} g(t)^2 \nabla_{x_t} \log p_t(x_t | y) \right) d\,t + g(t) \sqrt{\alpha} d\, w_{t}

where :math:`f` is the drift term, :math:`g` is the diffusion coefficient and :math:`w` is the standard Brownian motion.
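Before turning to the library classes, it may help to see the reverse-time update written out by hand. The sketch below is illustrative only and is not part of the original demo: it assumes a toy scalar Gaussian prior :math:`p_0 = \mathcal{N}(0, 1)`, for which the MMSE denoiser has the closed form :math:`D(x, \sigma) = x / (1 + \sigma^2)`, so Tweedie's formula gives the exact score. It also assumes :math:`f = 0` (Variance-Exploding) and takes :math:`g(t)^2 = d\sigma(t)^2/dt` so that the forward marginal variance matches :math:`\sigma(t)^2`; the constant factors may differ from the library's internal convention.

```python
import math
import torch

torch.manual_seed(0)

# Toy setup: prior p_0 = N(0, 1), so the MMSE denoiser has the closed form
# D(x, sigma) = x / (1 + sigma**2), and Tweedie's formula gives the score:
# score(x, t) = (D(x, sigma_t) - x) / sigma_t**2.
sigma_min, sigma_max = 0.005, 5.0
alpha = 0.5          # interpolates between the ODE (alpha=0) and SDE (alpha=1)
num_steps = 300
n_samples = 10_000

log_ratio = math.log(sigma_max / sigma_min)

def sigma(t):
    # VE noise schedule sigma(t) = sigma_min * (sigma_max / sigma_min)**t
    return sigma_min * (sigma_max / sigma_min) ** t

def g2(t):
    # Squared diffusion coefficient, chosen as g(t)^2 = d sigma(t)^2 / dt
    # so that Var(x_t) = sigma(t)^2 under the forward VE SDE.
    return 2.0 * log_ratio * sigma(t) ** 2

def score(x, t):
    # Exact score of p_t = N(0, (1 + sigma(t)^2)) via Tweedie's formula.
    s = sigma(t)
    denoised = x / (1.0 + s**2)  # closed-form MMSE denoiser for this prior
    return (denoised - x) / s**2

# Reverse-time Euler-Maruyama: integrate from t = 1 down to t ~ 0.
timesteps = torch.linspace(1.0, 1e-3, num_steps)
x = math.sqrt(1.0 + sigma_max**2) * torch.randn(n_samples)  # x_1 ~ p_1
for i in range(num_steps - 1):
    t, t_next = timesteps[i].item(), timesteps[i + 1].item()
    dt = t - t_next                                  # positive step size
    drift = -(1.0 + alpha) / 2.0 * g2(t) * score(x, t)  # f = 0 for VE
    x = x - drift * dt + math.sqrt(alpha * g2(t) * dt) * torch.randn_like(x)

# The samples should approximately follow the prior N(0, 1).
print(f"mean = {x.mean():.3f}, std = {x.std():.3f}")
```

Because the score is exact here, the empirical mean and standard deviation of the final samples land close to the prior's 0 and 1, up to discretization and Monte-Carlo error; in the real demo the score comes from a learned denoiser instead, and the same update is handled by :class:`deepinv.sampling.EulerSolver`.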
The drift term and the diffusion coefficient are defined by the underlying (unconditional) forward-time SDE ``sde``.
In this example, we will use two well-known SDEs from the literature: the Variance-Exploding (VE) and the Variance-Preserving (VP, also known as DDPM) SDEs.

The (conditional) score function :math:`\nabla_{x_t} \log p_t(x_t | y)` can be decomposed using Bayes' rule:

.. math::
    \nabla_{x_t} \log p_t(x_t | y) = \nabla_{x_t} \log p_t(x_t) + \nabla_{x_t} \log p_t(y | x_t).

The first term is the score function of the unconditional SDE, which is typically approximated by an MMSE denoiser (``denoiser``) using the well-known Tweedie's formula, while the second term is approximated by the (noisy) data-fidelity term (``data_fidelity``).
We implement various data-fidelity terms in the user guide.

.. note::
    In this demo, we limit the number of diffusion steps for the sake of speed, but in practice you should use a larger number of steps to obtain better results.

.. GENERATED FROM PYTHON SOURCE LINES 47-55

---------------------------------------------------

Let us import the necessary modules, define the denoiser and the SDE.

In this first example, we use the Variance-Exploding SDE, whose forward process is defined as:

.. math::
    d\, x_t = g(t) d\, w_t \quad \mbox{where } g(t) = \sigma_{\mathrm{min}}\left( \frac{\sigma_{\mathrm{max}}}{\sigma_{\mathrm{min}}}\right)^t

.. GENERATED FROM PYTHON SOURCE LINES 55-64

.. code-block:: Python

    import torch
    import deepinv as dinv
    from deepinv.models import NCSNpp

    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float64
    figsize = 2.5
    gif_frequency = 10  # Increase this value to reduce the GIF saving time

.. GENERATED FROM PYTHON SOURCE LINES 65-96

.. code-block:: Python

    from deepinv.sampling import (
        PosteriorDiffusion,
        DPSDataFidelity,
        EulerSolver,
        VarianceExplodingDiffusion,
    )
    from deepinv.optim import ZeroFidelity

    # In this example, we use the pre-trained FFHQ-64 model from the
    # EDM framework: https://arxiv.org/pdf/2206.00364 .
    # The network architecture is from Song et al.: https://arxiv.org/abs/2011.13456 .
    denoiser = NCSNpp(pretrained="download").to(device)

    # The solution is obtained by calling the SDE object with a desired solver (here, Euler).
    # The reproducibility of the SDE solver can be controlled by providing a pseudo-random number generator.
    num_steps = 150
    rng = torch.Generator(device).manual_seed(42)
    timesteps = torch.linspace(1, 0.001, num_steps)
    solver = EulerSolver(timesteps=timesteps, rng=rng)

    sigma_min = 0.005
    sigma_max = 5
    sde = VarianceExplodingDiffusion(
        sigma_max=sigma_max,
        sigma_min=sigma_min,
        alpha=0.5,
        device=device,
        dtype=dtype,
    )

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Downloading: "https://huggingface.co/deepinv/edm/resolve/main/ncsnpp-ffhq64-uncond-ve.pt?download=true" to /home/runner/.cache/torch/hub/checkpoints/ncsnpp-ffhq64-uncond-ve.pt

.. container:: sphx-glr-download sphx-glr-download-python

    :download:`Download Python source code: demo_diffusion_sde.py <demo_diffusion_sde.py>`

.. container:: sphx-glr-download sphx-glr-download-zip

    :download:`Download zipped: demo_diffusion_sde.zip <demo_diffusion_sde.zip>`

.. only:: html

    .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_