Implementing DPS

In this tutorial, we will go over the steps in the Diffusion Posterior Sampling (DPS) algorithm introduced in Chung et al. (Diffusion Posterior Sampling for General Noisy Inverse Problems, ICLR 2023). The full algorithm is implemented in deepinv.sampling.DPS().

Note

We work with an image of size 64x64 to reduce the computational time of this example. The algorithm works best with images of size 256x256.

import numpy as np
import torch

import deepinv as dinv
from deepinv.utils.plotting import plot
from deepinv.optim.data_fidelity import L2
from deepinv.utils.demo import load_url_image, get_image_url
from tqdm import tqdm  # to visualize progress

device = dinv.utils.get_freer_gpu() if torch.cuda.is_available() else "cpu"

url = get_image_url("butterfly.png")

x_true = load_url_image(url=url, img_size=64).to(device)
x = x_true.clone()

In this tutorial we consider random inpainting as the inverse problem, where the forward operator is implemented in deepinv.physics.Inpainting(). In the example that we use, 90% of the pixels will be masked out randomly, and we will additionally have Additive White Gaussian Noise (AWGN) of standard deviation 12.75/255.

sigma = 12.75 / 255.0  # noise level

physics = dinv.physics.Inpainting(
    tensor_size=(3, x.shape[-2], x.shape[-1]),
    mask=0.1,  # i.e. 90% of the pixels are masked out
    pixelwise=True,
    noise_model=dinv.physics.GaussianNoise(sigma=sigma),  # AWGN described above
    device=device,
)

y = physics(x_true)

imgs = [y, x_true]
plot(
    imgs,
    titles=["measurement", "groundtruth"],
)

Diffusion model loading

We will take a pre-trained diffusion model that was also used for the DiffPIR algorithm, namely the one trained on the FFHQ 256x256 dataset. Note that this means the diffusion model was trained on human face images, which are very different from the image we consider in our example. Nevertheless, we will see later on that DPS generalizes sufficiently well even in this case.

model = dinv.models.DiffUNet(large_model=False).to(device)
Downloading: "https://huggingface.co/deepinv/diffunet/resolve/main/diffusion_ffhq_10m.pt?download=true" to /home/runner/.cache/torch/hub/checkpoints/diffusion_ffhq_10m.pt

  0%|          | 0.00/357M [00:00<?, ?B/s]
  0%|          | 128k/357M [00:00<08:30, 733kB/s]
  0%|          | 384k/357M [00:00<04:22, 1.42MB/s]
  0%|          | 1.00M/357M [00:00<01:58, 3.15MB/s]
  1%|          | 2.50M/357M [00:00<00:51, 7.22MB/s]
  2%|▏         | 6.00M/357M [00:00<00:22, 16.6MB/s]
  3%|▎         | 9.62M/357M [00:00<00:15, 23.1MB/s]
  4%|▎         | 13.4M/357M [00:00<00:12, 27.8MB/s]
  5%|▍         | 17.4M/357M [00:00<00:11, 32.0MB/s]
  6%|▌         | 21.1M/357M [00:01<00:10, 34.0MB/s]
  7%|▋         | 24.9M/357M [00:01<00:09, 35.4MB/s]
  8%|▊         | 28.8M/357M [00:01<00:09, 36.8MB/s]
  9%|▉         | 32.8M/357M [00:01<00:08, 38.1MB/s]
 10%|█         | 36.5M/357M [00:01<00:08, 38.1MB/s]
 11%|█▏        | 40.2M/357M [00:01<00:08, 38.4MB/s]
 12%|█▏        | 44.1M/357M [00:01<00:08, 39.0MB/s]
 13%|█▎        | 47.9M/357M [00:01<00:08, 38.6MB/s]
 14%|█▍        | 51.8M/357M [00:01<00:08, 39.1MB/s]
 16%|█▌        | 55.6M/357M [00:01<00:08, 39.4MB/s]
 17%|█▋        | 59.5M/357M [00:02<00:07, 39.8MB/s]
 18%|█▊        | 63.4M/357M [00:02<00:07, 39.2MB/s]
 19%|█▉        | 67.1M/357M [00:02<00:07, 39.2MB/s]
 20%|█▉        | 71.1M/357M [00:02<00:07, 39.7MB/s]
 21%|██        | 75.0M/357M [00:02<00:07, 39.1MB/s]
 22%|██▏       | 78.9M/357M [00:02<00:07, 39.3MB/s]
 23%|██▎       | 82.9M/357M [00:02<00:07, 39.8MB/s]
 24%|██▍       | 86.8M/357M [00:02<00:07, 39.4MB/s]
 25%|██▌       | 90.6M/357M [00:02<00:07, 39.1MB/s]
 27%|██▋       | 94.6M/357M [00:02<00:06, 39.8MB/s]
 28%|██▊       | 98.5M/357M [00:03<00:06, 39.3MB/s]
 29%|██▊       | 102M/357M [00:03<00:06, 38.9MB/s]
 30%|██▉       | 106M/357M [00:03<00:06, 39.3MB/s]
 31%|███       | 110M/357M [00:03<00:06, 39.7MB/s]
 32%|███▏      | 114M/357M [00:03<00:06, 39.7MB/s]
 33%|███▎      | 118M/357M [00:03<00:06, 39.1MB/s]
 34%|███▍      | 122M/357M [00:03<00:06, 39.5MB/s]
 35%|███▌      | 126M/357M [00:03<00:06, 39.8MB/s]
 36%|███▌      | 129M/357M [00:03<00:06, 39.1MB/s]
 37%|███▋      | 133M/357M [00:04<00:05, 39.4MB/s]
 38%|███▊      | 137M/357M [00:04<00:05, 39.7MB/s]
 39%|███▉      | 141M/357M [00:04<00:05, 39.1MB/s]
 41%|████      | 145M/357M [00:04<00:05, 39.5MB/s]
 42%|████▏     | 149M/357M [00:04<00:05, 39.8MB/s]
 43%|████▎     | 153M/357M [00:04<00:05, 39.9MB/s]
 44%|████▍     | 156M/357M [00:04<00:05, 39.1MB/s]
 45%|████▍     | 160M/357M [00:04<00:05, 39.4MB/s]
 46%|████▌     | 164M/357M [00:04<00:05, 39.8MB/s]
 47%|████▋     | 168M/357M [00:04<00:05, 39.1MB/s]
 48%|████▊     | 172M/357M [00:05<00:04, 39.6MB/s]
 49%|████▉     | 176M/357M [00:05<00:04, 39.7MB/s]
 50%|█████     | 180M/357M [00:05<00:04, 39.0MB/s]
 51%|█████▏    | 184M/357M [00:05<00:04, 39.4MB/s]
 53%|█████▎    | 188M/357M [00:05<00:04, 39.6MB/s]
 54%|█████▎    | 191M/357M [00:05<00:04, 39.6MB/s]
 55%|█████▍    | 195M/357M [00:05<00:04, 39.2MB/s]
 56%|█████▌    | 199M/357M [00:05<00:04, 39.6MB/s]
 57%|█████▋    | 203M/357M [00:05<00:04, 39.9MB/s]
 58%|█████▊    | 207M/357M [00:05<00:04, 39.1MB/s]
 59%|█████▉    | 211M/357M [00:06<00:03, 39.3MB/s]
 60%|██████    | 215M/357M [00:06<00:03, 39.9MB/s]
 61%|██████    | 219M/357M [00:06<00:03, 39.1MB/s]
 62%|██████▏   | 222M/357M [00:06<00:03, 39.4MB/s]
 63%|██████▎   | 226M/357M [00:06<00:03, 39.5MB/s]
 64%|██████▍   | 230M/357M [00:06<00:03, 38.7MB/s]
 66%|██████▌   | 234M/357M [00:06<00:03, 39.0MB/s]
 67%|██████▋   | 238M/357M [00:06<00:03, 39.7MB/s]
 68%|██████▊   | 242M/357M [00:06<00:03, 39.6MB/s]
 69%|██████▉   | 246M/357M [00:07<00:03, 38.7MB/s]
 70%|██████▉   | 250M/357M [00:07<00:02, 39.5MB/s]
 71%|███████   | 254M/357M [00:07<00:02, 39.9MB/s]
 72%|███████▏  | 258M/357M [00:07<00:02, 38.8MB/s]
 73%|███████▎  | 262M/357M [00:07<00:02, 39.3MB/s]
 74%|███████▍  | 266M/357M [00:07<00:02, 39.9MB/s]
 75%|███████▌  | 270M/357M [00:07<00:02, 38.5MB/s]
 77%|███████▋  | 273M/357M [00:07<00:02, 39.0MB/s]
 78%|███████▊  | 277M/357M [00:07<00:02, 39.8MB/s]
 79%|███████▉  | 281M/357M [00:07<00:02, 39.1MB/s]
 80%|███████▉  | 285M/357M [00:08<00:01, 39.1MB/s]
 81%|████████  | 289M/357M [00:08<00:01, 39.6MB/s]
 82%|████████▏ | 293M/357M [00:08<00:01, 39.5MB/s]
 83%|████████▎ | 297M/357M [00:08<00:01, 39.1MB/s]
 84%|████████▍ | 301M/357M [00:08<00:01, 39.4MB/s]
 85%|████████▌ | 305M/357M [00:08<00:01, 39.9MB/s]
 86%|████████▋ | 308M/357M [00:08<00:01, 39.1MB/s]
 88%|████████▊ | 312M/357M [00:08<00:01, 39.5MB/s]
 89%|████████▊ | 316M/357M [00:08<00:01, 39.9MB/s]
 90%|████████▉ | 320M/357M [00:08<00:00, 39.0MB/s]
 91%|█████████ | 324M/357M [00:09<00:00, 39.6MB/s]
 92%|█████████▏| 328M/357M [00:09<00:00, 39.8MB/s]
 93%|█████████▎| 332M/357M [00:09<00:00, 39.0MB/s]
 94%|█████████▍| 336M/357M [00:09<00:00, 39.5MB/s]
 95%|█████████▌| 340M/357M [00:09<00:00, 39.5MB/s]
 96%|█████████▋| 344M/357M [00:09<00:00, 39.3MB/s]
 97%|█████████▋| 348M/357M [00:09<00:00, 39.5MB/s]
 98%|█████████▊| 352M/357M [00:09<00:00, 39.4MB/s]
100%|█████████▉| 356M/357M [00:09<00:00, 39.8MB/s]
100%|██████████| 357M/357M [00:09<00:00, 37.6MB/s]

Define diffusion schedule

We will use the standard linear diffusion noise schedule. Once \(\beta_t\) is defined to follow a linear schedule that interpolates between \(\beta_{\rm min}\) and \(\beta_{\rm max}\), we set \(\alpha_t := 1 - \beta_t\) and \(\bar\alpha_t := \prod_{j=1}^t \alpha_j\). The following identities will also be useful later on (we assume \(\mathbf{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})\) throughout):

\[\begin{split}\begin{aligned}\mathbf{x}_t &= \sqrt{1 - \beta_t}\,\mathbf{x}_{t-1} + \sqrt{\beta_t}\,\mathbf{\epsilon}\\\mathbf{x}_t &= \sqrt{\bar\alpha_t}\,\mathbf{x}_0 + \sqrt{1 - \bar\alpha_t}\,\mathbf{\epsilon}\end{aligned}\end{split}\]

where the second identity follows from composing the per-step updates via the reparametrization trick.

num_train_timesteps = 1000  # Number of timesteps used during training


def get_betas(
    beta_start=0.1 / 1000, beta_end=20 / 1000, num_train_timesteps=num_train_timesteps
):
    betas = np.linspace(beta_start, beta_end, num_train_timesteps, dtype=np.float32)
    betas = torch.from_numpy(betas).to(device)

    return betas


# Utility function to let us easily retrieve \bar\alpha_t
def compute_alpha(beta, t):
    beta = torch.cat([torch.zeros(1).to(beta.device), beta], dim=0)
    a = (1 - beta).cumprod(dim=0).index_select(0, t + 1).view(-1, 1, 1, 1)
    return a


betas = get_betas()
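
As a quick sanity check (a verification sketch, not part of the algorithm), we can confirm that compute_alpha indeed returns \(\bar\alpha_t = \prod_{j \le t} (1 - \beta_j)\):

# Sanity check: compute_alpha should match a direct cumulative product.
t_check = torch.tensor([10], device=device)
abar_direct = torch.prod(1 - betas[: t_check.item() + 1])
abar_util = compute_alpha(betas, t_check).flatten()[0]
assert torch.allclose(abar_direct, abar_util)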

The DPS algorithm

Now that the inverse problem is defined, we can apply the DPS algorithm to solve it. The DPS algorithm is a diffusion algorithm that alternates between a denoising step, a gradient step and a reverse diffusion sampling step. The algorithm writes as follows, for \(t\) decreasing from \(T\) to \(1\):

\[\begin{split}\begin{aligned} \widehat{\mathbf{x}}_{t} &= \denoiser{\mathbf{x}_t}{\sqrt{1-\overline{\alpha}_t}/\sqrt{\overline{\alpha}_t}} \\ \mathbf{g}_t &= \nabla_{\mathbf{x}_t} \log p( \widehat{\mathbf{x}}_{t}(\mathbf{x}_t) | \mathbf{y} ) \\ \mathbf{\varepsilon}_t &\sim \mathcal{N}(\mathbf{0}, \mathbf{I}) \\ \mathbf{x}_{t-1} &= a_t \, \mathbf{x}_t + b_t \, \widehat{\mathbf{x}}_t + \tilde{\sigma}_t \, \mathbf{\varepsilon}_t + \mathbf{g}_t, \end{aligned}\end{split}\]

where \(\denoiser{\cdot}{\sigma}\) is a denoising network for noise level \(\sigma\), \(\eta\) is a hyperparameter, and the constants \(\tilde{\sigma}_t, a_t, b_t\) are defined as

\[\begin{split}\begin{aligned} \tilde{\sigma}_t &= \eta \sqrt{ \left(1 - \frac{\overline{\alpha}_t}{\overline{\alpha}_{t-1}}\right) \frac{1 - \overline{\alpha}_{t-1}}{1 - \overline{\alpha}_t}} \\ a_t &= \sqrt{1 - \overline{\alpha}_{t-1} - \tilde{\sigma}_t^2}/\sqrt{1-\overline{\alpha}_t} \\ b_t &= \sqrt{\overline{\alpha}_{t-1}} - \sqrt{1 - \overline{\alpha}_{t-1} - \tilde{\sigma}_t^2} \frac{\sqrt{\overline{\alpha}_{t}}}{\sqrt{1 - \overline{\alpha}_{t}}} \end{aligned}\end{split}\]
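
To make these constants concrete, here is a small sketch evaluating \(\tilde\sigma_t\), \(a_t\) and \(b_t\) for one timestep with the schedule defined above; setting \(\eta = 0\) makes \(\tilde\sigma_t = 0\) and the update deterministic (DDIM):

# Evaluate the DDIM constants for one (t, t-1) pair; a verification sketch only.
eta = 1.0
abar_t = compute_alpha(betas, torch.tensor([100], device=device))
abar_prev = compute_alpha(betas, torch.tensor([99], device=device))
sigma_tilde = eta * ((1 - abar_t / abar_prev) * (1 - abar_prev) / (1 - abar_t)).sqrt()
a_t = (1 - abar_prev - sigma_tilde**2).sqrt() / (1 - abar_t).sqrt()
b_t = abar_prev.sqrt() - (1 - abar_prev - sigma_tilde**2).sqrt() * abar_t.sqrt() / (1 - abar_t).sqrt()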

Denoising step

The first step of DPS consists of applying a denoiser function to the current image \(\mathbf{x}_t\), with standard deviation \(\sigma_t = \sqrt{1 - \overline{\alpha}_t}/\sqrt{\overline{\alpha}_t}\).

This is equivalent to sampling \(\mathbf{x}_t \sim q(\mathbf{x}_t|\mathbf{x}_0)\), and then computing the posterior mean.

t = torch.ones(1, device=device) * 50  # choose some arbitrary timestep
at = compute_alpha(betas, t.long())
sigmat = (1 - at).sqrt() / at.sqrt()

x0 = x_true
xt = x0 + sigmat * torch.randn_like(x0)

# apply denoiser
x0_t = model(xt, sigmat)

# Visualize
imgs = [x0, xt, x0_t]
plot(
    imgs,
    titles=["ground-truth", "noisy", "posterior mean"],
)

DPS approximation

In order to perform gradient-based posterior sampling with diffusion models, we have to be able to compute \(\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t|\mathbf{y})\). Applying Bayes rule, we have

\[\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t|\mathbf{y}) = \nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t) + \nabla_{\mathbf{x}_t} \log p(\mathbf{y}|\mathbf{x}_t)\]

For the former term, we can simply plug in our estimated score function via Tweedie's formula. As the latter term is intractable, DPS proposes the following approximation (for details, see Theorem 1 of Chung et al.):

\[\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t|\mathbf{y}) \approx \nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t) + \nabla_{\mathbf{x}_t} \log p(\mathbf{y}|\widehat{\mathbf{x}}_{t})\]

Remarkably, the latter term can be computed in closed form when the noise is Gaussian, as

\[\log p(\mathbf{y}|\hat{\mathbf{x}}_{t}) = -\frac{\|\mathbf{y} - A\widehat{\mathbf{x}}_{t}\|_2^2}{2\sigma_y^2}.\]

Moreover, the gradient w.r.t. \(\mathbf{x}_t\) can be computed through automatic differentiation. Let's see how this can be done in PyTorch. Note that before taking the gradient w.r.t. a tensor, we first have to enable gradient computation with tensor.requires_grad_().
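
As a minimal standalone example of this pattern (unrelated to the diffusion model itself):

# Minimal autograd sketch: gradient of a scalar function w.r.t. a tensor.
z = torch.randn(3, device=device).requires_grad_()
loss = (z**2).sum()
grad_z = torch.autograd.grad(outputs=loss, inputs=z)[0]
assert torch.allclose(grad_z, 2 * z)  # d/dz sum(z^2) = 2z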

Note

The diffusion model assumes that images lie in the range [-1, 1], whereas standard denoisers usually work with images in the range [0, 1]. This is why we rescale the images before applying each step.

x0 = x_true * 2.0 - 1.0  # [0, 1] -> [-1, 1]

data_fidelity = L2()

# xt ~ q(xt|x0)
i = 200  # choose some arbitrary timestep
t = (torch.ones(1) * i).to(device)
at = compute_alpha(betas, t.long())
xt = at.sqrt() * x0 + (1 - at).sqrt() * torch.randn_like(x0)

# DPS
with torch.enable_grad():
    # Turn on gradient
    xt.requires_grad_()

    # normalize to [0,1], denoise, and rescale to [-1, 1]
    x0_t = model(xt / 2 + 0.5, (1 - at).sqrt() / at.sqrt() / 2) * 2 - 1
    # Norm of the measurement residual (DPS uses this in place of the exact
    # log-likelihood; see the step-size discussion below)
    ll = data_fidelity(x0_t, y, physics).sqrt().sum()
    # Take gradient w.r.t. xt
    grad_ll = torch.autograd.grad(outputs=ll, inputs=xt)[0]

# Visualize
imgs = [x0, xt, x0_t, grad_ll]
plot(
    imgs,
    titles=["groundtruth", "noisy", "posterior mean", "gradient"],
)

DPS Algorithm

Having visited all the key components of DPS, we are now ready to define the full algorithm. At every denoising timestep, the algorithm iterates the following:

  1. Get \(\hat{\mathbf{x}}\) using the denoiser network.

  2. Compute \(\nabla_{\mathbf{x}_t} \log p(\mathbf{y}|\hat{\mathbf{x}}_t)\) through backpropagation.

  3. Perform reverse diffusion sampling with DDPM(IM), corresponding to an update with \(\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t)\).

  4. Take a gradient step with \(\nabla_{\mathbf{x}_t} \log p(\mathbf{y}|\hat{\mathbf{x}}_t)\).

There are two caveats here. First, the original work used DDPM ancestral sampling. As the DDIM sampler generalizes DDPM, in the sense that it recovers DDPM when \(\eta = 1.0\), here we consider DDIM sampling. One can freely choose the \(\eta\) parameter, but since we will consider 1000 neural function evaluations (NFEs), it is advisable to keep \(\eta = 1.0\). Second, when taking the log-likelihood gradient step, the gradient is weighted so that the actual implementation uses a fixed step size \(\rho\) times the gradient of the \(\ell_2\) norm of the residual:

\[\nabla_{\mathbf{x}_t} \log p(\mathbf{y}|\hat{\mathbf{x}}_{t}(\mathbf{x}_t)) \simeq \rho \nabla_{\mathbf{x}_t} \|\mathbf{y} - \mathbf{A}\hat{\mathbf{x}}_{t}\|_2\]
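
Since \(\nabla \|\mathbf{r}\|_2 = \mathbf{r}/\|\mathbf{r}\|_2\), taking the gradient of the norm instead of the squared norm rescales the update by the inverse residual norm, which acts as an adaptive step size. A quick standalone check of this identity:

# Verification sketch: the gradient of the l2 norm is the normalized residual.
r = torch.randn(5, device=device).requires_grad_()
r_norm = (r**2).sum().sqrt()
grad_r = torch.autograd.grad(r_norm, r)[0]
assert torch.allclose(grad_r, r / r_norm)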

With these in mind, let us solve the inverse problem with DPS!

Note

We only use 200 steps to reduce the computational time of this example. As suggested by the authors of DPS, the algorithm works best with num_steps = 1000.

num_steps = 200

skip = num_train_timesteps // num_steps

batch_size = 1
eta = 1.0

seq = range(0, num_train_timesteps, skip)
seq_next = [-1] + list(seq[:-1])
time_pairs = list(zip(reversed(seq), reversed(seq_next)))
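# The reversed (t, t_next) pairs run from t = 995 down to t = 0; the final pair
# is (0, -1), and compute_alpha maps t = -1 to \bar\alpha = 1 (the clean endpoint).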

# measurement
x0 = x_true * 2.0 - 1.0
y = physics(x0.to(device))

# initial sample from x_T
x = torch.randn_like(x0)

xs = [x]
x0_preds = []

for i, j in tqdm(time_pairs):
    t = (torch.ones(batch_size) * i).to(device)
    next_t = (torch.ones(batch_size) * j).to(device)

    at = compute_alpha(betas, t.long())
    at_next = compute_alpha(betas, next_t.long())

    xt = xs[-1].to(device)

    with torch.enable_grad():
        xt.requires_grad_()

        # 1. denoising step
        # we call the denoiser using standard deviation instead of the time step.
        aux_x = xt / 2 + 0.5
        x0_t = 2 * model(aux_x, (1 - at).sqrt() / at.sqrt() / 2) - 1
        x0_t = torch.clip(x0_t, -1.0, 1.0)  # optional

        # 2. likelihood gradient approximation
        l2_loss = data_fidelity(x0_t, y, physics).sqrt().sum()

    norm_grad = torch.autograd.grad(outputs=l2_loss, inputs=xt)[0]
    norm_grad = norm_grad.detach()

    sigma_tilde = ((1 - at / at_next) * (1 - at_next) / (1 - at)).sqrt() * eta
    c2 = ((1 - at_next) - sigma_tilde**2).sqrt()
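    # With eta = 1 this is DDPM ancestral sampling; eta = 0 gives deterministic DDIM.
    # In the update below, c2 / (1 - at).sqrt() is the coefficient a_t from the
    # equations above, and at_next.sqrt() - c2 * at.sqrt() / (1 - at).sqrt() is b_t.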

    # 3. noise step
    epsilon = torch.randn_like(xt)

    # 4. DDPM(IM) step
    xt_next = (
        (at_next.sqrt() - c2 * at.sqrt() / (1 - at).sqrt()) * x0_t
        + sigma_tilde * epsilon
        + c2 * xt / (1 - at).sqrt()
        - norm_grad
    )

    x0_preds.append(x0_t.to("cpu"))
    xs.append(xt_next.to("cpu"))

recon = xs[-1]

# plot the results
x = recon / 2 + 0.5
imgs = [y, x, x_true]
plot(imgs, titles=["measurement", "model output", "groundtruth"])
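
To quantify the reconstruction, we can also compute a PSNR by hand (a plain-PyTorch sketch; deepinv provides metric utilities as well, but this avoids depending on a specific API):

# Peak signal-to-noise ratio of the DPS output against the ground truth in [0, 1].
mse = torch.mean((x.detach().to(device) - x_true) ** 2)
psnr = 10 * torch.log10(1.0 / mse)
print(f"PSNR: {psnr.item():.2f} dB")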

Using DPS in your inverse problem

You can readily use this algorithm via the deepinv.sampling.DPS() class.

y = physics(x_true)
model = dinv.sampling.DPS(dinv.models.DiffUNet().to(device), data_fidelity=dinv.optim.L2())
xhat = model(y, physics)
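
The result can be visualized with the same plotting utility as before:

plot([y, xhat, x_true], titles=["measurement", "DPS output", "groundtruth"])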
