Imaging inverse problems with adversarial networks#
This example shows you how to train various networks using adversarial training for deblurring problems. We demonstrate training and inference using a conditional GAN (i.e. DeblurGAN), CSGM, AmbientGAN and UAIR as implemented in the library, and show how to easily train your own GAN using deepinv.training.AdversarialTrainer(). These examples can also be extended to train more complicated GANs such as CycleGAN.
This example is based on the following papers:
Kupyn et al., DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks
Bora et al., Compressed Sensing using Generative Models (CSGM)
Bora et al., AmbientGAN: Generative models from lossy measurements
Pajot et al., Unsupervised Adversarial Image Reconstruction
Adversarial networks are characterised by the addition of an adversarial loss \(\mathcal{L}_\text{adv}\) to the standard reconstruction loss:

\[\mathcal{L}_\text{adv}(x,\hat x;D)=\mathbb{E}_{x\sim p_x}\left[q(D(x))\right]+\mathbb{E}_{\hat x\sim p_{\hat x}}\left[q(1-D(\hat x))\right]\]

where \(D(\cdot)\) is the discriminator model, \(x\) is the reference image, \(\hat x\) is the estimated reconstruction, and \(q(\cdot)\) is a quality function (e.g. \(q(x)=x\) for WGAN). Training alternates between the generator \(G\) and the discriminator \(D\) in a minimax game. When there are no ground truths (i.e. unsupervised), the adversarial loss may be defined on the measurements \(y\) instead.
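As a minimal sketch (independent of the library's loss classes in deepinv.loss.adversarial), the WGAN case \(q(x)=x\) reduces to the following pair of losses:

def discriminator_loss(D, x, x_hat):
    # Minimising this pushes D(x) up and D(x_hat) down; the reconstruction
    # is detached so this step does not update the generator
    return D(x_hat.detach()).mean() - D(x).mean()

def generator_loss(D, x_hat):
    # The generator is rewarded when D rates its reconstructions as real
    return -D(x_hat).mean()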
from pathlib import Path
import torch
from torch.utils.data import DataLoader, random_split
from torchvision.datasets import ImageFolder
from torchvision.transforms import Compose, ToTensor, CenterCrop, Resize
from torchvision.datasets.utils import download_and_extract_archive
import deepinv as dinv
from deepinv.loss import adversarial
from deepinv.utils.demo import get_data_home
from deepinv.physics.generator import MotionBlurGenerator
device = dinv.utils.get_freer_gpu() if torch.cuda.is_available() else "cpu"
BASE_DIR = Path(".")
DATA_DIR = BASE_DIR / "measurements"
ORIGINAL_DATA_DIR = get_data_home() / "Urban100"
Generate dataset#
In this example we use the Urban100 dataset resized to 128x128. We apply random motion blur physics using deepinv.physics.generator.MotionBlurGenerator(), and save the data using deepinv.datasets.generate_dataset().
physics = dinv.physics.Blur(padding="circular", device=device)
blur_generator = MotionBlurGenerator((11, 11))
dataset = dinv.datasets.Urban100HR(
    root=ORIGINAL_DATA_DIR,
download=True,
transform=Compose([ToTensor(), Resize(256), CenterCrop(128)]),
)
train_dataset, test_dataset = random_split(dataset, (0.8, 0.2))
# Generate data pairs x,y offline using a physics generator
dataset_path = dinv.datasets.generate_dataset(
train_dataset=train_dataset,
test_dataset=test_dataset,
physics=physics,
physics_generator=blur_generator,
device=device,
save_dir=DATA_DIR,
batch_size=1,
)
train_dataloader = DataLoader(
dinv.datasets.HDF5Dataset(dataset_path, train=True), shuffle=True
)
test_dataloader = DataLoader(
dinv.datasets.HDF5Dataset(dataset_path, train=False), shuffle=False
)
100%|██████████| 129M/129M [00:00<00:00, 235MB/s]
Extracting: 100%|██████████| 101/101 [00:00<00:00, 160.41it/s]
Dataset has been successfully downloaded.
Dataset has been saved at measurements/dinv_dataset0.h5
Define models#
We first define the reconstruction network (i.e. the conditional generator) and the discriminator network to use for adversarial training. For demonstration we use a simple U-Net as the reconstruction network and the discriminator from PatchGAN, but these can be replaced with any architecture, e.g. transformers, unrolled networks, etc. Further discriminator models are available in the library's adversarial models.
def get_models(model=None, D=None, lr_g=1e-4, lr_d=1e-4, device=device):
if model is None:
model = dinv.models.UNet(
in_channels=3,
out_channels=3,
scales=2,
circular_padding=True,
batch_norm=False,
).to(device)
if D is None:
D = dinv.models.PatchGANDiscriminator(n_layers=2, batch_norm=False).to(device)
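    # AdversarialOptimizer pairs the G and D optimizers so they can be stepped alternately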
optimizer = dinv.training.adversarial.AdversarialOptimizer(
torch.optim.Adam(model.parameters(), lr=lr_g, weight_decay=1e-8),
torch.optim.Adam(D.parameters(), lr=lr_d, weight_decay=1e-8),
)
scheduler = dinv.training.adversarial.AdversarialScheduler(
torch.optim.lr_scheduler.StepLR(optimizer.G, step_size=5, gamma=0.9),
torch.optim.lr_scheduler.StepLR(optimizer.D, step_size=5, gamma=0.9),
)
return model, D, optimizer, scheduler
Conditional GAN training#
Conditional GANs (Kupyn et al., DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks) are a type of GAN where the generator is conditioned on a label or input. In the context of imaging, this can be used to generate images from a given measurement. In this example, we use a simple U-Net as the generator and a PatchGAN discriminator. The forward pass of the generator is given by:
Conditional GAN forward pass:

\[\hat x = G(y)\]

Conditional GAN loss:

\[\mathcal{L}=\mathcal{L}_\text{sup}(\hat x, x)+\mathcal{L}_\text{adv}(\hat x, x;D)\]

where \(\mathcal{L}_\text{sup}\) is a supervised loss such as the pixel-wise MSE or the VGG perceptual loss.
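For intuition, one alternating G/D update, which deepinv.training.AdversarialTrainer() automates, looks roughly like the following sketch (assuming WGAN-style losses, mse = torch.nn.MSELoss(), and separate hypothetical optimizer_g / optimizer_d handles rather than the paired optimizer defined above):

# One alternating step (illustrative sketch, not the trainer's actual code)
x_hat = G(y)                                     # reconstruct from measurement
loss_g = mse(x_hat, x) - D(x_hat).mean()         # supervised + adversarial terms
optimizer_g.zero_grad()
loss_g.backward()
optimizer_g.step()

loss_d = D(x_hat.detach()).mean() - D(x).mean()  # critic: real up, fake down
optimizer_d.zero_grad()
loss_d.backward()
optimizer_d.step()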
G, D, optimizer, scheduler = get_models()
We next define the pixel-wise and adversarial losses as defined above. We use MSE as the supervised pixel-wise metric for simplicity, but this can easily be replaced with a perceptual loss if desired.
loss_g = [
dinv.loss.SupLoss(metric=torch.nn.MSELoss()),
adversarial.SupAdversarialGeneratorLoss(device=device),
]
loss_d = adversarial.SupAdversarialDiscriminatorLoss(device=device)
We are now ready to train the networks using deepinv.training.AdversarialTrainer(). We load pretrained models that were trained in the exact same way for 50 epochs, and fine-tune for 1 epoch for a quick demo. You can find the pretrained models on HuggingFace: https://huggingface.co/deepinv/adversarial-demo. To train from scratch, simply comment out the model loading code and increase the number of epochs.
ckpt = torch.hub.load_state_dict_from_url(
dinv.models.utils.get_weights_url("adversarial-demo", "deblurgan_model.pth"),
map_location=lambda s, _: s,
)
G.load_state_dict(ckpt["state_dict"])
D.load_state_dict(ckpt["state_dict_D"])
optimizer.load_state_dict(ckpt["optimizer"])
trainer = dinv.training.AdversarialTrainer(
model=G,
D=D,
physics=physics,
train_dataloader=train_dataloader,
eval_dataloader=test_dataloader,
epochs=1,
losses=loss_g,
losses_d=loss_d,
optimizer=optimizer,
scheduler=scheduler,
verbose=True,
show_progress_bar=False,
save_path=None,
device=device,
)
G = trainer.train()
Downloading: "https://huggingface.co/deepinv/adversarial-demo/resolve/main/deblurgan_model.pth?download=true" to /home/runner/.cache/torch/hub/checkpoints/deblurgan_model.pth
100%|██████████| 12.7M/12.7M [00:01<00:00, 10.5MB/s]
The model has 444867 trainable parameters
Train epoch 0: SupLoss=0.004, SupAdversarialGeneratorLoss=0.003, TotalLoss=0.007, PSNR=25.365
Eval epoch 0: PSNR=25.981
Test the trained model and plot the results. We compare to the pseudo-inverse as a baseline.
trainer.plot_images = True
trainer.test(test_dataloader)
Eval epoch 0: PSNR=25.981, PSNR no learning=23.474
Test results:
PSNR no learning: 23.474 +- 2.865
PSNR: 25.981 +- 3.784
{'PSNR no learning': np.float64(23.473597717285156), 'PSNR no learning_std': np.float64(2.8652135153100127), 'PSNR': np.float64(25.980697631835938), 'PSNR_std': np.float64(3.78353894777952)}
UAIR training#
Unsupervised Adversarial Image Reconstruction (UAIR) (Pajot et al., Unsupervised Adversarial Image Reconstruction) is a method for solving inverse problems using generative models. In this example, we again use a simple U-Net as the generator with a PatchGAN discriminator, and train using the adversarial loss. The forward pass of the generator is defined as:

UAIR forward pass:

\[\hat x = G(y)\]

UAIR loss:

\[\mathcal{L}=\mathcal{L}_\text{adv}(\hat y, y;D)+\lVert A(G(\hat y))-\hat y\rVert_2^2,\quad \hat y = A(\hat x),\]

where \(A(\cdot)\) is the forward physics operator.
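A minimal sketch of this generator loss in plain PyTorch (using physics.A as the forward operator; the library's adversarial.UAIRGeneratorLoss may differ in weighting and details):

y_hat = physics.A(G(y))                              # re-measure the reconstruction
adv = -D(y_hat).mean()                               # adversarial term on measurements
cycle = (physics.A(G(y_hat)) - y_hat).pow(2).mean()  # measurement consistency term
loss_g = adv + cycle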
We next load the models and construct losses as defined above.
G, D, optimizer, scheduler = get_models(
lr_g=1e-4, lr_d=4e-4
) # learning rates from original paper
loss_g = adversarial.UAIRGeneratorLoss(device=device)
loss_d = adversarial.UnsupAdversarialDiscriminatorLoss(device=device)
We are now ready to train the networks using deepinv.training.AdversarialTrainer(). As above, we load a pretrained model trained in the exact same way for 50 epochs, and fine-tune here for a quick demo with 1 epoch.
ckpt = torch.hub.load_state_dict_from_url(
dinv.models.utils.get_weights_url("adversarial-demo", "uair_model.pth"),
map_location=lambda s, _: s,
)
G.load_state_dict(ckpt["state_dict"])
D.load_state_dict(ckpt["state_dict_D"])
optimizer.load_state_dict(ckpt["optimizer"])
trainer = dinv.training.AdversarialTrainer(
model=G,
D=D,
physics=physics,
train_dataloader=train_dataloader,
eval_dataloader=test_dataloader,
epochs=1,
losses=loss_g,
losses_d=loss_d,
optimizer=optimizer,
scheduler=scheduler,
verbose=True,
show_progress_bar=False,
save_path=None,
device=device,
)
G = trainer.train()
Downloading: "https://huggingface.co/deepinv/adversarial-demo/resolve/main/uair_model.pth?download=true" to /home/runner/.cache/torch/hub/checkpoints/uair_model.pth
100%|██████████| 12.7M/12.7M [00:01<00:00, 10.5MB/s]
The model has 444867 trainable parameters
Train epoch 0: TotalLoss=0.145, PSNR=24.311
Eval epoch 0: PSNR=25.024
Test the trained model and plot the results:
trainer.plot_images = True
trainer.test(test_dataloader)
Eval epoch 0: PSNR=25.024, PSNR no learning=23.474
Test results:
PSNR no learning: 23.474 +- 2.865
PSNR: 25.024 +- 3.231
{'PSNR no learning': np.float64(23.473597717285156), 'PSNR no learning_std': np.float64(2.8652135153100127), 'PSNR': np.float64(25.023670959472657), 'PSNR_std': np.float64(3.230904405045883)}
CSGM / AmbientGAN training#
Compressed Sensing using Generative Models (CSGM) and AmbientGAN are two methods for solving inverse problems using generative models. CSGM (Bora et al., Compressed Sensing using Generative Models) trains a generator on ground-truth images and solves the inverse problem at test time by optimising over the generator's latent space. AmbientGAN (Bora et al., AmbientGAN: Generative models from lossy measurements) instead trains the generator from the measurements alone, by simulating measurements of generated images and discriminating them against the real measurements. Both methods are trained using an adversarial loss; the main difference is that CSGM requires a ground-truth dataset (supervised loss), while AmbientGAN does not (unsupervised loss).
In this example, we use a DCGAN as the generator and discriminator, and train using the adversarial loss. The forward pass of the generator is given by:
CSGM forward pass at train time:

\[\hat x = G(z),\quad z\sim\mathcal{N}(0,\mathbb{I}_k)\]

CSGM/AmbientGAN forward pass at eval time:

\[\hat x = G(\hat z)\quad\text{s.t.}\quad \hat z=\operatorname*{argmin}_z \lVert A(G(z))-y\rVert_2^2\]

CSGM loss:

\[\mathcal{L}=\mathcal{L}_\text{adv}(\hat x, x;D)\]

AmbientGAN loss (where \(A(\cdot)\) is the physics):

\[\mathcal{L}=\mathcal{L}_\text{adv}(A(\hat x), y;D)\]
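The eval-time latent optimisation can be sketched with plain gradient descent (an illustrative version; the actual inference loop is handled internally by deepinv.models.CSGMGenerator, with its stopping tolerance set via the inf_tol argument below; G_backbone denotes the DCGAN generator, y a test measurement, and the latent shape is an assumption for DCGANGenerator with nz=100):

z = torch.randn(1, 100, 1, 1, device=device, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(1000):
    # Minimise the measurement consistency ||A(G(z)) - y||^2 over the latent z
    loss = (physics.A(G_backbone(z)) - y).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
x_hat = G_backbone(z)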
We next load the models and construct losses as defined above.
G = dinv.models.CSGMGenerator(
dinv.models.DCGANGenerator(output_size=128, nz=100, ngf=32), inf_tol=1e-2
).to(device)
D = dinv.models.DCGANDiscriminator(ndf=32).to(device)
_, _, optimizer, scheduler = get_models(
model=G, D=D, lr_g=2e-4, lr_d=2e-4
) # learning rates from original paper
# For AmbientGAN, use unsupervised adversarial losses:
loss_g = adversarial.UnsupAdversarialGeneratorLoss(device=device)
loss_d = adversarial.UnsupAdversarialDiscriminatorLoss(device=device)

# For CSGM, use supervised adversarial losses (these override the AmbientGAN losses above):
loss_g = adversarial.SupAdversarialGeneratorLoss(device=device)
loss_d = adversarial.SupAdversarialDiscriminatorLoss(device=device)
As before, we can now train our models. Since inference for CSGM/AmbientGAN is very slow, as it requires solving an optimisation problem, we only run evaluation once at the end. Note that the train PSNR is meaningless here, as this generative model is trained on random latents. As above, we load a pretrained model trained in the exact same way for 50 epochs, and fine-tune here for a quick demo with 1 epoch.
ckpt = torch.hub.load_state_dict_from_url(
dinv.models.utils.get_weights_url("adversarial-demo", "csgm_model.pth"),
map_location=lambda s, _: s,
)
G.load_state_dict(ckpt["state_dict"])
D.load_state_dict(ckpt["state_dict_D"])
optimizer.load_state_dict(ckpt["optimizer"])
trainer = dinv.training.AdversarialTrainer(
model=G,
D=D,
physics=physics,
train_dataloader=train_dataloader,
epochs=1,
losses=loss_g,
losses_d=loss_d,
optimizer=optimizer,
scheduler=scheduler,
verbose=True,
show_progress_bar=False,
save_path=None,
device=device,
)
G = trainer.train()
Downloading: "https://huggingface.co/deepinv/adversarial-demo/resolve/main/csgm_model.pth?download=true" to /home/runner/.cache/torch/hub/checkpoints/csgm_model.pth
100%|██████████| 49.3M/49.3M [00:04<00:00, 10.4MB/s]
The model has 3608000 trainable parameters
Train epoch 0: TotalLoss=0.008, PSNR=9.071
Finally, we evaluate the generative model by running test-time optimisation using the test measurements. Note that we do not obtain great results, as CSGM/AmbientGAN relies on large datasets of diverse samples, and we run the optimisation with a relatively loose tolerance for speed. The results can be improved by running the optimisation for longer.
trainer.test(test_dataloader)
Eval epoch 0: PSNR=9.724, PSNR no learning=23.474
Test results:
PSNR no learning: 23.474 +- 2.865
PSNR: 9.724 +- 0.987
{'PSNR no learning': np.float64(23.473597717285156), 'PSNR no learning_std': np.float64(2.8652135153100127), 'PSNR': np.float64(9.723600006103515), 'PSNR_std': np.float64(0.9866807773844393)}
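To trade speed for quality, one option is to rebuild the generator with a tighter inference tolerance than the 1e-2 used above, reload the fine-tuned weights, and re-run the test (a sketch, assuming the trainer exposes the model attribute passed to its constructor):

state = G.state_dict()  # keep the fine-tuned weights
G = dinv.models.CSGMGenerator(
    dinv.models.DCGANGenerator(output_size=128, nz=100, ngf=32), inf_tol=1e-4
).to(device)
G.load_state_dict(state)
trainer.model = G
trainer.test(test_dataloader)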
Total running time of the script: (1 minute 29.085 seconds)