Imaging inverse problems with adversarial networks
This example shows how to train various networks using adversarial training for deblurring problems. We demonstrate training and inference with a conditional GAN (i.e. DeblurGAN), CSGM, AmbientGAN and UAIR as implemented in the library, and how to train your own GAN using deepinv.training.AdversarialTrainer(). These examples can easily be extended to train more complicated GANs such as CycleGAN.
This example is based on the following papers:
Kupyn et al., DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks
Bora et al., Compressed Sensing using Generative Models (CSGM)
Bora et al., AmbientGAN: Generative models from lossy measurements
Pajot et al., Unsupervised Adversarial Image Reconstruction
Adversarial networks are characterised by the addition of an adversarial loss \(\mathcal{L}_\text{adv}\) to the standard reconstruction loss:

\[\mathcal{L}_\text{adv}(x, \hat x; D) = \mathbb{E}_{x \sim p_x}\left[q(D(x))\right] + \mathbb{E}_{\hat x \sim p_{\hat x}}\left[q(1 - D(\hat x))\right],\]

where \(D(\cdot)\) is the discriminator model, \(x\) is the reference image, \(\hat x\) is the estimated reconstruction, and \(q(\cdot)\) is a quality function (e.g. \(q(x)=x\) for WGAN). Training alternates between the generator \(G\) and the discriminator \(D\) in a minimax game. When there are no ground truths (i.e. unsupervised), the adversarial loss may be defined on the measurements \(y\) instead.
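To make the minimax alternation concrete, the following is a minimal sketch of one adversarial training step in plain PyTorch, assuming a WGAN-style quality function \(q(x)=x\) (a hypothetical helper for illustration only; deepinv.training.AdversarialTrainer() automates this alternation for you):

def adversarial_training_step(G, D, optimizer_G, optimizer_D, x, y):
    # Discriminator update: push real images to score high, reconstructions low
    x_hat = G(y).detach()  # detach so no generator gradients are computed
    loss_d = -D(x).mean() + D(x_hat).mean()
    optimizer_D.zero_grad()
    loss_d.backward()
    optimizer_D.step()
    # Generator update: fool the discriminator with the reconstruction
    x_hat = G(y)
    loss_g = -D(x_hat).mean()
    optimizer_G.zero_grad()
    loss_g.backward()
    optimizer_G.step()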
import deepinv as dinv
from deepinv.loss import adversarial
from deepinv.physics.generator import MotionBlurGenerator
import torch
from torch.utils.data import DataLoader, random_split
from torchvision.datasets import ImageFolder
from torchvision.transforms import Compose, ToTensor, CenterCrop, Resize
from torchvision.datasets.utils import download_and_extract_archive
device = dinv.utils.get_freer_gpu() if torch.cuda.is_available() else "cpu"
Generate dataset
In this example we use the Urban100 dataset resized to 128x128. We apply random motion blur physics using deepinv.physics.generator.MotionBlurGenerator(), and save the data using deepinv.datasets.generate_dataset().
physics = dinv.physics.Blur(padding="circular", device=device)
blur_generator = MotionBlurGenerator((11, 11))
dataset = dinv.datasets.Urban100HR(
root="Urban100",
download=True,
transform=Compose([ToTensor(), Resize(256), CenterCrop(128)]),
)
train_dataset, test_dataset = random_split(dataset, (0.8, 0.2))
# Generate data pairs x,y offline using a physics generator
dataset_path = dinv.datasets.generate_dataset(
train_dataset=train_dataset,
test_dataset=test_dataset,
physics=physics,
physics_generator=blur_generator,
device=device,
save_dir="Urban100",
batch_size=1,
)
train_dataloader = DataLoader(
dinv.datasets.HDF5Dataset(dataset_path, train=True), shuffle=True
)
test_dataloader = DataLoader(
dinv.datasets.HDF5Dataset(dataset_path, train=False), shuffle=False
)
Dataset has been successfully downloaded.
Dataset has been saved in Urban100
Define models
We first define the reconstruction network (i.e. conditional generator) and the discriminator network to use for adversarial training. For demonstration we use a simple U-Net as the reconstruction network and the discriminator from PatchGAN, but these can be replaced with any architecture, e.g. transformers, unrolled networks, etc. Further discriminator models are available in adversarial models.
def get_models(model=None, D=None, lr_g=1e-4, lr_d=1e-4, device=device):
if model is None:
model = dinv.models.UNet(
in_channels=3,
out_channels=3,
scales=2,
circular_padding=True,
batch_norm=False,
).to(device)
if D is None:
D = dinv.models.PatchGANDiscriminator(n_layers=2, batch_norm=False).to(device)
optimizer = dinv.training.adversarial.AdversarialOptimizer(
torch.optim.Adam(model.parameters(), lr=lr_g, weight_decay=1e-8),
torch.optim.Adam(D.parameters(), lr=lr_d, weight_decay=1e-8),
)
scheduler = dinv.training.adversarial.AdversarialScheduler(
torch.optim.lr_scheduler.StepLR(optimizer.G, step_size=5, gamma=0.9),
torch.optim.lr_scheduler.StepLR(optimizer.D, step_size=5, gamma=0.9),
)
return model, D, optimizer, scheduler
Conditional GAN training
Conditional GANs (Kupyn et al., DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks) are a type of GAN where the generator is conditioned on a label or input. In the context of imaging, this can be used to generate images from a given measurement. In this example, we use a simple U-Net as the generator and a PatchGAN discriminator. The forward pass of the generator is given by:
Conditional GAN forward pass:

\[\hat x = G(y)\]

Conditional GAN loss:

\[\mathcal{L} = \mathcal{L}_\text{sup}(\hat x, x) + \mathcal{L}_\text{adv}(\hat x, x; D)\]
where \(\mathcal{L}_\text{sup}\) is a supervised loss such as pixel-wise MSE or VGG Perceptual Loss.
G, D, optimizer, scheduler = get_models()
We next define the pixel-wise and adversarial losses as defined above. We use MSE as the supervised pixel-wise metric for simplicity, but this can easily be replaced with a perceptual loss if desired (see the sketch after the code below).
loss_g = [
dinv.loss.SupLoss(metric=torch.nn.MSELoss()),
adversarial.SupAdversarialGeneratorLoss(device=device),
]
loss_d = adversarial.SupAdversarialDiscriminatorLoss(device=device)
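For instance, a VGG-based perceptual loss could be swapped in for the MSE metric. The following is a minimal, hypothetical sketch using torchvision's pretrained VGG16 features (this class is not part of the library; inputs are assumed to be 3-channel images, and we skip the usual ImageNet normalisation for brevity):

import torchvision

class VGGPerceptualLoss(torch.nn.Module):
    # Compares images in the feature space of a frozen, pretrained VGG16
    def __init__(self, n_layers=16):
        super().__init__()
        vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1")
        self.features = vgg.features[:n_layers].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, x_hat, x):
        return torch.nn.functional.mse_loss(self.features(x_hat), self.features(x))

# e.g. dinv.loss.SupLoss(metric=VGGPerceptualLoss().to(device))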
We are now ready to train the networks using deepinv.training.AdversarialTrainer(). We load pretrained models that were trained in the exact same way for 50 epochs, and fine-tune the model for 1 epoch for a quick demo. You can find the pretrained models on HuggingFace: https://huggingface.co/deepinv/adversarial-demo. To train from scratch, simply comment out the model loading code and increase the number of epochs.
ckpt = torch.hub.load_state_dict_from_url(
dinv.models.utils.get_weights_url("adversarial-demo", "deblurgan_model.pth"),
map_location=lambda s, _: s,
)
G.load_state_dict(ckpt["state_dict"])
D.load_state_dict(ckpt["state_dict_D"])
optimizer.load_state_dict(ckpt["optimizer"])
trainer = dinv.training.AdversarialTrainer(
model=G,
D=D,
physics=physics,
train_dataloader=train_dataloader,
eval_dataloader=test_dataloader,
epochs=1,
losses=loss_g,
losses_d=loss_d,
optimizer=optimizer,
scheduler=scheduler,
verbose=True,
show_progress_bar=False,
save_path=None,
device=device,
)
G = trainer.train()
Downloading: "https://huggingface.co/deepinv/adversarial-demo/resolve/main/deblurgan_model.pth?download=true" to /home/runner/.cache/torch/hub/checkpoints/deblurgan_model.pth
The model has 444867 trainable parameters
Train epoch 0: SupLoss=0.004, SupAdversarialGeneratorLoss=0.003, TotalLoss=0.006, PSNR=25.826
Eval epoch 0: PSNR=25.339
Test the trained model and plot the results. We compare to the pseudo-inverse as a baseline.
trainer.plot_images = True
trainer.test(test_dataloader)
Eval epoch 0: PSNR=25.339, PSNR no learning=22.129
Test results:
PSNR no learning: 22.129 +- 2.703
PSNR: 25.339 +- 3.741
{'PSNR no learning': 22.1288010597229, 'PSNR no learning_std': 2.7033153709658162, 'PSNR': 25.339355182647704, 'PSNR_std': 3.7408165641069391}
UAIR training
Unsupervised Adversarial Image Reconstruction (UAIR) (Pajot et al., Unsupervised Adversarial Image Reconstruction) is a method for solving inverse problems using generative models without ground truth. In this example, we use a simple U-Net as the generator and a PatchGAN discriminator, and train using the adversarial loss. The forward pass of the generator is defined as:

UAIR forward pass:

\[\hat x = G(y)\]

UAIR loss:

\[\mathcal{L} = \mathcal{L}_\text{adv}(\hat y, y; D) + \lVert \forw{G(\hat y)} - \hat y \rVert^2_2, \quad \hat y = \forw{\hat x},\]

where the second term enforces measurement consistency and \(\forw{\cdot}\) is the physics.
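To make the objective concrete, here is a hypothetical sketch of the UAIR generator objective in plain PyTorch, assuming a WGAN-style quality function (the library's adversarial.UAIRGeneratorLoss handles this for you):

def uair_generator_objective(G, D, physics, y):
    x_hat = G(y)              # reconstruct from the measurements
    y_hat = physics.A(x_hat)  # re-measure the reconstruction
    adv = -D(y_hat).mean()    # adversarial term defined on measurements
    # Measurement consistency: re-reconstructing y_hat should reproduce it
    consistency = torch.nn.functional.mse_loss(physics.A(G(y_hat)), y_hat)
    return adv + consistency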
We next load the models and construct losses as defined above.
G, D, optimizer, scheduler = get_models(
lr_g=1e-4, lr_d=4e-4
) # learning rates from original paper
loss_g = adversarial.UAIRGeneratorLoss(device=device)
loss_d = adversarial.UnsupAdversarialDiscriminatorLoss(device=device)
We are now ready to train the networks using deepinv.training.AdversarialTrainer(). As above, we load a pretrained model trained in the exact same way for 50 epochs, and fine-tune here for a quick demo with 1 epoch.
ckpt = torch.hub.load_state_dict_from_url(
dinv.models.utils.get_weights_url("adversarial-demo", "uair_model.pth"),
map_location=lambda s, _: s,
)
G.load_state_dict(ckpt["state_dict"])
D.load_state_dict(ckpt["state_dict_D"])
optimizer.load_state_dict(ckpt["optimizer"])
trainer = dinv.training.AdversarialTrainer(
model=G,
D=D,
physics=physics,
train_dataloader=train_dataloader,
eval_dataloader=test_dataloader,
epochs=1,
losses=loss_g,
losses_d=loss_d,
optimizer=optimizer,
scheduler=scheduler,
verbose=True,
show_progress_bar=False,
save_path=None,
device=device,
)
G = trainer.train()
Downloading: "https://huggingface.co/deepinv/adversarial-demo/resolve/main/uair_model.pth?download=true" to /home/runner/.cache/torch/hub/checkpoints/uair_model.pth
The model has 444867 trainable parameters
Train epoch 0: TotalLoss=0.143, PSNR=24.828
Eval epoch 0: PSNR=24.388
Test the trained model and plot the results:
trainer.plot_images = True
trainer.test(test_dataloader)
Eval epoch 0: PSNR=24.388, PSNR no learning=22.129
Test results:
PSNR no learning: 22.129 +- 2.703
PSNR: 24.388 +- 3.427
{'PSNR no learning': 22.1288010597229, 'PSNR no learning_std': 2.7033153709658162, 'PSNR': 24.388035011291503, 'PSNR_std': 3.4269025384078797}
CSGM / AmbientGAN training
Compressed Sensing using Generative Models (CSGM) and AmbientGAN are two methods for solving inverse problems using generative models. CSGM (Bora et al., Compressed Sensing using Generative Models) trains a generator on ground-truth images and solves the inverse problem at test time by optimising the latent space of the generator. AmbientGAN (Bora et al., AmbientGAN: Generative models from lossy measurements) instead trains the generator directly on the measurements themselves. Both methods are trained using an adversarial loss; the main difference is that CSGM requires a ground-truth dataset (supervised loss), while AmbientGAN does not (unsupervised loss).
In this example, we use a DCGAN as the generator and discriminator, and train using the adversarial loss. The forward pass of the generator is given by:
CSGM forward pass at train time:

\[\hat x = G(z), \quad z \sim \mathcal{N}(0, \mathbf{I})\]

CSGM/AmbientGAN forward pass at eval time:

\[\hat x = G(\hat z), \quad \hat z = \operatorname*{argmin}_z \lVert \forw{G(z)} - y \rVert^2_2\]

CSGM loss:

\[\mathcal{L} = \mathcal{L}_\text{adv}(\hat x, x; D)\]

AmbientGAN loss (where \(\forw{\cdot}\) is the physics):

\[\mathcal{L} = \mathcal{L}_\text{adv}(\forw{\hat x}, y; D)\]
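The eval-time optimisation can be sketched as follows. This is a hypothetical illustration, assuming a DCGAN-style latent of shape (B, nz, 1, 1); dinv.models.CSGMGenerator performs this optimisation internally, stopping at the tolerance inf_tol:

def csgm_inference(G, physics, y, nz=100, steps=1000, lr=1e-2):
    # Optimise the latent z so the generated image matches the measurements
    z = torch.randn(y.shape[0], nz, 1, 1, device=y.device, requires_grad=True)
    optim_z = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optim_z.zero_grad()
        loss = torch.nn.functional.mse_loss(physics.A(G(z)), y)
        loss.backward()
        optim_z.step()
    return G(z).detach()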
We next load the models and construct losses as defined above.
G = dinv.models.CSGMGenerator(
dinv.models.DCGANGenerator(output_size=128, nz=100, ngf=32), inf_tol=1e-2
).to(device)
D = dinv.models.DCGANDiscriminator(ndf=32).to(device)
_, _, optimizer, scheduler = get_models(
model=G, D=D, lr_g=2e-4, lr_d=2e-4
) # learning rates from original paper
# For AmbientGAN (unsupervised), use:
loss_g = adversarial.UnsupAdversarialGeneratorLoss(device=device)
loss_d = adversarial.UnsupAdversarialDiscriminatorLoss(device=device)
# For CSGM (supervised), use the following. Note these override the
# AmbientGAN losses above; comment them out to train AmbientGAN instead.
loss_g = adversarial.SupAdversarialGeneratorLoss(device=device)
loss_d = adversarial.SupAdversarialDiscriminatorLoss(device=device)
As before, we can now train our models. Since inference is very slow for CSGM/AmbientGAN, as it requires an optimisation at test time, we only run one evaluation at the end. Note that the train PSNR is meaningless, as this generative model is trained on random latents. Like above, we load a pretrained model trained in the exact same way for 50 epochs, and fine-tune here for a quick demo with 1 epoch.
ckpt = torch.hub.load_state_dict_from_url(
dinv.models.utils.get_weights_url("adversarial-demo", "csgm_model.pth"),
map_location=lambda s, _: s,
)
G.load_state_dict(ckpt["state_dict"])
D.load_state_dict(ckpt["state_dict_D"])
optimizer.load_state_dict(ckpt["optimizer"])
trainer = dinv.training.AdversarialTrainer(
model=G,
D=D,
physics=physics,
train_dataloader=train_dataloader,
epochs=1,
losses=loss_g,
losses_d=loss_d,
optimizer=optimizer,
scheduler=scheduler,
verbose=True,
show_progress_bar=False,
save_path=None,
device=device,
)
G = trainer.train()
Downloading: "https://huggingface.co/deepinv/adversarial-demo/resolve/main/csgm_model.pth?download=true" to /home/runner/.cache/torch/hub/checkpoints/csgm_model.pth
The model has 3608000 trainable parameters
Train epoch 0: TotalLoss=0.007, PSNR=9.163
Finally, we evaluate the generative model by running the test-time optimisation on the test measurements. Note that we do not get great results, as CSGM / AmbientGAN relies on large datasets of diverse samples, and we run the optimisation to a relatively high tolerance for speed. The results can be improved by running the optimisation for longer (see the snippet after the test results below).
trainer.test(test_dataloader)
Eval epoch 0: PSNR=9.528, PSNR no learning=22.129
Test results:
PSNR no learning: 22.129 +- 2.703
PSNR: 9.528 +- 1.301
{'PSNR no learning': 22.1288010597229, 'PSNR no learning_std': 2.7033153709658162, 'PSNR': 9.5279523849487298, 'PSNR_std': 1.3012826717231585}
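For example, the test-time optimisation can be run for longer by tightening the inference tolerance when constructing the generator. The snippet below simply reuses the constructor from above with a smaller inf_tol, trading inference speed for reconstruction quality:

# Tighter tolerance -> longer test-time optimisation, better reconstructions
G = dinv.models.CSGMGenerator(
    dinv.models.DCGANGenerator(output_size=128, nz=100, ngf=32), inf_tol=1e-4
).to(device)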
Total running time of the script: (1 minute 28.753 seconds)