.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/self-supervised-learning/demo_r2r_denoising.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        New to DeepInverse? Get started with the basics with the :ref:`5 minute quickstart tutorial `.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_self-supervised-learning_demo_r2r_denoising.py:

Self-supervised denoising with the Generalized R2R loss
=======================================================

This example shows how to train a denoiser network in a fully self-supervised way, i.e. using noisy images only, via the Generalized Recorrupted2Recorrupted (GR2R) loss :footcite:t:`monroy2025generalized`, which exploits knowledge of the noise distribution. You can change the noise distribution by selecting from the predefined noise models, which include Gaussian, Poisson, and Gamma noise.

.. GENERATED FROM PYTHON SOURCE LINES 10-21

.. code-block:: Python

    from pathlib import Path

    import torch
    from torch.utils.data import DataLoader
    from torchvision import transforms, datasets

    import deepinv as dinv
    from deepinv.utils import get_cache_home
    from deepinv.models.utils import get_weights_url

.. GENERATED FROM PYTHON SOURCE LINES 22-25

Setup paths for data loading and results
----------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 25-37

.. code-block:: Python

    BASE_DIR = Path(".")
    DATA_DIR = BASE_DIR / "measurements"
    CKPT_DIR = BASE_DIR / "ckpts"
    ORIGINAL_DATA_DIR = get_cache_home() / "datasets" / "MNIST"

    # Set the global random seed from pytorch to ensure reproducibility of the example.
    torch.manual_seed(0)

    device = dinv.utils.get_device()
    print(device)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Selected CPU device
    cpu
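To build intuition for what the GR2R loss does before generating data, here is a minimal sketch of the Recorrupted2Recorrupted idea for the Gaussian-noise case, written in plain PyTorch. This is an illustration only, not the deepinv implementation: the function name ``r2r_pair`` and the hyperparameter ``alpha`` are our own choices here.

```python
import torch


def r2r_pair(y, sigma, alpha=0.5):
    # Recorrupt a Gaussian-noisy measurement y = x + n, with n ~ N(0, sigma^2 I),
    # into two views whose noise components are statistically independent:
    #   y1 = y + alpha * sigma * w   (used as the network input)
    #   y2 = y - sigma * w / alpha   (used as the training target)
    # where w ~ N(0, I). Training f(y1) -> y2 with an MSE loss is then an
    # unbiased surrogate for supervised training against the clean image x.
    w = torch.randn_like(y)
    y1 = y + alpha * sigma * w
    y2 = y - sigma * w / alpha
    return y1, y2
```

One can check the algebraic coupling ``y1 - y == alpha**2 * (y - y2)`` between the two views; the opposite signs of the injected noise are what make the cross terms cancel in expectation.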
.. GENERATED FROM PYTHON SOURCE LINES 38-42

Load base image datasets
------------------------

In this example, we use the MNIST dataset as the base image dataset.

.. GENERATED FROM PYTHON SOURCE LINES 42-55

.. code-block:: Python

    operation = "denoising"
    train_dataset_name = "MNIST"

    transform = transforms.Compose([transforms.ToTensor()])

    train_dataset = datasets.MNIST(
        root=ORIGINAL_DATA_DIR, train=True, transform=transform, download=True
    )
    test_dataset = datasets.MNIST(
        root=ORIGINAL_DATA_DIR, train=False, transform=transform, download=True
    )

.. GENERATED FROM PYTHON SOURCE LINES 56-67

Generate a dataset of noisy images
----------------------------------

Generate a dataset of noisy images corrupted by Poisson noise. The predefined noise models in the physics module include Gaussian, Poisson, and Gamma noise. Here, we use Poisson noise as an example, but you can also use Gaussian or Gamma noise.

.. note::

    We use a subset of the whole training set to reduce the computational load of the example.
    We recommend using the whole set by setting ``n_images_max=None`` to get the best results.

.. GENERATED FROM PYTHON SOURCE LINES 67-104

.. code-block:: Python

    # define the physics
    predefined_noise_models = dict(
        gaussian=dinv.physics.GaussianNoise(sigma=0.1),
        poisson=dinv.physics.PoissonNoise(gain=0.5),
        gamma=dinv.physics.GammaNoise(l=10.0),
    )

    noise_name = "poisson"
    noise_model = predefined_noise_models[noise_name]
    physics = dinv.physics.Denoising(noise_model)
    operation = f"{operation}_{noise_name}"

    # Use a parallel dataloader if using a GPU to speed up training;
    # otherwise, as all computation is on the CPU, use synchronous data loading.
    num_workers = 4 if torch.cuda.is_available() else 0
    n_images_max = (
        100 if torch.cuda.is_available() else 5
    )  # number of images used for training
    measurement_dir = DATA_DIR / train_dataset_name / operation
    deepinv_datasets_path = dinv.datasets.generate_dataset(
        train_dataset=train_dataset,
        test_dataset=test_dataset,
        physics=physics,
        device=device,
        save_dir=measurement_dir,
        train_datapoints=n_images_max,
        test_datapoints=n_images_max,
        num_workers=num_workers,
        dataset_filename="demo_r2r",
    )

    train_dataset = dinv.datasets.HDF5Dataset(path=deepinv_datasets_path, train=True)
    test_dataset = dinv.datasets.HDF5Dataset(path=deepinv_datasets_path, train=False)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Dataset has been saved at measurements/MNIST/denoising_poisson/demo_r2r0.h5

.. GENERATED FROM PYTHON SOURCE LINES 105-109

Set up the denoiser network
---------------------------

We use a simple U-Net architecture with 2 scales as the denoiser network.

.. GENERATED FROM PYTHON SOURCE LINES 109-115

.. code-block:: Python

    model = dinv.models.ArtifactRemoval(
        dinv.models.UNet(in_channels=1, out_channels=1, scales=2, residual=False).to(device)
    )

.. GENERATED FROM PYTHON SOURCE LINES 116-124

Set up the training parameters
------------------------------

We set :class:`deepinv.loss.R2RLoss` as the training loss.

.. note::

    There are GR2R losses for various noise distributions, which can be specified by the noise model.

.. GENERATED FROM PYTHON SOURCE LINES 124-147

.. code-block:: Python

    epochs = 1  # choose training epochs
    learning_rate = 1e-4
    batch_size = 64 if torch.cuda.is_available() else 1

    # choose the self-supervised training loss
    loss = dinv.loss.R2RLoss(noise_model=None)
    model = loss.adapt_model(model)  # important step!
    # choose the optimizer and scheduler
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=int(epochs * 0.8) + 1)

    # start with a pretrained model to reduce training time
    if noise_name == "poisson":
        file_name = "ckp_10_demo_r2r_poisson.pth"
        url = get_weights_url(model_name="demo", file_name=file_name)
        ckpt = torch.hub.load_state_dict_from_url(
            url, map_location=lambda storage, loc: storage, file_name=file_name
        )
        model.load_state_dict(ckpt["state_dict"])

.. GENERATED FROM PYTHON SOURCE LINES 148-157

Train the network
-----------------

To simulate a realistic self-supervised learning scenario, we do not use any supervised metrics for training, such as PSNR or SSIM, which require clean ground-truth images.

.. tip::

    We can use the same self-supervised loss for evaluation, as it does not require clean images,
    to monitor the training process (e.g. for early stopping). This is done automatically when
    ``metrics=None`` and ``early_stop>0`` in the trainer.

.. GENERATED FROM PYTHON SOURCE LINES 157-194
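The early-stopping behaviour described in the tip above can be sketched generically as follows. This is not the internals of ``deepinv.Trainer``; ``step_fn`` and ``eval_loss_fn`` are hypothetical callables standing in for one epoch of training and one self-supervised evaluation pass on held-out noisy data.

```python
def early_stop_training(step_fn, eval_loss_fn, max_epochs, patience=2):
    # Run up to max_epochs, stopping once the (self-supervised) evaluation
    # loss has failed to improve for `patience` consecutive epochs.
    best_loss, best_epoch, bad_epochs = float("inf"), -1, 0
    for epoch in range(max_epochs):
        step_fn(epoch)             # one epoch of training on noisy data only
        val = eval_loss_fn(epoch)  # self-supervised loss on held-out noisy data
        if val < best_loss:
            best_loss, best_epoch, bad_epochs = val, epoch, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break              # no improvement for `patience` epochs
    return best_epoch, best_loss
```

Because the evaluation loss needs only noisy measurements, this monitoring works even when no clean validation images exist, which is the whole point of the self-supervised setting.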
.. code-block:: Python

    verbose = True  # print training information

    train_dataloader = DataLoader(
        train_dataset, batch_size=batch_size, num_workers=num_workers, shuffle=False
    )
    test_dataloader = DataLoader(
        test_dataset, batch_size=batch_size, num_workers=num_workers, shuffle=False
    )

    # Initialize the trainer
    trainer = dinv.Trainer(
        model=model,
        physics=physics,
        epochs=epochs,
        scheduler=scheduler,
        losses=loss,
        optimizer=optimizer,
        device=device,
        metrics=None,  # no supervised metrics
        train_dataloader=train_dataloader,
        eval_dataloader=test_dataloader,
        early_stop=2,  # early stop using the self-supervised loss on the test set
        compute_eval_losses=True,  # use the self-supervised loss for evaluation
        early_stop_on_losses=True,  # stop using the self-supervised eval loss
        plot_images=True,
        save_path=str(CKPT_DIR / operation),
        verbose=verbose,
        show_progress_bar=False,  # disable the progress bar for better vis in sphinx gallery
    )

    # Train the network
    model = trainer.train()

.. rst-class:: sphx-glr-horizontal

    *

      .. image-sg:: /auto_examples/self-supervised-learning/images/sphx_glr_demo_r2r_denoising_001.png
         :alt: Ground truth, Measurement, Reconstruction
         :srcset: /auto_examples/self-supervised-learning/images/sphx_glr_demo_r2r_denoising_001.png
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_examples/self-supervised-learning/images/sphx_glr_demo_r2r_denoising_002.png
         :alt: Ground truth, Measurement, Reconstruction
         :srcset: /auto_examples/self-supervised-learning/images/sphx_glr_demo_r2r_denoising_002.png
         :class: sphx-glr-multi-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    The model has 444737 trainable parameters
    Train epoch 0: TotalLoss=0.44
    Eval epoch 0: TotalLoss=0.361
    Best model saved at epoch 1

.. GENERATED FROM PYTHON SOURCE LINES 195-200

Test the network
----------------

We now assume that we have access to a small test set of clean images to evaluate the performance of the trained network.
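As a reminder of what the evaluation metric measures: PSNR compares a reconstruction against ground truth as ``10 * log10(MAX**2 / MSE)`` in decibels. Below is a minimal standalone sketch; deepinv provides this as ``dinv.metric.PSNR``, and this version assumes images scaled to ``[0, 1]`` (so ``max_pixel=1.0``).

```python
import torch


def psnr(x_hat, x, max_pixel=1.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).
    # Higher is better; identical images give infinite PSNR.
    mse = torch.mean((x_hat - x) ** 2)
    return 10.0 * torch.log10(max_pixel ** 2 / mse)
```

For example, a reconstruction off by a constant 0.1 everywhere has MSE 0.01 and hence a PSNR of 20 dB, which matches the order of magnitude of the test results reported below.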
We compute the PSNR between the denoised images and the clean ground-truth images.

.. GENERATED FROM PYTHON SOURCE LINES 200-202

.. code-block:: Python

    trainer.test(test_dataloader, metrics=dinv.metric.PSNR())

.. image-sg:: /auto_examples/self-supervised-learning/images/sphx_glr_demo_r2r_denoising_003.png
   :alt: Ground truth, Measurement, No learning, Reconstruction
   :srcset: /auto_examples/self-supervised-learning/images/sphx_glr_demo_r2r_denoising_003.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Eval epoch 0: TotalLoss=0.443, PSNR=20.328, PSNR no learning=12.853
    Test results:
    PSNR no learning: 12.853 +- 2.357
    PSNR: 20.328 +- 2.453

    {'PSNR no learning': 12.85311279296875, 'PSNR no learning_std': 2.3566859967293277, 'PSNR': 20.3280387878418, 'PSNR_std': 2.4525542582395112}

.. GENERATED FROM PYTHON SOURCE LINES 203-206

:References:

.. footbibliography::

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 0.695 seconds)

.. _sphx_glr_download_auto_examples_self-supervised-learning_demo_r2r_denoising.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: demo_r2r_denoising.ipynb `

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: demo_r2r_denoising.py `

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: demo_r2r_denoising.zip `

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_