.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/self-supervised-learning/demo_unsure.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_self-supervised-learning_demo_unsure.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_self-supervised-learning_demo_unsure.py:


Self-supervised denoising with the UNSURE loss
====================================================================================================

This example shows how to train a denoiser network in a fully self-supervised way,
i.e., using only noisy images with unknown noise level, via the UNSURE loss introduced in
https://arxiv.org/abs/2409.01985.

The UNSURE optimization problem for Gaussian denoising with unknown noise level is defined as:

.. math::

    \min_{R} \max_{\sigma^2} \frac{1}{m}\|y-\inverse{y}\|_2^2
    +\frac{2\sigma^2}{m\tau}b^{\top} \left(\inverse{y+\tau b}-\inverse{y}\right)

where :math:`R` is the trainable network, :math:`y` is the noisy image with :math:`m` pixels,
:math:`b\sim \mathcal{N}(0,1)` is a Gaussian random variable, and :math:`\tau` is a small positive number.
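To make the two terms concrete, the following minimal sketch (not part of the generated example,
and not the implementation used below) evaluates the objective for a generic denoiser ``R``, a
noisy batch ``y``, and a current estimate ``sigma2`` of the noise variance, using a single Monte
Carlo sample ``b`` for the divergence term. All names in this snippet are illustrative only.

.. code-block:: Python

    import torch


    def unsure_objective(R, y, sigma2, tau=1e-3):
        # One-sample Monte Carlo estimate of the objective above.
        # R: denoiser network, y: noisy image(s) of shape (B, C, H, W),
        # sigma2: current estimate of the noise variance,
        # tau: small step for the finite-difference divergence estimate.
        b = torch.randn_like(y)  # b ~ N(0, 1), same shape as y
        m = y.numel()  # number of pixels
        r_y = R(y)
        data_fit = (y - r_y).pow(2).sum() / m  # (1/m) ||y - R(y)||_2^2
        div = (b * (R(y + tau * b) - r_y)).sum() / (m * tau)  # b^T (R(y + tau*b) - R(y)) / (m*tau)
        return data_fit + 2 * sigma2 * div

The example below does not use this helper: it relies on :class:`deepinv.loss.SureGaussianLoss`
with the ``unsure=True`` option, which computes the loss and updates :math:`\sigma^2` internally.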
.. GENERATED FROM PYTHON SOURCE LINES 20-30

.. code-block:: Python


    from pathlib import Path

    import torch
    from torch.utils.data import DataLoader
    from torchvision import transforms, datasets

    import deepinv as dinv
    from deepinv.utils.demo import get_data_home

.. GENERATED FROM PYTHON SOURCE LINES 31-34

Set up paths for data loading and results
---------------------------------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 34-45

.. code-block:: Python


    BASE_DIR = Path(".")
    DATA_DIR = BASE_DIR / "measurements"
    CKPT_DIR = BASE_DIR / "ckpts"
    ORIGINAL_DATA_DIR = get_data_home()

    # Set the global random seed from pytorch to ensure reproducibility of the example.
    torch.manual_seed(0)

    device = dinv.utils.get_freer_gpu() if torch.cuda.is_available() else "cpu"

.. GENERATED FROM PYTHON SOURCE LINES 46-50

Load base image datasets
----------------------------------------------------------------------------------

In this example, we use the MNIST dataset as the base image dataset.

.. GENERATED FROM PYTHON SOURCE LINES 50-63

.. code-block:: Python


    operation = "denoising"
    train_dataset_name = "MNIST"

    transform = transforms.Compose([transforms.ToTensor()])

    train_dataset = datasets.MNIST(
        root=ORIGINAL_DATA_DIR, train=True, transform=transform, download=True
    )
    test_dataset = datasets.MNIST(
        root=ORIGINAL_DATA_DIR, train=False, transform=transform, download=True
    )

.. GENERATED FROM PYTHON SOURCE LINES 64-73

Generate a dataset of noisy images
----------------------------------------------------------------------------------

We generate a dataset of noisy images corrupted by Gaussian noise.

.. note::

    We use a subset of the whole training set to reduce the computational load of the example.
    We recommend using the whole training set (by setting ``n_images_max=None``) to get the best results.

.. GENERATED FROM PYTHON SOURCE LINES 73-103

.. code-block:: Python


    true_sigma = 0.1

    # define the physics
    physics = dinv.physics.Denoising(dinv.physics.GaussianNoise(sigma=true_sigma))

    # Use a parallel dataloader if a GPU is available to speed up training;
    # otherwise, as all computations run on the CPU, use synchronous data loading.
    num_workers = 4 if torch.cuda.is_available() else 0
    n_images_max = (
        100 if torch.cuda.is_available() else 5
    )  # number of images used for training

    measurement_dir = DATA_DIR / train_dataset_name / operation
    deepinv_datasets_path = dinv.datasets.generate_dataset(
        train_dataset=train_dataset,
        test_dataset=test_dataset,
        physics=physics,
        device=device,
        save_dir=measurement_dir,
        train_datapoints=n_images_max,
        test_datapoints=n_images_max,
        num_workers=num_workers,
        dataset_filename="demo_sure",
    )

    train_dataset = dinv.datasets.HDF5Dataset(path=deepinv_datasets_path, train=True)
    test_dataset = dinv.datasets.HDF5Dataset(path=deepinv_datasets_path, train=False)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Dataset has been saved at measurements/MNIST/denoising/demo_sure0.h5

.. GENERATED FROM PYTHON SOURCE LINES 104-108

Set up the denoiser network
---------------------------------------------------------------

We use a simple U-Net architecture with 2 scales as the denoiser network.

.. GENERATED FROM PYTHON SOURCE LINES 108-114

.. code-block:: Python


    model = dinv.models.ArtifactRemoval(
        dinv.models.UNet(in_channels=1, out_channels=1, scales=2).to(device)
    )

.. GENERATED FROM PYTHON SOURCE LINES 115-129

Set up the training parameters
--------------------------------------------

We use :class:`deepinv.loss.SureGaussianLoss` as the training loss with the ``unsure=True`` option.
The optimization with respect to the noise level is performed by a stochastic gradient update with
momentum inside the loss class, so it is seamlessly integrated into the training process.

.. note::

    There are (UN)SURE losses for various noise distributions.
    See also :class:`deepinv.loss.SurePGLoss` for mixed Poisson-Gaussian noise.

.. note::

    We train for only 10 epochs to reduce the computational load of the example.
    We recommend training for more epochs to get the best results.

.. GENERATED FROM PYTHON SOURCE LINES 129-148

.. code-block:: Python


    epochs = 10  # choose training epochs
    learning_rate = 5e-4
    batch_size = 32 if torch.cuda.is_available() else 1

    sigma_init = 0.05  # initial guess for the noise level
    step_size = 1e-4  # step size for the optimization of the noise level
    momentum = 0.9  # momentum for the optimization of the noise level

    # choose self-supervised training loss
    loss = dinv.loss.SureGaussianLoss(
        sigma=sigma_init, unsure=True, step_size=step_size, momentum=momentum
    )

    # choose optimizer
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=1e-8)

    print(f"INIT. noise level {loss.sigma2.sqrt().item():.3f}")

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    INIT. noise level 0.050
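To give an idea of what the ``unsure=True`` option does under the hood, the sketch below
(hypothetical names only; the actual deepinv internals may differ) reuses the ``unsure_objective``
helper from the introduction and applies a momentum update to the noise variance, moving it
towards the maximum of the objective, as prescribed by the :math:`\max_{\sigma^2}` in the
formulation above.

.. code-block:: Python

    # Schematic sketch of the internal noise-level update; illustrative only.
    sigma2_hat = torch.tensor(sigma_init**2, requires_grad=True)
    velocity = torch.zeros(())


    def sigma2_step(R, y, tau=1e-3):
        global sigma2_hat, velocity
        objective = unsure_objective(R, y, sigma2_hat, tau)
        # gradient of the objective with respect to the noise variance only
        grad = torch.autograd.grad(objective, sigma2_hat)[0]
        with torch.no_grad():
            # momentum update that increases the objective (maximization over sigma^2)
            velocity = momentum * velocity + step_size * grad
            sigma2_hat += velocity

In the example, this update is handled internally by the loss at each training step, and the
current estimate can be read from ``loss.sigma2`` at any time, as done after training below.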
.. GENERATED FROM PYTHON SOURCE LINES 149-153

Train the network
--------------------------------------------

We train the network using the :class:`deepinv.Trainer` class.

.. GENERATED FROM PYTHON SOURCE LINES 153-177

.. code-block:: Python


    train_dataloader = DataLoader(
        train_dataset, batch_size=batch_size, num_workers=num_workers, shuffle=True
    )

    # Initialize the trainer
    trainer = dinv.Trainer(
        model=model,
        physics=physics,
        epochs=epochs,
        losses=loss,
        optimizer=optimizer,
        device=device,
        train_dataloader=train_dataloader,
        plot_images=False,
        save_path=str(CKPT_DIR / operation),
        verbose=True,  # print training information
        show_progress_bar=False,  # disable progress bar for better vis in sphinx gallery.
    )

    # Train the network
    model = trainer.train()

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    The model has 444737 trainable parameters
    Train epoch 0: TotalLoss=0.082, PSNR=11.174
    Train epoch 1: TotalLoss=0.029, PSNR=14.826
    Train epoch 2: TotalLoss=0.015, PSNR=17.465
    Train epoch 3: TotalLoss=0.01, PSNR=19.055
    Train epoch 4: TotalLoss=0.007, PSNR=20.112
    Train epoch 5: TotalLoss=0.006, PSNR=21.228
    Train epoch 6: TotalLoss=0.005, PSNR=21.775
    Train epoch 7: TotalLoss=0.005, PSNR=22.558
    Train epoch 8: TotalLoss=0.004, PSNR=23.066
    Train epoch 9: TotalLoss=0.004, PSNR=23.374

.. GENERATED FROM PYTHON SOURCE LINES 178-182

Check learned noise level
--------------------------------------------

We can compare the noise level estimated by the loss function with the true value used to generate the data.

.. GENERATED FROM PYTHON SOURCE LINES 182-189

.. code-block:: Python


    est_sigma = loss.sigma2.sqrt().item()
    print(f"LEARNED noise level {est_sigma:.3f}")
    print(f"Estimation error noise level {abs(est_sigma-true_sigma):.3f}")

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    LEARNED noise level 0.097
    Estimation error noise level 0.003

.. GENERATED FROM PYTHON SOURCE LINES 190-193

Test the network
--------------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 193-200

.. code-block:: Python


    test_dataloader = DataLoader(
        test_dataset, batch_size=batch_size, num_workers=num_workers, shuffle=False
    )
    trainer.plot_images = True
    trainer.test(test_dataloader=test_dataloader)

.. image-sg:: /auto_examples/self-supervised-learning/images/sphx_glr_demo_unsure_001.png
    :alt: Ground truth, Measurement, No learning, Reconstruction
    :srcset: /auto_examples/self-supervised-learning/images/sphx_glr_demo_unsure_001.png
    :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Eval epoch 0: PSNR=23.689, PSNR no learning=19.981
    Test results:
    PSNR no learning: 19.981 +- 0.108
    PSNR: 23.689 +- 0.881

    {'PSNR no learning': np.float64(19.98084411621094), 'PSNR no learning_std': np.float64(0.10847507169258835), 'PSNR': np.float64(23.68878173828125), 'PSNR_std': np.float64(0.8814043693576259)}


.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 1.732 seconds)


.. _sphx_glr_download_auto_examples_self-supervised-learning_demo_unsure.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: demo_unsure.ipynb <demo_unsure.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: demo_unsure.py <demo_unsure.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: demo_unsure.zip <demo_unsure.zip>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_