.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/unfolded/demo_vanilla_unfolded.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_unfolded_demo_vanilla_unfolded.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_unfolded_demo_vanilla_unfolded.py:


Vanilla Unfolded algorithm for super-resolution
====================================================================================================

This simple example shows how to use a vanilla unfolded Plug-and-Play algorithm.
The DnCNN denoiser and the algorithm parameters (stepsize, regularization parameters)
are trained jointly. For simplicity, we show how to train the algorithm on a small
dataset; for optimal results, use a larger one. To visualize the training, you can
use Weights & Biases (wandb) by setting ``wandb_vis=True``.

.. GENERATED FROM PYTHON SOURCE LINES 10-21

.. code-block:: Python


    import deepinv as dinv
    from pathlib import Path
    import torch
    from torch.utils.data import DataLoader
    from deepinv.optim.data_fidelity import L2
    from deepinv.optim.prior import PnP
    from deepinv.unfolded import unfolded_builder
    from torchvision import transforms
    from deepinv.utils.demo import load_dataset

.. GENERATED FROM PYTHON SOURCE LINES 22-25

Setup paths for data loading and results.
----------------------------------------------------------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 25-36

.. code-block:: Python


    BASE_DIR = Path(".")
    DATA_DIR = BASE_DIR / "measurements"
    RESULTS_DIR = BASE_DIR / "results"
    CKPT_DIR = BASE_DIR / "ckpts"

    # Set the global random seed from PyTorch to ensure reproducibility of the example.
    torch.manual_seed(0)

    device = dinv.utils.get_freer_gpu() if torch.cuda.is_available() else "cpu"

.. GENERATED FROM PYTHON SOURCE LINES 37-40

Load base image datasets and degradation operators.
----------------------------------------------------------------------------------------

In this example, we use the CBSD500 dataset for training and the Set3C dataset for testing.

.. GENERATED FROM PYTHON SOURCE LINES 40-45

.. code-block:: Python


    img_size = 64 if torch.cuda.is_available() else 32
    n_channels = 3  # 3 for color images, 1 for gray-scale images
    operation = "super-resolution"

.. GENERATED FROM PYTHON SOURCE LINES 46-49

Generate a dataset of low-resolution images and load it.
----------------------------------------------------------------------------------------

We use the Downsampling class from the physics module to generate a dataset of low-resolution images.

.. GENERATED FROM PYTHON SOURCE LINES 49-100

.. code-block:: Python


    # For simplicity, we use a small dataset for training.
    # Replace it with a larger dataset (e.g. the "drunet" dataset) for optimal results.
    train_dataset_name = "CBSD500"
    test_dataset_name = "set3c"

    # Specify the train and test transforms to be applied to the input images.
    test_transform = transforms.Compose(
        [transforms.CenterCrop(img_size), transforms.ToTensor()]
    )
    train_transform = transforms.Compose(
        [transforms.RandomCrop(img_size), transforms.ToTensor()]
    )

    # Define the base train and test datasets of clean images.
    train_base_dataset = load_dataset(train_dataset_name, transform=train_transform)
    test_base_dataset = load_dataset(test_dataset_name, transform=test_transform)

    # Use a parallel dataloader if using a GPU to speed up training;
    # otherwise, since all computations run on the CPU, use synchronous data loading.
    num_workers = 4 if torch.cuda.is_available() else 0

    # Degradation parameters
    factor = 2
    noise_level_img = 0.03

    # Generate the Gaussian blur downsampling operator.
    physics = dinv.physics.Downsampling(
        filter="gaussian",
        img_size=(n_channels, img_size, img_size),
        factor=factor,
        device=device,
        noise_model=dinv.physics.GaussianNoise(sigma=noise_level_img),
    )
    my_dataset_name = "demo_unfolded_sr"
    n_images_max = (
        1000 if torch.cuda.is_available() else 10
    )  # maximal number of images used for training
    measurement_dir = DATA_DIR / train_dataset_name / operation
    generated_datasets_path = dinv.datasets.generate_dataset(
        train_dataset=train_base_dataset,
        test_dataset=test_base_dataset,
        physics=physics,
        device=device,
        save_dir=measurement_dir,
        train_datapoints=n_images_max,
        num_workers=num_workers,
        dataset_filename=str(my_dataset_name),
    )

    train_dataset = dinv.datasets.HDF5Dataset(path=generated_datasets_path, train=True)
    test_dataset = dinv.datasets.HDF5Dataset(path=generated_datasets_path, train=False)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Downloading datasets/CBSD500.zip

.. _sphx_glr_download_auto_examples_unfolded_demo_vanilla_unfolded.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: demo_vanilla_unfolded.py <demo_vanilla_unfolded.py>`

        .. container:: sphx-glr-download sphx-glr-download-zip

            :download:`Download zipped: demo_vanilla_unfolded.zip <demo_vanilla_unfolded.zip>`

.. only:: html

    .. rst-class:: sphx-glr-signature

        `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_
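The imports at the top of this example bring in ``unfolded_builder``, ``PnP``, and ``L2``, which are used to build the trainable unfolded network. To make the unfolding idea concrete, here is a minimal, self-contained sketch in plain PyTorch (not the deepinv API; the class and helper names are illustrative) of an unfolded proximal gradient network: each of its ``max_iter`` layers performs a gradient step on the data fidelity :math:`\frac{1}{2}\|Ax - y\|^2` with a trainable stepsize, followed by a small learned denoising step.

```python
import torch
import torch.nn as nn


class UnfoldedPGD(nn.Module):
    # Illustrative unfolded proximal gradient network: a fixed number of
    # iterations of (gradient step on the data fidelity) + (learned denoiser),
    # with one trainable stepsize per layer. In the real example, the denoiser
    # is a DnCNN and the network is assembled by deepinv's `unfolded_builder`.
    def __init__(self, max_iter=3, channels=1):
        super().__init__()
        self.stepsize = nn.Parameter(torch.ones(max_iter))  # trainable stepsizes
        self.denoisers = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(max_iter)]
        )

    def forward(self, y, A, A_adjoint):
        x = A_adjoint(y)  # initialize with the back-projected measurement
        for k, denoiser in enumerate(self.denoisers):
            grad = A_adjoint(A(x) - y)        # gradient of 0.5 * ||A x - y||^2
            x = x - self.stepsize[k] * grad   # gradient step on the data fidelity
            x = x + denoiser(x)               # residual learned "denoising" step
        return x


# Toy 2x super-resolution operator: A keeps every second pixel,
# and its adjoint zero-fills the missing ones.
def A(x):
    return x[..., ::2, ::2]


def A_adjoint(y):
    x = torch.zeros(y.shape[0], y.shape[1], y.shape[2] * 2, y.shape[3] * 2)
    x[..., ::2, ::2] = y
    return x


model = UnfoldedPGD(max_iter=3, channels=1)
y = torch.rand(1, 1, 16, 16)       # low-resolution measurement
x_hat = model(y, A, A_adjoint)     # high-resolution estimate, shape (1, 1, 32, 32)
```

Because the whole network is differentiable, the stepsizes and denoiser weights can be trained jointly by backpropagating a reconstruction loss through all iterations, which is exactly what this example does with ``unfolded_builder`` and a DnCNN denoiser.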