.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/self-supervised-learning/demo_ei_transforms.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end ` to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_self-supervised-learning_demo_ei_transforms.py:

Image transformations for Equivariant Imaging
=============================================

This example demonstrates various geometric image transformations
implemented in ``deepinv`` that can be used in Equivariant Imaging (EI)
for self-supervised learning:

- ``Shift``: integer-pixel 2D shift;
- ``Rotate``: 2D image rotation;
- ``Scale``: continuous 2D image downscaling;
- ``Euclidean``: continuous translation, rotation, and reflection, forming the group :math:`\mathbb{E}(2)`;
- ``Similarity``: as above, plus scaling, forming the group :math:`\text{S}(2)`;
- ``Affine``: as above, plus shear effects, forming the group :math:`\text{Aff}(3)`;
- ``Homography``: as above, plus perspective (i.e. pan and tilt) effects, forming the group :math:`\text{PGL}(3)`;
- ``PanTiltRotate``: pure 3D camera rotation, i.e. pan, tilt, and 2D image rotation.

See the :ref:`docs ` for the full list.

These transformations were proposed in the following papers:

- ``Shift``, ``Rotate``: `Chen et al., Equivariant Imaging: Learning Beyond the Range Space `__
- ``Scale``: `Scanvic et al., Self-Supervised Learning for Image Super-Resolution and Deblurring `__
- ``Homography`` and the projective geometry framework: `Wang et al., Perspective-Equivariant Imaging: an Unsupervised Framework for Multispectral Pansharpening `__

.. GENERATED FROM PYTHON SOURCE LINES 37-52

.. code-block:: Python

    import torch
    from torch.utils.data import DataLoader, random_split
    from torchvision.datasets import ImageFolder
    from torchvision.transforms import Compose, ToTensor, CenterCrop, Resize
    from torchvision.datasets.utils import download_and_extract_archive

    import deepinv as dinv
    from deepinv.utils.demo import get_data_home

    device = dinv.utils.get_freer_gpu() if torch.cuda.is_available() else "cpu"

    ORIGINAL_DATA_DIR = get_data_home() / "Urban100"

.. GENERATED FROM PYTHON SOURCE LINES 53-56

Define the transforms. For the transforms that involve 3D camera rotation
(i.e. pan or tilt), we limit ``theta_max`` for display.

.. GENERATED FROM PYTHON SOURCE LINES 56-69

.. code-block:: Python

    transforms = [
        dinv.transform.Shift(),
        dinv.transform.Rotate(),
        dinv.transform.Scale(),
        dinv.transform.Homography(theta_max=10),
        dinv.transform.projective.Euclidean(),
        dinv.transform.projective.Similarity(),
        dinv.transform.projective.Affine(),
        dinv.transform.projective.PanTiltRotate(theta_max=10),
    ]

.. GENERATED FROM PYTHON SOURCE LINES 70-74

Plot the transforms on a sample image. Note that, during training, we never
have access to these ground-truth images ``x``, only to partial and noisy
measurements ``y``.

.. GENERATED FROM PYTHON SOURCE LINES 74-82

.. code-block:: Python

    x = dinv.utils.load_url_image(dinv.utils.demo.get_image_url("celeba_example.jpg"))

    dinv.utils.plot(
        [x] + [t(x) for t in transforms],
        ["Orig"] + [t.__class__.__name__ for t in transforms],
    )

.. image-sg:: /auto_examples/self-supervised-learning/images/sphx_glr_demo_ei_transforms_001.png
   :alt: Orig, Shift, Rotate, Scale, Homography, Euclidean, Similarity, Affine, PanTiltRotate
   :srcset: /auto_examples/self-supervised-learning/images/sphx_glr_demo_ei_transforms_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 83-90

Now, we run an inpainting experiment: we reconstruct images from measurements
masked with a random mask, without ground truth, using EI.
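Before moving to the experiment, the EI training signal itself can be sketched independently of ``deepinv``: given a fixed measurement operator :math:`A` and a transform :math:`T_g` under which the image set is invariant, EI encourages the reconstruction :math:`f` to satisfy :math:`f(A T_g f(y)) \approx T_g f(y)`. The toy NumPy sketch below illustrates this loss with a random inpainting mask, a 90-degree rotation, and a naive mean-fill reconstruction; all of these operators are illustrative assumptions, not the ``deepinv`` API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" and a random inpainting mask: the measurement
# operator A keeps ~60% of the pixels and zeroes out the rest.
x = rng.random((8, 8))
mask = rng.random((8, 8)) > 0.4
A = lambda im: im * mask

# Group transform: a 90-degree rotation, an invariance we can
# reasonably impose on natural image sets.
T = lambda im: np.rot90(im)

# A naive reconstruction: keep observed pixels, fill the missing
# ones with the mean of the observed values.
def f(y):
    out = y.copy()
    out[~mask] = y[mask].mean()
    return out

# EI self-supervised loss: with x1 = f(y), transform x2 = T(x1),
# re-measure and reconstruct, and penalize the deviation from x2,
# i.e. require f(A T x1) ~= T x1 (equivariance of f . A).
y = A(x)
x1 = f(y)
x2 = T(x1)
loss = np.mean((f(A(x2)) - x2) ** 2)
print(float(loss))
```

In training, this scalar would be minimized jointly with a measurement-consistency term over the network parameters of ``f``; here it simply quantifies how far the naive reconstruction is from being equivariant.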
For this example we use the Urban100 dataset of natural urban scenes. As these
scenes are imaged with a camera free to move and rotate in the world, all of
the above transformations are valid invariances that we can impose on the
unknown image set :math:`x\in X`.

.. GENERATED FROM PYTHON SOURCE LINES 90-106

.. code-block:: Python

    dataset = dinv.datasets.Urban100HR(
        root=ORIGINAL_DATA_DIR,
        download=True,
        transform=Compose([ToTensor(), Resize(256), CenterCrop(256)]),
    )

    train_dataset, test_dataset = random_split(dataset, (0.8, 0.2))

    train_dataloader = DataLoader(train_dataset, shuffle=True)
    test_dataloader = DataLoader(test_dataset)

    # Use physics to generate data online
    physics = dinv.physics.Inpainting((3, 256, 256), mask=0.6, device=device)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    (dataset download progress output omitted)

.. container:: sphx-glr-download sphx-glr-download-python

    :download:`Download Python source code: demo_ei_transforms.py `

.. container:: sphx-glr-download sphx-glr-download-zip

    :download:`Download zipped: demo_ei_transforms.zip `

.. only:: html

    .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_