TiledSpaceVaryingBlur
- class deepinv.physics.TiledSpaceVaryingBlur(filters=None, patch_size=None, stride=None, blending_mode='bump', use_fft=True, device='cpu', dtype=torch.float32, **kwargs)

Bases: TiledMixin2d, LinearPhysics

Space-varying blur via tiled convolution.
This forward operator performs a space-varying blur using local convolutions on overlapping patches (tiles) of the image. The resulting image is then reconstructed by smoothly blending the overlapping patches (see blending_mode).

Given an input image \(x\), this linear operator computes

\[y = \sum_{k=1}^K h_k \star (m_k \odot x)\]

where \(\star\) denotes convolution, \(\odot\) the Hadamard product, \(m_k\) are binary masks defining the tiles and \(h_k\) are filters. When the image size minus the patch_size is not perfectly divisible by the stride, the image is zero-padded to fit an integer number of patches and the extra padding is removed automatically afterwards.

The number of patches (tiles) \(K\) is determined by the patch_size and stride parameters: it is the product of the number of patches along each spatial dimension. A helper class method num_filters(img_size, patch_size, stride) is provided to compute the number of filters needed for a given image size, patch size and stride.
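As a sanity check of the tile count above, the per-dimension rule can be sketched in plain Python. This is only an illustration of the rule as described here (a patch every stride pixels, with a ceiling when zero-padding is needed), not the library's actual num_filters implementation.

```python
import math

def num_tiles(img_size, patch_size, stride):
    # Patches per dimension: ceil((n - patch) / stride) + 1; the ceiling
    # accounts for the zero-padding added when tiles do not fit exactly.
    # Illustrative helper, not the deepinv implementation.
    k = 1
    for n, p, s in zip(img_size, patch_size, stride):
        k *= math.ceil((n - p) / s) + 1
    return k

# As in the example below: 256x256 image, 64x64 patches, stride 32
# -> 7 patches per dimension, 49 tiles in total.
print(num_tiles((256, 256), (64, 64), (32, 32)))  # 49
```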
Note

For simplicity, we also provide the generator class deepinv.physics.generator.TiledBlurGenerator to generate the filters, for example during training.
Note

This class supports both FFT-based and direct convolutions; see deepinv.physics.functional.conv2d_fft() and deepinv.physics.functional.conv2d(). FFT-based convolutions are typically faster for large filters. The method can be selected via the use_fft parameter (default: True).
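To see why the two paths agree, here is a minimal torch sketch (an illustration, not the deepinv conv2d_fft implementation) computing the same 'valid' convolution directly and via zero-padded FFTs. The kernel is flipped in the FFT path because a pointwise spectral product implements a true convolution, while torch's conv2d is a cross-correlation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 1, 8, 8)
h = torch.randn(1, 1, 3, 3)

# Direct 'valid' convolution (torch's conv2d is a cross-correlation).
direct = F.conv2d(x, h)  # shape (1, 1, 6, 6)

# FFT path: zero-pad both inputs to the full-convolution size,
# multiply spectra, then crop the 'valid' region. Flip the kernel so
# the spectral product matches the cross-correlation above.
Hf, Wf = 8 + 3 - 1, 8 + 3 - 1
X = torch.fft.rfft2(x, s=(Hf, Wf))
K = torch.fft.rfft2(torch.flip(h, dims=(-2, -1)), s=(Hf, Wf))
full = torch.fft.irfft2(X * K, s=(Hf, Wf))
fft_valid = full[..., 2:8, 2:8]  # 'valid' part starts at offset k - 1 = 2

print(torch.allclose(direct, fft_valid, atol=1e-4))  # True
```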
Note

This class supports broadcasting between the batch and channel dimensions of the filters and the input image. See deepinv.physics.functional.conv2d() for more details.

- Parameters:
  - filters (torch.Tensor) – Filters \(h_k\). Tensor of size (B, C, K, h, w), where B is the batch size, C the number of channels, K the number of filters, and h and w the filter height and width, which should be smaller than or equal to the height and width of the image \(x\) respectively. If None, filters must be provided during the forward pass.
  - patch_size (int, tuple[int, int]) – Size of the patches (tiles). If int, the same size is used for both spatial dimensions.
  - stride (int, tuple[int, int]) – Stride between consecutive patches. If int, the same stride is used for both spatial dimensions.
  - blending_mode (str) – Blending mode for overlapping patches. Options are 'bump' (default) and 'linear'.
Note

This class only supports 'valid' padding. If you need other padding options, please raise an issue.
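With 'valid' padding, each output dimension shrinks by the filter size minus one. A quick check (plain Python, purely illustrative) reproduces the 226-pixel output of the example below for a 256-pixel image and 31x31 PSFs:

```python
def valid_out(n, k):
    # 'valid' convolution output length: n_out = n - k + 1.
    return n - k + 1

print(valid_out(256, 31))  # 226
```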
- Examples:
>>> import torch
>>> from deepinv.physics import TiledSpaceVaryingBlur
>>> from deepinv.physics.generator import MotionBlurGenerator, TiledBlurGenerator
>>> import deepinv as dinv
>>> img_size = (256, 256)
>>> patch_size = (64, 64)
>>> stride = (32, 32)
>>> x = dinv.utils.load_example(
...     "butterfly.png", img_size=img_size, resize_mode="resize"
... )
>>> psf_generator = MotionBlurGenerator(psf_size=(31, 31))
>>> generator = TiledBlurGenerator(
...     psf_generator=psf_generator,
...     patch_size=patch_size,
...     stride=stride,
... )
>>> filters = generator.step(batch_size=1, img_size=img_size)["filters"]
>>> physics = TiledSpaceVaryingBlur(patch_size=patch_size, stride=stride)
>>> y = physics(x, filters=filters)
>>> print(x.shape, y.shape)
torch.Size([1, 3, 256, 256]) torch.Size([1, 3, 226, 226])
>>> dinv.utils.plot([x, y], titles=["Original", "Blurred"])
- A(x, filters=None, **kwargs)

Applies the space-varying blur operator to the input image.

- Parameters:
  - x (torch.Tensor) – input image.
  - filters (torch.Tensor) – Filters \(h_k\).
- Returns:
  Space-varying blurred image.
- Return type:
  torch.Tensor
- A_adjoint(y, filters=None, **kwargs)

Applies the adjoint operator.

- Parameters:
  - y (torch.Tensor) – blurry image \(y\). Tensor of size (B, C, H', W').
  - filters (torch.Tensor) – Filters \(h_k\). Tensor of size (b, c, K, h, w).
- Returns:
  Adjoint result.
- Return type:
  torch.Tensor
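For intuition, the forward equation \(y = \sum_k h_k \star (m_k \odot x)\) and its adjoint can be written out directly in torch and verified with the standard dot-product test \(\langle A x, u\rangle = \langle x, A^\top u\rangle\). This is a hand-rolled sketch with made-up masks and filters, not the deepinv implementation; F.conv_transpose2d is the exact adjoint of F.conv2d, and multiplication by a binary mask is its own adjoint.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
H = W = 8
x = torch.randn(1, 1, H, W)

# Two tiles: binary masks m_k for the left and right halves of the image.
m = torch.zeros(2, 1, 1, H, W)
m[0, ..., :, : W // 2] = 1.0
m[1, ..., :, W // 2 :] = 1.0

# One 3x3 filter h_k per tile (random stand-ins for local PSFs).
h = torch.randn(2, 1, 1, 3, 3)

def A(x):
    # y = sum_k h_k * (m_k ⊙ x), with 'valid' padding.
    return sum(F.conv2d(m[k] * x, h[k]) for k in range(2))

def A_adjoint(u):
    # Adjoint: sum_k m_k ⊙ (transposed convolution of u with h_k).
    return sum(m[k] * F.conv_transpose2d(u, h[k]) for k in range(2))

u = torch.randn(1, 1, H - 2, W - 2)  # matches the 'valid' output size
lhs = (A(x) * u).sum()
rhs = (x * A_adjoint(u)).sum()
print(torch.allclose(lhs, rhs, atol=1e-4))  # dot-product test passes
```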