SpaceVaryingBlur#
- class deepinv.physics.SpaceVaryingBlur(filters=None, multipliers=None, padding=None, **kwargs)[source]#
Bases:
LinearPhysics
Implements a space-varying blur via product-convolution.
This operator performs
\[y = \sum_{k=1}^K h_k \star (w_k \odot x)\]
where \(\star\) is a convolution, \(\odot\) is a Hadamard product, \(w_k\) are multipliers and \(h_k\) are filters.
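The product-convolution formula above can be sketched numerically. The following is a minimal NumPy illustration (not the deepinv implementation) with circular boundary conditions, computing each convolution via the FFT:

```python
import numpy as np

def product_convolution(x, filters, multipliers):
    """Space-varying blur y = sum_k h_k * (w_k . x), circular padding.

    x           : (H, W) image
    filters     : (K, H, W) kernels h_k, zero-padded to the image size
    multipliers : (K, H, W) weights w_k
    """
    X = np.fft.fft2(multipliers * x[None])       # (K, H, W): each w_k . x
    H = np.fft.fft2(filters)                     # convolution -> product in Fourier
    return np.fft.ifft2(H * X).sum(axis=0).real  # sum the K blurred terms

# With K = 1, w_1 = 1 and h_1 a Dirac, the operator reduces to the identity.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
h = np.zeros((1, 8, 8))
h[0, 0, 0] = 1.0                                 # Dirac kernel
w = np.ones((1, 8, 8))
y = product_convolution(x, h, w)
```

With `K = 1` and a constant multiplier this is an ordinary (space-invariant) convolution; larger `K` interpolates between locally different blur kernels.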
- Parameters:
filters (torch.Tensor) – Filters \(h_k\). Tensor of size (b, c, K, h, w) with b in {1, B}, c in {1, C}, h <= H and w <= W.
multipliers (torch.Tensor) – Multipliers \(w_k\). Tensor of size (b, c, K, H, W) with b in {1, B} and c in {1, C}.
padding (str) – options are 'valid', 'circular', 'replicate' and 'reflect'. If padding = 'valid', the blurred output is smaller than the image (no padding); otherwise the blurred output has the same size as the image.
device (str) – cpu or cuda.
- Examples:
We show how to instantiate a spatially varying blur operator.
>>> import torch
>>> from deepinv.physics.generator import DiffractionBlurGenerator, ProductConvolutionBlurGenerator
>>> from deepinv.physics.blur import SpaceVaryingBlur
>>> from deepinv.utils.plotting import plot
>>> psf_size = 32
>>> img_size = (256, 256)
>>> delta = 16
>>> psf_generator = DiffractionBlurGenerator((psf_size, psf_size))
>>> pc_generator = ProductConvolutionBlurGenerator(psf_generator=psf_generator, img_size=img_size)
>>> params_pc = pc_generator.step(1)
>>> physics = SpaceVaryingBlur(**params_pc)
>>> dirac_comb = torch.zeros(img_size).unsqueeze(0).unsqueeze(0)
>>> dirac_comb[0, 0, ::delta, ::delta] = 1
>>> psf_grid = physics(dirac_comb)
>>> plot(psf_grid, titles="Space varying impulse responses")
- A(x: Tensor, filters=None, multipliers=None, padding=None, **kwargs) Tensor [source]#
Applies the space varying blur operator to the input image.
It can receive new parameters \(w_k\), \(h_k\) and padding to be used in the forward operator, which are then stored as the current parameters.
- Parameters:
filters (torch.Tensor) – Filters \(h_k\). Tensor of size (b, c, K, h, w) with b in {1, B}, c in {1, C}, h <= H and w <= W.
multipliers (torch.Tensor) – Multipliers \(w_k\). Tensor of size (b, c, K, H, W) with b in {1, B} and c in {1, C}.
padding (str) – options are 'valid', 'circular', 'replicate' and 'reflect'. If padding = 'valid', the blurred output is smaller than the image (no padding); otherwise the blurred output has the same size as the image.
device (str) – cpu or cuda.
- A_adjoint(y: Tensor, filters=None, multipliers=None, padding=None, **kwargs) Tensor [source]#
Applies the adjoint operator.
It can receive new parameters \(w_k\), \(h_k\) and padding to be used in the adjoint operator, which are then stored as the current parameters.
- Parameters:
filters (torch.Tensor) – Filters \(h_k\). Tensor of size (b, c, K, h, w) with b in {1, B}, c in {1, C}, h <= H and w <= W.
multipliers (torch.Tensor) – Multipliers \(w_k\). Tensor of size (b, c, K, H, W) with b in {1, B} and c in {1, C}.
padding (str) – options are 'valid', 'circular', 'replicate' and 'reflect'. If padding = 'valid', the blurred output is smaller than the image (no padding); otherwise the blurred output has the same size as the image.
device (str) – cpu or cuda.
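For circular padding, the adjoint of the product-convolution operator is \(A^\top y = \sum_k w_k \odot (\tilde h_k \star y)\), where \(\tilde h_k\) is the flipped filter (a conjugate in the Fourier domain). A standard way to validate any such `A`/`A_adjoint` pair is the dot-product test \(\langle A x, y\rangle = \langle x, A^\top y\rangle\); here is a minimal NumPy sketch (illustrative, not the deepinv code):

```python
import numpy as np

def A(x, h, w):
    # Forward: y = sum_k h_k * (w_k . x), circular convolution via FFT.
    return np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(w * x[None])).sum(0).real

def A_adjoint(y, h, w):
    # Adjoint: x = sum_k w_k . (flip(h_k) * y); flipping a real kernel
    # corresponds to conjugating its Fourier transform.
    return (w * np.fft.ifft2(np.conj(np.fft.fft2(h)) * np.fft.fft2(y)[None]).real).sum(0)

rng = np.random.default_rng(0)
K, H, W = 3, 16, 16
h = rng.standard_normal((K, H, W))   # random filters
w = rng.standard_normal((K, H, W))   # random multipliers
x = rng.standard_normal((H, W))
y = rng.standard_normal((H, W))

lhs = np.vdot(A(x, h, w), y)           # <A x, y>
rhs = np.vdot(x, A_adjoint(y, h, w))   # <x, A^T y>
```

The two inner products agree to floating-point precision, which is the same property `deepinv.physics.LinearPhysics.adjointness_test` checks for its operators.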
- update_parameters(filters=None, multipliers=None, padding=None, **kwargs)[source]#
Updates the current parameters.
- Parameters:
filters (torch.Tensor) – Filters \(h_k\). Tensor of size (b, c, K, h, w) with b in {1, B}, c in {1, C}, h <= H and w <= W.
multipliers (torch.Tensor) – Multipliers \(w_k\). Tensor of size (b, c, K, H, W) with b in {1, B} and c in {1, C}.
padding (str) – options are 'valid', 'circular', 'replicate' and 'reflect'.