KernelIdentificationNetwork
- class deepinv.models.KernelIdentificationNetwork(filters=25, blur_kernel_size=33, bilinear=False, no_softmax=False, pretrained='download', device='cpu')[source]
Bases: Module

Space-varying blur kernel estimation network.
U-Net proposed by Carbajal et al.[1], estimating the parameters of the deepinv.physics.SpaceVaryingBlur forward model, i.e., blur kernels and the corresponding spatial multipliers (weights).

The current implementation supports blur kernels of size 33x33 (default) and 65x65, and 1 or 3 input channels.
Code adapted from GuillermoCarbajal/J-MKPD with permission from the author.
Images are assumed to be in the range [0, 1] before being passed to the network, and to be non-gamma-corrected (i.e., linear RGB). If your blurry image has been gamma-corrected (e.g., standard sRGB images), consider applying an inverse gamma correction (e.g., raising pixel values to the power of 2.2) before passing it to the network for better results, as sketched below.
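A minimal sketch of this linearization step (the fixed 2.2 exponent is a common approximation of the sRGB decoding curve, not its exact inverse):

>>> import torch
>>> y_srgb = torch.rand(1, 3, 128, 128)     # hypothetical gamma-corrected blurry image in [0, 1]
>>> y_linear = y_srgb.clamp(0, 1) ** 2.2    # approximate inverse gamma correction (linear RGB)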
- Parameters:
filters (int) – number of blur kernels to estimate, defaults to 25.
blur_kernel_size (int) – size of the blur kernels to estimate, defaults to 33. Only 33 and 65 are currently supported.
bilinear (bool) – whether to use bilinear upsampling instead of transposed convolutions, defaults to False.
no_softmax (bool) – whether to skip the softmax normalization of the estimated kernels, defaults to False (i.e., softmax is applied).
pretrained (str, None) – use a pretrained network. If pretrained=None, the weights will be initialized at random using PyTorch's default initialization. If pretrained='download', the weights will be downloaded from an online repository (only available for the default architecture with default parameters). Finally, pretrained can also be set as a path to the user's own pretrained weights (as sketched after this list). See pretrained-weights for more details.
device (str, torch.device) – device to use, defaults to 'cpu'.
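For instance (the checkpoint path below is purely illustrative):

>>> import deepinv as dinv
>>> net = dinv.models.KernelIdentificationNetwork(pretrained=None)  # random initialization, e.g., for training from scratch
>>> # net = dinv.models.KernelIdentificationNetwork(pretrained='path/to/my_weights.pth')  # hypothetical path to custom weights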
Example usage:
>>> import deepinv as dinv
>>> import torch
>>> device = dinv.utils.get_freer_gpu(verbose=False) if torch.cuda.is_available() else "cpu"
>>> kernel_estimator = dinv.models.KernelIdentificationNetwork(device=device)
>>> physics = dinv.physics.SpaceVaryingBlur(device=device, padding="constant")
>>> y = torch.randn(1, 3, 128, 128).to(device)  # random blurry image for demonstration
>>> with torch.no_grad():
...     params = kernel_estimator(y)  # this outputs {"filters": ..., "multipliers": ...}
>>> physics.update(**params)  # update physics with estimated kernels
>>> print(params["filters"].shape, params["multipliers"].shape)
torch.Size([1, 1, 25, 33, 33]) torch.Size([1, 1, 25, 128, 128])
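After physics.update(**params), the physics object encodes the estimated space-varying blur and can be applied like any other deepinv forward operator. A minimal continuation of the example above (the sharp image x is illustrative):

>>> x = torch.rand(1, 3, 128, 128).to(device)  # hypothetical sharp image in [0, 1]
>>> y_hat = physics.A(x)                       # apply the estimated space-varying blur

This is useful, e.g., for plugging the estimated operator into a reconstruction method that expects a deepinv physics object.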
- References:
[1] Carbajal, G., Vitoria, P., Lezama, J., Musé, P. Blind Motion Deblurring With Pixel-Wise Kernel Estimation via Kernel Prediction Networks. IEEE Transactions on Computational Imaging, 2023.
- forward(x)[source]
Forward pass of the kernel estimation network.
- Parameters:
x (torch.Tensor) – input blurry image of shape (N, C, H, W) with values in [0, 1]. Assumed to be non-gamma-corrected (i.e., linear RGB).
- Returns:
dictionary with estimated blur kernels and spatial multipliers:
- 'filters': estimated blur kernels of shape (N, 1, K, blur_kernel_size, blur_kernel_size)
- 'multipliers': estimated spatial multipliers of shape (N, 1, K, H, W)
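To make the roles of the two outputs concrete, below is a minimal sketch of one common product-convolution model, y = sum_k w_k ⊙ (h_k ∗ x), where the h_k are the estimated filters and the w_k the spatial multipliers. The helper apply_space_varying_blur is hypothetical, and the exact ordering, normalization, and boundary handling inside deepinv.physics.SpaceVaryingBlur may differ; in practice, prefer physics.update(**params) followed by physics.A(x) as in the example above.

import torch
import torch.nn.functional as F

def apply_space_varying_blur(x, filters, multipliers):
    # x: (1, C, H, W), filters: (1, 1, K, k, k), multipliers: (1, 1, K, H, W).
    # Batch size 1 is assumed for brevity.
    C = x.shape[1]
    K, k = filters.shape[2], filters.shape[-1]
    y = torch.zeros_like(x)
    for i in range(K):
        h = filters[0, 0, i].repeat(C, 1, 1, 1)  # (C, 1, k, k): same kernel for each channel
        w = multipliers[:, :, i]                 # (1, 1, H, W): broadcasts over channels
        y = y + w * F.conv2d(x, h, padding=k // 2, groups=C)  # w_k ⊙ (h_k ∗ x)
    return y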