BlurStrength#

class deepinv.loss.metric.BlurStrength(h_size=11, **kwargs)[source]#

Bases: Metric

No-reference blur strength metric for batched images.

Returns a value in (0, 1) for each image in the batch, where 0 indicates a very sharp image and 1 indicates a very blurry image.

This metric was introduced in Crete et al. [26].

Parameters:
  • h_size (int) – size of the uniform blur filter. Default: 11.

  • complex_abs (bool) – perform complex magnitude before passing data to metric function. If True, the data must either be of complex dtype or have size 2 in the channel dimension (usually the second dimension after batch).

  • reduction (str) – method to reduce the metric over individual batch scores. mean: take the mean; sum: take the sum; none or None: apply no reduction (default).

  • norm_inputs (str) – normalize images before passing them to the metric. l2: normalize by the \({\ell}_2\) spatial norm; min_max: normalize by the min and max of each input.

  • check_input_range (bool) – if True, pyiqa will raise an error if inputs are not in the range [0, 1].

  • center_crop (int, tuple[int], None) – if not None, center crop the tensor(s) before computing the metrics (default: None). If an int is provided, the crop is applied equally on all spatial dimensions (by default, all dimensions except the first two). If a tuple of ints, cropping is performed over the last len(center_crop) dimensions. If positive values are provided, a standard center crop is applied. If negative (or zero) values are provided, center_crop pixels are instead removed from the borders (useful when tensor sizes vary across the dataset).


Example:

>>> import torch
>>> from deepinv.loss.metric import BlurStrength
>>> m = BlurStrength()
>>> x_net = torch.randn(2, 3, 16, 16)  # batch of 2 RGB images
>>> m(x_net).shape
torch.Size([2])
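
When a reduction is set, the per-image scores are aggregated into a scalar. Continuing the example above (assuming reduction averages over the batch scores as documented):

>>> m = BlurStrength(reduction="mean")
>>> m(x_net).shape
torch.Size([])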
metric(x_net, *args, **kwargs)[source]#

Compute blur strength metric for a batch of images.

Parameters:

x_net (Tensor) – (B, C, ...) input tensor with C=1 or 3 channels. The input may have one, two, or more spatial dimensions.

Returns:

(B,) tensor of blur strength values in (0, 1) for each image in the batch.

Return type:

Tensor
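
For intuition, the Crete et al. measure compares image gradients before and after an additional uniform blur: a sharp image loses much of its gradient variation, while an already-blurry image is barely affected. Below is a minimal, hypothetical sketch of this idea (horizontal direction only, 2D inputs, zero padding, odd h_size); it is not the exact deepinv implementation, which also handles the vertical direction and other input shapes.

import torch
import torch.nn.functional as F

def blur_strength_sketch(x, h_size=11):
    # Hypothetical simplified sketch; assumes x has shape (B, C, H, W)
    # and h_size is odd.
    B, C, H, W = x.shape
    # Re-blur with a horizontal uniform filter of length h_size.
    k = torch.ones(1, 1, 1, h_size, dtype=x.dtype) / h_size
    xb = F.conv2d(x.reshape(B * C, 1, H, W), k, padding=(0, h_size // 2))
    xb = xb.reshape(B, C, H, W)
    # Absolute horizontal variations before and after the extra blur.
    d = (x[..., 1:] - x[..., :-1]).abs()
    db = (xb[..., 1:] - xb[..., :-1]).abs()
    # Variation destroyed by the blur: large for sharp images,
    # near zero for images that were already blurry.
    v = torch.clamp(d - db, min=0)
    s = d.sum(dim=(1, 2, 3))
    return (s - v.sum(dim=(1, 2, 3))) / (s + 1e-8)  # ~0 sharp, ~1 blurry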

static sobel1d(x, axis)[source]#

Batched 1D Sobel derivative along an arbitrary axis.

Parameters:
  • x (torch.Tensor) – input tensor of shape (B, C, ...)

  • axis (int) – axis along which to compute the Sobel derivative.

Returns:

torch.Tensor of shape (B, C, ...)

Return type:

Tensor
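
A minimal sketch of one way to implement such a helper, assuming a [-1, 0, 1] derivative kernel and zero padding (the actual kernel normalization and boundary handling may differ); sobel1d_sketch is a hypothetical stand-in:

import torch
import torch.nn.functional as F

def sobel1d_sketch(x, axis):
    # Move the target axis to the end so conv1d can be used.
    x = x.movedim(axis, -1)
    shape = x.shape
    # Central-difference (Sobel-style) derivative kernel.
    k = torch.tensor([[[-1.0, 0.0, 1.0]]], dtype=x.dtype, device=x.device)
    # Fold all other axes into the batch dimension of conv1d.
    y = F.conv1d(x.reshape(-1, 1, shape[-1]), k, padding=1)
    return y.reshape(shape).movedim(-1, axis)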

static uniform_filter1d(x, size, axis)[source]#

Batched 1D uniform filter along an arbitrary axis.

Parameters:
  • x (torch.Tensor) – input tensor of shape (B, C, ...)

  • size (int) – size of the filter

  • axis (int) – axis along which to apply the filter

Returns:

filtered tensor of shape (B, C, ...)

Return type:

Tensor
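
Similarly, a hypothetical sketch of a batched uniform filter built on conv1d, assuming zero padding at the borders (the actual implementation's boundary mode may differ); uniform_filter1d_sketch is a stand-in name:

import torch
import torch.nn.functional as F

def uniform_filter1d_sketch(x, size, axis):
    # Move the target axis last, filter with conv1d, then move it back.
    x = x.movedim(axis, -1)
    shape = x.shape
    # Averaging kernel of the requested size.
    k = torch.ones(1, 1, size, dtype=x.dtype, device=x.device) / size
    y = F.conv1d(x.reshape(-1, 1, shape[-1]), k, padding=size // 2)
    # For even sizes the padded output is one sample too long; trim it.
    y = y[..., : shape[-1]]
    return y.reshape(shape).movedim(-1, axis)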

Examples using BlurStrength:#

Blind deblurring with kernel estimation network