LPIPS
- class deepinv.loss.metric.LPIPS(device='cpu', check_input_range=False, **kwargs)[source]
Bases: Metric
Learned Perceptual Image Patch Similarity (LPIPS) metric.
Calculates the LPIPS \(\text{LPIPS}(\hat{x},x)\) where \(\hat{x}=\inverse{y}\).
Computes the perceptual similarity between two images, based on a pre-trained deep neural network. Uses the implementation from pyiqa.
Note
By default, no reduction is performed in the batch dimension.
- Example:
>>> from deepinv.utils.demo import get_image_url, load_url_image
>>> from deepinv.loss.metric import LPIPS
>>> m = LPIPS()
>>> x = load_url_image(get_image_url("celeba_example.jpg"), img_size=128)
>>> x_net = x - 0.01
>>> m(x_net, x)
tensor([...])
- Parameters:
device (str) – device to use for the metric computation. Default: ‘cpu’.
complex_abs (bool) – perform complex magnitude before passing data to the metric function. If True, the data must either be of complex dtype or have size 2 in the channel dimension (usually the second dimension after batch).
reduction (str) – a method to reduce the metric score over individual batch scores. mean: takes the mean, sum: takes the sum, none or None: no reduction is applied (default).
norm_inputs (str) – normalize images before passing to the metric. l2 normalizes by the L2 spatial norm, min_max normalizes by the min and max of each input.
check_input_range (bool) – if True, pyiqa will raise an error if inputs are not in the range [0, 1]. A short construction sketch using these options follows this list.
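A minimal construction sketch using several of the options above; the option values and the random placeholder tensors are illustrative only, and it assumes pyiqa is installed since LPIPS relies on its pre-trained network.
>>> import torch
>>> from deepinv.loss.metric import LPIPS
>>> m = LPIPS(device="cpu", check_input_range=True, reduction="mean", norm_inputs="min_max")
>>> x = torch.rand(2, 3, 128, 128)    # placeholder reference batch in [0, 1]
>>> x_net = (x - 0.01).clamp(0, 1)    # placeholder "reconstruction"
>>> score = m(x_net, x)               # single score, averaged over the batch since reduction="mean"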
- metric(x_net, x, *args, **kwargs)[source]
Calculate the metric on data.
Override this function to implement your own metric (see the sketch after this section). Always include args and kwargs arguments.
- Parameters:
x_net (torch.Tensor) – Reconstructed image \(\hat{x}=\inverse{y}\) of shape (B, ...) or (B, C, ...).
x (torch.Tensor) – Reference image \(x\) (optional) of shape (B, ...) or (B, C, ...).
- Return torch.Tensor:
calculated metric; the tensor size might be (1,) or (B,).