LPIPS
- class deepinv.loss.metric.LPIPS(device='cpu', check_input_range=False, as_loss=False, **kwargs)
Bases: Metric

Learned Perceptual Image Patch Similarity (LPIPS) metric.
Calculates the LPIPS \(\text{LPIPS}(\hat{x},x)\) where \(\hat{x}=\inverse{y}\).
Computes the perceptual similarity between two images, based on a pre-trained deep neural network. Uses the implementation from pyiqa.
Note
By default, no reduction is performed in the batch dimension.
- Example:
>>> from deepinv.utils import load_example
>>> from deepinv.loss.metric import LPIPS
>>> m = LPIPS()
>>> x = load_example("celeba_example.jpg", img_size=128)
>>> x_net = x - 0.01
>>> m(x_net, x)
tensor([...])
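As the note above says, no batch reduction is applied by default, so passing a batch returns one LPIPS score per image. A minimal sketch of this behaviour (it assumes pyiqa is installed and uses random tensors as hypothetical stand-ins for real images; the shape shown is the expected per-image output under these assumptions):

>>> import torch
>>> from deepinv.loss.metric import LPIPS
>>> m = LPIPS()  # default: no reduction over the batch dimension
>>> x = torch.rand(4, 3, 64, 64)  # batch of 4 RGB images in [0, 1]
>>> x_net = (x + 0.05 * torch.randn_like(x)).clamp(0, 1)  # perturbed stand-in reconstruction
>>> m(x_net, x).shape  # one score per batch element
torch.Size([4])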
- Parameters:
device (str) – device to use for the metric computation. Default: 'cpu'.
complex_abs (bool) – perform complex magnitude before passing data to the metric function. If True, the data must either be of complex dtype or have size 2 in the channel dimension (usually the second dimension after batch).
reduction (str) – a method to reduce the metric score over individual batch scores. mean: takes the mean, sum: takes the sum, none or None: no reduction will be applied (default).
norm_inputs (str) – normalize images before passing to the metric. l2 normalizes by the \(\ell_2\) spatial norm, min_max normalizes by the min and max of each input.
check_input_range (bool) – if True, pyiqa will raise an error if inputs aren't in the appropriate range [0, 1].
as_loss (bool) – if True, returns LPIPS as a loss. Default: False.
center_crop (int, tuple[int], None) – if not None (default), center crop the tensor(s) before computing the metrics. If an int is provided, the cropping is applied equally on all spatial dimensions (by default, all dimensions except the first two). If a tuple of int, cropping is performed over the last len(center_crop) dimensions. If positive values are provided, a standard center crop is applied. If negative (or zero) values are passed, cropping is done by removing center_crop pixels from the borders (useful when tensors vary in size across the dataset).
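When the metric is meant to drive training, a hedged sketch combining as_loss with a batch reduction might look like the following (again with random tensors standing in for a real reconstruction; the backward call assumes the underlying pyiqa model is evaluated with gradients enabled, which is what as_loss=True is intended for):

>>> import torch
>>> from deepinv.loss.metric import LPIPS
>>> loss_fn = LPIPS(as_loss=True, reduction="mean")  # single scalar loss over the batch
>>> x = torch.rand(4, 3, 64, 64)  # ground-truth stand-in
>>> x_net = (x + 0.05 * torch.randn_like(x)).clamp(0, 1).requires_grad_(True)  # stand-in for a network output
>>> loss = loss_fn(x_net, x)  # scalar after mean reduction
>>> loss.backward()  # gradients w.r.t. x_net (in practice, w.r.t. the network's output)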