FNEJacobianSpectralNorm#

class deepinv.loss.FNEJacobianSpectralNorm(max_iter=10, tol=0.001, verbose=False, eval_mode=False)[source]#

Bases: Loss

Computes the Firm-Nonexpansiveness Jacobian spectral norm.

Given a function \(f:\mathbb{R}^n\to\mathbb{R}^n\), this module computes the spectral norm of the Jacobian of \(2f-\operatorname{Id}\) (where \(\operatorname{Id}\) denotes the identity) at \(x\), i.e.

\[\left\|\frac{d(2f-\operatorname{Id})}{dx}(x)\right\|_2,\]

as proposed in https://arxiv.org/abs/2012.13247v2. This spectral norm is computed with the deepinv.loss.JacobianSpectralNorm() module.

Parameters:
  • max_iter (int) – maximum number of iterations of the power method.

  • tol (float) – tolerance for the convergence of the power method.

  • eval_mode (bool) – set to False (default) if one does not want to backpropagate through the spectral norm; set to True otherwise.

  • verbose (bool) – whether to print computation details or not.
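
A minimal usage sketch (not taken from the library's official examples): the small convolutional network below is only an illustrative stand-in for a denoiser, and any torch.nn.Module or function mapping \(\mathbb{R}^n\) to \(\mathbb{R}^n\) can be used in its place.

>>> import torch
>>> import deepinv as dinv
>>> # Illustrative stand-in for a denoiser; any R^n -> R^n module or function works
>>> model = torch.nn.Sequential(
...     torch.nn.Conv2d(1, 8, kernel_size=3, padding=1),
...     torch.nn.Softplus(),
...     torch.nn.Conv2d(8, 1, kernel_size=3, padding=1),
... )
>>> fne = dinv.loss.FNEJacobianSpectralNorm(max_iter=10, tol=1e-3, verbose=False)
>>> y = torch.randn(1, 1, 16, 16)  # input of the model
>>> x = torch.randn(1, 1, 16, 16)  # additional point
>>> # Spectral norm of the Jacobian of 2*model - Id, evaluated at y (interpolation=False by default)
>>> norm = fne(y, x, model)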

forward(y_in, x_in, model, *args_model, interpolation=False, **kwargs_model)[source]#

Computes the Firm-Nonexpansiveness (FNE) Jacobian spectral norm of a model.

Parameters:
  • y_in (torch.Tensor) – input of the model; by default (interpolation=False), the Jacobian spectral norm is evaluated at this point.

  • x_in (torch.Tensor) – an additional input point, used to form the interpolation with y_in when interpolation=True.

  • model (torch.nn.Module) – neural network, or function, of which we want to compute the FNE Jacobian spectral norm.

  • *args_model – additional positional arguments of the model.

  • interpolation (bool) – whether to feed the model an interpolation between y_in and x_in instead of y_in (default: False).

  • **kwargs_model – additional keyword arguments of the model.
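
As a hedged sketch of an evaluation use case (the names denoiser and test_pairs are assumptions introduced for illustration, not part of the deepinv API): one can check how close a trained model is to being firmly nonexpansive by verifying that the returned norm stays below 1 over a set of input pairs, here with interpolation=True so that the norm is evaluated at points interpolated between the two inputs.

>>> import torch
>>> import deepinv as dinv
>>> # `denoiser` and `test_pairs` are illustrative stand-ins: any trained R^n -> R^n
>>> # module and any iterable of (y, x) tensor pairs of matching shape would do
>>> denoiser = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
>>> test_pairs = [(torch.randn(1, 1, 16, 16), torch.randn(1, 1, 16, 16)) for _ in range(4)]
>>> fne = dinv.loss.FNEJacobianSpectralNorm(max_iter=20, tol=1e-4)
>>> norms = []
>>> for y, x in test_pairs:
...     # With interpolation=True, the norm is evaluated at a point interpolated between y and x.
...     norms.append(fne(y, x, denoiser, interpolation=True).max().item())
>>> is_fne = max(norms) <= 1.0  # a maximum (approximately) below 1 suggests firm nonexpansiveness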

Examples using FNEJacobianSpectralNorm:#

Uncertainty quantification with PnP-ULA.