JacobianSpectralNorm#

class deepinv.loss.JacobianSpectralNorm(max_iter=10, tol=0.001, eval_mode=False, verbose=False)[source]#

Bases: Loss

Computes the spectral norm of the Jacobian.

Given a function \(f:\mathbb{R}^n\to\mathbb{R}^n\), this module computes the spectral norm of the Jacobian of \(f\) at \(x\), i.e.

\[\left\|\frac{df}{dx}(x)\right\|_2.\]

This spectral norm is computed with a power method leveraging Jacobian-vector products, as proposed in https://arxiv.org/abs/2012.13247v2.
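
As a rough illustration of the idea (not the implementation used by this class), the helper below sketches a power iteration on \(J^\top J\): each step applies \(J\) through a Jacobian-vector product and \(J^\top\) through a vector-Jacobian product via torch.autograd.functional, so the Jacobian is never formed explicitly, and \(\|Ju\|\) converges to the largest singular value. The name jacobian_spectral_norm_sketch is purely illustrative.

import torch

def jacobian_spectral_norm_sketch(f, x, max_iter=10, tol=1e-3):
    # Power iteration on J^T J, where J = df/dx evaluated at x.
    # J u comes from a Jacobian-vector product, J^T v from a
    # vector-Jacobian product; J itself is never materialised.
    u = torch.randn_like(x)
    u = u / u.norm()
    sigma = torch.zeros(())
    for _ in range(max_iter):
        _, ju = torch.autograd.functional.jvp(f, x, u)      # J u
        _, jtju = torch.autograd.functional.vjp(f, x, ju)   # J^T (J u)
        sigma_new = ju.norm()                                # ||J u|| with ||u|| = 1
        u = jtju / jtju.norm()
        if (sigma_new - sigma).abs() / (sigma_new + 1e-12) < tol:
            sigma = sigma_new
            break
        sigma = sigma_new
    return sigma

# Same toy operator as in the example below: diagonal with largest singular value 50.
A = torch.diag(torch.arange(1.0, 51.0))
print(jacobian_spectral_norm_sketch(lambda z: A @ z, torch.randn(50), max_iter=100))  # approx. 50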

Parameters:
  • max_iter (int) – maximum number of iterations of the power method.

  • tol (float) – tolerance for the convergence of the power method.

  • eval_mode (bool) – set to False (default) if one does not want to backpropagate through the spectral norm, set to True otherwise (see the training sketch after this parameter list).

  • verbose (bool) – whether to print computation details or not.
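
The snippet below is a minimal usage sketch, not taken from the library's documentation: it shows how the loss could act as a Lipschitz regulariser during training. The small MLP model and the weighting factor 1e-2 are illustrative assumptions; eval_mode=True keeps the computation graph so the penalty can be backpropagated.

import torch
from deepinv.loss.regularisers import JacobianSpectralNorm

# Illustrative model: any R^n -> R^n map works; a small MLP is used here.
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.Tanh(), torch.nn.Linear(64, 64))
reg = JacobianSpectralNorm(max_iter=10, tol=1e-3, eval_mode=True)  # True: penalty stays differentiable

x = torch.randn(8, 64).requires_grad_()   # requires_grad must be set before evaluating the model
y = model(x)
penalty = reg(y, x)                       # approximate spectral norm of the Jacobian at x
loss = torch.nn.functional.mse_loss(y, x) + 1e-2 * penalty
loss.backward()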


Examples:

>>> import torch
>>> from deepinv.loss.regularisers import JacobianSpectralNorm
>>> _ = torch.manual_seed(0)
>>> _ = torch.cuda.manual_seed(0)
>>>
>>> reg_l2 = JacobianSpectralNorm(max_iter=10, tol=1e-3, eval_mode=False, verbose=True)
>>> A = torch.diag(torch.Tensor(range(1, 51)))  # creates a diagonal matrix with largest singular value 50
>>> x = torch.randn_like(A).requires_grad_()
>>> out = A @ x
>>> regval = reg_l2(out, x)
>>> print(regval) # returns approx 50
tensor([49.0202])
forward(y, x, **kwargs)[source]#

Computes the spectral norm of the Jacobian of \(f\) at \(x\).

Warning

The input \(x\) must have requires_grad=True before evaluating \(f\).
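
For example, a minimal sketch of the expected call pattern (the convolution below is an arbitrary stand-in for \(f\), not part of the documented API):

import torch
from deepinv.loss.regularisers import JacobianSpectralNorm

reg = JacobianSpectralNorm(max_iter=10, tol=1e-3)
f = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)  # arbitrary stand-in for f

x = torch.randn(1, 1, 32, 32).requires_grad_()  # set requires_grad before evaluating f
y = f(x)
sigma = reg(y, x)  # spectral norm of the convolution's Jacobian at x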

Parameters: