gradient_descent#

deepinv.optim.utils.gradient_descent(grad_f, x, step_size=1.0, max_iter=1e2, tol=1e-5)[source]#

Standard gradient descent algorithm.

Parameters:
  • grad_f (Callable) – gradient of the function to be minimized, as a callable function.

  • x (torch.Tensor) – input tensor.

  • step_size (torch.Tensor, float) – (constant) step size of the gradient descent algorithm.

  • max_iter (int) – maximum number of iterations.

  • tol (float) – absolute tolerance for stopping the algorithm.

Returns:

torch.Tensor \(x\) minimizing \(f(x)\).
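
A minimal usage sketch based on the signature above. The quadratic objective \(f(x) = \frac{1}{2}\|x - y\|^2\) and its gradient are illustrative choices, not part of the API:

```python
import torch
from deepinv.optim.utils import gradient_descent

# Illustrative quadratic objective f(x) = 0.5 * ||x - y||^2, with gradient x - y
y = torch.randn(4, 3)
grad_f = lambda x: x - y

# Run gradient descent from a zero initialization
x0 = torch.zeros_like(y)
x_min = gradient_descent(grad_f, x0, step_size=0.5, max_iter=100, tol=1e-6)

print(torch.allclose(x_min, y, atol=1e-3))  # the iterate should converge to y
```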