GDIteration#
- class deepinv.optim.optim_iterators.GDIteration(**kwargs)[source]#
Bases:
OptimIterator
Iterator for Gradient Descent.
Class for a single iteration of the gradient descent (GD) algorithm for minimising \(f(x) + \lambda \regname(x)\).
The iteration is given by
\[\begin{split}\begin{equation*} \begin{aligned} v_{k} &= \nabla f(x_k) + \lambda \nabla \regname(x_k) \\ x_{k+1} &= x_k - \gamma v_{k} \end{aligned} \end{equation*}\end{split}\]
where \(\gamma\) is a stepsize.
- forward(X, cur_data_fidelity, cur_prior, cur_params, y, physics, *args, **kwargs)[source]#
Single gradient descent iteration on the objective \(f(x) + \lambda \regname(x)\).
- Parameters:
X (dict) – Dictionary containing the current iterate \(x_k\).
cur_data_fidelity (deepinv.optim.DataFidelity) – Instance of the DataFidelity class defining the current data-fidelity term.
cur_prior (deepinv.optim.Prior) – Instance of the Prior class defining the current prior.
cur_params (dict) – Dictionary containing the current parameters of the algorithm.
y (torch.Tensor) – Input data.
physics (deepinv.physics.Physics) – Instance of the Physics class defining the current forward operator.
- Returns:
Dictionary {"est": (x, ), "cost": F} containing the updated current iterate and the estimated current cost.
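The update rule above can be sketched in plain NumPy. This is an illustration of the mathematics, not the deepinv API: the quadratic data-fidelity \(f(x) = \tfrac{1}{2}\|Ax - y\|^2\) and the stand-in regulariser \(g(x) = \tfrac{1}{2}\|x\|^2\) are assumptions chosen so that both gradients have closed forms.

```python
import numpy as np

# Illustrative sketch of one GD iteration (not the deepinv API).
# Assumed objective: f(x) + lam * g(x) with
#   f(x) = 0.5 * ||A x - y||^2   (quadratic data fidelity)
#   g(x) = 0.5 * ||x||^2         (stand-in regulariser)

def grad_f(x, A, y):
    # Gradient of the data-fidelity term: A^T (A x - y)
    return A.T @ (A @ x - y)

def grad_g(x):
    # Gradient of the quadratic regulariser: x
    return x

def gd_iteration(x, A, y, lam, gamma):
    # v_k = grad f(x_k) + lam * grad g(x_k)
    v = grad_f(x, A, y) + lam * grad_g(x)
    # x_{k+1} = x_k - gamma * v_k
    return x - gamma * v

A = np.array([[2.0, 0.0], [0.0, 1.0]])
y = np.array([2.0, 1.0])
x = np.zeros(2)
for _ in range(200):
    x = gd_iteration(x, A, y, lam=0.1, gamma=0.1)
```

For this quadratic objective the iterates converge to the closed-form minimiser \((A^\top A + \lambda I)^{-1} A^\top y\), which is a convenient way to sanity-check the update.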
Examples using GDIteration#
- Random phase retrieval and reconstruction methods.