ADMM
- class deepinv.optim.ADMM(data_fidelity=None, prior=None, lambda_reg=1.0, stepsize=1.0, beta=1.0, g_param=None, sigma_denoiser=None, max_iter=100, crit_conv='residual', thres_conv=1e-5, early_stop=False, custom_metrics=None, custom_init=None, unfold=False, trainable_params=None, g_first=False, cost_fn=None, params_algo=None, device=torch.device('cpu'), **kwargs)
Bases: BaseOptim
ADMM module for solving the problem
\[\begin{equation} \label{eq:min_prob} \tag{1} \underset{x}{\arg\min} \quad \datafid{x}{y} + \lambda \reg{x}, \end{equation}\]
where \(\datafid{x}{y}\) is the data-fidelity term and \(\reg{x}\) is the regularization term.
If the attribute g_first is set to False (the default), the ADMM iterations write (see Boyd et al.[1] for more details):
\[\begin{split}\begin{equation*} \begin{aligned} u_{k+1} &= \operatorname{prox}_{\gamma f}(x_k - z_k) \\ x_{k+1} &= \operatorname{prox}_{\gamma \lambda \regname}(u_{k+1} + z_k) \\ z_{k+1} &= z_k + \beta (u_{k+1} - x_{k+1}) \end{aligned} \end{equation*}\end{split}\]
where \(\gamma>0\) is a stepsize and \(\beta>0\) is a relaxation parameter.
If the attribute g_first is set to True, the roles of \(f\) and \(\regname\) are swapped in the above iterations. The ADMM iterations are defined in the iterator class deepinv.optim.optim_iterators.ADMMIteration. For early stopping or stepsize backtracking, see the documentation of the deepinv.optim.BaseOptim class.
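For instance, a plug-and-play ADMM reconstruction can be set up as in the following sketch. The deepinv.physics.Inpainting and deepinv.models.DRUNet instantiations (and their exact signatures) are illustrative assumptions and may differ across deepinv versions; any physics operator and denoiser can be substituted.

```python
import torch
import deepinv as dinv

# Random inpainting operator on 3x64x64 images (illustrative; check the
# Inpainting signature of your deepinv version).
physics = dinv.physics.Inpainting(img_size=(3, 64, 64), mask=0.5, device="cpu")

# L2 data-fidelity term and a plug-and-play prior built from a pretrained denoiser.
data_fidelity = dinv.optim.L2()
prior = dinv.optim.PnP(denoiser=dinv.models.DRUNet(pretrained="download"))

model = dinv.optim.ADMM(
    data_fidelity=data_fidelity,
    prior=prior,
    stepsize=1.0,
    g_param=0.05,      # noise level passed to the denoiser
    max_iter=50,
    early_stop=True,
)

x = torch.rand(1, 3, 64, 64)   # toy ground-truth image
y = physics(x)                 # simulated measurements

with torch.no_grad():          # inference only: no gradients needed
    x_hat = model(y, physics)
```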
If the attribute unfold is set to True, the algorithm is unfolded and its algorithmic parameters (stepsize, regularization parameter, etc.) become trainable: by default, the stepsize \(\gamma\), the regularization parameter \(\lambda\), the prior parameter and the relaxation parameter \(\beta\) are all trainable. Use the trainable_params argument to restrict this list. Note also that if the prior itself has trainable parameters (e.g. a neural network denoiser), these are learnable by default. If the model is used for inference only, call it within a with torch.no_grad(): context to avoid unnecessary gradient computations.
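A minimal sketch of the unfolded setting is given below; the DnCNN denoiser and the Adam optimizer are illustrative choices under the assumptions stated in the comments, not part of the class itself.

```python
import torch
import deepinv as dinv

# Unfolded ADMM: only the stepsize and the regularization parameter are
# trainable (trainable_params restricts the list); beta and g_param stay fixed.
denoiser = dinv.models.DnCNN(pretrained=None)   # trainable denoiser prior
model = dinv.optim.ADMM(
    data_fidelity=dinv.optim.L2(),
    prior=dinv.optim.PnP(denoiser=denoiser),
    max_iter=8,                  # small, fixed number of unfolded iterations
    unfold=True,
    trainable_params=["stepsize", "lambda"],
)

# The algorithmic parameters and the denoiser weights now appear in
# model.parameters() and can be optimized end-to-end with a standard loop.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```

The unfolded model is then trained like any other torch.nn.Module, e.g. by minimizing a reconstruction loss between model(y, physics) and the ground truth over a dataset.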
- Parameters:
  - data_fidelity (list, deepinv.optim.DataFidelity) – data-fidelity term \(\datafid{x}{y}\). Either a single instance (same data-fidelity for each iteration) or a list of instances of deepinv.optim.DataFidelity (a distinct data-fidelity for each iteration). Default: None, corresponding to \(\datafid{x}{y} = 0\).
  - prior (list, deepinv.optim.Prior) – regularization prior \(\reg{x}\). Either a single instance (same prior for each iteration) or a list of instances of deepinv.optim.Prior (a distinct prior for each iteration). Default: None, corresponding to \(\reg{x} = 0\).
  - lambda_reg (float) – regularization parameter \(\lambda\). Default: 1.0.
  - stepsize (float) – stepsize parameter \(\gamma\). Default: 1.0.
  - beta (float) – ADMM relaxation parameter \(\beta\). Default: 1.0.
  - g_param (float) – parameter of the prior function, for example the noise level of a denoising prior. Default: None.
  - sigma_denoiser (float) – same as g_param. If both g_param and sigma_denoiser are provided, g_param is used. Default: None.
  - max_iter (int) – maximum number of iterations of the optimization algorithm. Default: 100.
  - crit_conv (str) – convergence criterion used for claiming convergence, either "residual" (norm of the residual between successive iterates) or "cost" (on the cost function). Default: "residual".
  - thres_conv (float) – convergence threshold for the chosen convergence criterion. Default: 1e-5.
  - early_stop (bool) – whether to stop the algorithm as soon as the convergence criterion is met. Default: False.
  - custom_metrics (dict) – dictionary of custom metric functions computed along the iterations. The keys are the metric names and the values are functions that take the current and previous iterates as input and return a scalar value. Default: None.
  - custom_init (Callable) – custom initialization of the algorithm. The callable custom_init(y, physics) takes as input the measurement \(y\) and the physics physics, and returns the initialization as either a tuple \((x_0, z_0)\) (where x_0 and z_0 are the initial primal and dual variables), a torch.Tensor \(x_0\) (if no dual variable \(z_0\) is used), or a dictionary of the form X = {'est': (x_0, z_0)}. A custom initialization can also be passed directly via the init argument of the forward method. If None (default), the algorithm is initialized with the adjoint \(A^{\top}y\) when the adjoint is defined, and with the observation y otherwise. Default: None.
  - g_first (bool) – whether to perform the proximal step on \(\reg{x}\) before the one on \(\datafid{x}{y}\), or the opposite. Default: False.
  - unfold (bool) – whether to unfold the algorithm. Default: False.
  - trainable_params (list) – list of ADMM parameters to train when unfold is True, chosen among ["lambda", "stepsize", "g_param", "beta"]. Default: None, meaning that all parameters are trainable when unfold is True. For no trainable parameters, set to an empty list.
  - cost_fn (Callable) – custom user-defined cost function. cost_fn(x, data_fidelity, prior, cur_params, y, physics) takes as input the current primal variable (torch.Tensor), the current data-fidelity (deepinv.optim.DataFidelity), the current prior (deepinv.optim.Prior), the current parameters (dict), the measurement (torch.Tensor) and the physics. Default: None.
  - params_algo (dict) – optionally, directly provide the ADMM parameters in a dictionary. This overwrites the parameters given through the stepsize, lambda_reg, g_param and beta arguments.
  - device (torch.device) – device to use for the algorithm. Default: torch.device("cpu").
- References:
  1. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers", Foundations and Trends in Machine Learning, 3(1):1-122, 2011.
Examples using ADMM:
Pattern Ordering in a Compressive Single Pixel Camera
PnP with custom optimization algorithm (Primal-Dual Condat-Vu)