TGVDenoiser#
- class deepinv.models.TGVDenoiser(verbose=False, n_it_max=1000, crit=1e-05, x2=None, u2=None, r2=None)[source]#
Bases: Denoiser
Proximal operator of the (2nd order) Total Generalized Variation (TGV) functional.
(see K. Bredies, K. Kunisch, and T. Pock, “Total generalized variation,” SIAM J. Imaging Sci., 3(3), 492-526, 2010.)
This algorithm converges to the unique image \(x\) (and the auxiliary vector field \(r\)) minimizing
\[\underset{x, r}{\arg\min} \; \frac{1}{2}\|x-y\|_2^2 + \lambda_1 \|r\|_{1,2} + \lambda_2 \|J(Dx-r)\|_{1,F}\]
where \(D\) maps an image to its gradient field and \(J\) maps a vector field to its Jacobian. For a large value of \(\lambda_2\), the TGV behaves like the TV; for a small value, it behaves like the \(\ell_1\)-Frobenius norm of the Hessian.
The problem is solved with an over-relaxed Chambolle-Pock algorithm (see L. Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms", J. Optimization Theory and Applications, vol. 158, no. 2, pp. 460-479, 2013).
Code (and description) adapted from Laurent Condat's MATLAB version (https://lcondat.github.io/software.html) and Daniil Smolyakov's code.
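For intuition about the solver, below is a minimal sketch of Condat's over-relaxed primal-dual iteration applied to a simpler 1D TV problem, not the TGV problem above and not this class's internals; the function name tv1d_condat and all step sizes are illustrative choices.

```python
import torch

def tv1d_condat(y, lam=0.1, n_iter=200, rho=1.4):
    # Over-relaxed primal-dual iteration (Condat, 2013) for the 1D TV problem
    #   min_x 0.5*||x - y||^2 + lam*||Dx||_1,
    # with D the forward-difference operator.
    D = lambda x: x[1:] - x[:-1]
    Dt = lambda u: torch.cat([-u[:1], u[:-1] - u[1:], u[-1:]])  # adjoint of D
    sigma = 0.25                      # dual step size
    tau = 1.0 / (1.0 + 4.0 * sigma)   # primal step size; ||D||^2 <= 4 in 1D
    x, u = y.clone(), torch.zeros(y.numel() - 1)
    for _ in range(n_iter):
        # primal update: the prox of G = 0 is the identity; the gradient of
        # the data term 0.5*||x - y||^2 is (x - y)
        x_new = x - tau * ((x - y) + Dt(u))
        # dual update: the prox of the conjugate of lam*||.||_1 is the
        # projection onto the l_inf ball of radius lam
        u_new = torch.clamp(u + sigma * D(2 * x_new - x), -lam, lam)
        # over-relaxation step with rho in (0, 2)
        x = rho * x_new + (1 - rho) * x
        u = rho * u_new + (1 - rho) * u
    return x

# piecewise-constant signal plus noise
t = torch.linspace(0, 1, 100)
y = (t > 0.5).float() + 0.1 * torch.randn(100)
x = tv1d_condat(y, lam=0.5)
```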
Note
The regularization term \(\|r\|_{1,2} + \|J(Dx-r)\|_{1,F}\) is implicitly normalized by its Lipschitz constant \(\sqrt{72}\); see e.g. K. Bredies et al., "Total generalized variation," SIAM J. Imaging Sci., 3(3), 492-526, 2010.
- Parameters:
verbose (bool) – Whether to print computation details or not. Default: False.
n_it_max (int) – Maximum number of iterations. Default: 1000.
crit (float) – Convergence criterion. Default: 1e-5.
x2 (torch.Tensor, None) – Primal variable. Default: None.
u2 (torch.Tensor, None) – Dual variable. Default: None.
r2 (torch.Tensor, None) – Auxiliary variable. Default: None.
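A minimal usage sketch; the image shape, noise level, and ths value below are illustrative choices:

```python
import torch
from deepinv.models import TGVDenoiser

# noisy image batch of shape (B, C, H, W)
y = torch.rand(1, 1, 32, 32) + 0.1 * torch.randn(1, 1, 32, 32)
denoiser = TGVDenoiser(n_it_max=500)
x = denoiser(y, ths=0.1)  # ths sets the regularization strength
```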
- forward(y, ths=None, **kwargs)[source]#
Computes the proximity operator of the TGV functional.
- Parameters:
y (torch.Tensor) – Noisy image.
ths (float, torch.Tensor) – Regularization parameter.
- Returns:
Denoised image.
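Since ths trades data fidelity against regularity, a quick sweep (illustrative values) shows the output moving further from the input as ths grows:

```python
import torch
from deepinv.models import TGVDenoiser

y = torch.rand(1, 1, 32, 32)
denoiser = TGVDenoiser(n_it_max=500)
for ths in (0.01, 0.1, 1.0):
    x = denoiser(y, ths=ths)
    # distance from the input grows with stronger regularization
    print(ths, (x - y).norm().item())
```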