SupAdversarialDiscriminatorLoss#

class deepinv.loss.adversarial.SupAdversarialDiscriminatorLoss(weight_adv: float = 1.0, D: Module | None = None, device='cpu', **kwargs)[source]#

Bases: DiscriminatorLoss

Supervised adversarial consistency loss for discriminator.

This loss was used in conditional GANs such as Kupyn et al., “DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks”, and in generative models such as Bora et al., “Compressed Sensing using Generative Models”.

Constructs the adversarial loss between the reconstructed image and the ground truth, which is to be maximised by the discriminator.

\(\mathcal{L}_\text{adv}(x,\hat x;D)=\mathbb{E}_{x\sim p_x}\left[q(D(x))\right]+\mathbb{E}_{\hat x\sim p_{\hat x}}\left[q(1-D(\hat x))\right]\)

See Imaging inverse problems with adversarial networks for examples of training generator and discriminator models.

Parameters:
  • weight_adv (float) – weight for adversarial loss, defaults to 1.0

  • D (torch.nn.Module) – discriminator network. If not specified here, D must be provided in forward(). Defaults to None.

  • device (str) – torch device, defaults to “cpu”
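
A minimal usage sketch is given below. It assumes only the class signature shown above; the toy convolutional discriminator and the random tensors are placeholders for illustration and are not part of the deepinv API.

import torch
import torch.nn as nn
from deepinv.loss.adversarial import SupAdversarialDiscriminatorLoss

# Toy discriminator for illustration: any nn.Module mapping an image
# batch to a score per image (or per patch) can be used instead.
D = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
)

loss_d = SupAdversarialDiscriminatorLoss(weight_adv=1.0, D=D, device="cpu")

x = torch.randn(4, 1, 64, 64)      # ground-truth images
x_net = torch.randn(4, 1, 64, 64)  # reconstructions produced by the generator

l_d = loss_d(x=x, x_net=x_net)     # loss used to update the discriminator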

forward(x: Tensor, x_net: Tensor, D: Module | None = None, **kwargs) Tensor[source]#

Forward pass for supervised adversarial discriminator loss.

Parameters:
  • x (Tensor) – ground truth image

  • x_net (Tensor) – reconstructed image

  • D (nn.Module) – discriminator model. If None, the D provided at __init__ is used. Defaults to None.
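
The sketch below illustrates the second pattern suggested by the signature above: leaving D unset at construction and supplying it on each call instead. The placeholder discriminator and tensors are again illustrative assumptions, not deepinv components.

import torch
import torch.nn as nn
from deepinv.loss.adversarial import SupAdversarialDiscriminatorLoss

D = nn.Sequential(  # placeholder discriminator, as in the sketch above
    nn.Conv2d(1, 8, 3, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

loss_d = SupAdversarialDiscriminatorLoss(weight_adv=1.0)  # no D bound at init

x = torch.randn(4, 1, 64, 64)
x_net = torch.randn(4, 1, 64, 64)

# D is supplied per call instead of at construction time.
l_d = loss_d(x=x, x_net=x_net, D=D)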

Examples using SupAdversarialDiscriminatorLoss:#

Imaging inverse problems with adversarial networks