SupAdversarialDiscriminatorLoss#

class deepinv.loss.adversarial.SupAdversarialDiscriminatorLoss(weight_adv=1.0, D=None, device='cpu', **kwargs)[source]#

Bases: DiscriminatorLoss

Supervised adversarial consistency loss for discriminator.

This loss was used in conditional GANs such as Kupyn et al.[1] and in generative models such as Bora et al.[2].

Constructs the adversarial loss between the reconstructed image and the ground truth, to be maximised by the discriminator.

\(\mathcal{L}_\text{adv}(x,\hat x;D)=\mathbb{E}_{x\sim p_x}\left[q(D(x))\right]+\mathbb{E}_{\hat x\sim p_{\hat x}}\left[q(1-D(\hat x))\right]\)
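To make the two expectation terms concrete, here is a minimal sketch of the formula, assuming the quality function \(q\) is the logarithm (with which the expression is the classical GAN discriminator objective); the metric actually used by deepinv's DiscriminatorLoss may differ, and the function name, toy discriminator and random batches below are illustrative only:

```python
import torch

def sup_adv_disc_loss(x, x_hat, D, weight_adv=1.0):
    # Assumed quality function q(u) = log(u); with this choice the sum below
    # is the classical GAN discriminator objective, maximised over D.
    q = lambda u: torch.log(u.clamp(min=1e-8))
    real_term = q(D(x)).mean()          # E_{x ~ p_x}[ q(D(x)) ]
    fake_term = q(1 - D(x_hat)).mean()  # E_{x_hat ~ p_x_hat}[ q(1 - D(x_hat)) ]
    return weight_adv * (real_term + fake_term)

# Toy discriminator producing values in (0, 1).
D = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(16, 1), torch.nn.Sigmoid())
x, x_hat = torch.rand(8, 16), torch.rand(8, 16)
print(sup_adv_disc_loss(x, x_hat, D))
```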

See Imaging inverse problems with adversarial networks for examples of training generator and discriminator models.

Parameters:
  • weight_adv (float) – weight for adversarial loss, defaults to 1.0

  • D (torch.nn.Module) – discriminator network. If not specified, D must be provided in forward(), defaults to None.

  • device (str) – torch device, defaults to “cpu”


References:

  1. O. Kupyn et al., DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks, CVPR 2018.

  2. A. Bora et al., Compressed Sensing using Generative Models, ICML 2017.

forward(x, x_net, D=None, **kwargs)[source]#

Forward pass for supervised adversarial discriminator loss.

Parameters:
  • x (torch.Tensor) – ground truth image.

  • x_net (torch.Tensor) – reconstructed image.

  • D (torch.nn.Module) – discriminator network. If None, the discriminator provided at construction is used, defaults to None.
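
A minimal usage sketch follows; the small convolutional discriminator and the random stand-in batches are hypothetical, and any torch.nn.Module that maps an image batch to a prediction can serve as D:

```python
import torch
from deepinv.loss.adversarial import SupAdversarialDiscriminatorLoss

# Hypothetical stand-in discriminator for illustration only.
D = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, 3, padding=1),
    torch.nn.LeakyReLU(0.2),
    torch.nn.Conv2d(8, 1, 3, padding=1),
)

loss_fn = SupAdversarialDiscriminatorLoss(weight_adv=1.0, D=D)

x = torch.randn(4, 1, 32, 32)      # ground truth batch
x_net = torch.randn(4, 1, 32, 32)  # reconstructed batch (e.g. generator output)

# Loss to be maximised by the discriminator; D can also be passed to the
# call directly instead of at construction.
l_adv = loss_fn(x=x, x_net=x_net)
```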

Examples using SupAdversarialDiscriminatorLoss:#

Imaging inverse problems with adversarial networks
