PatchNR

class deepinv.optim.PatchNR(normalizing_flow=None, pretrained=None, patch_size=6, channels=1, num_layers=5, sub_net_size=256, device='cpu')[source]

Bases: Prior

Patch prior via normalizing flows.

The forward method evaluates the negative log likelihood of the image patches under the flow.

Parameters:
  • normalizing_flow (torch.nn.Module) – defines the normalizing flow of the model. In general, it can be any torch.nn.Module supporting backpropagation. It takes a (batched) tensor of flattened patches and the boolean rev (default False) as input, and returns the value and the log-determinant of the Jacobian of the normalizing flow as output. If rev=True, the inverse of the normalizing flow is applied. When set to None, a dense invertible neural network built with the FrEIA library is used, where the number of invertible blocks and the size of the subnetworks are determined by the parameters num_layers and sub_net_size. A sketch of this interface is given after this list.

  • pretrained (str) – path to pretrained weights in a .pt file, None for random initialization, or “PatchNR_lodopab_small” for the weights from the limited-angle CT example.

  • patch_size (int) – size of patches

  • channels (int) – number of channels for the underlying images/patches.

  • num_layers (int) – defines the number of blocks of the generated normalizing flow if normalizing_flow is None.

  • sub_net_size (int) – defines the number of hidden neurons in the subnetworks of the generated normalizing flow if normalizing_flow is None.

  • device (str) – device used for the computations.
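
The following is a minimal, hypothetical sketch of a module with the interface described above for normalizing_flow: it maps a batch of flattened patches to latent space and returns the result together with the per-sample log-determinant of the Jacobian, with rev=True selecting the inverse map. The class name and the simple affine parametrization are illustrative only; any invertible torch.nn.Module with this signature can be passed instead:

    import torch
    import torch.nn as nn
    import deepinv

    class AffineFlow(nn.Module):
        # Toy element-wise affine flow (hypothetical): z = exp(s) * x + b.
        def __init__(self, dim):
            super().__init__()
            self.log_scale = nn.Parameter(torch.zeros(dim))
            self.shift = nn.Parameter(torch.zeros(dim))

        def forward(self, x, rev=False):
            if not rev:
                # Forward pass: flattened patches -> latent, plus log|det J| per sample.
                z = x * self.log_scale.exp() + self.shift
                logdet = self.log_scale.sum().expand(x.shape[0])
            else:
                # Inverse pass: latent -> patches, with the negated log-determinant.
                z = (x - self.shift) * torch.exp(-self.log_scale)
                logdet = (-self.log_scale).sum().expand(x.shape[0])
            return z, logdet

    # 6x6 grayscale patches are flattened to vectors of dimension 36.
    flow = AffineFlow(dim=36)
    prior = deepinv.optim.PatchNR(normalizing_flow=flow, patch_size=6, channels=1)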

fn(x, *args, **kwargs)[source]

Evaluates the negative log likelihood function of the PatchNR (see the usage example below).

Parameters:

x (torch.Tensor) – image tensor

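A minimal usage sketch, assuming a randomly initialized flow (pretrained=None); the image shape and values are placeholders, and the default flow requires the FrEIA library to be installed:

    import torch
    import deepinv as dinv

    # Illustrative grayscale image batch of shape (B, C, H, W).
    x = torch.rand(1, 1, 64, 64)

    # PatchNR prior on 6x6 grayscale patches with a randomly initialized flow.
    prior = dinv.optim.PatchNR(patch_size=6, channels=1, device="cpu")

    # Negative log likelihood of the patches of x under the normalizing flow.
    nll = prior.fn(x)
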
Examples using PatchNR:

Patch priors for limited-angle computed tomography