.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/physics/demo_scattering.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        New to DeepInverse? Get started with the basics with the :ref:`5 minute quickstart tutorial`.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_physics_demo_scattering.py:


Inverse scattering problem
==========================

In this example we show how to use the :class:`deepinv.physics.Scattering` forward model.

The scattering inverse problem consists of reconstructing the contrast of an unknown object from
measurements of the scattered wave field produced when an incident wave interacts with the object.
For each of the :math:`i=1,\dots,T` transmitters, the 2D forward model is given by the inhomogeneous
Helmholtz equation (see, e.g., :cite:t:`soubies2017efficient`):

.. math::

    \nabla^2 u_i(\mathbf{r}) + k^2(\mathbf{r}) u_i(\mathbf{r}) = - (k^2(\mathbf{r}) - k^2_b) v_i(\mathbf{r}) \quad \mathbf{r} \in \mathbb{R}^2

where :math:`u_i` is the (unknown) scattered field, :math:`k_b` is the (known scalar) wavenumber of the
incident wave in the background medium, :math:`k(\mathbf{r})` is the (unknown) spatially-varying wavenumber
of the object to be recovered, and :math:`v_i` is the incident field generated by the :math:`i`-th transmitter
in the absence of the object. The total field (scattered + incident) is measured at :math:`R` different
receiver locations surrounding the object.

Parametrizing the unknown spatially-varying wavenumber as :math:`k^2(\mathbf{r}) = k_b^2 (x(\mathbf{r})+1)`,
where :math:`x = k^2/k_b^2 - 1` is the scattering potential of the object to be recovered, the forward
problem can be reformulated as the **Lippmann-Schwinger** integral equation:

.. math::

    u_i &= g * \left( x \circ (u_i+v_i) \right) \\
    y_i &= G_s \left( x \circ (u_i+v_i) \right)

where :math:`g(\mathbf{r}) = k_b^2 \frac{i}{4} H_0^{(1)}(k_b\|\mathbf{r}\|)` is the 2D Green's function
(normalized by :math:`k_b^2`, with :math:`H_0^{(1)}` the Hankel function of the first kind),
:math:`y_i \in \mathbb{C}^{R}` are the measurements at the receivers for the :math:`i`-th transmitter,
and :math:`G_s` denotes convolution with the Green's function followed by sampling at the :math:`R`
different receiver locations.

.. tip::

    This parametrization ensures that the scattering potential :math:`x` is dimensionless, and can be used
    for different physical modalities:
    In **microwave tomography** applications, the scattering potential is related to the object's relative
    permittivity :math:`\epsilon_r` as :math:`x(\mathbf{r}) = \epsilon_r(\mathbf{r}) - 1`.
    In **optical diffraction tomography**, the scattering potential is related to the refractive index
    :math:`n` as :math:`x(\mathbf{r}) = n^2(\mathbf{r}) - 1`.
    Moreover, the wavenumber can also be provided in a dimensionless form by normalizing it with respect to
    the box length :math:`L` as :math:`k_b = 2 \pi L / \lambda`, where :math:`\lambda` is the wavelength of
    the incident wave.

This example shows how to define the scattering forward model, generate measurements, and perform
reconstructions (i.e., recover the contrast of the object) using both a linear (Born approximation) solver
and a non-linear gradient descent solver. We also explore the trade-off between resolution and
non-linearity by varying the wavenumber of the incident wave.
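Before moving on to the code, here is a brief sketch of why the two formulations above are equivalent.
Substituting :math:`k^2(\mathbf{r}) = k_b^2 (x(\mathbf{r})+1)` into the Helmholtz equation and moving the
known background term to the left-hand side gives

.. math::

    \nabla^2 u_i(\mathbf{r}) + k_b^2 u_i(\mathbf{r}) = - k_b^2 \, x(\mathbf{r}) \left( u_i(\mathbf{r}) + v_i(\mathbf{r}) \right).

Inverting the background Helmholtz operator with the outgoing 2D Green's function
:math:`\frac{i}{4} H_0^{(1)}(k_b\|\mathbf{r}\|)`, which satisfies
:math:`(\nabla^2 + k_b^2) \frac{i}{4} H_0^{(1)}(k_b\|\mathbf{r}\|) = -\delta(\mathbf{r})`,
then yields the first Lippmann-Schwinger equation above, with the :math:`k_b^2` factor absorbed into :math:`g`.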
.. GENERATED FROM PYTHON SOURCE LINES 52-76

.. code-block:: Python


    import deepinv as dinv
    import torch
    from matplotlib import pyplot as plt

    device = dinv.utils.get_freer_gpu() if torch.cuda.is_available() else "cpu"

    img_width = 32
    x = dinv.utils.load_example(
        "SheppLogan.png",
        img_size=img_width,
        resize_mode="resize",
        device=device,
        grayscale=True,
    )
    contrast = (
        0.5 if device != "cpu" else 0.1
    )  # reduce contrast on CPU for faster convergence
    x = x * contrast
    psnr = dinv.metric.PSNR(max_pixel=contrast)


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Selected GPU 0 with 8055.25 MiB free memory


.. GENERATED FROM PYTHON SOURCE LINES 77-102

Define the forward model
---------------------------
We define a scattering forward model with circularly distributed transmitters and receivers.
The forward operator internally solves the Lippmann-Schwinger equation for the scattered field of each transmitter:

.. math::

    u_i = g * \left(x \circ (u_i + v_i) \right)

using a linear system solver, which finds the solution :math:`u_i` of

.. math::

    \left(I - G \,\text{diag}(x)\right) u_i = b_i

where :math:`G` denotes convolution with the Green's function :math:`g` (i.e., :math:`G_s` without the
receiver sampling step), and :math:`b_i = G (x \circ v_i)`.

This linear system becomes highly ill-conditioned for high contrast objects (i.e., large :math:`\|x\|_{\infty}`)
and/or high wavenumbers (which induce a large spectral norm :math:`\|G\|_2` of the Green's operator),
and the solver may fail to converge.
In that case, one can try increasing the number of iterations, or changing the solver
(see :class:`deepinv.physics.Scattering.SolverConfig` for more details).

.. GENERATED FROM PYTHON SOURCE LINES 102-127

.. code-block:: Python


    sensors = 32
    transmitters, receivers = dinv.physics.scattering.circular_sensors(
        sensors, radius=1, device=device
    )

    wavenumber = 5 * (2 * torch.pi)

    config = dinv.physics.Scattering.SolverConfig(
        max_iter=200, tol=1e-5, solver="lsqr", adjoint_state=True
    )

    physics = dinv.physics.Scattering(
        img_width=img_width,
        device=device,
        background_wavenumber=wavenumber,
        solver_config=config,
        transmitters=transmitters,
        receivers=receivers,
    )

    physics.normalize(x)
    physics.set_verbose(True)
    y = physics(x)


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    LSQR converged at iteration 24


.. GENERATED FROM PYTHON SOURCE LINES 128-135

Visualize sensor positions
---------------------------
In this example, we use 32 sensors that can act both as transmitters and receivers, placed on a circle
of radius 1 around the object. Each transmitter emits a cylindrical wave, and the remaining sensors
measure the scattered field.
We first visualize the position of the first transmitter and its associated receivers.

.. GENERATED FROM PYTHON SOURCE LINES 135-155

.. code-block:: Python


    plt.figure()
    plt.scatter(
        transmitters[0, 0].cpu(), transmitters[1, 0].cpu(), c="r", label="Transmitter"
    )
    plt.scatter(
        receivers[0, 0, :].cpu(), receivers[1, 0, :].cpu(), c="b", label="Receivers"
    )
    # draw the square of side length 1 where the object is located
    plt.plot(
        [-0.5, 0.5, 0.5, -0.5, -0.5],
        [-0.5, -0.5, 0.5, 0.5, -0.5],
        c="k",
        label="box where object is located",
    )
    plt.legend()
    plt.title("First transmitter and associated receiver positions")
    plt.axis("equal")
    plt.show()


.. image-sg:: /auto_examples/physics/images/sphx_glr_demo_scattering_001.png
   :alt: First transmitter and associated receiver positions
   :srcset: /auto_examples/physics/images/sphx_glr_demo_scattering_001.png
   :class: sphx-glr-single-img
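The circular geometry produced by :func:`deepinv.physics.scattering.circular_sensors` is not the only option;
you can also build the sensor positions yourself, e.g., a linear array with transmitters on one side of the
box and receivers on the opposite side (see also the *Going further* section at the end of this example).
The sketch below is illustrative only and assumes the tensor layout suggested by the indexing in the plotting
code above, i.e., transmitters stored as a :math:`2 \times T` tensor of coordinates and receivers as a
:math:`2 \times T \times R` tensor (one set of receivers per transmitter); check the
:class:`deepinv.physics.Scattering` documentation for the exact expected format.

.. code-block:: Python


    T, R = 16, 16  # number of transmitters and receivers (illustrative values)

    # transmitters on a vertical line at x = -1 (left of the unit box)
    transmitters_lin = torch.stack(
        [-torch.ones(T, device=device), torch.linspace(-0.75, 0.75, T, device=device)]
    )  # shape (2, T)

    # receivers on a vertical line at x = +1, shared by every transmitter
    receivers_line = torch.stack(
        [torch.ones(R, device=device), torch.linspace(-0.75, 0.75, R, device=device)]
    )  # shape (2, R)
    receivers_lin = receivers_line.unsqueeze(1).expand(2, T, R).contiguous()  # shape (2, T, R)

If this assumed layout matches the class' expectations, ``transmitters_lin`` and ``receivers_lin`` can be
passed to :class:`deepinv.physics.Scattering` in place of the circular configuration.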
.. GENERATED FROM PYTHON SOURCE LINES 156-163

Visualize incident and scattered fields
----------------------------------------
We now visualize the incident and scattered fields for the first transmitter.
The incident field :math:`v_i` is the field that would be present in the absence of the object,
while the total field :math:`u_i + v_i` is the sum of the incident and scattered fields.
Since the fields are complex-valued, we only visualize their real part here.

.. GENERATED FROM PYTHON SOURCE LINES 163-172

.. code-block:: Python


    incident_field = physics.incident_field
    scattered_field = physics.compute_total_field(x) - incident_field
    dinv.utils.plot(
        [incident_field[:, :1, ...].real, scattered_field[:, :1, ...].real],
        titles=["Incident field", "Scattered field"],
        figsize=(4, 2),
    )


.. image-sg:: /auto_examples/physics/images/sphx_glr_demo_scattering_002.png
   :alt: Incident field, Scattered field
   :srcset: /auto_examples/physics/images/sphx_glr_demo_scattering_002.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    LSQR converged at iteration 24


.. GENERATED FROM PYTHON SOURCE LINES 173-185

Computing gradients through the physics operator
--------------------------------------------------
Gradient computation is fully compatible with PyTorch autograd, making it easy to plug this physics
operator into more complex optimization or learning-based algorithms.
In addition, gradients can be computed with less memory by using the adjoint-state method under the hood:
it requires a single additional solver pass per gradient evaluation (see, e.g., :cite:t:`soubies2017efficient`)
and does not require storing all intermediate variables (unlike standard backpropagation via PyTorch autograd).
Here we show that the gradients computed using the adjoint-state method are consistent with those computed
via standard PyTorch autograd.

.. note::

    If you are using a GPU, the peak memory usage during gradient computation is also displayed for comparison.

.. GENERATED FROM PYTHON SOURCE LINES 185-231

.. code-block:: Python


    def compute_grad(x, y, physics):
        if device != "cpu":
            torch.cuda.reset_peak_memory_stats()  # Reset peak memory tracking
        x_ = x.clone()
        x_[
            ..., img_width // 4 : 3 * img_width // 4, img_width // 4 : 3 * img_width // 4
        ] = 0.0
        x_ = x_.requires_grad_(True)
        y_ = physics.A(x_)
        error = torch.mean((y_ - y).abs() ** 2)
        grad = torch.autograd.grad(error, x_)[0]
        if device != "cpu":
            print(
                f"Peak GPU memory usage for grad computation: "
                f"{torch.cuda.max_memory_allocated() / 1e6 :.1f} MB",
            )
        return x_, grad


    x_, grad = compute_grad(x, y, physics)

    # set solver to not use adjoint state
    config2 = dinv.physics.Scattering.SolverConfig(
        max_iter=200, tol=1e-5, solver="lsqr", adjoint_state=False
    )
    physics.set_solver(config2)
    _, grad2 = compute_grad(x, y, physics)

    dinv.utils.plot(
        [x_.real, grad, grad2],
        titles=[
            "Image where grad is computed",
            "Grad with adjoint state",
            "Grad via Pytorch autograd",
        ],
        figsize=(8, 4),
    )
    print("Difference between gradients:", (grad - grad2).abs().mean().item())

    # go back to adjoint state solver
    physics.set_solver(config)


.. image-sg:: /auto_examples/physics/images/sphx_glr_demo_scattering_003.png
   :alt: Image where grad is computed, Grad with adjoint state, Grad via Pytorch autograd
   :srcset: /auto_examples/physics/images/sphx_glr_demo_scattering_003.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    LSQR converged at iteration 22
    LSQR converged at iteration 22
    Peak GPU memory usage for grad computation: 61.3 MB
    Peak GPU memory usage for grad computation: 180.2 MB
    Difference between gradients: 1.0060698940606017e-09
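Since ``physics.A`` is differentiable, the operator can also be dropped directly into any standard PyTorch
optimization or training loop. The snippet below is a minimal sketch (not part of the original example) that
minimizes the same least-squares loss with :class:`torch.optim.Adam`; the learning rate and number of
iterations are purely illustrative.

.. code-block:: Python


    x_est = torch.zeros_like(x, requires_grad=True)  # start from an empty contrast map
    optimizer = torch.optim.Adam([x_est], lr=1e-2)

    for it in range(20):  # a handful of iterations, for illustration only
        optimizer.zero_grad()
        loss = torch.mean((physics.A(x_est) - y).abs() ** 2)  # complex-valued residual
        loss.backward()
        optimizer.step()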
.. GENERATED FROM PYTHON SOURCE LINES 232-245

Reconstruction with linear (Born approximation) solver
------------------------------------------------------
The Born (first-order) approximation assumes that the scattered field is small compared to the incident
field, i.e., :math:`u_i \ll v_i`. This allows us to approximate the Lippmann-Schwinger equation as:

.. math::

    y_i = G_s \left(x \circ v_i\right)

which is a linear forward model in :math:`x`. This linear model can be inverted using its pseudo-inverse,
computed with a linear solver.

.. GENERATED FROM PYTHON SOURCE LINES 245-251

.. code-block:: Python


    physics.set_verbose(False)

    x_lin = physics.A_dagger(y, linear=True)

    print(f"PSNR of Born approximation: {psnr(x, x_lin).item():2f} dB")


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    PSNR of Born approximation: 14.431273 dB


.. GENERATED FROM PYTHON SOURCE LINES 252-274

Reconstruction with gradient descent
-------------------------------------
The linear Born approximation is only valid for weakly scattering objects.
We now perform a non-linear reconstruction using gradient descent to minimize the least-squares
data-fidelity term:

.. math::

    \min_x \frac{1}{2} \sum_{i=1}^T \| y_i - A_i(x) \|_2^2

where :math:`A_i(x)` is the non-linear forward operator for the :math:`i`-th transmitter, given by the
Lippmann-Schwinger equation above.

.. note::

    We can use a step size of 1 here since we have normalized the physics operator to have a (local)
    Lipschitz constant of 1.

.. note::

    Here we only run 50 iterations for demonstration purposes. In practice, more iterations may be needed
    to reach convergence.

.. GENERATED FROM PYTHON SOURCE LINES 274-297

.. code-block:: Python


    data_fid = dinv.optim.L2()
    gd_solver = dinv.optim.GD(
        max_iter=50,
        data_fidelity=data_fid,
        stepsize=1,
        custom_init=lambda y, physics: physics.A_dagger(y, linear=True),
    )

    x_gd = gd_solver(y, physics)
    print(f"PSNR of gradient descent reconstruction: {psnr(x, x_gd).item():.2f} dB")

    dinv.utils.plot(
        [x, x_lin, x_gd],
        titles=[
            "ground truth",
            f"Born approximation\nPSNR={psnr(x, x_lin).item():.2f}dB",
            f"Gradient descent\nPSNR={psnr(x, x_gd).item():.2f}dB",
        ],
        figsize=(10, 3),
    )


.. image-sg:: /auto_examples/physics/images/sphx_glr_demo_scattering_004.png
   :alt: ground truth, Born approximation PSNR=14.43dB, Gradient descent PSNR=22.85dB
   :srcset: /auto_examples/physics/images/sphx_glr_demo_scattering_004.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    PSNR of gradient descent reconstruction: 22.85 dB


.. GENERATED FROM PYTHON SOURCE LINES 298-313

Understanding the trade-off between resolution and non-linearity
-------------------------------------------------------------------
The background wavenumber :math:`k_b` (or equivalently the frequency) of the transmitted wave plays a key
role in the scattering process. Higher wavenumbers correspond to shorter wavelengths, which can resolve
smaller details in the object being imaged. However, higher wavenumbers also lead to stronger multiple
scattering effects, since the non-linearity of the problem is roughly proportional to
:math:`\|x\|_{\infty} k_b` (i.e., the product of the object contrast and the wavenumber).
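As a back-of-the-envelope illustration of this heuristic (plain arithmetic, not a deepinv computation),
we can evaluate the product :math:`\|x\|_{\infty} k_b` for the contrast and the normalized wavenumbers
used in this example; the larger this product, the more non-linear the problem becomes.

.. code-block:: Python


    for w in [1, 5, 7]:  # normalized wavenumbers compared below
        k_b = w * 2 * torch.pi
        print(f"wavenumber={w}: ||x||_inf * k_b is roughly {contrast * k_b:.1f}")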
We now compare gradient descent reconstructions obtained for different normalized wavenumbers
(i.e., different resolutions).

.. note::

    This example requires a GPU to run in a reasonable time.

.. GENERATED FROM PYTHON SOURCE LINES 313-340

.. code-block:: Python


    if device != "cpu":
        imgs = [x.detach().cpu()]
        titles = ["ground truth"]
        wavenumbers = [1, 5, 7]
        for wavenumber in wavenumbers:
            physics = dinv.physics.Scattering(
                img_width=img_width,
                device=device,
                background_wavenumber=wavenumber * (2 * torch.pi),
                transmitters=transmitters,
                receivers=receivers,
            )
            physics.normalize(x)
            y = physics(x)
            x_gd = gd_solver(y, physics)
            metric = psnr(x_gd, x)
            titles = titles + [f"wavenumber={wavenumber} \n PSNR={metric.item():.2f}dB"]
            imgs = imgs + [x_gd.detach().cpu()]

        dinv.utils.plot(imgs, titles=titles, figsize=(10, 3))


.. image-sg:: /auto_examples/physics/images/sphx_glr_demo_scattering_005.png
   :alt: ground truth, wavenumber=1 PSNR=19.18dB, wavenumber=5 PSNR=22.85dB, wavenumber=7 PSNR=17.73dB
   :srcset: /auto_examples/physics/images/sphx_glr_demo_scattering_005.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 341-350

Going further
----------------
You can check out the following examples to go further:

- Try other sensor configurations, e.g., a linear array of transmitters on one side of the object, and
  receivers on the opposite side.
- Use a pretrained denoiser to perform plug-and-play reconstruction, as in
  :ref:`sphx_glr_auto_examples_plug-and-play_demo_vanilla_PnP.py`.
- Learn a reconstruction network using an unrolled architecture, as in
  :ref:`sphx_glr_auto_examples_unfolded_demo_vanilla_unfolded.py`.

.. GENERATED FROM PYTHON SOURCE LINES 352-355

:References:

.. footbibliography::


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 41.518 seconds)


.. _sphx_glr_download_auto_examples_physics_demo_scattering.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: demo_scattering.ipynb <demo_scattering.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: demo_scattering.py <demo_scattering.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: demo_scattering.zip <demo_scattering.zip>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_