Physics-informed neural networks (PINNs) have recently emerged as a promising methodology for inverse problems: the solution is approximated by a neural network trained to minimize the residual of a partial differential equation. This work pinpoints the strengths and weaknesses of PINNs relative to classical adjoint-based optimization through an incremental comparison built around three key ingredients: (a) the forward solver, (b) the Ansatz space of the optimization variable, and (c) the sensitivity computation.

The empirical investigation is performed for full waveform inversion, where the unknown is a scaling function of the density field used to locate internal voids. PINN-based approaches represent both the solution and the scaling function with separate neural networks and perform a nested minimization of the resulting residuals. For the incremental comparison, we first replace the neural network responsible for the forward solution with a non-learnable forward operator, namely a classical finite-difference scheme (a). Next, we investigate the importance of the discretization of the scaling function for the density field by comparing a neural-network discretization with piecewise polynomials (b). Lastly, we partially substitute the sensitivity computation via automatic differentiation with the continuous adjoint method (c). These aspects are studied on two-dimensional benchmark problems and on complex three-dimensional cases based on CT scans of rare drill cores.

The investigation leads to two main insights. First, the fully PINN-based approach is not the most efficient, as the optimizer must learn the forward and inverse solutions simultaneously. It is more advantageous to restrict the optimization to the scaling function of the density field and to use a conventional method, e.g., the finite-difference method, as forward solver. This modification reduces the number of required epochs from 400'000–600'000 to 50, while the time per epoch increases only by a factor of five. Second, using neural networks to discretize the scaling function of the density field is highly beneficial: piecewise polynomials lead to numerous oscillatory artifacts, whereas neural networks recover solutions that are both smoother and sharper.
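To make the fully PINN-based setup concrete, the following is a minimal sketch of the nested-minimization idea in JAX, assuming a one-dimensional scalar wave equation with a spatially varying density scaling. The network architectures, the specific residual, and all names (u_net, gamma_net, etc.) are illustrative assumptions, not the implementation used in this work.

```python
# Minimal sketch of the fully PINN-based setup (illustrative, not the
# authors' implementation): two MLPs, one for the wavefield u(x, t) and
# one for the density scaling gamma(x), trained jointly on the residual
# of the 1D scalar wave equation rho0 * gamma(x) * u_tt = c^2 * u_xx.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Initialize a small fully connected network."""
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (d_in, d_out)) / jnp.sqrt(d_in)
        params.append((w, jnp.zeros(d_out)))
    return params

def mlp(params, x):
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze()

def u_net(p, x, t):        # wavefield Ansatz u(x, t)
    return mlp(p, jnp.array([x, t]))

def gamma_net(p, x):       # density-scaling Ansatz gamma(x), kept positive
    return jax.nn.softplus(mlp(p, jnp.array([x])))

def pde_residual(pu, pg, x, t, c=1.0, rho0=1.0):
    """Residual rho0 * gamma * u_tt - c^2 * u_xx at one collocation point."""
    u_tt = jax.grad(jax.grad(u_net, argnums=2), argnums=2)(pu, x, t)
    u_xx = jax.grad(jax.grad(u_net, argnums=1), argnums=1)(pu, x, t)
    return rho0 * gamma_net(pg, x) * u_tt - c**2 * u_xx

def loss(params, coll_x, coll_t, rec_x, rec_t, rec_d):
    pu, pg = params
    res = jax.vmap(pde_residual, (None, None, 0, 0))(pu, pg, coll_x, coll_t)
    fit = jax.vmap(u_net, (None, 0, 0))(pu, rec_x, rec_t) - rec_d
    return jnp.mean(res**2) + jnp.mean(fit**2)   # residual + data misfit

# One gradient step updates BOTH networks: the optimizer has to learn the
# forward solution and the inverse unknown simultaneously, which is the
# source of the large epoch counts reported above.
grad_fn = jax.jit(jax.grad(loss))
```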
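A corresponding sketch of the hybrid variant of ingredient (a): a conventional explicit finite-difference scheme replaces the forward network, and the gradient with respect to the scaling function alone is obtained by differentiating through the time-stepping loop (a discretize-then-optimize gradient, which is what automatic differentiation provides). Again, the model, grid parameters, and names are illustrative assumptions.

```python
# Sketch of the hybrid variant for the same hypothetical 1D wave equation:
# the wavefield comes from a non-learnable explicit finite-difference
# solver, and only the scaling function gamma is optimized.
import jax
import jax.numpy as jnp

def fd_forward(gamma, source, dx=0.01, dt=0.004, c=1.0, rho0=1.0):
    """Explicit FD solve of rho0*gamma*u_tt = c^2*u_xx + source, zero BCs.
    dt is assumed small enough to satisfy the CFL stability condition."""
    n = gamma.shape[0]
    coeff = dt**2 * c**2 / (rho0 * gamma * dx**2)
    def step(carry, s_t):
        u_prev, u = carry
        lap = jnp.roll(u, 1) - 2.0 * u + jnp.roll(u, -1)
        u_next = 2.0 * u - u_prev + coeff * lap + dt**2 * s_t / (rho0 * gamma)
        u_next = u_next.at[0].set(0.0).at[-1].set(0.0)  # Dirichlet boundaries
        return (u, u_next), u_next
    init = (jnp.zeros(n), jnp.zeros(n))
    _, wavefield = jax.lax.scan(step, init, source)     # (nt, n) history
    return wavefield

def misfit(gamma, source, rec_idx, rec_data):
    """Data misfit at receiver locations; gamma is the only unknown."""
    u = fd_forward(gamma, source)
    return 0.5 * jnp.sum((u[:, rec_idx] - rec_data) ** 2)

# Gradient w.r.t. the scaling function only; the forward solve is not
# learned, so far fewer optimization steps are needed.
grad_gamma = jax.jit(jax.grad(misfit))
```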
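For ingredient (b), the two Ansatz spaces can be contrasted as two maps from the optimization variables to the scaling function evaluated on the solver grid; either map composes with the misfit above via the chain rule. This hypothetical sketch reuses gamma_net and fd_forward from the sketches above.

```python
# Two Ansatz spaces for the scaling function (illustrative): a
# piecewise-linear nodal field versus the neural-network field.
import jax
import jax.numpy as jnp

def gamma_piecewise(nodes, node_x, grid_x):
    """Piecewise-linear interpolation of nodal values onto the FD grid."""
    return jnp.interp(grid_x, node_x, jax.nn.softplus(nodes))

def gamma_network(params, grid_x):
    """Neural-network Ansatz evaluated at every FD grid point."""
    return jax.vmap(lambda x: gamma_net(params, x))(grid_x)

def misfit_pw(nodes, node_x, grid_x, source, rec_idx, rec_data):
    gamma = gamma_piecewise(nodes, node_x, grid_x)
    u = fd_forward(gamma, source)
    return 0.5 * jnp.sum((u[:, rec_idx] - rec_data) ** 2)

# Same optimization loop, different parameterization of the unknown.
grad_nodes = jax.jit(jax.grad(misfit_pw))
```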
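For ingredient (c), the continuous adjoint method replaces differentiation through the solver by one additional PDE solve. The relations below are the standard continuous-adjoint equations for the hypothetical scalar wave model used in the sketches above; the actual governing equations and misfit in this work may differ.

```latex
% State equation and misfit for the hypothetical scalar wave model:
\[
\rho_0\,\gamma(x)\,\partial_{tt}u = c^2\Delta u + s, \qquad
J(\gamma) = \frac{1}{2}\int_0^T \sum_r \bigl(u(x_r,t) - d_r(t)\bigr)^2 \,\mathrm{d}t .
\]
% The adjoint field solves the same wave operator backward in time,
% driven by the data residual and closed by terminal conditions:
\[
\rho_0\,\gamma\,\partial_{tt}\lambda = c^2\Delta\lambda
  - \sum_r \bigl(u(x_r,t) - d_r(t)\bigr)\,\delta(x - x_r), \qquad
\lambda(\cdot,T) = \partial_t\lambda(\cdot,T) = 0 .
\]
% One forward and one adjoint solve then yield the sensitivity with
% respect to the scaling function, independent of its discretization:
\[
\frac{\delta J}{\delta \gamma}(x)
  = \rho_0 \int_0^T \lambda(x,t)\,\partial_{tt}u(x,t)\,\mathrm{d}t .
\]
```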