ADMOS 2023

A Piggyback-Style Algorithm for Learning Improved Shearlets and TGV Discretizations

  • Bogensperger, Lea (Institute of Computer Graphics and Vision)
  • Chambolle, Antonin (CEREMADE, CNRS)
  • Pock, Thomas (Institute of Computer Graphics and Vision)


For inverse problems in imaging, there is a wide range of regularizers that can be used; classic choices include the total variation (TV), total generalized variation (TGV), wavelets and shearlets, to name but a few. The typical variational structure is governed by a data term determined by the underlying task and a regularizer composed with a linear operator, which offers a plug-and-play framework for the specific choice of the regularizer. An interesting direction is therefore to learn (parts of) the linear operator in order to improve some of the standard choices within a convex imaging application. Traditionally, such bilevel problems have been addressed using implicit differentiation, which is computationally expensive and by construction requires a certain regularity of the lower-level problem. More recently, unrolling techniques have become very popular; however, the number of iterations that can be unrolled is limited by memory constraints. A piggyback-style algorithm is thus a suitable alternative for convex-concave saddle-point problems, requiring weaker regularity assumptions on the involved functions. This is shown for two distinct regularizers. First, an optimal shearlet transform is learned in an image denoising setting. Shearlets are an extension of wavelets with the benefit of offering more anisotropy whilst being faithfully discretizable. Using a piggyback algorithm, we show that the parameters of a shearlet system consisting of basic 1D and 2D filters can be learned, as well as optimal regularization parameters to weigh the contribution of each shearlet individually. A second application consists in improving the second-order TGV regularizer, which, similar to TV, can suffer from discretization artefacts related to isotropy and rotational invariance. Recently, an improved TGV discretization has been proposed, in which the dual variables are interpolated to denser grids. We extend this to a more general setting, showing that the discretization is consistent within the framework of Γ-convergence. An improved discretization scheme is then learned using the piggyback algorithm, and numerical results demonstrate the effectiveness of the learned interpolation filters.
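
One common way to write the variational structure described above (the symbols u for the image, f for the data, D for the data term, R for the regularizer, K for the linear operator and the per-subband weights α_j are notation assumed here, not taken from the abstract) is

  \min_{u} \; D(u; f) + R(Ku),
  \qquad \text{e.g.} \qquad
  \min_{u} \; \tfrac{1}{2}\|u - f\|_2^2 + \sum_{j} \alpha_j \|(Ku)_j\|_1,

where the right-hand instance corresponds to denoising with a multi-filter transform K (such as a shearlet system) and per-subband weights α_j. Dualizing the regularizer with its convex conjugate R^* yields the convex-concave saddle-point problem

  \min_{u} \max_{p} \; D(u; f) + \langle Ku, p \rangle - R^{*}(p),

on which a piggyback-style algorithm can operate.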
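
Since the abstract only names the piggyback algorithm without details, the following minimal sketch indicates how adjoint iterates can be carried along a primal-dual (PDHG) solver to obtain a gradient with respect to K, assuming the denoising instance above with K stored as a plain matrix and a quadratic training loss; all names (piggyback_denoise, x_gt, etc.) are hypothetical and not taken from the talk.

import numpy as np

def piggyback_denoise(K, f, x_gt, n_iter=2000):
    # Sketch: PDHG on the saddle-point problem
    #     min_x max_{||y||_inf <= 1}  0.5*||x - f||^2 + <K x, y>,
    # with piggyback (adjoint) iterates (X, Y) run alongside (x, y) to track
    # the sensitivity of the loss 0.5*||x - x_gt||^2 w.r.t. the entries of K.
    m, n = K.shape
    tau = sigma = 0.9 / np.linalg.norm(K, 2)   # step sizes with tau*sigma*||K||^2 < 1

    x, y = np.zeros(n), np.zeros(m)            # primal/dual state of the lower-level problem
    X, Y = np.zeros(n), np.zeros(m)            # piggyback (adjoint) state
    x_bar, X_bar = x.copy(), X.copy()

    for _ in range(n_iter):
        # Dual step: the prox of the l_inf-ball indicator is a componentwise clip;
        # its derivative zeroes out the clipped components in the adjoint update.
        y_tmp = y + sigma * (K @ x_bar)
        y_new = np.clip(y_tmp, -1.0, 1.0)
        mask = (np.abs(y_tmp) <= 1.0).astype(float)
        Y_new = mask * (Y + sigma * (K @ X_bar))

        # Primal step: the prox of 0.5*||x - f||^2 is a simple averaging; the
        # training-loss gradient (x - x_gt) enters only the adjoint update.
        x_new = (x - tau * (K.T @ y_new) + tau * f) / (1.0 + tau)
        X_new = (X - tau * (K.T @ Y_new) - tau * (x_new - x_gt)) / (1.0 + tau)

        # Extrapolation and variable swap.
        x_bar, X_bar = 2.0 * x_new - x, 2.0 * X_new - X
        x, y, X, Y = x_new, y_new, X_new, Y_new

    # Approximate gradient of the training loss w.r.t. K in outer-product form.
    grad_K = np.outer(y, X) + np.outer(Y, x)
    return x, grad_K

In contrast to unrolling, no intermediate iterates are stored, so the memory footprint of such a scheme is independent of the number of iterations; the returned gradient could then drive an outer update of K, for instance of the filters of a shearlet system or of the interpolation weights of a TGV discretization.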
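
For context on the second application, the second-order TGV regularizer referred to above is commonly written, in standard (assumed) notation with weights α_1, α_0 and the symmetrized gradient ℰ, as

  \mathrm{TGV}_{\alpha}^{2}(u) \;=\; \min_{w} \; \alpha_1 \,\|\nabla u - w\|_{1} + \alpha_0 \,\|\mathcal{E} w\|_{1},
  \qquad \mathcal{E} w = \tfrac{1}{2}\bigl(\nabla w + \nabla w^{\top}\bigr),

and the improved discretization mentioned in the abstract interpolates the corresponding dual variables onto denser grids before the discrete operators are applied.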