Diarra Fall: Bayesian Approaches to Inverse Problems: Deep Regularization and Uncertainty Quantification
Inverse problems are ubiquitous in signal and image processing. Since they are typically ill-posed, or at least ill-conditioned, they require regularization, i.e., the introduction of additional constraints that compensate for the lack of information in the observations. A first difficulty lies in selecting an appropriate regularizer, which has a decisive influence on the quality of the reconstruction. A second challenge concerns the confidence we may place in the reconstructed signal or image: it is desirable for a method to quantify the uncertainty associated with its output so as to support more principled decision-making.
These two tasks – regularization and uncertainty quantification – can be addressed simultaneously within the Bayesian statistical framework. This approach makes it possible to incorporate additional information by specifying a marginal distribution for the image, known as the prior distribution. The traditional approach consists in defining the prior analytically, as a hand-crafted explicit function chosen to promote specific desired properties of the recovered image. Following the recent surge in deep learning, data-driven regularization using priors specified by neural networks has become widespread in image inverse problems. Popular approaches within this framework include Plug-and-Play (PnP) [1] and Regularization by Denoising (RED) [2].
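For intuition, the Bayesian formulation combines the likelihood and the prior through Bayes' rule; in the common linear-Gaussian setting (the notation below is illustrative and not tied to the specific models of the talk):

```latex
% Posterior = likelihood x prior (up to normalization)
p(x \mid y) \propto p(y \mid x)\, p(x).
% For a linear-Gaussian model y = Hx + n, n \sim \mathcal{N}(0, \sigma^2 I),
% and a prior p(x) \propto \exp(-\lambda g(x)) encoding a regularizer g:
-\log p(x \mid y) = \frac{1}{2\sigma^2}\,\|y - Hx\|_2^2 + \lambda\, g(x) + \mathrm{const}.
```

Minimizing this negative log-posterior recovers the familiar variational (MAP) estimate, while sampling from the full posterior additionally provides uncertainty quantification.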
In the first part of the talk, I will present the probabilistic formulation of the RED framework that we introduced in [3], which defines a new probability distribution, built from a RED potential, that can serve as the prior in a Bayesian inversion task. We also propose a dedicated Markov chain Monte Carlo (MCMC) sampling algorithm that is particularly well suited to high-dimensional sampling from the resulting posterior distribution. In addition, we provide a theoretical analysis guaranteeing convergence to the target distribution and quantifying the convergence rate. The effectiveness of the proposed approach is illustrated on various linear restoration tasks such as image deblurring, inpainting, and super-resolution.
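As a rough illustration of posterior sampling with a RED-style potential (this is a plain unadjusted Langevin sketch, not the Langevin-within-split-Gibbs sampler of [3]; the toy denoiser and all names below are placeholders):

```python
import numpy as np

def denoise(x, width=5):
    """Toy denoiser: moving-average smoothing (stand-in for a learned denoiser)."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

def red_grad(x):
    """Under the conditions of Romano et al. [2], the RED potential
    rho(x) = 0.5 * x^T (x - D(x)) has gradient x - D(x)."""
    return x - denoise(x)

def langevin_sample(y, H, sigma2, lam, step=1e-3, n_iter=500, rng=None):
    """Unadjusted Langevin iterations targeting (approximately) the posterior
    p(x | y) ∝ exp(-||y - Hx||^2 / (2 sigma2) - lam * rho(x))."""
    rng = np.random.default_rng(rng)
    x = H.T @ y  # simple initialization
    for _ in range(n_iter):
        data_grad = H.T @ (H @ x - y) / sigma2  # gradient of the Gaussian data-fit term
        grad = data_grad + lam * red_grad(x)
        # Langevin step: gradient descent plus Gaussian noise
        x = x - step * grad + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x
```

Collecting the iterates (after burn-in) yields approximate posterior samples, from which both a point estimate and pixelwise uncertainty can be computed; the splitting and convergence guarantees of [3] refine this basic scheme.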
The second part of the talk will be devoted to a novel approach, proposed in [4], for solving nonlinear Poisson inverse problems. We develop a Monte Carlo sampling algorithm that accounts for the underlying non-Euclidean geometry of the problem. The proposed approach has been evaluated on several tasks, including denoising, deblurring, and positron emission tomography (PET) reconstruction.
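As background (notation illustrative), the Poisson observation model that drives this setting replaces the Gaussian data-fit term with

```latex
% Poisson model: y_i ~ Poisson([Hx]_i), with nonnegativity constraints on x
-\log p(y \mid x) = \sum_i \big( [Hx]_i - y_i \log [Hx]_i \big) + \mathrm{const},
```

which, up to terms depending only on $y$, is the Kullback–Leibler (Bregman) divergence between $y$ and $Hx$. This non-quadratic, positivity-constrained geometry is what motivates a Bregman-geometry-aware sampler rather than a standard Euclidean Langevin scheme.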
References
[1] S. V. Venkatakrishnan et al., "Plug-and-Play priors for model-based reconstruction," in Proc. IEEE Global Conf. on Signal and Information Processing, pp. 945–948, 2013.
[2] Y. Romano, M. Elad, and P. Milanfar, "The little engine that could: Regularization by denoising (RED)," SIAM Journal on Imaging Sciences, 10(4):1804–1844, 2017.
[3] E. C. Faye, M. D. Fall, and N. Dobigeon, "Regularization by denoising: Bayesian model and Langevin-within-split Gibbs sampling," IEEE Transactions on Image Processing, vol. 34, pp. 221–234, 2024.
[4] E. C. Faye, M. D. Fall, N. Dobigeon, and É. Barat, "Bregman geometry-aware split Gibbs sampling for Bayesian Poisson inverse problems," under revision for SIAM Journal on Imaging Sciences, 2026.
