We consider the problem of recovering signals from noisy indirect observations under the additional a priori
information that the signal is believed to be slowly varying except at an unknown number of points where
it may have discontinuities of unknown size. The model problem is a linear deconvolution problem. To take
advantage of the qualitative prior information available, we use a non-stationary Markov model with the variance
of the innovation process also unknown, and apply Bayesian techniques to estimate both the signal and the prior
variance. We propose a fast iterative method for computing MAP estimates, and we show that, with rather
standard choices of the hyperpriors, the algorithm produces the fixed point iterative solutions of the total variation
and of the Perona-Malik regularization methods. We also demonstrate that, unlike the non-statistical estimation
methods, the Bayesian approach leads to a very natural reliability assessment of edge detection by a Markov
Chain Monte Carlo (MCMC) based analysis of the posterior.
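The fixed point iteration alluded to above can be sketched for a pure 1D denoising problem. This is an illustrative lagged-diffusivity sketch under assumed parameter values, not the authors' exact algorithm; the edge-adaptive weights make the penalty behave like a smoothed total variation term.

```python
import numpy as np

def tv_fixed_point(y, lam=0.5, beta=1e-4, n_iter=50):
    """Lagged-diffusivity fixed point for 1D denoising with a smoothed
    TV penalty: min_x 0.5*||x - y||^2 + lam * sum_i sqrt((Dx)_i^2 + beta).
    Each sweep solves a linear system with weights from the previous iterate.
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)               # forward-difference matrix
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / np.sqrt(np.diff(x) ** 2 + beta)  # small where |Dx| is large
        L = D.T @ np.diag(w) @ D                   # edge-adaptive "Laplacian"
        x = np.linalg.solve(np.eye(n) + lam * L, y)
    return x

# piecewise-constant signal with one jump, plus noise
rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(50), np.ones(50)])
noisy = truth + 0.1 * rng.standard_normal(100)
rec = tv_fixed_point(noisy)
```

The weights w are large in flat regions (strong smoothing) and small across the jump, so the discontinuity survives the regularization.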
In this paper, we restore a one-dimensional signal, known a priori to be a smooth function with a few jump discontinuities, from a blurred, noisy specimen signal using a local regularization scheme derived in a Bayesian statistical inversion framework. The proposed method is computationally effective and reproduces the jump discontinuities well, and thus offers an alternative to using the total variation (TV) penalty as a regularizing functional. Our approach avoids the non-differentiability problems encountered in TV methods and is completely data driven in the sense that parameter selection is done automatically and requires no user intervention. A computed example illustrating the performance of the method when applied to the solution of a deconvolution problem is presented.
We consider the deconvolution problem of estimating an image from a noisy, blurred version of it. In particular, we are interested in boundary effects: since the convolution operator is non-local, the blurred image depends on the scenery outside the field of view. Ignoring this dependency leads to an image distortion known as the boundary effect. In this article, we consider two different approaches to treating the non-locality. One is to estimate the image extended outside the field of view. The other is to treat the influence of the out-of-view scenery as boundary clutter. Both approaches are considered from the Bayesian point of view.
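The non-locality is easy to verify numerically (a small sketch; signal size, kernel, and the field-of-view indices are arbitrary choices): changing the scene only outside the field of view alters the recorded blurred values near the boundary, while the interior record is untouched.

```python
import numpy as np

# Field of view: indices 10..89 of a longer one-dimensional scene.
rng = np.random.default_rng(1)
kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
kernel /= kernel.sum()                     # normalized Gaussian blur kernel

scene_a = rng.standard_normal(100)
scene_b = scene_a.copy()
scene_b[:10] = 5.0                         # change ONLY the out-of-view part

blur_a = np.convolve(scene_a, kernel, mode="same")[10:90]
blur_b = np.convolve(scene_b, kernel, mode="same")[10:90]

interior = np.abs(blur_a[20:60] - blur_b[20:60]).max()  # far from the edge
edge = np.abs(blur_a[:5] - blur_b[:5]).max()            # near the edge
```

The interior of the recorded image is identical for both scenes, but the values within one kernel half-width of the boundary differ substantially; this is the dependency that must either be modeled (extended image) or absorbed into the noise model (boundary clutter).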
Total variation-penalized Tikhonov regularization is a popular method for the restoration of images that have been degraded by noise and blur. The method is particularly effective when the desired noise- and blur-free image has edges between smooth surfaces. The method, however, is computationally expensive. We describe a hybrid regularization method that combines a few steps of the GMRES iterative method with total variation-penalized Tikhonov regularization on a space of small dimension. This hybrid method requires much less computational work than available methods for total variation-penalized Tikhonov regularization and can produce restorations of similar quality.
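The hybrid idea of projecting onto a small Krylov subspace and regularizing there can be sketched as follows. In this sketch, plain Tikhonov regularization stands in for the total variation penalty of the abstract, and the problem sizes and parameters are our assumptions.

```python
import numpy as np

def arnoldi(A, b, k):
    """k steps of Arnoldi: orthonormal basis V of the Krylov subspace
    K_k(A, b), with the small Hessenberg factor H so that A V_k = V_{k+1} H."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def hybrid_solve(A, b, k=8, lam=1e-4):
    """Project onto K_k(A, b), then solve a small regularized problem
    min ||H z - beta e1||^2 + lam ||z||^2 (Tikhonov stand-in for TV)."""
    V, H = arnoldi(A, b, k)
    rhs = np.zeros(k + 1)
    rhs[0] = np.linalg.norm(b)              # b = ||b|| * V[:, 0]
    z = np.linalg.solve(H.T @ H + lam * np.eye(k), H.T @ rhs)
    return V[:, :k] @ z

# Gaussian blur test problem with an edge in the true signal
n = 120
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(2 * np.pi * t / n) + (t > n // 2)
rng = np.random.default_rng(2)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

x_hyb = hybrid_solve(A, b)
err = np.linalg.norm(x_hyb - x_true) / np.linalg.norm(x_true)
err_naive = np.linalg.norm(np.linalg.solve(A, b) - x_true) / np.linalg.norm(x_true)
```

The regularized projected problem is only (k+1) x k, which is where the computational savings come from.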
The BiCG and QMR methods are well-known Krylov subspace iterative methods for the solution of linear systems of equations with a large nonsymmetric, nonsingular matrix. However, little is known of the performance of these methods when they are applied to the computation of approximate solutions of linear systems of equations with a matrix of ill-determined rank. Such linear systems are known as linear discrete ill-posed problems. We describe an application of the BiCG and QMR methods to the solution of linear discrete ill-posed problems that arise in image restoration, and compare these methods to the conjugate gradient method applied to the associated normal equations and to total variation-penalized Tikhonov regularization.
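A matrix of ill-determined rank has singular values that decay toward zero with no gap, and early-terminated Krylov iterations act as regularization on such systems. The sketch below illustrates both points on a small, slightly nonsymmetric blur matrix; the sizes, kernel, noise level, and iteration count are our arbitrary choices, not the authors' test problems.

```python
import numpy as np
from scipy.sparse.linalg import qmr

# Slightly nonsymmetric Gaussian blur matrix (shifted kernel).
n = 100
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :] - 1) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

# Singular values decay to (numerically) zero with no gap: ill-determined rank.
s = np.linalg.svd(A, compute_uv=False)

x_true = np.exp(-0.5 * ((t - 40.0) / 8.0) ** 2)
rng = np.random.default_rng(3)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

# A few QMR iterations act as regularization ...
x_qmr, info = qmr(A, b, maxiter=8)
err_qmr = np.linalg.norm(x_qmr - x_true)

# ... while a direct solve amplifies the noise catastrophically.
err_direct = np.linalg.norm(np.linalg.solve(A, b) - x_true)
```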
A variant of the MINRES method, often referred to as the MR-II method, has in the last few years become a popular iterative scheme for computing approximate solutions of large linear discrete ill-posed problems with a symmetric matrix. It is important to terminate the iterations sufficiently early in order to avoid severe amplification of measurement and round-off errors. We present a new L-curve criterion for determining when to terminate the iterations with the MINRES and MR-II methods.
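The need for early termination is the semiconvergence phenomenon: the iteration error first decreases, then grows as the iterates begin to fit the noise. A small illustration with MINRES on a symmetric blur problem (sizes and noise level are our assumptions; no L-curve logic is implemented here, only the error history that motivates it):

```python
import numpy as np
from scipy.sparse.linalg import minres

# Symmetric Gaussian blur matrix: an ill-posed symmetric test problem.
n = 100
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
A /= A[n // 2].sum()                      # scalar normalization, keeps symmetry

x_true = np.exp(-0.5 * ((t - 50.0) / 10.0) ** 2)
rng = np.random.default_rng(4)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

# Record the error against the true signal at every MINRES iteration.
errs = []
minres(A, b, maxiter=300,
       callback=lambda xk: errs.append(np.linalg.norm(xk - x_true)))
k_best = int(np.argmin(errs))   # the iteration one would like to stop at
```

The error history shows the characteristic dip-and-rise; a stopping rule such as the L-curve tries to locate the dip without knowing x_true.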
The GMRES method is a popular iterative method for the solution of linear systems of equations with a large nonsymmetric, nonsingular matrix. However, little is known about the performance of the GMRES method when the matrix of the linear system is of ill-determined rank, i.e., when the matrix has many singular values of different orders of magnitude close to the origin. Linear systems with such matrices arise, for instance, in image restoration, when the image to be restored is contaminated by noise and blur. We describe how the GMRES method can be applied to the restoration of such images. The GMRES method is compared to the conjugate gradient method applied to the normal equations associated with the given linear system of equations. The numerical examples show the GMRES method to require less computational work and to give restored images of higher quality than the conjugate gradient method.
In this paper we compare a new regularizing scheme based on the exponential filter function with two classical regularization methods: Tikhonov regularization and a variant of truncated singular value regularization. The filter functions for the exponential and Tikhonov methods are smooth, while the filter function for truncated singular value regularization is discontinuous. These regularization methods are applied to the restoration of images degraded by blur and noise. The norm of the noise is assumed to be known, which allows application of the Morozov discrepancy principle to determine the amount of regularization. We compare the restored images produced by the three regularization methods with optimal values of the regularization parameter. This comparison sheds light on how these different approaches are related.
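All three methods fit the same SVD filter-function framework: the regularized solution is obtained by damping each singular component with a factor f(sigma). The sketch below applies the three filters to a small blur problem; the regularization parameter is fixed by hand here rather than by the Morozov discrepancy principle, and all sizes are our arbitrary choices.

```python
import numpy as np

def filtered_solution(A, b, filt):
    """Regularized solution x = V diag(f(s)/s) U^T b for a filter f."""
    U, s, Vt = np.linalg.svd(A)
    s = np.maximum(s, 1e-300)               # guard against exact zeros
    return Vt.T @ (filt(s) * (U.T @ b) / s)

mu = 1e-2
tikhonov = lambda s: s**2 / (s**2 + mu**2)           # smooth filter
exponential = lambda s: 1.0 - np.exp(-s**2 / mu**2)  # smooth filter
tsvd = lambda s: (s >= mu).astype(float)             # discontinuous filter

# Gaussian blur test problem
n = 100
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
A /= A[n // 2].sum()
x_true = np.exp(-0.5 * ((t - 50.0) / 10.0) ** 2)
rng = np.random.default_rng(5)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

errors = {name: np.linalg.norm(filtered_solution(A, b, f) - x_true)
          for name, f in [("tikhonov", tikhonov),
                          ("exponential", exponential),
                          ("tsvd", tsvd)]}
err_naive = np.linalg.norm(filtered_solution(A, b, np.ones_like) - x_true)
```

Comparing the filter functions directly (rather than only the restored images) is what reveals how the three approaches are related.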
We describe new iterative methods for the solution of large ill-conditioned linear systems of equations that arise from the discretization of ill-posed problems. In these methods a filter function, which determines the regularization of the problem, is chosen and expanded in terms of orthogonal polynomials. Each iteration yields one new term in this expansion. A variety of iterative methods, which differ in the choice of filter function and orthogonal polynomials, can be derived in this manner.
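A classical instance of this filter-expansion viewpoint (a sketch, not the authors' specific construction) is Landweber iteration: after k steps of x <- x + w * A^T (b - A x) with x0 = 0, the iterate equals the SVD solution filtered by the polynomial f_k(s) = 1 - (1 - w s^2)^k. The identity can be verified numerically on a small problem with arbitrary data:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((30, 20))
b = rng.standard_normal(30)

w = 1.0 / np.linalg.norm(A, 2) ** 2   # step size, w < 2 / s_max^2
k = 50

# k Landweber steps
x = np.zeros(20)
for _ in range(k):
    x = x + w * (A.T @ (b - A @ x))

# the same solution via the SVD filter f_k(s) = 1 - (1 - w s^2)^k
U, s, Vt = np.linalg.svd(A, full_matrices=False)
f = 1.0 - (1.0 - w * s**2) ** k
x_filter = Vt.T @ (f * (U.T @ b) / s)
```

Here the filter is a degree-k polynomial in s^2; choosing other polynomial families (e.g. orthogonal polynomials adapted to the spectrum) gives the variety of iterative methods described in the abstract.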