The distorted Born iterative method (DBI) is used to solve the inverse scattering problem in ultrasound tomography, with the objective of determining a scattering function, related to the acoustical properties of the region of interest (ROI), from the scattered waves measured by transducers outside the ROI. Since the method is iterative, we use the Born approximation as the first estimate of the scattering function. The main difficulty with the DBI is that the linear system of inverse scattering equations is ill-posed. To deal with this, we use two different algorithms and compare their relative errors and execution times. The first is Truncated Total Least Squares (TTLS). The second is the Regularized Total Least Squares method (RTLS-Newton), in which the regularization parameters are found by solving a nonlinear system with Newton's method. We simulated the data for the DBI method in a way that leads to an overdetermined system. The advantage of RTLS-Newton is that computing the singular value decomposition of a matrix is avoided, so it is faster than TTLS while solving a similar minimization problem. For the exact scattering function we used the Modified Shepp-Logan phantom. For finding the Born approximation, RTLS-Newton is 10 times faster than TTLS. In addition, after 10 iterations of the DBI method, the relative error in the L2-norm is smaller with RTLS-Newton than with TTLS, and it takes less time.
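The TTLS step mentioned above can be sketched with the standard SVD-based formulation (a minimal illustration under our own naming, not the paper's implementation):

```python
import numpy as np

def ttls(A, b, k):
    """Truncated total least squares for A @ x ~ b.

    Truncate the SVD of the augmented matrix [A b] at rank k and
    build the solution from the discarded right singular vectors.
    """
    m, n = A.shape
    _, _, Vt = np.linalg.svd(np.hstack([A, b[:, None]]))
    V = Vt.T
    V12 = V[:n, k:]   # top block of the trailing right singular vectors
    V22 = V[n:, k:]   # bottom block, shape (1, n + 1 - k)
    # Minimum-norm TTLS solution: x = -V12 V22^T / ||V22||^2
    return -(V12 @ V22.T).ravel() / np.sum(V22**2)
```

With k = n this reduces to ordinary TLS; choosing k below the numerical rank of [A b] gives the regularizing effect the abstract relies on.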
The distorted Born iterative (DBI) method is a powerful approach to solving the inverse scattering problem in ultrasound tomographic imaging. The method iteratively solves the inverse problem for the scattering function and the forward problem for the inhomogeneous Green's function and the total field. Because the inverse problem yields an ill-posed system, regularization methods are needed to obtain a smooth solution. The three methods compared are truncated total least squares (TTLS), conjugate gradient for least squares (CGLS), and Tikhonov regularization. This paper uses numerical simulations to compare these three approaches to regularization in terms of both quality of image reconstruction and speed. Noise from both transmitters and receivers is very common in real applications and is included in the simulations as well. The solutions are evaluated by the residual error of the scattering function over the region of interest (ROI), the convergence of the total-field solutions across iteration steps, and the accuracy of the estimated Green's functions. By comparing reconstruction quality and computational cost of the three methods at different ultrasound frequencies, we show that the TTLS method has the lowest error in solving strongly ill-posed problems. CGLS requires the shortest computation time; its error is higher than that of TTLS but lower than that of Tikhonov regularization.
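For concreteness, here is a minimal NumPy sketch of two of the compared regularizers: standard-form Tikhonov solved through an augmented least squares problem, and CGLS, where the iteration count acts as the regularization parameter. This is illustrative code of the standard methods, not the paper's implementation:

```python
import numpy as np

def tikhonov(A, b, lam):
    """min ||A x - b||^2 + lam^2 ||x||^2, via the augmented system."""
    n = A.shape[1]
    aug = np.vstack([A, lam * np.eye(n)])
    rhs = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(aug, rhs, rcond=None)[0]

def cgls(A, b, iters, tol=1e-12):
    """Conjugate gradient applied to the normal equations A^T A x = A^T b.

    Stopping early (small `iters`) is what regularizes the solution.
    """
    x = np.zeros(A.shape[1])
    r = b.astype(float).copy()
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:   # normal-equation residual converged
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

On a consistent, well-conditioned system both recover the exact solution; the differences the paper measures appear on ill-posed systems with noisy data.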
The problem of deleting a row from the QR factorization X = UR by Gram-Schmidt techniques is intimately connected to solving the least squares problem (formula available in paper) by classical iterative methods. Past approaches to this problem have focused on accurate computation of the residual (formula available in paper), thereby finding a vector (formula available in paper) that is orthogonal to U. This work shows that it is also important to compute the vector f accurately, and that f must be used in the downdating process to maintain good backward error in the new factorization. New algorithms based on this observation are proposed.
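The kind of downdating described above can be illustrated as follows: orthogonalize e_1 against the columns of U (with one reorthogonalization pass, since an accurately computed residual is precisely the point), then rotate the augmented factorization so the deleted row decouples. This is a generic Gram-Schmidt/Givens sketch, not the paper's new algorithms:

```python
import numpy as np

def downdate_qr(U, R):
    """Delete the first row of X = U @ R (thin QR, U with orthonormal
    columns). Returns (U2, R2) with X[1:] = U2 @ R2."""
    m, n = U.shape
    u = U[0, :].copy()                 # first row of U
    e1 = np.zeros(m); e1[0] = 1.0
    f = e1 - U @ u                     # residual of e_1 against range(U)
    f -= U @ (U.T @ f)                 # reorthogonalize for accuracy
    gamma = np.linalg.norm(f)
    Uq = np.hstack([U, (f / gamma)[:, None]])   # [U q], orthonormal cols
    Rz = np.vstack([R, np.zeros((1, n))])
    w = np.append(u, gamma)            # first row of [U q]; unit norm
    # Givens rotations send w to (0, ..., 0, 1), keeping Rz triangular.
    for i in range(n - 1, -1, -1):
        r = np.hypot(w[i], w[n])
        c, s = w[n] / r, w[i] / r
        Uq[:, [i, n]] = np.column_stack([c * Uq[:, i] - s * Uq[:, n],
                                         s * Uq[:, i] + c * Uq[:, n]])
        Rz[[i, n], :] = np.vstack([c * Rz[i, :] - s * Rz[n, :],
                                   s * Rz[i, :] + c * Rz[n, :]])
        w[i], w[n] = 0.0, r
    # Last column of Uq is now ~e_1; drop it and the deleted row.
    return Uq[1:, :n], Rz[:n, :]
```

Skipping the reorthogonalization pass degrades the orthogonality of the downdated factor when U is ill-conditioned with respect to e_1, which is the failure mode the abstract addresses.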
The ULV decomposition (ULVD) is an important member of a class of rank-revealing two-sided orthogonal decompositions used to approximate the singular value decomposition (SVD). The ULVD can be modified much faster than the SVD. Accurate computation of the subspaces is required in signal processing applications. In this paper we introduce a recursive ULVD algorithm that is faster than all available stable SVD algorithms. Moreover, we present an alternative refinement algorithm.
The ULV decomposition (ULVD) is an important member of a class of rank-revealing two-sided orthogonal decompositions used to approximate the singular value decomposition (SVD). The ULVD can be updated and downdated much faster than the SVD, hence its utility in the solution of recursive total least squares (TLS) problems. However, the robust implementation of the ULVD after the addition and deletion of rows (called updating and downdating, respectively) is not altogether straightforward. When updating or downdating the ULVD, the accurate computation of the subspaces necessary to solve the TLS problem is of great importance. In this paper, algorithms are given to compute simple parameters that can often show when good subspaces have been computed.
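A cheap way to see a ULV-type two-sided orthogonal decomposition in action is the QLP idea: one QR factorization followed by a QR of the transposed triangular factor gives A = Q L P^T with L lower triangular, whose diagonal tracks the singular values. This sketches only the decomposition class, not the recursive, updating, or downdating algorithms of the papers above (a pivoted first QR would reveal rank more reliably):

```python
import numpy as np

def ulv_qlp(A):
    """Two-sided orthogonal decomposition A = Q @ L @ P.T with L lower
    triangular; a simple (unpivoted) QLP-style ULV approximation."""
    Q, R = np.linalg.qr(A)        # A = Q R, R upper triangular
    P, S = np.linalg.qr(R.T)      # R.T = P S  =>  R = S.T @ P.T
    return Q, S.T, P              # L = S.T is lower triangular
```

Because only triangular factors are touched, such decompositions admit the fast updating and downdating that the SVD does not.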
Methods for updating and downdating singular value decompositions (SVDs) and partially reduced bidiagonal forms (partial SVDs) are introduced. The methods are based upon chasing procedures for updating the SVD and downdating procedures for orthogonal decompositions. The main feature of these methods is the ability to separate the singular values into 'large' and 'small' sets and then obtain an updated bidiagonal form with corresponding 'large' and 'small' columns. This makes for a more accurate update or downdate.
The need to construct architectures in VLSI has focused attention on unnormalized floating point arithmetic. Certain unnormalized arithmetics allow one to 'pipe on digits,' thus producing significant speedup in computation and making the input problems of special-purpose devices such as systolic arrays easier to solve. We consider the error analysis implications of using unnormalized arithmetic in numerical algorithms, and we give specifications for its implementation. Our discussion centers on the example of Gaussian elimination. We show that the use of unnormalized arithmetic requires changes in the analysis of this algorithm. Only for certain classes of matrices, including (row or column) diagonally dominant matrices, is Gaussian elimination as stable in unnormalized arithmetic as in normalized arithmetic. However, if the diagonal elements of the upper triangular matrix are post-normalized, then Gaussian elimination is as stable in unnormalized arithmetic as in normalized arithmetic for all matrices.
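The stability claim for diagonally dominant matrices concerns Gaussian elimination without pivoting. A plain sketch of that elimination, in ordinary normalized floating point (the unnormalized-arithmetic analysis is the paper's contribution and is not reproduced here):

```python
import numpy as np

def lu_nopivot(A):
    """LU factorization A = L @ U with no row interchanges; safe when
    A is (row or column) diagonally dominant, as in the abstract."""
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        L[k+1:, k] = U[k+1:, k] / U[k, k]            # multipliers
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])  # eliminate column k
    return L, np.triu(U)
```

For diagonally dominant A the multipliers and intermediate entries stay bounded, which is what makes the normalized-arithmetic stability result carry over to the unnormalized setting in the cases the paper identifies.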