After the implantation of metal prostheses, such as hip and knee replacements, there is a need to monitor the success of the implantation and, in some cases, to determine the cause of continuing pain. MRI would be well suited to visualizing the possible changes in soft tissue surrounding the implant; however, its success is severely degraded by artifacts due to the presence of the metal in the scanner's strong magnetic field. The Dixon method allows the variation in the magnetic field to be estimated, but only as the wrapped phase of a complex quantity. High field gradients near the metal mean that the phase estimate is highly undersampled and challenging to unwrap.
In 2017, we reported a new algorithm named POP (phase estimation by onion peeling) and conjectured that our initial implementation for 2-D imaging could be extended to 3-D imaging. This would require expanding the phase estimate using a suitable smoothly varying basis over irregular closed enveloping surfaces. The algorithm initially estimates the phase over a surface which surrounds the implant and is sufficiently distant from the implant that the phase is well sampled. Then, surface by surface, the smooth nature of the true phase is invoked to form an initial estimate of the true phase on the next surface. That estimate is corrected using the sampled wrapped phase data arising from the MR scan.
In this paper, I investigate the choice of basis functions, and methods for projecting them onto the irregular surfaces typically encountered when applying POP to imaging the soft tissue surrounding hip implants.
In MRI the presence of metal implants causes severe artifacts in images and interferes with the usual techniques used to separate fat signals from other tissues. In the Dixon method, three images are acquired at different echo times to enable the variation in the magnetic field to be estimated. However, the estimate is represented as the phase of a complex quantity and therefore suffers from wrapping. High field gradients near the metal mean that the phase estimate is undersampled and therefore challenging to unwrap.
We have developed POP (phase estimation by onion peeling), an algorithm which unwraps the phase along 1-D paths for a 2-D image obtained with the Dixon method. The unwrapping is initially performed along a closed path enclosing the implant and well separated from it. The recovered phase is expanded using a smooth periodic basis along the path. Then, path by path, the estimate is propagated to the next path and the expansion coefficients are re-estimated to best fit the wrapped measurements. We have successfully tested POP on MRI images of specially constructed phantoms and on a group of patients with hip implants.
In principle, POP can be extended to 3-D imaging. In that case, POP would entail representing phase with a suitably smooth basis over a series of surfaces enclosing the implant (the "onion skins"), again beginning the phase estimation well away from the implant. An approach for this is proposed.
Results are presented for fat and water separation for 2-D images of phantoms and actual patients. The practicality of the method and its employment in clinical MRI are discussed.
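The path-following step at the heart of POP can be contrasted with conventional unwrapping. The following is a minimal sketch on synthetic data of an Itoh-style 1-D unwrapper, not the POP algorithm itself (POP instead fits a smooth basis, which copes with the undersampling that defeats the assumption made here):

```python
import numpy as np

# Illustrative 1-D phase unwrapper: it assumes the true phase changes by
# less than pi between adjacent samples along the path -- the condition that
# fails near the implant, where POP's smooth basis expansion is needed.
def unwrap_path(wrapped):
    unwrapped = np.empty_like(wrapped)
    unwrapped[0] = wrapped[0]
    for k in range(1, len(wrapped)):
        step = wrapped[k] - unwrapped[k - 1]
        # Wrap each increment into (-pi, pi] before accumulating.
        step = (step + np.pi) % (2 * np.pi) - np.pi
        unwrapped[k] = unwrapped[k - 1] + step
    return unwrapped

# Smooth synthetic phase along a closed path, observed only modulo 2*pi.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
true_phase = 4 * np.sin(t) + 2 * np.cos(2 * t)
wrapped = np.angle(np.exp(1j * true_phase))
recovered = unwrap_path(wrapped)
```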
Serial femtosecond nanocrystallography (SFX) is a form of x-ray coherent diffraction imaging that utilises a stream of tiny nanocrystals of the biological assembly under study, in contrast to the larger crystals used in conventional x-ray crystallography at synchrotron sources. Nanocrystallography utilises the extremely brief and intense x-ray pulses obtained from an x-ray free-electron laser (XFEL). A key advantage is that some biological macromolecules, such as membrane proteins, do not easily form large crystals but spontaneously form nanocrystals. There is therefore an opportunity for structure determination for biological molecules that are inaccessible to conventional x-ray crystallography. Nanocrystallography introduces a number of interesting image reconstruction problems. Weak diffraction patterns are recorded from hundreds of thousands of nanocrystals in unknown orientations, and these data have to be assembled and merged into a 3D intensity dataset. The diffracted intensities can also be affected by the surface structure of the crystals, which can contain incomplete unit cells. Furthermore, the small crystal size means that there is potentially access to diffraction information between the crystalline Bragg peaks. With this information, phase retrieval is possible without resorting to the collection of additional experimental data, as is necessary in conventional protein crystallography. We report recent work on the diffraction characteristics of nanocrystals and the resulting reconstruction algorithms.
Magnetic resonance imaging (MRI) has the potential to be the best technique for assessing complications in patients with metal orthopedic implants. The presence of fat can obscure definition of the other soft tissues in MRI images, so fat suppression is often required. However, the performance of existing fat suppression techniques is inadequate near implants, due to very significant magnetic field perturbations induced by the metal. The three-point Dixon technique is potentially a method of choice as it is able to suppress fat in the presence of inhomogeneities, but the success of this technique depends on being able to accurately calculate the phase shift. This is generally done using phase unwrapping and/or iterative reconstruction algorithms. Most current phase unwrapping techniques assume that the phase function is slowly varying and phase differences between adjacent points are limited to less than π radians in magnitude. Much greater phase differences can be present near metal implants. We present our experience with two phase unwrapping techniques which have been adapted to use prior knowledge of the implant. The first method identifies phase discontinuities before recovering the phase along paths through the image. The second method employs a transform to find the least squares solution to the unwrapped phase. Simulation results indicate that the methods show promise.
X-ray scatter can cause significant distortion in CT imaging, especially with the move to cone-beam geometries. Incoherent scatter (Compton scatter) is known to reduce the energy of scattered photons according to the angle of the scattering. The emergence of energy-resolved x-ray detectors offers an opportunity to produce and apply more accurate scatter estimates, leading to improved image quality. We have developed a scatter estimation algorithm that accounts for the variation in scatter with incident radiation energy. Where existing methods generate estimates of scatter for the complete detected energy band, our new method produces separate estimates for each of the energy bands that are measured, allowing a more focused correction of scatter. Our method is intended to be used in an iterative compensation framework like that of Rührnschopf and Klingenbeck (2011); it calculates the scatter contribution to each energy bin used in a scan based on the current volume estimate. Comparisons with Monte Carlo simulations indicate that this algorithm is effective at estimating the scatter level in separate energy bins. We found that the amount of scatter that loses enough energy to hop between energy bands is small enough to neglect, but that scatter intensity is dependent on the incident energy, so application of a spectrally-aware compensation technique is valuable.
There is a strong motivation to reduce the amount of acquired data necessary to reconstruct clinically useful MR images,
since less data means faster acquisition sequences, less time for the patient to remain motionless in the scanner and better
time resolution for observing temporal changes within the body. We recently introduced an improvement in image quality
for reconstructing parallel MR images by incorporating a data ordering step with compressed sensing (CS) in an algorithm
named `PECS'. That method requires a prior estimate of the image to be available. We are extending the algorithm
to explore ways of utilizing the data ordering step without requiring a prior estimate. The method presented here first
reconstructs an initial image x1 by compressed sensing (with sparsity enhanced by SVD), then derives a data ordering
from x1, denoted R'1, which ranks the voxels of x1 according to their value. A second reconstruction is then performed which
incorporates minimization of the 1-norm of the estimate after ordering by R'1, resulting in a new reconstruction x2.
Preliminary results are encouraging.
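The effect of the data ordering step can be illustrated on hypothetical data (this is not the PECS implementation, only the property it exploits): ranking voxels by value yields a monotone sequence whose finite differences are far more compressible than those of the raw image, which is what the 1-norm penalty after ordering rewards.

```python
import numpy as np

# Toy illustration of the ordering step: sorting a signal by value makes
# its finite differences sparse/compressible. Data are random stand-ins
# for the voxels of the initial CS estimate x1.
rng = np.random.default_rng(0)
x1 = rng.normal(size=256)        # stand-in for the initial estimate x1
order = np.argsort(x1)           # the ranking R'1 derived from x1
reordered = x1[order]            # monotone non-decreasing sequence
raw_l1 = np.abs(np.diff(x1)).sum()         # 1-norm of raw differences
ord_l1 = np.abs(np.diff(reordered)).sum()  # much smaller after ordering
```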
Photon counting detectors are of growing importance in medical imaging because they enable routine measurement
of photon energy. Detectors such as Medipix2 and Medipix3 record the energy of incident photons with
minimal loss of spatial resolution. Their use is being investigated for both pre-clinical and clinical applications
of X-ray CT. The Medipix3 detector has 256 × 256 pixels of 55 μm pitch and a silicon or cadmium telluride detector
layer, giving a spatial resolution comparable to mammographic film. Each Medipix pixel can be seen as an individual
spectral detector. The logic circuits for each pixel (some 1300 transistors) can analyze incoming events at
megahertz rates, comparing the charge of the electron-hole cloud with preset levels, giving a resolution of about
2 keV across the range of 8 - 140 keV.
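The per-pixel comparator logic amounts to binning each event's pulse height against preset levels. A minimal sketch, with purely illustrative threshold values:

```python
# Sketch of energy-binned photon counting: each event's measured energy is
# compared with preset levels and the matching counter is incremented.
# Threshold values below are illustrative, not Medipix3 defaults.
thresholds = [8, 30, 60, 90, 140]      # keV bin edges (illustrative)
counts = [0] * (len(thresholds) - 1)   # one counter per energy bin

def record_event(energy_kev):
    for i in range(len(counts)):
        if thresholds[i] <= energy_kev < thresholds[i + 1]:
            counts[i] += 1
            return
    # events outside the 8-140 keV range are discarded

for e in [12.0, 45.0, 75.0, 100.0, 5.0]:
    record_event(e)
```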
A prototype CT scanner has been developed for laboratory animals and excised specimens. Applications under
investigation include K-edge imaging, using spectral information to measure heavy elements (e.g., preparations
of iodine, barium, and gadolinium), and soft-tissue contrast, where dual-energy systems have shown that image contrast
for soft tissue can be improved, e.g., by distinguishing between iron and calcium within vascular plaques.
A method to improve time resolution in 3D contrast-enhanced magnetic resonance angiography (CE-MRA) is
proposed. A temporal basis based on prior knowledge of the contrast flow dynamics is applied to a sequence of
reconstructions from undersampled k-space data. In CE-MRA a contrast agent (gadolinium) is injected into a peripheral vein and MR data is acquired as
the agent arrives in the arteries and then the veins of the region of clinical interest. The acquisition extends
over several minutes. Information is effectively measured in 3D k-space (spatial frequency space) one line at a time.
That line may be along a Cartesian grid line in k-space, a radial line or a spiral trajectory. A complete
acquisition comprises many such lines but in order to improve temporal resolution, reconstructions are made from
only partial sets of k-space data. By imposing a basis for the temporal changes, based on prior expectation of the
smoothness of the changes in contrast concentration with time, it is demonstrated that a significant reduction
in artifacts caused by the under-sampling of k-space can be achieved. The basis is formed from a set of gamma
variate functions. Results are presented for a simulated set of 2D spiral-sampled CE-MRA data.
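The fitting of a contrast curve to a gamma-variate basis can be sketched as follows (the shape parameters, arrival delays, and synthetic curve are illustrative, not those used in the paper):

```python
import numpy as np

# Sketch of a gamma-variate temporal basis and a least-squares fit of a
# contrast-concentration curve to it. All parameter values are illustrative.
def gamma_variate(t, t0, alpha, beta):
    """Normalized gamma-variate function with onset time t0."""
    g = np.where(t > t0, (t - t0) ** alpha * np.exp(-(t - t0) / beta), 0.0)
    return g / g.max()

t = np.linspace(0, 60, 121)                        # time axis in seconds
basis = np.stack([gamma_variate(t, t0, 3.0, 4.0)   # columns = basis functions
                  for t0 in (0.0, 5.0, 10.0, 15.0)], axis=1)
# Synthetic "true" curve lying in the span of the basis:
true_coeffs = np.array([0.2, 1.0, 0.5, 0.1])
curve = basis @ true_coeffs
# Least-squares fit of the coefficients from the (noise-free) curve:
fit_coeffs, *_ = np.linalg.lstsq(basis, curve, rcond=None)
```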
A hybrid method is presented which allows the acceleration of parallel MR imaging through combining the ideas
of compressed sensing with inversion of the imaging matrix. A novel data reordering is employed to enhance the
sparsity inherent in the image transform. Simulation results with actual head scan data are presented.
Bulk motion occurring during the acquisition of data in magnetic resonance imaging (MRI) causes serious artifacts
in the reconstructed images. The paper presents an extension to TRELLIS, a recently developed method of
detecting and correcting for bulk motion. While TRELLIS detects and corrects for bulk translation and rotation,
only rotation is considered here. Accurate determination of the relative orientations of overlapping strips of k-space
is demonstrated using a robust statistical approach to aid least squares estimation. Reconstructions for
both simulated and actual MRI acquisitions are presented.
Proc. SPIE. 6913, Medical Imaging 2008: Physics of Medical Imaging
A new way of performing contrast enhanced magnetic resonance angiography (CE-MRA) is presented, in which the entire k-space is decomposed into interlaced subsets that are acquired sequentially. Based on a new parallel imaging technique, Generalized Unaliasing Incorporating object Support constraint and sensitivity Encoding (GUISE), reconstructions can be made using different subsets of k-space to reveal the level of contrast agent in the corresponding data acquisition time period. A proof-of-concept study using a custom made phantom was carried out to examine the utility of the new method. A quantity of contrast agent (copper sulfate solution) was injected into water flowing within a tube while data was acquired using an 8-coil receiver and the modified MRI sequence. A sequence of images was successfully reconstructed at high temporal resolution. This eliminated the need to precisely synchronize data acquisition with contrast arrival. Furthermore, subtraction of a pre-contrast data set prior to reconstruction, which eliminates the need for recovering the static background signal, has proven to be an effective way to improve the SNR and allow a higher temporal resolution to be achieved in recovering the dynamic signal containing contrast level change. Acceptably good reconstruction results were obtained at a temporal resolution equivalent to a 16-fold speed up compared to the time taken to fully sample k-space.
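The interlaced decomposition of k-space can be sketched for a Cartesian line ordering (the line count and number of subsets below are illustrative): each subset covers k-space coarsely but completely, so a reconstruction from any one subset reflects the contrast level in that subset's acquisition window.

```python
import numpy as np

# Sketch of decomposing Cartesian k-space lines into interlaced subsets
# acquired sequentially. Counts are illustrative.
n_lines, n_subsets = 64, 4
lines = np.arange(n_lines)
# Subset s takes every n_subsets-th line, starting at offset s:
subsets = [lines[s::n_subsets] for s in range(n_subsets)]
```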
Patient motion during magnetic resonance imaging (MRI) can produce significant artifacts in a reconstructed
image. Since measurements are made in the spatial frequency domain ('k-space'), rigid-body translational
motion results in phase errors in the data samples while rotation causes location errors. A method is presented
to detect and correct these errors via a modified sampling strategy, thereby achieving more accurate image
reconstruction. The strategy involves sampling vertical and horizontal strips alternately in k-space and employs
phase correlation within the overlapping segments to estimate translational motion. An extension, also based
on correlation, is employed to estimate rotational motion. Results from simulations with computer-generated
phantoms suggest that the algorithm is robust up to realistic noise levels. The work is being extended to physical
phantoms. Provided that a reference image is available and the object is of limited extent, it is shown that a
measure related to the amount of energy outside the support can be used to objectively compare the severity of motion artifacts in reconstructed images.
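The phase-correlation principle used for the translation estimate can be sketched on a whole synthetic image (the method itself works on the overlapping segments of k-space strips): a rigid translation multiplies k-space data by a linear phase ramp, so the inverse FFT of the normalized cross-power spectrum peaks at the shift.

```python
import numpy as np

# Sketch of phase correlation for translation estimation; the circularly
# shifted random image is a stand-in for data from two k-space strips.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, (5, 12), axis=(0, 1))   # known translation to recover
F1 = np.fft.fft2(img)
F2 = np.fft.fft2(shifted)
cross = F2 * np.conj(F1)                       # cross-power spectrum
corr = np.fft.ifft2(cross / np.abs(cross))     # phase-correlation surface
peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
```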
Frontal chest radiographs ("chest X-rays") are routinely used by medical personnel to assess patients for a wide
range of suspected disorders. Often large numbers of images need to be analyzed. Furthermore, at times the
images need to be analyzed ("reported") when no radiological expert is available. A system which enhances the
images in such a way that abnormalities are more obvious is likely to reduce the chance that an abnormality
goes unnoticed. The authors previously reported the use of principal components analysis to derive a basis set
of eigenimages from a training set made up of images from normal subjects. The work is here extended to
investigate how best to emphasize the abnormalities in chest radiographs. Results are also reported for various
forms of image normalizing transformations used in performing the eigenimage processing.
A method first employed for face recognition has been employed to analyse a set of chest x-ray images. After marking certain common features on the images, they are registered by means of an affine transformation. The differences between each registered image and the mean of all images in the set are computed and the first K principal components are found, where K is less than or equal to the number of images in the set. These form eigenimages (we have coined the term 'eigenchests') from which an approximation to any one of the original images can be reconstructed. Since the method effectively treats each pixel as a dimension in a hyperspace, the matrices concerned are huge; we employ the method developed by Turk and Pentland for face recognition to make the computations tractable. The K coefficients for the eigenimages encode the variation between images
and form the basis for discriminating normal from abnormal. Preliminary results have been obtained for a set of eigenimages formed from a set of normal chests and tested on separate sets of normals and patients with pneumonia. The distributions of coefficients have been observed to be different for the two test sets and work is continuing to determine the most sensitive method for detecting the differences.
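The computational trick borrowed from Turk and Pentland can be sketched with random stand-in data: with K images of P pixels (P much larger than K), one computes eigenvectors of the small K × K matrix ATA and maps them back to pixel space, rather than diagonalizing the huge P × P covariance AAT.

```python
import numpy as np

# Sketch of the Turk-Pentland eigenimage computation. The data are random
# stand-ins for registered, mean-subtracted chest images.
rng = np.random.default_rng(2)
K, P = 10, 4096
A = rng.normal(size=(P, K))            # columns: the K images (P pixels each)
A -= A.mean(axis=1, keepdims=True)     # subtract the mean image
small = A.T @ A                        # K x K instead of P x P
vals, vecs = np.linalg.eigh(small)     # eigendecomposition of the small matrix
eigenimages = A @ vecs                 # map back: eigenvectors of A A^T
eigenimages /= np.linalg.norm(eigenimages, axis=0)
# Leading eigenimage and its eigenvalue:
v, lam = eigenimages[:, -1], vals[-1]
```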
An automated image analysis system for determination of myosin filament orientations in electron micrographs of muscle cross-sections is described. Analysis of the distribution of the orientations is important in studies of muscle structure, particularly for interpretation of x-ray diffraction data. Filament positions are determined using h-dome extraction and image filtering, based on grayscale reconstruction. Erroneous locations are eliminated based on lattice regularity. Filament orientations are determined by correlation with a template that incorporates the salient filament characteristics and classified using a Gaussian mixture model. Application to a number of micrographs and comparison with manual classifications of orientations shows that the system is effective in many cases.
Microcalcification clusters appear as an early sign of breast cancer and play an important role in interpreting mammograms. Progress is reported towards an automated computer aided detection system for clustered microcalcifications utilizing two image feature parameters: local contrast and shape. The use of a shape parameter is necessary to distinguish thin patches of connective tissue from microcalcifications. Two shape parameter techniques are compared in the segmentation of 15 digital mammogram images. The first technique implements the linear Hough transform, while the second uses image phase information in the Fourier domain. In both cases labeling of the image is performed by a deterministic relaxation scheme, in which both image data and prior beliefs are weighted simultaneously. Similar segmentation results are obtained for each shape parameter technique; however, the execution time for the phase method is approximately one quarter that for the Hough method. Both techniques offer an improvement over segmentation results obtained without the shape parameter.
A method for estimating the point spread function of a remotely sensed image is described. The technique developed is based on locating and measuring blurred linear features in the imagery and tomographically reconstructing the point spread function. Image features able to be modeled by a single step function or by a combination of two steps are located. For the latter cases a form of blind deconvolution is applied to extract the estimate of the line spread function (equivalent to the projection of the point spread function in the direction of the feature). The process is unsupervised and requires only that the image contains suitable linear features. Examples are given of the estimation of blur in satellite images.
The problem of interpolating an image from a downsampled version is investigated. In particular, prior knowledge of the statistics of the data and measurement noise, as well as the method of sampling, are shown to lead to an optimal interpolator. The availability of SPOT satellite image data sampled at two resolutions, one twice that of the other, provides a basis for the study. Firstly a direct inverse filter is derived from the satellite data. Secondly, interpolators based on models for the auto-covariance of the higher resolution data are derived and an equivalence for these and the direct type is shown. Thirdly a comparison of the spectra of the interpolators reveals that both the inverse and statistical interpolators give significant boost to frequencies below the nominal bandlimit and that their response is significant at frequencies above but adjacent to the nominal bandlimit. Finally, numerical studies indicate that when the prior knowledge is accurate there is less residual mean square error associated with the direct and statistical interpolators, compared to a sinc-based interpolator.
Many methods for deconvolving images assume either that the entire convolution is available, or that the convolution is adequately modelled as a circular convolution. In reality, neither is usually the case, and only a section of a much larger blurred (and contaminated) image is observed. The truncation gives rise to null objects in reconstructions obtained by deconvolution methods. It is possible to formulate the problem as exact, though underdetermined, and to apply singular value decomposition to deriving an inverse operator. We compare different practical methods for performing deconvolution with a scanning finite impulse response filter derived in this manner.
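The underdetermined formulation can be sketched in 1-D (sizes and psf are illustrative): the truncated observation depends on more object samples than are observed, and an SVD-based pseudo-inverse yields the minimum-norm solution consistent with the data.

```python
import numpy as np

# Sketch of truncated deconvolution as an exact but underdetermined linear
# system, solved with the SVD-based pseudo-inverse. Sizes are illustrative.
rng = np.random.default_rng(3)
psf = np.array([0.25, 0.5, 0.25])
n_obs = 30                            # observed (truncated) samples
n_obj = n_obs + len(psf) - 1          # unknowns: object extends past window
H = np.zeros((n_obs, n_obj))          # truncated-convolution matrix
for i in range(n_obs):
    H[i, i:i + len(psf)] = psf[::-1]
obj = rng.random(n_obj)               # true object (wider than observation)
data = H @ obj                        # observed truncated convolution
recon = np.linalg.pinv(H) @ data      # pinv is computed via the SVD
```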
Three new algorithms for deconvolving image blur are presented. All three are based on the computation of the zeros of an image's z-transform and the separation of the zeros into sets belonging to the image and to the point spread function (psf). The zeros lie on a sheet existing in a four-dimensional space. The first two algorithms are applicable when the psf is known a priori. In Algorithm I portions of the 'zero sheet' are matched using a Euclidean measure, then zeros are selected from the remainder and an image is algebraically reconstructed. In Algorithm II the point zeros of 1-dimensional cuts through Fourier space are matched before reconstructing an image estimate via inverse Fourier transformation. Finally, Algorithm III is applicable when an ensemble of differently blurred images is recorded from the same object (e.g. astronomical speckle images); even though the psf is unknown for each member of the ensemble (i.e. deconvolution is to be 'blind'), parts of the zero sheet corresponding to the actual (unblurred) image can be matched over the ensemble and a reconstruction made by inverse Fourier transformation. Encouraging results have been obtained for Algorithms I and III for small positive images contaminated by small amounts of noise; Algorithm II has been successfully applied to larger images. Algorithms I and III have an inherent advantage over conventional Wiener filtering in that the psf does not need to be known precisely to achieve acceptable results.
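The principle behind the zero-based separation is easiest to see in 1-D, where the z-transform of a convolution is a product of polynomials, so the zeros of the blurred image are the union of the object's and the psf's zeros (in 2-D these point zeros become the "zero sheet"; the values below are illustrative):

```python
import numpy as np

# 1-D illustration: convolution of coefficient sequences is polynomial
# multiplication, so zeros of the blurred signal = object zeros + psf zeros.
obj = np.array([1.0, -3.0, 2.0])    # polynomial z^2 - 3z + 2, zeros {1, 2}
psf = np.array([1.0, -7.0, 12.0])   # polynomial z^2 - 7z + 12, zeros {3, 4}
blurred = np.convolve(obj, psf)     # coefficients of the product polynomial
zeros = np.sort(np.roots(blurred).real)
```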
Document scanning is now an accepted part of office procedure, allowing the incorporation of digitized images into new documents and the conversion of scanned print into ASCII by optical character recognition (OCR). Often document pages contain more than one form of information - textual, graphical and/or pictorial. Segmentation of document images into these three categories is feasible with the aid of image processing. Projections of the thresholded document images in conjunction with autocorrelation are used to check text alignment. Then the edge shifting properties of the rank filter are used to coalesce image regions containing text into solid near-rectangular blocks. Pyramidal reduction is combined with the filtering to ease the computational burden. Horizontal and vertical projections are used to segment whole pages recursively into homogeneous blocks
whose properties are then analysed. Applications foreseen for the image segmentation include modified facsimile systems, achievement of
artifact-free OCR and conversion of document images into files with separate formats for text, graphics and pictures.
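The projection step can be sketched as follows (a toy binary page, not the reported system): gaps in the row projection of the thresholded image delimit candidate text blocks, and the same procedure applied to column projections, recursively, segments the page.

```python
import numpy as np

# Sketch of projection-profile segmentation on a toy thresholded page:
# zero runs in the row sums separate horizontal text blocks.
page = np.zeros((20, 30), dtype=int)
page[2:5, :] = 1                       # a band of "text"
page[8:12, :] = 1                      # a second band
profile = page.sum(axis=1)             # horizontal projection
in_block = (profile > 0).astype(int)
# Locate [start, stop) row ranges from transitions in the profile.
padded = np.concatenate(([0], in_block, [0]))
edges = np.flatnonzero(np.diff(padded))
blocks = [(int(a), int(b)) for a, b in zip(edges[::2], edges[1::2])]
```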
SC201: Practical Digital Image Reconstruction Algorithms
Many technical disciplines (e.g. remote sensing, medical imaging, etc.) require digital processing of data to reconstruct an image. This course presents an analysis of methods and algorithms used for reconstructing images from distorted and/or incomplete data, and the development for specific applications. Topics covered include image formation and degradation, Fourier methods and computations, filtering, projection- and probabilistic-based algorithms, deconvolution (deblurring), and phase retrieval.