The rotation axis position is an important parameter of classical reconstruction algorithms in X-ray computed tomography (CT). The use of incorrect values of the axis position parameters during reconstruction leads to the appearance of various artifacts distorting the reconstructed image. Therefore, to obtain a reconstruction of better quality, automatic rotation axis position determination and misalignment correction methods are of use. Most of the existing high-precision automatic rotation axis position determination methods are either fast but suitable only for a parallel-beam geometric scheme, or indifferent to the geometric scheme but computationally laborious. In this paper, we propose a method for auto-detection of two scalar parameters of the rotation axis position, namely the axis shift and the tilt in the plane parallel to the detector window, using a pixel-wise arithmetically averaged projection image. The described method is highly accurate within both parallel-beam and cone-beam geometric schemes and is robust to noise in the projection data. The method demonstrated an increase in reconstruction quality compared with several well-known methods still used in practice, both on synthetic data and on real data obtained under laboratory conditions.
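As a rough illustration of how the averaged-projection idea can be exploited, the sketch below (Python; the helper name and the mirror-correlation shortcut are our assumptions, not the authors' exact procedure) estimates a per-row axis offset from the mirror symmetry of the pixel-wise averaged projection of a 360-degree scan, then fits a line through the offsets to obtain the shift and tilt.

    import numpy as np

    def axis_shift_and_tilt(projections):
        # projections: (n_angles, n_rows, n_cols), acquired over 360 degrees.
        # Returns (shift_px, tilt_rad); names and procedure are illustrative.
        mean_proj = projections.mean(axis=0)            # pixel-wise averaged projection
        n_rows, n_cols = mean_proj.shape
        row_offsets = []
        for row in mean_proj:
            row = row - row.mean()
            # The averaged projection is (nearly) mirror-symmetric about the axis,
            # so correlating each row with its mirrored copy peaks at twice the
            # offset of the symmetry center from the detector column center.
            corr = np.correlate(row, row[::-1], mode="full")
            lag = np.argmax(corr) - (n_cols - 1)
            row_offsets.append(lag / 2.0)
        rows = np.arange(n_rows)
        slope, offset_at_row0 = np.polyfit(rows, row_offsets, 1)
        shift = offset_at_row0 + slope * (n_rows - 1) / 2.0   # shift at detector mid-height
        tilt = np.arctan(slope)                               # tilt in the detector plane
        return shift, tilt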
The human olfactory bulb (OB), an important part of the brain responsible for the sense of smell, is a complex structure composed of multiple layers and cell types. Studying the OB morphological structure is essential for understanding the decline in olfactory function related to aging, neurodegenerative disorders, and other pathologies. Traditional microscopy methods, in which slices are stained with solutions to contrast individual elements of the morphological structure, are destructive. X-ray phase-contrast tomography is a non-destructive high-resolution alternative. However, manual segmentation of the reconstructed images is time-consuming due to the large amount of data and is prone to errors. A U-Net-based model to optimize the segmentation of OB morphological structures, focusing specifically on glomeruli, in tomographic images of the human OB is proposed. A strategy to address overfitting and enhance the model's accuracy is described. This method addresses the challenges posed by complex limited data containing abundant details, similar grayscale levels between soft tissues, and blurry image details. Additionally, it successfully overcomes the limitations of a small dataset containing images with extremely dense point clouds, preventing the model from overfitting.
Virtual unrolling or unfolding, digital unwrapping, flattening or unfurling: all these terms are used to describe the process of surface straightening of a tomographically reconstructed digital object. For many objects of historical heritage, tomography is the only way to obtain a hidden image of the original object without destroying it. Digital flattening is no longer considered a unique methodology; it is being applied by many research groups, yet AI-based methods are used only marginally in such projects, despite the remarkable success of AI in computer vision and, in particular, optical text recognition. This can be explained by the fact that the success of AI depends on large, broad, and high-quality datasets, while very few published CT-based datasets are relevant to the task of digital flattening. Accumulation of a sufficient amount of data for training models is a key prerequisite for the next technological breakthrough. In this paper, we present the open and cumulative dataset CT-OCR-2022. The dataset includes 6 data packages for different model objects that help to enrich tomographic solutions and to train machine learning models. Each package contains optically scanned images of the model objects, 400 measured X-ray projections, 2687 CT-reconstructed cross-sections of the 3D reconstructed image, and segmentation markups. We believe that the CT-OCR-2022 dataset will serve as a benchmark for digital flattening and recognition systems applied to reconstructed objects, and that it will prove invaluable for the advancement of CT reconstruction, symbol analysis, and recognition. The data presented are openly available in Zenodo at doi:10.5281/zenodo.7123495 and linked repositories.
In computed tomography (CT), applying common reconstruction algorithms to projection data acquired with polychromatic probing radiation leads to the appearance of a cup-like distortion. CT image quality can be improved by adjusting the CT scanner or the reconstruction algorithm, but this requires an assessment of the cupping artifacts. Existing assessment methods either rely on expert opinion or require a binary mask of the object, which may be unavailable. In this paper, we propose a method for blind assessment of cupping artifacts that does not require any prior information. The main idea of the proposed method is to evaluate the degree of change in intensity near automatically found edges of optically dense objects. We prove the applicability of the method on a collected dataset with cupping artifacts. The results show a monotonic dependency between the severity of cupping artifacts and the value calculated with the proposed method.
In this paper, we suggest a lightweight filtering neural network that implements the filtering stage of the Filtered Back-Projection (FBP) algorithm and achieves good reconstruction results not only on ideal data but also on noisy data, which the usual FBP algorithm cannot handle. Thus, our neural network is not only a variation of the ramp filter usually used in the FBP algorithm, but also a denoising filter. The network architecture was inspired by the observation that the ramp filtering operation can be approximated with sufficient accuracy. The efficiency of our network was shown on synthetic data imitating tomographic projections collected at low exposure. In the generation of the synthetic data, we took into account the quantum nature of X-ray radiation, the exposure time of one frame, and the non-linear detector response. The FBP reconstruction time with our neural network was 13 times shorter than the reconstruction time of the neural network from Learned Primal-Dual Reconstruction, and our reconstruction quality reached 0.906 by the SSIM metric, which is enough to identify the most significant objects.
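A minimal sketch of what such a filtering network might look like is given below (PyTorch; the class name, layer sizes, and the suggested training target are our assumptions, not the architecture from the paper). Each projection row is passed through a small stack of 1D convolutions, and the filtered sinogram is then backprojected without further filtering.

    import torch
    import torch.nn as nn

    class LearnedFilter(nn.Module):
        # Illustrative stand-in for the filtering stage of FBP: a few 1D
        # convolutions applied along the detector axis of every projection.
        def __init__(self, kernel_size=9, channels=16):
            super().__init__()
            pad = kernel_size // 2
            self.net = nn.Sequential(
                nn.Conv1d(1, channels, kernel_size, padding=pad),
                nn.ReLU(),
                nn.Conv1d(channels, channels, kernel_size, padding=pad),
                nn.ReLU(),
                nn.Conv1d(channels, 1, kernel_size, padding=pad),
            )

        def forward(self, sinogram):
            # sinogram: (n_angles, n_detectors); every angle is filtered independently
            x = sinogram.unsqueeze(1)           # -> (n_angles, 1, n_detectors)
            return self.net(x).squeeze(1)       # -> (n_angles, n_detectors)

    # One possible training target is the ramp-filtered sinogram of low-noise data;
    # at inference the output is backprojected without a filter, e.g. with
    # skimage.transform.iradon(filtered.T.detach().numpy(), theta=angles, filter_name=None).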
The algorithm for 3D vector image reconstruction from a set of spectral tomographic projections collected with CT set-up completed with an optical element or elements inside the optical path behind the sample is proposed. The purpose of their placement into the optical path is to divide the integral polychromatic projection into a series of monochromatic projections, i.e., to get a multi-channel image. Understanding of the reconstruction results in the monochromatic case is beyond question, the relationship between the reconstructed spatial distribution of the linear attenuation coefficient and the discrete description of the elemental structure of the probed object is linear. In difference with monochromatic case the result of the reconstruction from polychromatic projections is a spatial distribution of the so-called effective or average attenuation coefficient, its connection to a discrete description of the elemental structure is nontrivial. However, if the distribution of the averaged coefficient is supplemented by distributions of linear coefficients for several energies, then it is possible to estimate of the local composition of the object. We present a model for the formation of spectral multi-channel projection based on crystal analyzer usage and describe the steps needed to solve the tomography inverse problem.
Despite significant progress in computer vision, pattern recognition, and image analysis, imaging artifacts still hamper progress in many scientific fields relying on the results of image analysis. We present an advanced image-based artifact suppression algorithm for high-resolution tomography. The algorithm is based on guided filtering of a reconstructed image mapped from Cartesian to polar coordinates. This postprocessing method efficiently reduces both ring and radial streak artifacts in a reconstructed image. Radial streak artifacts can appear in tomography with an off-center rotation of a large object over 360 degrees, used to increase the reconstruction field of view. We successfully applied the developed algorithm to improve X-ray phase-contrast images of the human post-mortem pineal gland and olfactory bulbs.
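One plausible way to implement this polar-space filtering, not necessarily the authors' exact pipeline, is sketched below in Python with OpenCV (assuming the opencv-contrib ximgproc module is available): the slice is remapped to polar coordinates, where rings become straight stripes, a guided filter helps separate the stripe component, and the corrected image is mapped back.

    import cv2
    import numpy as np

    def suppress_rings(slice_img, radius=15, eps=1e-3):
        # Illustrative ring suppression via guided filtering in polar coordinates.
        img = slice_img.astype(np.float32)
        h, w = img.shape
        center = (w / 2.0, h / 2.0)
        max_radius = np.hypot(h, w) / 2.0

        # In polar coordinates ring artifacts become stripes of constant radius.
        polar = cv2.warpPolar(img, (w, h), center, max_radius,
                              cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)

        # Edge-preserving smoothing with the polar image itself as the guide.
        smoothed = cv2.ximgproc.guidedFilter(polar, polar, radius, eps)

        # Estimate the stripe (ring) component by averaging the residual over angles,
        # then subtract it from the polar image.
        stripes = np.mean(polar - smoothed, axis=0, keepdims=True)
        corrected = polar - stripes

        return cv2.warpPolar(corrected, (w, h), center, max_radius,
                             cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR + cv2.WARP_INVERSE_MAP)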
The method of Computed Tomography (CT) has progressed throughout the past decade with advances in CT hardware and software that have resulted in an increasing number of CT applications. Today, innovative CT X-ray detectors have high spatial resolution, down to a tenth or a hundredth of a micron. However, their field of view is significantly limited. An object scanned at high resolution does not always fit completely within the field of view of the detector, so the collected projection data may be incomplete. The use of incomplete data in classical reconstruction methods leads to a loss of image quality. This paper provides a new advanced reconstruction method that demonstrates image quality improvements compared with classical methods when incomplete data are collected. The method uses the hypothesis about the consistency of the object description in sinogram space and reconstruction space. The input data for the proposed algorithm are incomplete data, and the output data are the reconstructed image and confidence values for all pixels of the image (reconstruction reliability). A detailed description of the algorithm is presented. Its quality characteristics are evaluated in Shepp-Logan phantom studies.
Applying common reconstruction algorithms such as Filtered Back Projection and the Algebraic Reconstruction Technique to projection data acquired with polychromatic probing radiation leads to the appearance of a cup-like distortion of the value profile in reconstructed images. While many methods for suppressing polychromatic probing artifacts have been suggested, a numerical estimation algorithm for the "cupping effect" is typically not considered important. Described methods imply manual selection of the regions whose intensities are compared, or simply rely on experts' opinion on the presence of the effect. In this paper, we suggest an automatic method for estimating the "cupping effect" based on the distance transform built from the object mask. As a result, we obtain a numeric estimate of the intensity change from the border to the center of the object. As the final image index, a weighted sum of the ratings of all objects is used. While a positive value shows the magnitude of the cupping effect, a negative value, on the contrary, shows the magnitude of the reverse cupping effect. In the paper, we demonstrate the method on simulated data and compare it with several other techniques for evaluating distortion due to polychromatic probing. Finally, we show the method's effectiveness on real data acquired with laboratory tomography.
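A compact sketch of such a distance-transform-based estimate is given below (Python with SciPy; the function name, the size weighting, and the linear fit are our illustrative choices rather than the exact formulation of the paper).

    import numpy as np
    from scipy import ndimage

    def cupping_index(image, mask):
        # Positive result: intensity falls from border to center (cupping);
        # negative result: reverse cupping. Objects are weighted by their size.
        dist = ndimage.distance_transform_edt(mask)
        labels, n_objects = ndimage.label(mask)
        index, total_weight = 0.0, 0.0
        for obj in range(1, n_objects + 1):
            sel = labels == obj
            d = dist[sel]
            if d.max() <= 0:
                continue
            depth = d / d.max()                    # 0 at the border, 1 at the center
            vals = image[sel].astype(np.float64)
            slope = np.polyfit(depth, vals, 1)[0]  # intensity trend border -> center
            weight = sel.sum()
            index += -slope * weight
            total_weight += weight
        return index / max(total_weight, 1.0)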
Computer vision for biomedical imaging applications is a fast-developing and demanding field of computer science. In particular, computer vision techniques provide excellent results for detection and segmentation problems in tomographic imaging. X-ray Phase-Contrast Tomography (XPCT) is a noninvasive 3D imaging technique with high sensitivity for soft tissues. Despite considerable progress in XPCT data acquisition and data processing methods, the degradation of image quality due to artifacts remains a widespread and often critical issue for computer vision applications. One of the main problems originates from sample alteration during a long tomographic scan. We proposed and tested a Simultaneous Iterative Reconstruction algorithm with Total Variation regularization to reduce the number of projections in high-resolution XPCT scans of ex-vivo mouse spinal cord. We have shown that the proposed algorithm allows a tenfold reduction in the number of projections and, therefore, in the exposure time, while preserving the important morphological information in the 3D image with quality acceptable for computer graphics and computer vision applications. Our research paves the way for a more effective implementation of advanced computer technologies in phase-contrast tomographic research.
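A strongly simplified 2D sketch of such an iteration is shown below (Python with scikit-image; the relaxation factor, the use of radon/iradon as forward and backprojection surrogates, and the TV weight are illustrative assumptions, not the implementation used in the study).

    import numpy as np
    from skimage.transform import radon, iradon
    from skimage.restoration import denoise_tv_chambolle

    def sirt_tv(sinogram, theta, n_iters=50, relax=0.1, tv_weight=0.05):
        # sinogram: (n_detectors, n_angles); theta: projection angles in degrees.
        size = sinogram.shape[0]
        recon = np.zeros((size, size), dtype=np.float64)
        for _ in range(n_iters):
            residual = sinogram - radon(recon, theta=theta)
            # Unfiltered backprojection of the residual stands in for A^T r;
            # the small relaxation factor compensates for the missing scaling.
            correction = iradon(residual, theta=theta, filter_name=None,
                                output_size=size)
            recon = np.clip(recon + relax * correction, 0, None)   # non-negativity
            recon = denoise_tv_chambolle(recon, weight=tv_weight)  # TV regularization step
        return recon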
Porous materials are widely used in different applications; in particular, they are used to create various filters. Their quality depends on parameters that characterize the internal structure, such as porosity, permeability, and so on. Computed tomography (CT) allows one to see the internal structure of a porous object without destroying it. The result of tomography is a grayscale image. To evaluate the desired parameters, the image should be segmented. Traditional intensity threshold approaches do not reliably produce correct results due to limitations in CT image quality. Errors in the evaluation of characteristics of porous materials based on segmented images can lead to incorrect estimation of their quality and, consequently, to the impossibility of exploitation, financial losses, and even accidents. It is difficult to perform segmentation correctly due to the strong difference in voxel intensities of the reconstructed object and the presence of noise. Image filtering is used as a preprocessing procedure to improve the quality of segmentation. Nevertheless, there is the problem of choosing an optimal filter. In this work, a method for selecting an optimal filter based on an attributive indicator of porous objects (pores should be free of "levitating stones") is proposed. In this paper, we use real data in which beam hardening artifacts have been removed, which allows us to focus on the noise reduction process.
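The "levitating stones" criterion can be turned into a simple score, as in the sketch below (Python with SciPy; the helper names, the choice of treating every solid component except the largest as a floating fragment, and the dictionary-of-filters interface are our assumptions).

    import numpy as np
    from scipy import ndimage

    def count_levitating_stones(solid_mask):
        # Solid connected components that are not part of the largest (main) matrix
        # are counted as "levitating stones".
        labels, n = ndimage.label(solid_mask)
        if n <= 1:
            return 0, 0
        sizes = ndimage.sum(solid_mask, labels, index=np.arange(1, n + 1))
        return n - 1, int(sizes.sum() - sizes.max())

    def select_filter(gray_volume, filters, threshold_fn):
        # filters: dict name -> denoising callable;
        # threshold_fn: filtered volume -> binary solid mask.
        scores = {}
        for name, f in filters.items():
            n_stones, stone_voxels = count_levitating_stones(threshold_fn(f(gray_volume)))
            scores[name] = (n_stones, stone_voxels)
        best = min(scores, key=lambda k: scores[k])
        return best, scores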
In this work, we propose a method for tomographic reconstruction in the case of a limited field of view, when the whole image of the investigated sample does not fit on the detector. The proposed technique is based on an iterative procedure with corrections at each step in both sinogram space and reconstruction space. On synthetic and experimental data it is shown that the proposed technique improves tomographic reconstruction quality and extends the field of view.
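A toy version of such an iteration is sketched below (Python with scikit-image; the padding mode, the number of iterations, and the use of radon/iradon are our illustrative choices). Measured detector rows are kept fixed, while the missing part of the sinogram is repeatedly refilled by reprojecting the corrected reconstruction.

    import numpy as np
    from skimage.transform import radon, iradon

    def extend_fov(truncated_sino, theta, full_width, n_iters=20):
        # truncated_sino: (n_det_measured, n_angles); full_width: padded detector size.
        n_meas, n_angles = truncated_sino.shape
        pad_left = (full_width - n_meas) // 2
        pad_right = full_width - n_meas - pad_left
        sino = np.pad(truncated_sino, ((pad_left, pad_right), (0, 0)), mode="edge")
        measured = np.zeros_like(sino, dtype=bool)
        measured[pad_left:pad_left + n_meas, :] = True

        recon = None
        for _ in range(n_iters):
            recon = iradon(sino, theta=theta, output_size=full_width)
            recon = np.clip(recon, 0, None)               # correction in reconstruction space
            reprojected = radon(recon, theta=theta)       # correction in sinogram space
            sino = np.where(measured, sino, reprojected)  # keep the measured data unchanged
        return recon, sino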
The present paper is devoted to the solution of the tomographic reconstruction problem using a regularized algebraic approach for large-scale data. The paper explores issues related to the use of cone-beam polychromatic computed tomography. An algorithm for the regularized solution of the linear operator equation is described. The parametric composite function to be minimized is given, and the step of the developed iterative procedure is written out. The reconstructed volumetric image comprises about 60 billion voxels. This forces dividing the task of reconstructing the full volume into subtasks for an efficient implementation of the reconstruction algorithm on the GPU. In each of the subtasks the current solution for a local volume of a given size is calculated. An approach to selecting local volumes and crosslinking their solutions is described. We compared the image quality of the proposed algorithm with the results of the Filtered Back Projection (FBP) algorithm.
Digital X-ray imaging has become widely used in science, medicine, and non-destructive testing. This allows modern digital image analysis to be used for automatic information extraction and interpretation. We give a short review of applications of machine vision in scientific X-ray imaging and microtomography, including image processing, feature detection and extraction, image compression to increase camera throughput, microtomography reconstruction, visualization, and setup adjustment.
Artifacts caused by intensely absorbing inclusions are encountered in computed tomography with polychromatic scanning and may obscure or simulate pathologies in medical applications. To improve the quality of reconstruction in the presence of high-Z inclusions, we previously proposed and tested on synthetic data an iterative technique with a soft penalty mimicking linear inequalities on the photon-starved rays. This note reports a test at the tomographic laboratory set-up of the Institute of Crystallography FSRC “Crystallography and Photonics” RAS, in which tomographic scans were successfully made of a temporary tooth without an inclusion and with a Pb inclusion.
Motion blur caused by camera vibration is a common source of degradation in photographs. In this paper we study the problem of finding the point spread function (PSF) of a blurred image using a tomography technique. The PSF reconstruction result strongly depends on the particular tomography technique used. We present a tomography algorithm with regularization adapted specifically for this task. We use the algebraic reconstruction technique (ART) as the starting algorithm and introduce regularization. We use the conjugate gradient method for the numerical implementation of the proposed approach. The algorithm is tested on a dataset containing 9 kernels with known point spread functions, extracted from real photographs by the Adobe corporation. We also investigate the influence of noise on the quality of image reconstruction and how the number of projections influences the reconstruction error.
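For a small blur kernel, a regularized reconstruction can be written as an explicit least-squares problem and solved with conjugate gradients, as in the sketch below (Python; the explicit system matrix built from scikit-image's radon transform and the Tikhonov weight are illustrative simplifications, not the exact algorithm of the paper).

    import numpy as np
    from scipy.sparse.linalg import cg
    from skimage.transform import radon

    def build_system_matrix(n, theta):
        # One column per pixel: the projection of a single-pixel basis image.
        cols = []
        for i in range(n * n):
            basis = np.zeros((n, n))
            basis.flat[i] = 1.0
            cols.append(radon(basis, theta=theta, circle=False).ravel())
        return np.stack(cols, axis=1)

    def reconstruct_psf(sinogram, theta, n, lam=1e-2):
        # Tikhonov-regularized normal equations (A^T A + lam I) x = A^T b solved by CG.
        # The sinogram is assumed to follow the same radon convention (circle=False).
        A = build_system_matrix(n, theta)
        b = sinogram.ravel()
        lhs = A.T @ A + lam * np.eye(n * n)
        rhs = A.T @ b
        x, info = cg(lhs, rhs, maxiter=500)
        return x.reshape(n, n), info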
The presence of errors in a tomographic image may lead to misdiagnosis when computed tomography (CT) is used in medicine, or to wrong decisions about the parameters of technological processes when CT is used in industrial applications. These errors have two main sources. First, errors occur at the measurement step, e.g. incorrect calibration and estimation of the geometric parameters of the set-up. The second source is the nature of the tomographic reconstruction step: at this stage a mathematical model used to calculate the projection data is created, and the applied optimization and regularization methods, together with their numerical implementations, have their own specific errors. Nowadays, many research teams try to analyze these errors and establish the relations between error sources. In this paper, we do not analyze the nature of the final error, but present a new approach for calculating its distribution in the reconstructed volume. We hope that the visualization of the error distribution will allow experts to clarify the medical report impression or expert summary they give after analyzing CT results. To illustrate the efficiency of the proposed approach we present both simulation and real data processing results.
Obtaining high-quality images from Computed Tomography (CT) is important for correct image interpretation. In this paper, we propose novel procedures that can be used for a quantitative description of the degree of artifact expressiveness in CT images, and show that the use of this type of metric allows the dynamics of image degradation to be assessed. We perform different image reconstruction tests in order to analyse our approach, and the obtained results confirm the usefulness of the proposed method. We conclude that the use of the proposed estimates allows moving from image quality assessment based on visual scoring to a quantitative approach and, consequently, to supporting a CT setup that provides high-quality reconstructed images through appropriate changes of the reconstruction parameters or algorithms.
Artifacts (known as metal-like artifacts) arising from incorrect reconstruction may obscure or simulate pathology in medical applications, and hide or mimic cracks and cavities in industrial tomographic scans. One of the main causes of such artifacts is photon starvation on the rays that pass through highly absorbing regions. We introduce a way to suppress such artifacts in the reconstruction using a soft penalty mimicking linear inequalities on the photon-starved rays. An efficient algorithm to use such information is provided, and the effect of these inequalities on the reconstruction quality is studied.
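A simplified version of the idea can be written as a penalized least-squares iteration, as sketched below (Python; the explicit projection matrix, the hinge-like penalty form, the step size, and the weight beta are our assumptions rather than the exact algorithm of the paper). On photon-starved rays the measurement is treated only as a lower bound on the line integral, so the penalty is active only when the reprojection falls below it.

    import numpy as np

    def reconstruct_with_soft_inequalities(A, b, starved, beta=10.0,
                                           n_iters=500, step=1e-3):
        # A: (n_rays, n_voxels) projection matrix; b: measured line integrals;
        # starved: boolean mask of photon-starved rays.
        x = np.zeros(A.shape[1])
        reliable = ~starved
        for _ in range(n_iters):
            r = A @ x - b
            grad = A[reliable].T @ r[reliable]            # usual least-squares term
            under = np.minimum(r[starved], 0.0)           # active only when A x < b
            grad += beta * (A[starved].T @ under)         # one-sided soft penalty
            x = np.clip(x - step * grad, 0.0, None)       # non-negativity
        return x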
The goal of X-ray Fluorescence Computed Tomography (XFCT) is to give a quantitative description of an object under investigation (sample) in terms of its elemental composition. However, light and heavy elements inside the object contribute differently to the attenuation of the X-ray probe and of the fluorescence. As a result, elements in the shadow area do not contribute to the registered spectrum. Iterative reconstruction procedures will tend to set to zero the variables describing the element content of the corresponding unit volumes, since these variables do not change the system's condition number. Inversion of the XFCT Radon transform gives random values in these areas. To evaluate the confidence of the reconstructed images we propose, for the first time, to calculate, in addition to the reconstructed images, a generalized image based on the Jacobian matrix. This image highlights the areas of doubt, if any exist. In this work we have attempted to prove the advisability of such an approach. For this purpose, we analyzed in detail the process of tomographic projection formation.
Since 1998 we have developed X-Ray fluorescence tomography for microanalysis. All aspects were tackled starting with the reconstruction performed by FBP or ART methods. Self-absorption corrections were added and combined with Compton, transmission and fluorescence tomographies to obtain fully quantitative results. Automatic "smart scans" minimized overhead time scanning/aligning non-cylindrical objects. The scans were performed step-by-step or continuously with no overhead time. Focusing went from 5 to 1 micron range, using FZP or CRL lenses, and finally KB bent mirrors which yield sub-micron high intensity beams. Recently, we have performed the first quantitative 3D fluo-tomography by helical scanning. We are now studying energy dependent fluo-tomography for chemically-sensitive imaging of the internal structure of samples. This chronology yielded the present level of sophistication for both experiments and data treatment and finally a method ready for wide dissemination among scientists.
The focusing properties of Si planar refractive lenses, including experimental tests and theoretical analysis, have been studied. Computer simulations of the X-ray wave field distribution near the focal plane have been performed for different lens designs. Comparison of the experimental results with the computer simulations allows the reasons for the deviation of focusing from ideal performance to be established. The deviation of the lens vertical sidewall profile was minimized by an additional correction in the lens design and special efforts in optimizing the etching process. The optimized lenses were manufactured, tested at the ESRF, and demonstrated a dramatic enhancement in focusing properties.
We have studied magneto-oscillations of the tunnelling current through a quantum well (QW) incorporating InAs self-assembled quantum dots in magnetic fields up to 28 T applied normal to the QW plane. We find evidence for the strong modification of the Landau levels in the host GaAs quantum well in the presence of dots embedded at the center of the well, which we attribute to electron-electron interactions.
We present recent results of fluorescence tomography experiments obtained on a variety of samples originating from the fields of Mineralogy, Space Sciences, and Botany. The ID22 hard X-ray microanalysis beamline of the ESRF was used in scanning mode to record fluorescence spectra in pencil-beam collection geometry for energies of 14 to 22.5 keV and micron-sized beamspots. Trace element concentrations of a few hundred ppm were successfully imaged in inhomogeneous samples of less than 500 microns with resolutions down to 2 microns.
The first experimental results of fluorescence microtomography with 6 micrometer resolution obtained at the ESRF are described. The set-up comprises high-quality optics (monochromator, mirrors, focusing lenses) coupled to the high energy, brilliance, and coherence of the ID 22 undulator beamline. The tomographic set-up allows precise measurements in the 'pencil-beam' geometry. The image reconstruction is based either on the filtered back-projection (FBP) method or on a modification of the algebraic reconstruction technique (ART), with some simplifications of the model. The quality and precision of the 2D reconstructed elemental images of two phantom samples are encouraging. The method will be further refined and applied to the analysis of more complex inhomogeneous samples.