COVID-19 has spread around the world since 2019. Approximately 6.5% of COVID-19 patients are at risk of developing severe disease with a high mortality rate. To reduce the mortality rate and provide appropriate treatment, this research established an integrated model to predict the clinical outcome of COVID-19 patients from clinical, deep learning, and radiomics features. To obtain the optimal feature combination for prediction, a combination of 9 clinical features was selected from all available clinical factors using LASSO, together with 18 deep learning features from a U-Net architecture and 9 radiomics features from the segmentation result. A total of 213 COVID-19 patients and 335 non-COVID-19 patients from 5 hospitals were enrolled and used as training and test samples in this research. The proposed model obtained an accuracy, precision, recall, specificity, F1-score and area under the ROC curve of 0.971, 0.943, 0.937, 0.974, 0.941 and 0.979, respectively, which exceed those of related work using only clinical, deep learning, or radiomics features.
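As a rough illustration of the feature-integration step described above, the following sketch selects clinical features with LASSO and concatenates them with deep-learning and radiomics feature vectors before fitting a simple classifier. This is not the authors' pipeline: the synthetic data, the number of raw clinical factors, and the logistic-regression head are all assumptions.

```python
# Hedged sketch: LASSO selection of clinical features plus concatenation with
# (randomly generated, hence hypothetical) deep-learning and radiomics features.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 548                                    # 213 COVID-19 + 335 non-COVID-19 patients
clinical = rng.normal(size=(n, 30))        # all available clinical factors (count assumed)
deep     = rng.normal(size=(n, 18))        # 18 deep-learning features (U-Net encoder)
radiomic = rng.normal(size=(n, 9))         # 9 radiomics features from the segmentation
y = rng.integers(0, 2, size=n)             # clinical outcome label

lasso = LassoCV(cv=5).fit(clinical, y)     # LASSO keeps features with non-zero weights
selected = clinical[:, lasso.coef_ != 0]   # (9 clinical features in the paper)

X = np.hstack([selected, deep, radiomic])  # integrated feature vector
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```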
Vascular structures are important information for education, surgical planning, and analysis. Extraction of the blood vessels of an organ is a challenging task in medical image processing, and it is the first step before obtaining the structure. It is difficult to obtain accurate vessel segmentation results even with manual labeling by a human expert. The difficulty of vessel segmentation lies in the complicated structure of blood vessels and their large variations, which make them hard to recognize. In this paper, we present a deep artificial neural network architecture to automatically segment vessels from computed tomography (CT) images. The proposed deep neural network (DNN) architecture for vessel segmentation from a medical CT volume consists of multiple deep convolutional neural networks that extract features from different planes of the CT data. Because clinical data are acquired under varying conditions that we cannot control, we add a normalization process to make sure our network performs well on clinical data. To validate the effectiveness and efficiency of our proposed method, we conduct experiments on 20 clinical CT volumes. Our network yields an average Dice coefficient of 0.879 on clinical data, which is better than state-of-the-art methods such as level set, Frangi, and submodular graph cuts.
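For reference, a minimal sketch of the Dice coefficient used above as the evaluation metric (the toy volumes and threshold are assumptions):

```python
# Dice = 2|P ∩ G| / (|P| + |G|) for binary segmentation volumes.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# Example on random binary volumes:
rng = np.random.default_rng(0)
print(dice(rng.random((64, 64, 64)) > 0.5, rng.random((64, 64, 64)) > 0.5))
```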
High-resolution medical images are crucial for medical diagnosis, and for planning and assisting surgery. Micro computed tomography (micro CT) can generate high-resolution 3D images and analyze internal micro-structures. However, micro CT scanners can only scan small objects and cannot be used for in-vivo clinical imaging and diagnosis. In this paper, we propose a super-resolution method to reconstruct micro CT-like images from clinical CT images based on learning a mapping function, or relationship, between micro CT and clinical CT. The proposed method consists of the following three steps: (1) Pre-processing: collecting pairs of clinical CT images and micro CT images for training, and registering and normalizing each pair. (2) Training: learning a non-linear mapping function between micro CT and clinical CT using the training pairs. (3) Processing (testing): enhancing a new CT image, not included in the training data set, using the learned mapping function.
Multimodal magnetic resonance images (e.g., T1-weighted images (T1WI) and T2-weighted images (T2WI)) are used for accurate medical image analysis. Images of different modalities have different resolutions depending on the pulse sequence parameters under a limited data acquisition time. Therefore, interpolation methods are used to match the low-resolution (LR) image with the high-resolution (HR) image. However, the interpolation causes blurring that affects analysis accuracy. Although some recent works such as the non-local-means (NLM) filter have demonstrated impressive super-resolution (SR) performance when HR images of another modality are available, the filter has a high computational cost. Therefore, we propose a fast SR framework with iterative guided back projection, which incorporates iterative back projection with a guided filter (GF) for resolution enhancement of LR images (e.g., T2WI) by referring to HR images of another modality (e.g., T1WI). The proposed method achieves both higher accuracy than conventional interpolation methods and the original GF, and computational efficiency by applying an integral 3D image technique. In addition, although the proposed method is visually slightly inferior in accuracy to the state-of-the-art NLM filter, it runs 22 times faster than that method when upsampling three times in the slice-select direction from 180 × 216 × 60 voxels to 180 × 216 × 180 voxels. The computational time of our method is only about 1 min. Therefore, the proposed method can be applied to various practical applications, including not only multimodal MR image analysis but also other multimodal image analysis such as computed tomography (CT) and positron emission tomography (PET).
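The sketch below illustrates the iterative guided back-projection idea in simplified 2D form, without the integral-image acceleration; the scale factor, iteration count, and guided-filter parameters are assumptions, and the guided filter is the classic formulation of He et al. rather than the paper's exact implementation.

```python
# Hedged sketch: iterative back projection with a guided-filter regularization
# step, guiding an LR T2WI-like slice with an HR T1WI-like slice.
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def guided_filter(I, p, r=4, eps=1e-3):
    """Classic guided filter: guidance image I, filtering input p."""
    w = 2 * r + 1
    mean_I, mean_p = uniform_filter(I, w), uniform_filter(p, w)
    corr_Ip, corr_II = uniform_filter(I * p, w), uniform_filter(I * I, w)
    a = (corr_Ip - mean_I * mean_p) / (corr_II - mean_I ** 2 + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, w) * I + uniform_filter(b, w)

def iterative_guided_bp(lr, hr_guide, scale=3, n_iter=10):
    """Upscale `lr` guided by `hr_guide`; assumes an integer `scale`."""
    x = zoom(lr, scale, order=3)                       # initial HR estimate
    for _ in range(n_iter):
        simulated_lr = zoom(x, 1.0 / scale, order=3)   # forward (downsampling) model
        x += zoom(lr - simulated_lr, scale, order=3)   # back-project the residual
        x = guided_filter(hr_guide, x)                 # guided regularization
    return x

lr = np.random.rand(60, 72)            # toy LR slice (e.g., T2WI)
hr_guide = np.random.rand(180, 216)    # corresponding HR slice (e.g., T1WI)
print(iterative_guided_bp(lr, hr_guide).shape)   # -> (180, 216)
```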
Extraction of the blood vessels of an organ is a challenging task in medical image processing. It is difficult to obtain accurate vessel segmentation results even with manual labeling by a human expert. The difficulty of vessel segmentation lies in the complicated structure of blood vessels and their large variations, which make them hard to recognize. In this paper, we present a deep artificial neural network architecture to automatically segment hepatic vessels from computed tomography (CT) images. The proposed novel deep neural network (DNN) architecture for vessel segmentation from a medical CT volume consists of three deep convolutional neural networks that extract features from different planes of the CT data. The three networks share features at the first convolutional layer but separately learn their own features in the second layer; all three networks join again at the top layer. To validate the effectiveness and efficiency of our proposed method, we conduct experiments on 12 CT volumes, in which training data are randomly generated from 5 CT volumes and the remaining 7 are used for testing. Our network yields an average Dice coefficient of 0.830, while a 3D deep convolutional neural network yields around 0.7 and a multi-scale approach yields only 0.6.
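The following PyTorch sketch shows the described sharing pattern: one shared first convolutional layer, three plane-specific second layers, and a joint top layer. Channel counts, kernel sizes, and the 2D patch formulation are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class TriPlanarVesselNet(nn.Module):
    """Three per-plane branches sharing the first conv layer, joined at the top."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.shared_conv1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        # Independent second layers for the axial, coronal and sagittal planes.
        self.branch2 = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()) for _ in range(3)]
        )
        self.head = nn.Conv2d(3 * 32, n_classes, 1)     # join at the top layer

    def forward(self, axial, coronal, sagittal):
        feats = [branch(self.shared_conv1(x))
                 for branch, x in zip(self.branch2, (axial, coronal, sagittal))]
        return self.head(torch.cat(feats, dim=1))

net = TriPlanarVesselNet()
patch = lambda: torch.randn(1, 1, 32, 32)               # toy per-plane patches
print(net(patch(), patch(), patch()).shape)             # -> torch.Size([1, 2, 32, 32])
```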
A probabilistic atlas based on human anatomical structure has been widely used for organ segmentation. The challenge is how to register the probabilistic atlas to the patient volume. Additionally, a conventional probabilistic atlas has the disadvantage that it may be biased toward a specific patient study due to a single reference. Hence, we propose a template matching framework based on an iterative probabilistic atlas for organ segmentation. First, we find a bounding box for the organ based on human anatomical localization. Then, the probabilistic atlas is used as a template to find the organ in this bounding box by template matching. Comparing our method with conventional and recently developed atlas-based methods, our results show an improvement in the segmentation accuracy for multiple organs (p < 0.00001).
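A minimal sketch of the matching step is shown below: the probabilistic atlas is slid through the anatomically derived bounding box and scored with normalized cross-correlation. The score, stride, and toy data are assumptions, and only a single matching pass is shown (the iterative atlas refinement is omitted).

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized volumes."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def match_atlas(volume, atlas, box, stride=2):
    """Slide `atlas` through the bounding box `box` = (z0, z1, y0, y1, x0, x1)."""
    z0, z1, y0, y1, x0, x1 = box
    dz, dy, dx = atlas.shape
    best, best_pos = -np.inf, None
    for z in range(z0, z1 - dz + 1, stride):
        for y in range(y0, y1 - dy + 1, stride):
            for x in range(x0, x1 - dx + 1, stride):
                score = ncc(volume[z:z+dz, y:y+dy, x:x+dx], atlas)
                if score > best:
                    best, best_pos = score, (z, y, x)
    return best_pos, best

vol = np.random.rand(80, 80, 80)      # toy patient volume
atlas = np.random.rand(20, 20, 20)    # toy probabilistic atlas
print(match_atlas(vol, atlas, (10, 70, 10, 70, 10, 70)))
```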
Magnetic resonance imaging can only acquire volume data with finite resolution due to various factors. In particular, the resolution in one direction (such as the slice direction) is much lower than in the others (such as the in-plane directions), yielding unrealistic visualizations. This study explores the reconstruction of isotropic-resolution MRI volumes from three orthogonal scans. The proposed super-resolution reconstruction is formulated as a maximum a posteriori (MAP) problem, which relies on the generation model of the acquired scans from the unknown high-resolution volume. Generally, the deviation ensemble of the reconstructed high-resolution (HR) volume from the available LR ones in the MAP is represented as a Gaussian distribution, which usually results in some noise and artifacts in the reconstructed HR volume. Therefore, this paper investigates a robust super-resolution method that formulates the deviation set as a Laplace distribution, which assumes sparsity in the deviation ensemble based on the insight that large deviations appear only around some unexpected regions. In addition, in order to achieve a reliable HR MRI volume, we integrate priors such as bilateral total variation (BTV) and non-local means (NLM) into the proposed MAP framework for suppressing artifacts and enriching visual detail. We validate the proposed robust SR strategy using MRI mouse data with high resolution in two directions and low resolution in one direction, imaged in three orthogonal scans: the axial, coronal and sagittal planes. Experiments verify that the proposed strategy achieves much better HR MRI volumes than the conventional MAP method, even with a very high magnification factor of 10.
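A sketch of the kind of MAP objective described, in my own notation ($Y_k$ the three orthogonal LR scans; $M_k$, $B_k$, $D_k$ the geometric, blur, and downsampling operators; $S_x^{l}$ a shift by $l$ voxels along $x$): the $\ell_1$ data term corresponds to the Laplace deviation model, and the second term is the BTV prior (the NLM prior can be added analogously),

$$\hat{X} \;=\; \arg\min_{X}\; \sum_{k=1}^{3} \big\| D_k B_k M_k X - Y_k \big\|_1 \;+\; \lambda \sum_{l,m,n=-P}^{P} \alpha^{|l|+|m|+|n|} \big\| X - S_x^{l} S_y^{m} S_z^{n} X \big\|_1 .$$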
This study addresses the problem of generating a high-resolution (HR) MRI volume from a single low-resolution (LR) MRI input volume. Recent research has shown that sparse coding can be successfully applied to single-frame super-resolution for natural images, based on the good reconstruction of any local image patch with a sparse linear combination of atoms taken from an appropriate over-complete dictionary. This study adapts the basic idea of sparse-coding-based super-resolution (SCSR) to MRI volume data, and then improves the dictionary learning strategy in conventional SCSR to achieve a precise sparse representation of HR volume patches. In the proposed MRI super-resolution strategy, we learn only the dictionary of the HR MRI volume patches with a sparse coding algorithm, and then propagate the HR dictionary to the LR dictionary by mathematical analysis for calculating the sparse representation (coefficients) of any LR local input volume patch. The unknown corresponding HR volume patch can then be reconstructed from the sparse coefficients of the LR volume patch and the corresponding HR dictionary. We validate that the proposed SCSR strategy with dictionary propagation can recover much clearer and more accurate HR MRI volumes than conventional interpolation methods.
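In symbols (my notation, not necessarily the paper's), the dictionary-propagation idea can be sketched as follows: let $D_h$ be the dictionary learned from HR volume patches and $L$ the degradation operator (blur and downsampling) mapping an HR patch to its LR counterpart, so the LR dictionary is propagated as $D_l = L D_h$; for an input LR patch $y$, the sparse coefficients and the HR reconstruction are

$$\alpha^{*} = \arg\min_{\alpha}\; \| y - D_l \alpha \|_2^2 + \lambda \| \alpha \|_1, \qquad \hat{x} = D_h \alpha^{*}.$$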
In this paper, a method called generalized N-dimensional PCA (GND-PCA) is proposed for statistical volume modeling of medical volumes. The medical volumes are treated as a series of 3rd-order tensors, and the bases of each mode subspace are calculated in order to approximate the tensors accurately. GND-PCA is successfully applied to the construction of statistical volume models for 3D CT lung volumes. Experiments show that the constructed models generalize well even though the training samples are quite few.
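In tensor notation (my notation; the exact formulation in the paper may differ), each training volume $\mathcal{A}_i \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ is approximated with shared mode-$n$ basis matrices $U^{(n)} \in \mathbb{R}^{I_n \times J_n}$ ($J_n < I_n$) and an individual core tensor $\mathcal{S}_i$,

$$\mathcal{A}_i \;\approx\; \mathcal{S}_i \times_1 U^{(1)} \times_2 U^{(2)} \times_3 U^{(3)},$$

where the $U^{(n)}$ are chosen to minimize the total reconstruction error over all training volumes.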
Interventional radiology (IVR) is an important technique to visualize and diagnose vascular disease. In real medical applications, a weak x-ray source is used for imaging in order to reduce the radiation dose, resulting in low-contrast, noisy images. It is important to develop a method to smooth out the noise while enhancing the vascular structure. In this paper, we propose to combine an ICA shrinkage filter with a multiscale filter for the enhancement of IVR images. The ICA shrinkage filter is used for noise reduction, and the multiscale filter is used for enhancement of the vascular structure. Experimental results show that the quality of the image can be dramatically improved by the proposed method without any edge blurring. Simultaneous noise reduction and vessel enhancement have been achieved.
Ablation is a successful treatment for cancer. The technique inserts a special needle into a tumor and produces heat by radiofrequency at the needle tip to ablate the tumor. An open-configuration MR system can acquire MR images almost in real time and is now applied in liver cancer treatment. During surgery, surgeons select images in which liver tumors are seen clearly and use them to guide the procedure. However, in some cases with severe cirrhosis, the tumors cannot be visualized in the MR images. In such cases, combining preoperative CT images is greatly helpful if the CT images can be accurately registered to the position of the MR images. This is difficult because the shape of the liver in the MR image differs from that in the CT images due to the influence of the surgery. In this paper, we use B-spline-based free-form deformation (FFD) nonrigid image registration to address the problem. The method includes four steps. First, the MRI intensity inhomogeneity is corrected. Second, a parametric active contour with gradient vector flow is used to extract the liver as the region of interest (ROI), because the method is robust and obtains satisfactory results. Third, affine registration is used to roughly match the CT and MR images. Finally, B-spline-based FFD nonrigid registration is applied to obtain an accurate registration. Experiments show that the proposed method is robust and accurate.
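A hedged SimpleITK sketch of the two registration stages (steps three and four) is shown below on synthetic volumes; the metric, optimizers, mesh size, and the synthetic data are assumptions, and the inhomogeneity-correction and liver-ROI steps are omitted.

```python
import SimpleITK as sitk

# Synthetic stand-ins for the MR (fixed) and CT (moving) volumes.
fixed = sitk.GaussianSource(sitk.sitkFloat32, size=[64, 64, 64],
                            sigma=[12.0, 12.0, 12.0], mean=[32.0, 32.0, 32.0])
moving = sitk.Resample(fixed, sitk.TranslationTransform(3, [3.0, -2.0, 1.0]))

# Step 3: affine registration for a rough match.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.AffineTransform(3),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))
affine = reg.Execute(fixed, moving)
moving_aff = sitk.Resample(moving, fixed, affine, sitk.sitkLinear, 0.0)

# Step 4: B-spline FFD nonrigid registration for the accurate alignment.
reg2 = sitk.ImageRegistrationMethod()
reg2.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg2.SetInterpolator(sitk.sitkLinear)
reg2.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)
reg2.SetInitialTransform(sitk.BSplineTransformInitializer(fixed, [8, 8, 8]),
                         inPlace=True)
ffd = reg2.Execute(fixed, moving_aff)
registered = sitk.Resample(moving_aff, fixed, ffd, sitk.sitkLinear, 0.0)
```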
In X-ray coronary digital subtraction angiography, there are serious motion artifacts and noise, as well as background structures such as ribs, the spine, and catheters, which are tube-like structures similar to vessels. It is difficult to separate vessels from the background automatically if they are close to each other. In this paper, an automatic method for extracting coronary vessels from X-ray digital subtraction angiography is proposed. First, an edge-preserving smoothing filter is used to reduce the noise in the images while keeping the vessel edges. Then, affine and B-spline-based FFD nonrigid registration is applied to the images. Compared with a segmentation-based method, the proposed method removes the background effectively and extracts the coronary vessels very well.
Radiological imaging such as x-ray CT is one of the most important tools for medical diagnostics. Since radiological images always contain some quantum noise, the reduction of quantum (Poisson) noise in medical images is an important issue. In this paper, we propose a new filter based on independent component analysis (ICA) for noise reduction. In the proposed filter, the image (projection) is first transformed to the ICA domain, and then the components of the scattered x-rays are removed by soft thresholding (shrinkage). The proposed method has been demonstrated using both standard images and Monte Carlo simulations. Experimental results show that the quality of the image can be dramatically improved by the proposed filter without any edge blurring.
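A hedged sketch of the ICA-domain shrinkage idea follows: learn an ICA basis from image patches, soft-threshold the ICA coefficients, and transform back. The patch size, the threshold, the non-overlapping tiling, and the use of scikit-learn's FastICA are assumptions rather than the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

def extract_patches(img, p=8):
    """Non-overlapping p x p patches flattened to rows."""
    H, W = img.shape
    return np.array([img[i:i+p, j:j+p].ravel()
                     for i in range(0, H - p + 1, p)
                     for j in range(0, W - p + 1, p)])

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
clean = np.kron(rng.random((16, 16)), np.ones((8, 8)))   # toy 128x128 image
noisy = rng.poisson(clean * 50) / 50.0                    # quantum (Poisson) noise

patches = extract_patches(noisy)
ica = FastICA(n_components=32, random_state=0, max_iter=1000)
codes = ica.fit_transform(patches)                        # transform to the ICA domain
denoised = ica.inverse_transform(soft_threshold(codes, t=0.05))

out = np.zeros_like(noisy)                                # reassemble the patches
for k, (i, j) in enumerate((i, j) for i in range(0, 128, 8) for j in range(0, 128, 8)):
    out[i:i+8, j:j+8] = denoised[k].reshape(8, 8)
```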
In this paper, we apply independent component analysis (ICA) to the reduction of spatially correlated additive noise in images. We take a degraded image as the mixture of the noise and the original image, which are statistically independent. From the viewpoint of blind signal separation, we try to restore the original image from two linear mixtures. Motivated by the fact that autocorrelation exists in the neighborhoods of the image and the noise, we design another mixture using the diffusion equation. Then we employ independent component analysis to separate the image and the noise from the two mixtures. Simulation experiments are carried out to remove Poisson noise from images. Experimental results indicate an impressive performance of the proposed method. Furthermore, the proposed method can be combined with the wavelet shrinkage method to improve the denoising performance.
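A toy sketch of the two-mixture construction is given below: the noisy image is one mixture, a diffused (smoothed) copy serves as the second, and ICA separates the image and the noise. Using Gaussian smoothing as the discrete diffusion step and FastICA as the separator are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
clean = np.kron(rng.random((16, 16)), np.ones((8, 8)))       # toy 128x128 image
noisy = rng.poisson(clean * 30) / 30.0                        # Poisson-degraded image

mix1 = noisy.ravel()                                          # first mixture: the image itself
mix2 = gaussian_filter(noisy, sigma=1.5).ravel()              # second mixture via diffusion

X = np.vstack([mix1, mix2]).T                                 # samples x mixtures
S = FastICA(n_components=2, random_state=0, max_iter=1000).fit_transform(X)

# One separated component approximates the image, the other the noise; keep the
# one more correlated with the smoothed mixture as the restored image.
idx = np.argmax([abs(np.corrcoef(S[:, k], mix2)[0, 1]) for k in range(2)])
restored = S[:, idx].reshape(noisy.shape)
```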
Digital watermarking is a technology proposed to address the issue of copyright protection for digital content. In this paper, we develop a new robust logo watermarking technique. Watermark embedding is performed in the wavelet domain of the host image. The human visual system (HVS) is exploited by building a spatial mask based on a stochastic model for content-adaptive digital watermarking. Independent component analysis (ICA) is introduced to extract the logo watermark. Our simulation results suggest that ICA can be used to exactly extract the watermark hidden in the image, and show that our system is robust under various important types of attacks.
Image segmentation is one of the most challenging problems in image processing. While significant progress has been made in gray-scale texture segmentation and in color segmentation separately, the combined color and texture segmentation problem has received less attention. In this paper, we use independent component analysis to extract local color and texture features for segmentation. Experiments compared with a gray-scale texture analysis method show that the proposed method is more effective in segmenting complex color and texture images.
This paper proposes a new method to denoise images corrupted by Poisson noise. Poisson noise is signal-dependent, and consequently, separating the signal from the noise is a very difficult task. In most current Poisson noise reduction algorithms, the noisy signal is pre-processed to approximate Gaussian noise and then denoised by a conventional Gaussian denoising algorithm. In this paper, we propose to use adaptive basis functions derived from the data using a modified ICA (independent component analysis), and a maximum likelihood shrinkage algorithm based on the properties of Poisson noise. This modified ICA method is based on a denoising method called "sparse code shrinkage (SCS)" and on wavelet-domain denoising. In the ICA-domain denoising procedure, the shrinkage function is determined by the property of Poisson noise and adapts to the intensity of the signal. The performance of the proposed algorithm is validated with simulated data experiments, and the results demonstrate that the algorithm greatly improves the denoising performance in images contaminated by Poisson noise.
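One simple way to write an intensity-adaptive shrinkage consistent with Poisson statistics (illustrative only, not necessarily the paper's exact maximum-likelihood shrinkage function): since a Poisson variable with mean $\lambda$ also has variance $\lambda$, the threshold applied to an ICA-domain coefficient $c$ can be scaled by the local intensity estimate $\hat{\lambda}$,

$$\hat{c} = \operatorname{sign}(c)\,\max\!\big(|c| - k\sqrt{\hat{\lambda}},\; 0\big),$$

so that regions with higher intensity, and hence larger absolute noise, are thresholded more strongly.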
We propose a heuristic method for the reconstruction of x-ray penumbral images from a noisy coded source. The ill-posed noisy penumbral image reconstruction problem is modeled as an optimization problem in which the reconstructed image is obtained by minimizing the mean square error between the obtained penumbral image and the estimated penumbral image, together with the Laplacian values of the estimated image. The Laplacian operator is used here as a smoothness constraint. Since complicated a priori constraints can be easily incorporated into heuristic methods by an appropriate modification of the cost function, the proposed method is well suited to the solution of this ill-posed problem. The proposed method has also been applied to real laser-plasma experiments.
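In symbols (my notation), the optimization described above has the form

$$E(\hat{f}) \;=\; \big\| g - H\hat{f} \big\|_2^2 \;+\; \lambda \big\| \nabla^2 \hat{f} \big\|_2^2,$$

where $g$ is the obtained penumbral image, $H$ models imaging through the coded source, $\nabla^2 \hat{f}$ is the Laplacian of the estimated image acting as the smoothness constraint, and $\lambda$ balances data fidelity against smoothness; the heuristic search minimizes $E$.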
The proposed system for CT image reconstruction is structured with three layers of neurons. In our previous work, we used resilient backpropagation (Rprop) instead of straight BP to modify the network weights. The basic idea is to minimize the error between the projections of the original image and of the reconstructed image. We noticed that the system performance depends on the initial status of the network. Based on this observation, we propose a novel approach for choosing optimal values of the connection weights. The experimental results indicate that the new method can find a satisfactory solution even though only a few projections are available.
In order to enhance the image reconstructed from a kinoform, several kinoform synthesis methods based on optimization approaches have been proposed and used for 2D objects. In this paper, we propose a kinoform synthesis method based on simulated annealing for 3D objects. The simulation results show that the reconstructed images can be significantly improved by using the SA-optimized kinoform.
In this paper, we propose an evolutionary neural network for blind source separation (BSS). BSS is the problem of obtaining the independent components of original source signals from mixed signals. The original sources, which are mutually independent and are mixed linearly by an unknown matrix, are retrieved by a separating procedure based on independent component analysis (ICA). The goal of ICA is to find a separating matrix so that the separated signals are as independent as possible. In neural realizations, the separating matrix is represented as the connection weights of the network and is usually updated by learning formulae. The effectiveness of such algorithms, however, is affected by the neuron activation functions, which depend on the probability distribution of the signals. In our method, the network is evolved by a genetic algorithm (GA), which does not need activation functions and works by an evolutionary mechanism. The kurtosis, a simple and original criterion for independence, is used in the fitness function of the GA. After learning, the network can be used to separate other mixed signals produced by the same mixing procedure. The applicability of the proposed method for blind source separation is demonstrated by the simulation results.
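The toy sketch below illustrates the kurtosis-driven GA idea for two sources: after whitening, the separating matrix reduces to a single rotation angle, which the GA evolves using the sum of absolute kurtosis values as the fitness. The original work evolves the full separating matrix as network weights; the sources, GA operators, and parameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4000)
s = np.vstack([np.sign(np.sin(2 * np.pi * 13 * t)),     # two toy non-Gaussian sources
               np.sin(2 * np.pi * 5 * t) ** 3])
x = np.array([[0.7, 0.3], [0.4, 0.6]]) @ s              # unknown linear mixing

x = x - x.mean(axis=1, keepdims=True)                   # whitening
d, E = np.linalg.eigh(np.cov(x))
z = np.diag(d ** -0.5) @ E.T @ x

def kurt(u):                                            # excess kurtosis
    u = (u - u.mean()) / u.std()
    return np.mean(u ** 4) - 3.0

def fitness(theta):                                     # independence criterion
    W = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    y = W @ z
    return abs(kurt(y[0])) + abs(kurt(y[1]))

pop = rng.uniform(0, np.pi, size=20)                    # population of angles
for _ in range(50):
    fit = np.array([fitness(th) for th in pop])
    parents = pop[np.argsort(fit)[-10:]]                # truncation selection
    children = (parents[rng.integers(0, 10, 10)] +
                parents[rng.integers(0, 10, 10)]) / 2   # blend crossover
    pop = np.concatenate([parents, children + rng.normal(0, 0.05, 10)])

best = pop[np.argmax([fitness(th) for th in pop])]
print("best angle:", best, "fitness:", fitness(best))
```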
Several methods have been proposed or used to optimize the phase distribution of a kinoform. In this paper, we propose a new approach for optimizing the kinoform based on simulated annealing to reduce the large computational cost.
In this research, we introduce an artificial neural network model, called the coupled lattice neural network, to reconstruct an original image from a degraded one by blind deconvolution, where neither the original image nor the blurring function is known. In the coupled lattice neural network, each neuron connects with its nearest neighboring neurons; the neighborhood corresponds to the weights of the neural network and is defined by a finite domain. The outputs of the neurons represent the intensity distribution of the estimated original image. The weights of each neuron correspond to the estimated blur function and are shared across neurons. The coupled lattice neural network includes two main operations: nearest-neighbor coupling (diffusion) and local nonlinear reflection and learning. First, a rule for growing the blur function is introduced. Then, the coupled lattice neural network evolves the estimated original image based on the estimated blur function. Moreover, we define a growing error criterion to control the evolution of the network. When the error criterion is minimized, the network becomes stable; the outputs of the network then correspond to the reconstructed original image, and the weights correspond to the blur function. In addition, we demonstrate a method for choosing the initial state variables of the coupled lattice neural network. The new approach to blind deconvolution can successfully recover a digital binary image, and the coupled lattice neural network can also be used for the reconstruction of gray-scale images.
The computer-generated hologram, as a basic optical diffractive element, is widely used in many fields. In this paper, a coding and Fourier-domain iterative optimization method for designing computer-generated holograms is introduced. Each pair of real and imaginary parts in the Fourier spectrum of an image is coded by a weighted cosine function, and the binary computer-generated hologram corresponds to the coefficient distribution of the cosine expansion. Moreover, we define a Fourier-domain mean-squared error between the Fourier spectrum of the image and the binary computer-generated hologram. By minimizing the Fourier-domain mean-squared error iteratively, we implement the hologram coding and optimization.
We present an evolutionary approach for reconstructing CT images; the algorithm reconstructs two-dimensional unknown images from four one-dimensional projections. A genetic algorithm works on a randomly generated population of strings, each of which encodes an image. Traditional as well as new genetic operators are applied in each generation. The mean square error between the projection data of the image encoded in a string and the original projection data is used to estimate the string fitness. A Laplacian constraint term is included in the fitness function of the genetic algorithm for handling smooth images. Two new modified versions of the original genetic algorithm are presented. Results obtained by the original algorithm and the modified versions are compared to those obtained by the well-known algebraic reconstruction technique (ART), and it was found that the evolutionary method is more effective than ART in the particular case where the projection directions are limited to four.
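In symbols (my notation), the fitness of a string $s$ encoding an image $I_s$ has the form

$$F(s) \;=\; \sum_{\theta \in \Theta} \big\| P_{\theta}(I_s) - p_{\theta} \big\|_2^2 \;+\; \lambda \big\| \nabla^2 I_s \big\|_2^2,$$

where $\Theta$ contains the four projection directions, $P_{\theta}$ is the projection operator, $p_{\theta}$ are the measured projection data, and the Laplacian term is the smoothness constraint for handling smooth images; a lower $F$ corresponds to a fitter string.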
A genetic algorithm is presented for the blind-deconvolution problem of image restoration. The restoration problem is modeled as an optimization problem whose cost function is minimized based on the mechanics of natural selection and natural genetics. The applicability of the GA to the blind-deconvolution problem was demonstrated.
A computer simulation code to treat atomic excitation and laser beam propagation simultaneously in an atomic laser isotope separation system has been developed. The three-level Bloch-Maxwell equations are solved numerically to analyze the change of pulse shapes, the modification of laser frequencies and the time-varying atomic populations. The near-resonant effects on the propagation of frequency-chirped and non-chirped laser pulses have been analyzed. It was found that there are serious differences in pulse shapes, frequency modifications and propagation velocities between laser pulses with and without chirping.