In addition to low-energy-threshold images (TLIs), photon-counting detector (PCD) computed tomography (CT) can generate virtual monoenergetic images (VMIs) and iodine maps. Our study sought to determine the image type that maximizes iodine detectability. Adult abdominal phantoms with iodine inserts of various concentrations and lesion sizes were scanned on a PCD-CT system. TLIs, VMIs at 50 keV, and iodine maps were generated, and iodine contrast-to-noise ratio (CNR) was measured. A channelized Hotelling observer was used to determine the area under the receiver-operating-characteristic curve (AUC) for iodine detectability. Iodine map CNR (0.57 ± 0.42) was significantly higher (P < 0.05) than for TLIs (0.46 ± 0.26) and lower (P < 0.001) than for VMIs at 50 keV (0.74 ± 0.33) for 0.5 mgI/cc and a 35-cm phantom. For the same condition and an 8-mm lesion, iodine detectability from iodine maps (AUC = 0.95 ± 0.01) was significantly lower (P < 0.001) than both TLIs (AUC = 0.99 ± 0.00) and VMIs (AUC = 0.99 ± 0.01). VMIs at 50 keV had similar detectability to TLIs and both outperformed iodine maps. The lowest detectable iodine concentration was 0.5 mgI/cc for an 8-mm lesion and 1.0 mgI/cc for a 4-mm lesion.
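The CNR figure of merit used above is simple to state; the sketch below is a generic ROI-based implementation (the ROI values in the test are hypothetical, not the study's data):

```python
import statistics

def cnr(lesion_roi, background_roi):
    """Contrast-to-noise ratio: ROI mean difference over background noise (std)."""
    contrast = abs(statistics.mean(lesion_roi) - statistics.mean(background_roi))
    noise = statistics.stdev(background_roi)
    return contrast / noise
```

Higher CNR indicates better low-contrast iodine conspicuity, which is what the channelized Hotelling observer analysis then converts into a detectability AUC.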
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/Open Athens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print format on SPIE.org.
Photon-counting detectors are expected to bring a range of improvements to patient imaging with x-ray computed tomography (CT). One is higher spatial resolution. We demonstrate the resolution obtained using a commercial CT scanner where the original energy-integrating detector has been replaced by a single-slice, silicon-based, photon-counting detector. This prototype constitutes the first full-field-of-view silicon-based CT scanner capable of patient scanning. First, the pixel response function and focal spot profile are measured and, combining the two, the system modulation transfer function is calculated. Second, the prototype is used to scan a resolution phantom and a skull phantom. The resolution images are compared to images from a state-of-the-art CT scanner. The comparison shows that for the prototype 19 lp/cm are detectable with the same clarity as 14 lp/cm on the reference scanner at equal dose and reconstruction grid, with more line pairs visible with increasing dose and decreasing image pixel size. The high spatial resolution remains evident in the anatomy of the skull phantom and is comparable to that of other photon-counting CT prototypes present in the literature. We conclude that the deep silicon-based detector used in our study could provide improved spatial resolution in patient imaging without increasing the x-ray dose.
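The step of combining the pixel response and focal-spot profile into a system MTF can be sketched with idealized rectangular apertures; the real study uses measured profiles, so the sinc-shaped MTFs and the magnification model here are illustrative assumptions:

```python
import math

def sinc_mtf(aperture_mm, f_lp_per_mm):
    """MTF magnitude of an ideal rectangular aperture: |sinc(a*f)|."""
    x = math.pi * aperture_mm * f_lp_per_mm
    return 1.0 if x == 0 else abs(math.sin(x) / x)

def system_mtf(pixel_mm, spot_mm, magnification, f_lp_per_mm):
    """Cascade model: system MTF = detector MTF x projected focal-spot MTF."""
    # Focal-spot blur referred to the detector plane scales with (M - 1) / M.
    spot_proj = spot_mm * (magnification - 1.0) / magnification
    return sinc_mtf(pixel_mm, f_lp_per_mm) * sinc_mtf(spot_proj, f_lp_per_mm)
```

The multiplication of the two component MTFs is the "combining the two" step the abstract refers to; smaller pixels raise the detector term and push the resolution limit outward.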
Current digital mammography systems primarily employ one of two types of detectors: indirect conversion, typically using a cesium iodide scintillator integrated with an amorphous silicon photodiode matrix, or direct conversion, using a photoconductive layer of amorphous selenium (a-Se) combined with a thin-film transistor array. The goal of this study was to evaluate a methodology for objectively assessing image quality to compare human observer task performance in detecting microcalcification clusters and extended mass-like lesions achieved with different detector types. The proposed assessment methodology uses a novel anthropomorphic breast phantom fabricated with ink-jet printing. In addition to human observer detection performance, standard linear metrics such as modulation transfer function, noise power spectrum, and detective quantum efficiency (DQE) were also measured to assess image quality. An Analogic Anrad AXS-2430 a-Se detector used in a commercial FFDM/DBT system and a Teledyne Dalsa Xineos-2329 with CMOS pixel readout were evaluated and compared. The DQE of each detector was similar over a range of exposures. Similar task performance in detecting microcalcifications and masses was observed between the two detectors over a range of clinically applicable dose levels, with some perplexing differences in the detection of microcalcifications at the lowest dose measurement. The evaluation approach presented seems promising as a new technique for objective assessment of breast imaging technology.
TOPICS: Computed tomography, Image quality, X-ray computed tomography, Radiotherapy, Computer simulations, CT reconstruction, Monte Carlo methods, Data modeling, Signal to noise ratio, Medical imaging
Low-dose computed tomography (CT) is desirable for treatment planning and simulation in radiation therapy. Multiple rescanning and replanning during the treatment course with a smaller amount of dose than a single conventional full-dose CT simulation is a crucial step in adaptive radiation therapy. We developed a machine learning-based method to improve image quality of low-dose CT for radiation therapy treatment simulation. We used a residual block concept and a self-attention strategy with a cycle-consistent adversarial network framework. A fully convolutional neural network with residual blocks and attention gates (AGs) was used in the generator to enable end-to-end transformation. We collected CT images from 30 patients treated with frameless brain stereotactic radiosurgery (SRS) for this study. These full-dose images were used to generate projection data, to which noise was then added to simulate the low-mAs scanning scenario. Low-dose CT images were reconstructed from this noise-contaminated projection data and were fed into our network along with the original full-dose CT images for training. The performance of our network was evaluated by quantitatively comparing the high-quality CT images generated by our method with the original full-dose images. When mAs is reduced to 0.5% of the original CT scan, the mean square error of the CT images obtained by our method is ∼1.6%, with respect to the original full-dose images. The proposed method successfully improved the noise, contrast-to-noise ratio, and nonuniformity level to be close to those of full-dose CT images and outperforms a state-of-the-art iterative reconstruction method. Dosimetric studies show that the average differences of dose-volume histogram metrics are <0.1 Gy (p > 0.05).
These quantitative results strongly indicate that the denoised low-dose CT images produced by our method maintain image accuracy and quality and are accurate enough for dose calculation in current CT simulation of brain SRS treatment. We also demonstrate the great potential of low-dose CT in the process of simulation and treatment planning.
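The low-mAs simulation described above (noise added to full-dose projection data) can be sketched as follows; the incident photon count and the Gaussian approximation to Poisson statistics are assumptions for illustration, not the paper's exact noise model:

```python
import math
import random

def simulate_low_dose(projections, i0_full=1e5, mas_fraction=0.005, seed=0):
    """Scale incident photons by the mAs fraction and add Gaussian-approximated
    Poisson noise to each line integral, mimicking a low-mAs acquisition."""
    rng = random.Random(seed)
    i0 = i0_full * mas_fraction          # fewer photons at reduced tube current
    noisy = []
    for p in projections:                # p: noise-free line integral
        counts = i0 * math.exp(-p)       # expected transmitted photon count
        counts = max(rng.gauss(counts, math.sqrt(counts)), 1.0)
        noisy.append(-math.log(counts / i0))  # noisy line integral
    return noisy
```

Reconstructing from such noise-contaminated projections yields the paired low-dose/full-dose training images the network learns from.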
Automatic and reliable stroke lesion segmentation from diffusion magnetic resonance imaging (MRI) is critical for patient care. Methods using neural networks have been developed, but the rate of false positives limits their use in clinical practice. A training strategy for three-dimensional deconvolutional neural networks for stroke lesion segmentation on diffusion MRI is proposed. Infarcts were segmented by experts on diffusion MRI for 929 patients. We divided each database as follows: 60% for a training set, 20% for validation, and 20% for testing. The key idea is a two-phase hybrid learning scheme, in which the network is first trained with whole MRI volumes (regular phase) and then, in a second phase (hybrid phase), alternately with whole volumes and patches. Patches were actively selected from the discrepancy between expert and model segmentations at the beginning of each batch. On the test population, the performances after the regular and hybrid phases were compared. A statistically significant Dice improvement with hybrid training compared with regular training was demonstrated (p < 0.01). The mean Dice reached 0.711 ± 0.199. False positives were reduced by almost 30% with hybrid training (p < 0.01). Our hybrid training strategy empowered deep neural networks to produce more accurate infarct segmentations on diffusion MRI.
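The Dice metric reported above is the standard overlap score between binary masks; this is a generic implementation, not the authors' code:

```python
def dice(pred, truth):
    """Dice similarity between two binary masks (flat sequences of 0/1 ints)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks agree perfectly.
    return 1.0 if total == 0 else 2.0 * inter / total
```

In the hybrid phase, patches with low Dice against the expert mask (high discrepancy) are the ones actively fed back into training.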
We investigate a new preprocessing approach for MRI of glioblastoma brain tumors. Based on a combined denoising technique (bilateral filtering) and contrast-enhancement technique (automatic contrast stretching based on image statistical information), the proposed approach offers competitive results while preserving the tumor region's edges and the original image's brightness. To evaluate the proposed approach's performance, a quantitative evaluation was carried out on the Multimodal Brain Tumor Segmentation (BraTS 2015) dataset. A comparative study between the proposed method and four state-of-the-art preprocessing algorithms shows that the proposed approach yields competitive performance for preprocessing of brain glioblastoma MR images. The result of this preprocessing step is crucial for the efficiency of the subsequent image processing steps, i.e., segmentation, classification, and reconstruction.
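The two ingredients of the approach, edge-preserving bilateral filtering and statistics-based contrast stretching, can be sketched in 1-D (parameter values and the percentile choices are illustrative; the paper works on 2-D/3-D MR data):

```python
import math

def bilateral_1d(signal, sigma_s=1.0, sigma_r=10.0, radius=2):
    """Edge-preserving smoothing: weights combine spatial and intensity closeness."""
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((v - signal[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

def contrast_stretch(signal, lo_pct=2, hi_pct=98):
    """Automatic contrast stretching between robust percentiles, clipped to [0, 1]."""
    s = sorted(signal)
    lo = s[int(lo_pct / 100 * (len(s) - 1))]
    hi = s[int(hi_pct / 100 * (len(s) - 1))]
    span = hi - lo or 1.0
    return [min(max((v - lo) / span, 0.0), 1.0) for v in signal]
```

Because the range weight collapses across strong intensity steps, the bilateral filter smooths noise while keeping tumor edges sharp, which is the property the abstract emphasizes.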
Primary tumors have a high likelihood of developing metastases in the liver, and early detection of these metastases is crucial for patient outcome. We propose a method based on convolutional neural networks to detect liver metastases. First, the liver is automatically segmented using the six phases of abdominal dynamic contrast-enhanced (DCE) MR images. Next, DCE-MR and diffusion-weighted MR images are used for metastases detection within the liver mask. The liver segmentations have a median Dice similarity coefficient of 0.95 compared with manual annotations. The metastases detection method has a sensitivity of 99.8% with a median of two false positives per image. The combination of the two MR sequences in a dual-pathway network proves valuable for the detection of liver metastases. In conclusion, a high-quality liver segmentation can be obtained, within which liver metastases can be successfully detected.
Dual-energy computed tomography (CT) has the potential to decompose tissues into different materials. However, the classic direct inversion (DI) method for multimaterial decomposition (MMD) cannot accurately separate more than two basis materials due to the ill-posed problem and amplified image noise. We propose an integrated MMD method that addresses the piecewise smoothness and intrinsic sparsity property of the decomposition image. The proposed MMD was formulated as an optimization problem including a quadratic data fidelity term, an isotropic total variation term that encourages image smoothness, and a nonconvex penalty function that promotes decomposition image sparseness. The mass and volume conservation rule was formulated as the probability simplex constraint. An accelerated primal-dual splitting approach with line search was applied to solve the optimization problem. The proposed method with different penalty functions was compared against DI on a digital phantom, a Catphan® 600 phantom, a quantitative imaging phantom, and a pelvis patient. The proposed framework distinctly separated the CT image into up to 12 basis materials plus air with high decomposition accuracy. Cross talk between different materials was substantially reduced, as shown by the decreased nondiagonal elements of the normalized cross correlation (NCC) matrix. The mean square error of the measured electron densities was reduced by 72.6%. Across all datasets, the proposed method improved the average volume fraction accuracy from 61.2% to 99.9% and increased the diagonality of the NCC matrix from 0.73 to 0.96. Compared with DI, the proposed MMD framework improved decomposition accuracy and material separation.
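The probability simplex constraint (volume fractions nonnegative and summing to one, i.e., mass/volume conservation) is commonly enforced by Euclidean projection onto the simplex; the sketch below is that standard projection, not the paper's full primal-dual solver:

```python
def project_simplex(v):
    """Euclidean projection of a vector of candidate volume fractions onto the
    probability simplex: components >= 0 that sum to exactly 1."""
    u = sorted(v, reverse=True)
    css = 0.0
    theta = 0.0
    for i, ui in enumerate(u, 1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:          # ui still active under this threshold
            theta = t
    return [max(x - theta, 0.0) for x in v]
```

Inside an iterative solver this projection is applied per voxel after each gradient step, keeping every intermediate decomposition physically meaningful.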
Tissue window filtering has been widely used in deep learning for computed tomography (CT) image analyses to improve training performance (e.g., soft tissue windows for abdominal CT). However, the effectiveness of tissue window normalization is questionable, since the generalizability of the trained model might be harmed, especially when such models are applied to new cohorts with different CT reconstruction kernels, contrast mechanisms, dynamic variations in the acquisition, and physiological changes. We evaluate the effectiveness of training both with and without soft tissue window normalization on multisite CT cohorts. Moreover, we propose a stochastic tissue window normalization (SWN) method to improve the generalizability of tissue window normalization. Different from random sampling, the SWN method centers the randomization around the soft tissue window to maintain specificity for abdominal organs. To evaluate the performance of different strategies, 80 training and 453 validation and testing scans from six datasets are employed to perform multiorgan segmentation using a standard 2D U-Net. The six datasets cover scenarios in which the training and testing scans are from (1) the same scanner and same population, (2) the same CT contrast but different pathology, and (3) different CT contrast and pathology. The traditional soft tissue window and nonwindowed approaches achieved better performance on (1). The proposed SWN achieved generally superior performance on (2) and (3), supported by statistical analyses, and thus offers better generalizability for a trained model.
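Our reading of the SWN idea, randomizing the window around the soft-tissue setting at training time, can be sketched as follows; the window center/width values and jitter ranges are assumptions for illustration, not the paper's parameters:

```python
import random

def soft_tissue_window(hu, center=50, width=400):
    """Clip Hounsfield units to a display window and rescale to [0, 1]."""
    lo, hi = center - width / 2, center + width / 2
    return [min(max((v - lo) / (hi - lo), 0.0), 1.0) for v in hu]

def stochastic_window(hu, rng, center_jitter=50, width_jitter=100):
    """SWN-style augmentation: draw a random window centered on the
    soft-tissue setting for each training sample."""
    c = 50 + rng.uniform(-center_jitter, center_jitter)
    w = 400 + rng.uniform(-width_jitter, width_jitter)
    return soft_tissue_window(hu, center=c, width=w)
```

Keeping the randomization anchored at the soft-tissue window (rather than sampling windows uniformly) preserves abdominal-organ contrast while exposing the network to intensity variation.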
Accurate segmentation of the blood vessels from a retinal image plays a significant role in the prudent examination of the vessels. A supervised blood vessel segmentation technique to extract blood vessels from a retinal image is proposed. The uniqueness of the work lies in the implementation of feature-oriented dictionary learning and sparse coding for the accurate classification of the pixels in an image. First, the image is split into patches and for each patch, Gabor features are extracted at multiple scales and orientations to create a set of feature vectors (this is done for the whole training set). Then, an overcomplete feature-oriented dictionary is trained from the extracted Gabor features (selected on the basis of standard deviation) using the generalized K-means for singular value decomposition dictionary learning technique. Sparse representations are subsequently calculated for the corresponding features from the dictionary. The combination of feature vectors and sparse representations constitutes the final feature vector. This feature vector is then fed into the ensemble classifier for the classification of pixels into either blood vessel pixels or nonblood vessel pixels. The method is evaluated on publicly available DRIVE and STARE datasets, as they contain ground truth images precisely marked by experts. The results obtained on both of the datasets show that the proposed technique outperforms most of the state-of-the-art techniques reported in the literature.
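Per-pixel Gabor feature extraction at multiple scales, the first step of the pipeline, can be sketched in 1-D (the paper uses 2-D patches at multiple orientations; kernel parameters here are illustrative):

```python
import math

def gabor_kernel_1d(sigma, freq, radius):
    """Real 1-D Gabor: Gaussian envelope times a cosine carrier."""
    return [math.exp(-x * x / (2 * sigma * sigma)) * math.cos(2 * math.pi * freq * x)
            for x in range(-radius, radius + 1)]

def gabor_features(signal, scales=(1.0, 2.0), freq=0.25, radius=3):
    """Per-pixel feature vector: filter response magnitude at each scale."""
    feats = []
    for i in range(len(signal)):
        vec = []
        for s in scales:
            k = gabor_kernel_1d(s, freq, radius)
            acc = 0.0
            for dx, w in zip(range(-radius, radius + 1), k):
                j = min(max(i + dx, 0), len(signal) - 1)  # replicate border
                acc += w * signal[j]
            vec.append(abs(acc))
        feats.append(vec)
    return feats
```

In the full method these feature vectors feed the K-SVD dictionary learning step, and the resulting sparse codes are concatenated with the raw features before classification.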
The dentate nucleus (DN) is a gray matter structure deep in the cerebellum involved in motor coordination, sensory input integration, executive planning, language, and visuospatial function. The DN is an emerging biomarker of disease, informing studies that advance pathophysiologic understanding of neurodegenerative and related disorders. The main challenge in defining the DN radiologically is that, like many deep gray matter structures, it has poor contrast in T1-weighted magnetic resonance (MR) images and therefore requires specialized MR acquisitions for visualization. Manual tracing of the DN across multiple acquisitions is resource-intensive and does not scale well to large datasets. We describe a technique that automatically segments the DN using deep learning (DL) on common imaging sequences, such as T1-weighted, T2-weighted, and diffusion MR imaging. We trained a DL algorithm that can automatically delineate the DN and provide an estimate of its volume. The automatic segmentation achieved higher agreement with the manual labels than either template registration, the current common practice in DN segmentation, or multiatlas segmentation of manual labels. Across all sequences, the FA maps achieved the highest mean Dice similarity coefficient (DSC) of 0.83, compared to T1 imaging (DSC = 0.76), T2 imaging (DSC = 0.79), or a multisequence approach (DSC = 0.80). A single-atlas registration approach using the spatially unbiased atlas template of the cerebellum and brainstem achieved a DSC of 0.23, and multiatlas segmentation achieved a DSC of 0.33. Overall, we propose a method of delineating the DN on clinical imaging that can reproduce manual labels with higher accuracy than current atlas-based tools.
Convolutional neural networks (CNNs) offer a promising means to achieve fast deformable image registration with accuracy comparable to conventional, physics-based methods. A persistent question with CNN methods, however, is whether they will be able to generalize to data outside of the training set. We investigated this question of mismatch between train and test data with respect to first- and second-order image statistics (e.g., spatial resolution, image noise, and power spectrum). A UNet-based architecture was built and trained on simulated CT images for various conditions of image noise (dose), spatial resolution, and deformation magnitude. Target registration error was measured as a function of the difference in statistical properties between the test and training data. Generally, registration error is minimized when the training data exactly match the statistics of the test data; however, networks trained with data exhibiting a diversity in statistical characteristics generalized well across the range of statistical conditions considered. Furthermore, networks trained on simulated image content with first- and second-order statistics selected to match that of real anatomical data were shown to provide reasonable registration performance on real anatomical content, offering potential new means for data augmentation. Characterizing the behavior of a CNN in the presence of statistical mismatch is an important step in understanding how these networks behave when deployed on new, unobserved data. Such characterization can inform decisions on whether retraining is necessary and can guide the data collection and/or augmentation process for training.
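Target registration error, the evaluation metric used above, is the mean distance between corresponding landmarks after registration; a generic implementation (landmark coordinates in the test are hypothetical):

```python
import math

def target_registration_error(points_true, points_mapped):
    """Mean Euclidean distance between true and registered landmark positions."""
    dists = [math.dist(a, b) for a, b in zip(points_true, points_mapped)]
    return sum(dists) / len(dists)
```

Plotting this error against the train/test mismatch in noise or resolution statistics is exactly the characterization the study performs.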
TOPICS: Image segmentation, 3D image processing, Magnetic resonance imaging, Convolutional neural networks, Magnetism, 3D acquisition, Computer programming, 3D modeling, Process modeling, Data modeling
High-resolution magnetic resonance imaging with fat suppression can obtain accurate anatomical information of all 35 lower limb muscles, and individual segmentation can facilitate quantitative analysis. However, due to limited contrast and edge information, automatic segmentation of the muscles is very challenging, especially for athletes, whose muscles are well developed and more compact than those of the average population. Deep convolutional neural network (DCNN)-based segmentation methods have shown great promise in many clinical applications; however, a direct adoption of DCNNs for lower limb muscle segmentation is challenged by the large three-dimensional (3-D) image size and the lack of direct usage of muscle location information. We developed a cascaded 3-D DCNN model with a first step to localize each muscle using low-resolution images and a second step to segment it using cropped high-resolution images with individually trained networks. The workflow was optimized to account for the different characteristics of each muscle for improved accuracy and reduced training and testing time. A testing augmentation technique was proposed to smooth the segmentation contours. The segmentation performance for 14 muscles was within interobserver variability, while that for the remaining 21 was slightly worse than human performance.
A heuristic-based, multineural network (MNN) image analysis as a solution to the problematic diagnosis of hydatidiform mole (HM) is presented. HM presents as tumors in placental cell structures, many of which exhibit premalignant phenotypes (choriocarcinoma and other conditions). HM is commonly found in women under age 17 or over 35 and can be partial (PHM) or complete (CHM). Appropriate treatment is determined by correct categorization into PHM or CHM, a difficult task even for expert pathologists. Image analysis combined with pattern recognition techniques has been applied to the problem, based on 15 or 17 image features. The use of limited data for the training and validation sets was optimized using a k-fold validation technique, allowing performance measurement of different MNN configurations. The MNN technique performed better than human experts at the categorization for both the 15- and 17-feature data, promising greater diagnostic consistency and further improvements as larger datasets become available.
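The k-fold strategy used to stretch limited training/validation data can be sketched as a deterministic index partition (generic, not the authors' code):

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle n sample indices deterministically, then deal them into k folds.
    Each fold serves once as the validation set while the rest train the model."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]
```

Averaging the validation score across the k folds gives a performance estimate stable enough to compare MNN configurations on a small dataset.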
We have applied the Radon cumulative distribution transform (RCDT) as an image transformation to highlight the subtle differences between left and right mammograms in order to detect mammographically occult (MO) cancer in women with dense breasts and negative screening mammograms. We developed deep convolutional neural networks (CNNs) as classifiers for estimating the probability of having MO cancer. We acquired screening mammograms of 333 women (97 with unilateral MO cancer) with dense breasts and at least two consecutive mammograms and used the immediate prior mammograms, which radiologists interpreted as negative. We used fivefold cross validation to divide our dataset into training and independent test sets with a ratio of 0.8:0.2. We set aside 10% of the training set as a validation set. We applied the RCDT to the left and right mammograms of each view and applied the inverse Radon transform to represent the resulting RCDT images in the image domain. We then fine-tuned a VGG16 network pretrained on ImageNet using the resulting images for each view. The CNNs achieved mean areas under the receiver operating characteristic curve (AUC) of 0.73 (standard error, SE = 0.024) and 0.73 (SE = 0.04) for the craniocaudal and mediolateral oblique views, respectively. We combined the scores from the two CNNs by training a logistic regression classifier, which achieved a mean AUC of 0.81 (SE = 0.032). In conclusion, we showed that inverse Radon-transformed RCDT images contain information useful for detecting MO cancers and that deep CNNs can learn such information.
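The cumulative distribution transform at the heart of the RCDT can be illustrated in 1-D: each Radon projection is treated as a density and encoded by where its cumulative distribution crosses fixed quantiles. This is a heavily simplified sketch of that idea, not a faithful RCDT implementation:

```python
def cdf(density):
    """Running cumulative sum of a nonnegative 1-D signal, normalized to 1."""
    total = sum(density)
    acc, out = 0.0, []
    for v in density:
        acc += v / total
        out.append(acc)
    return out

def cdt_1d(signal, n_quantiles=8):
    """Toy 1-D cumulative distribution transform: the sample positions where
    the signal's CDF first reaches each of n equispaced quantile levels."""
    F = cdf(signal)
    qs = [(i + 0.5) / n_quantiles for i in range(n_quantiles)]
    out, j = [], 0
    for q in qs:
        while F[j] < q:
            j += 1
        out.append(j)
    return out
```

Because the transform tracks where mass sits rather than how much sits at each pixel, small left-right asymmetries in tissue distribution become large, smooth displacements in transform space, which is what makes the subtle differences learnable.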
Polyp classification is a feature selection and clustering process. Picking the most effective features from multiple polyp descriptors without redundant information is a great challenge in this procedure. We propose a multilayer feature selection method to construct an optimized descriptor for polyp classification with a feature-grouping strategy in a hierarchical framework. First, the proposed method makes use of image metrics, such as intensity, gradient, and curvature, to divide their corresponding polyp descriptors into several feature groups, which are the preliminary units of this method. Each preliminary unit then generates two ranked outputs, i.e., its optimized variable group (OVG) and a preliminary classification measurement. Next, a feature dividing-merging (FDM) algorithm is designed to perform the feature merging operation hierarchically and iteratively. Unlike traditional feature selection methods, the proposed FDM algorithm includes two steps: feature dividing and feature merging. At each layer, feature dividing selects the OVG with the highest area under the receiver operating characteristic curve (AUC) as the baseline, while the other descriptors are treated as its complements. In the fusion step, the FDM algorithm iteratively merges variables with gains into the baseline from the complementary descriptors on every layer until the final descriptor is obtained. This model (including the forward step algorithm and the FDM algorithm) is a greedy method that guarantees clustering monotonicity of all OVGs from the bottom to the top layer. In our experiments, the selected results from each layer are reported by both graphical illustration and data analysis. Performance of the proposed method is compared to that of five existing classification methods on a polyp database of 63 samples with pathological reports. The experimental results show that our proposed method outperforms the other methods by 4% to 23% in terms of AUC scores.
Image-Guided Procedures, Robotic Interventions, and Modeling
We present an anthropomorphically accurate left ventricular (LV) phantom derived from human computed tomography (CT) data to serve as the ground truth for the optimization and the spatial resolution quantification of a CT-derived regional strain metric (SQUEEZ) for the detection of regional wall motion abnormalities. Displacements were applied to the mesh points of a clinically derived end-diastolic LV mesh to create analytical end-systolic poses with physiologically accurate endocardial strains. Normal function and regional dysfunction of four sizes [1, 2/3, 1/2, and 1/3 American Heart Association (AHA) segments as core diameter], each exhibiting hypokinesia (70% reduction in strain) and subtle hypokinesia (40% reduction in strain), were simulated. Regional shortening (RSCT) estimates were obtained by registering the end-diastolic mesh to each simulated end-systolic mesh condition using a nonrigid registration algorithm. Ground-truth models of normal function and of hypokinesia were used to identify the optimal parameters in the registration algorithm and to measure the accuracy of detecting regional dysfunction of varying sizes and severities. For normal LV function, RSCT values in all 16 AHA segments were accurate to within ±5%. For cases with regional dysfunction, the errors in RSCT around the dysfunctional region increased with decreasing size of dysfunctional tissue.
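Regional shortening in the SQUEEZ framework is derived from the change in endocardial patch area between end diastole and end systole; a common formulation (our assumption of the exact convention, not quoted from this paper) is RS = sqrt(A_ES / A_ED) - 1:

```python
import math

def regional_shortening(area_ed, area_es):
    """Regional shortening from endocardial patch areas at end diastole (ED)
    and end systole (ES). Negative values indicate contraction; an assumed
    SQUEEZ-style convention: RS = sqrt(A_ES / A_ED) - 1."""
    return math.sqrt(area_es / area_ed) - 1.0
```

Under this convention, a 70% reduction in the magnitude of RS (hypokinesia in the phantom) corresponds to an end-systolic patch area much closer to its end-diastolic value.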
Major calcifications are of great concern when performing percutaneous coronary interventions because they inhibit proper stent deployment. We created comprehensive software to segment calcifications in intravascular optical coherence tomography (IVOCT) images and to calculate their impact using the stent-deployment calcification score, as reported by Fujino et al. We segmented the vascular lumen and calcifications using SegNet, a pretrained convolutional neural network, which was refined for our task. We cleaned segmentation results using conditional random field processing. We evaluated the method on manually annotated IVOCT volumes of interest (VOIs) without lesions and with calcifications, lipidous, or mixed lesions. The dataset included 48 VOIs taken from 34 clinical pullbacks, giving a total of 2640 in vivo images. Annotations were determined from consensus between two expert analysts. Keeping VOIs intact, we performed 10-fold cross-validation over all data. Following segmentation noise cleaning, we obtained sensitivities of 0.85 ± 0.04, 0.99 ± 0.01, and 0.97 ± 0.01 for the calcified, lumen, and other tissue classes, respectively. From segmented regions, we automatically determined calcification depth, angle, and thickness attributes. Bland–Altman analysis suggested strong correlation between manually and automatically obtained lumen and calcification attributes. Agreement between manually and automatically obtained stent-deployment calcification scores was good (four of five lesions gave exact agreement). Results are encouraging and suggest our classification approach could be applied clinically for assessment and treatment planning of coronary calcification lesions.
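Two of the calcification attributes mentioned (arc angle and thickness) reduce to simple geometry once the segmentation is available; the A-line occupancy and radii representation below is a hypothetical data structure for illustration, not the paper's:

```python
def calcification_arc_deg(occupied):
    """Calcification arc angle from an angular occupancy map: one 0/1 flag per
    A-line bin around the vessel, scaled to degrees."""
    return 360.0 * sum(occupied) / len(occupied)

def max_thickness_mm(inner_radii_mm, outer_radii_mm):
    """Maximum radial thickness of a segmented calcification across A-lines."""
    return max(o - i for i, o in zip(inner_radii_mm, outer_radii_mm))
```

These per-lesion attributes are then thresholded by the Fujino et al. scoring rule (not reproduced here) to obtain the stent-deployment calcification score.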
Biomedical Applications in Molecular, Structural, and Functional Imaging
We created and evaluated a processing method for dynamic computed tomography myocardial perfusion imaging (CT-MPI) of myocardial blood flow (MBF) that combines a modified simple linear iterative clustering (SLIC) algorithm with robust perfusion quantification, hence the name SLICR. SLICR adaptively segments the myocardium into nonuniform super-voxels with similar perfusion time attenuation curves (TACs). Within each super-voxel, an α-trimmed-median TAC was computed to robustly represent the super-voxel, and a robust physiological model (RPM) was implemented to semi-analytically estimate MBF. SLICR processing was compared with a voxel-wise MBF preprocessing approach that included a spatiotemporal bilateral filter (STBF) for noise reduction prior to perfusion quantification. Image data from a digital CT-MPI phantom and a porcine ischemia model were evaluated. SLICR was ∼50-fold faster than voxel-wise RPM and other model-based methods while retaining sufficient resolution to show clinically relevant features, such as a transmural perfusion gradient. SLICR showed markedly improved accuracy and precision compared with the other methods. At a simulated MBF of 100 mL/min/100 g and a tube current–time product of 100 mAs (50% of nominal), the MBF estimates were 101 ± 12, 94 ± 56, and 54 ± 24 mL/min/100 g for SLICR, the voxel-wise Johnson–Wilson model, and a singular-value-decomposition model-independent method with STBF, respectively. SLICR estimated MBF precisely and accurately (103 ± 23 mL/min/100 g) at 25% of the nominal dose, whereas the other methods produced larger errors. With the porcine model, the SLICR results were consistent with the induced ischemia. SLICR simultaneously accelerated and improved the quality of quantitative perfusion processing without compromising clinically relevant distributions of perfusion characteristics.
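The α-trimmed-median TAC can be sketched as follows; the function name and toy data are illustrative, assuming only that at each time point a fraction α of voxel values is trimmed from each tail before the median is taken.

```python
import numpy as np

def alpha_trimmed_median_tac(tacs, alpha=0.2):
    """tacs: (n_voxels, n_time) array of per-voxel time attenuation curves.
    Returns an (n_time,) robust representative TAC for the super-voxel."""
    n = tacs.shape[0]
    k = int(np.floor(alpha * n))                 # voxels trimmed from each tail
    sorted_vals = np.sort(tacs, axis=0)          # sort voxels at each time point
    trimmed = sorted_vals[k:n - k] if n - 2 * k > 0 else sorted_vals
    return np.median(trimmed, axis=0)

# Toy super-voxel: 50 voxels share a linear enhancement curve plus noise,
# and one voxel carries a large streak-artifact offset.
rng = np.random.default_rng(0)
tacs = np.tile(np.linspace(0, 100, 25), (50, 1)) + rng.normal(0, 2, (50, 25))
tacs[0] += 500
robust = alpha_trimmed_median_tac(tacs, alpha=0.1)
```

Trimming before the median makes the representative curve insensitive to isolated artifact voxels while preserving the shared enhancement shape.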
We present a method that leverages the high fidelity of computed tomography (CT) to quantify regional left ventricular function, using topography variation of the endocardium as a surrogate measure of strain. 4DCT images of 10 normal and 10 abnormal subjects, acquired with standard clinical protocols, are used. The topography of the endocardium is characterized by its regional values of fractal dimension (FD), computed using a box-counting algorithm developed in-house. The average FD in each of the 16 American Heart Association segments is calculated for each subject as a function of time over the cardiac cycle. The normal subjects show a peak systolic percentage change in FD of 5.9% ± 2% in all free-wall segments, whereas the abnormal cohort shows a change of 2% ± 1.2% (p < 0.00001). Septal segments, being smooth, do not undergo large changes in FD. Additionally, a principal component analysis is performed on the temporal profiles of FD to highlight the possibility of unsupervised classification of normal and abnormal function. The method is free from manual contouring and does not require any feature tracking or registration algorithms. The FD values in the free-wall segments correlated well with radial strain and with endocardial regional shortening measurements.
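A minimal 2-D box-counting estimate of FD is sketched below for illustration; the paper's in-house algorithm operates on the 3-D endocardial surface, so this stand-in only shows the counting-and-slope idea.

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a 2-D binary mask by box counting:
    slope of log(box count) versus log(1/box size)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # Tile the mask with s-by-s boxes and count boxes touching foreground.
        grid = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(grid.any(axis=(1, 3)).sum())
    slope = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]
    return slope

# Sanity check: a filled square is a 2-D object, so its FD should be ~2.
filled = np.ones((64, 64), dtype=bool)
fd = box_count_dimension(filled)
```

A rough, trabeculated endocardial region would yield a higher FD than a smooth septal region, which is the contrast the segment-wise analysis exploits.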
Isocitrate dehydrogenase (IDH) mutation status is an important marker in glioma diagnosis and therapy. We propose an automated pipeline for noninvasively predicting IDH status using deep learning and T2-weighted (T2w) magnetic resonance (MR) images with minimal preprocessing (N4 bias correction and normalization to zero mean and unit variance). T2w MR images and genomic data were obtained from The Cancer Imaging Archive dataset for 260 subjects (120 high-grade and 140 low-grade gliomas). A fully automated two-dimensional densely connected model was trained on 208 subjects to classify IDH mutation status and tested on a held-out set of 52 subjects, using fivefold cross-validation. Data leakage was avoided by enforcing subject-level separation during the slice-wise randomization. A mean classification accuracy of 90.5% was achieved per axial slice in predicting three classes: no tumor, IDH mutated, and IDH wild type. A test accuracy of 83.8% was achieved in predicting IDH mutation status for individual subjects on the 52-subject test dataset. We demonstrate a deep learning method that predicts IDH mutation status using T2w MRI alone. Radiologic imaging studies using deep learning methods must address data leakage (subject duplication across training and test sets) in the randomization process to avoid upward bias in the reported classification accuracy.
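The subject-level separation that prevents leakage can be sketched as follows; the helper name and split fraction are hypothetical, assuming only that slices are assigned to train or test by subject ID rather than individually.

```python
import numpy as np

def subject_wise_split(subject_of_slice, test_fraction=0.2, seed=0):
    """Split slice indices so no subject contributes to both train and test:
    the randomization is over subject IDs, not over individual slices."""
    subjects = np.unique(subject_of_slice)
    rng = np.random.default_rng(seed)
    rng.shuffle(subjects)
    n_test = max(1, int(round(test_fraction * len(subjects))))
    test_subjects = set(subjects[:n_test].tolist())
    test_idx = [i for i, s in enumerate(subject_of_slice) if s in test_subjects]
    train_idx = [i for i, s in enumerate(subject_of_slice) if s not in test_subjects]
    return train_idx, test_idx

# 10 subjects with 5 axial slices each.
subject_of_slice = np.repeat(np.arange(10), 5)
train_idx, test_idx = subject_wise_split(subject_of_slice, test_fraction=0.2)
overlap = set(subject_of_slice[train_idx]) & set(subject_of_slice[test_idx])
```

Shuffling slices directly instead would scatter near-duplicate adjacent slices of one subject across both sets, which is exactly the upward bias the abstract warns against.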
Paravertebral and intercostal nerve blocks have experienced a resurgence in popularity. Ultrasound has become the gold standard for visualization of the needle during injection of the analgesic, but the intercostal artery and vein can be difficult to visualize. We investigated the use of spectral analysis of raw radiofrequency (RF) ultrasound signals for identification of the intercostal vessels and six other tissue types in the intercostal and paravertebral spaces. Features derived from the one-dimensional spectrum, two-dimensional spectrum, and cepstrum were used to train four different machine learning algorithms. In addition, the use of the average normalized spectrum as the feature set was compared with the derived feature set. Compared to a support vector machine (SVM) (74.2%), an artificial neural network (ANN) (68.2%), and multinomial analysis (64.1%), a random forest (84.9%) resulted in the most accurate classification. The accuracy using a random forest trained with the first 15 principal components of the average normalized spectrum was 87.0%. These results demonstrate that using a machine learning algorithm with spectral analysis of raw RF ultrasound signals has the potential to provide tissue characterization in intercostal and paravertebral ultrasound.
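The principal-component feature step can be sketched as follows, assuming an SVD-based PCA over average normalized spectra; the function name and toy data are illustrative, and the downstream random forest is omitted.

```python
import numpy as np

def pca_features(spectra, n_components=15):
    """spectra: (n_samples, n_freqs) average normalized spectra per ROI.
    Returns the scores on the first n_components principal components."""
    centered = spectra - spectra.mean(axis=0)
    # Rows of vt are the principal directions, ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Toy data standing in for per-ROI spectra from the RF ultrasound signals.
rng = np.random.default_rng(1)
spectra = rng.normal(size=(40, 128))
scores = pca_features(spectra, n_components=15)
```

The 15-dimensional scores would then be the feature vectors fed to the random forest classifier, replacing the larger set of hand-derived spectral and cepstral features.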
Tomographic image reconstruction requires precise geometric measurement and calibration of the scanning system to yield optimal images. The isocenter offset is an important geometric parameter that directly governs the spatial resolution of reconstructed images. Because of system imperfections such as mechanical misalignment, an accurate isocenter offset is difficult to achieve. Common calibration procedures used for isocenter offset tuning, such as a pin scan, cannot reach subpixel precision and are also inevitably hampered by system imperfections. We propose a purely data-driven method, based on the Fourier shift theorem, to indirectly yet precisely estimate the isocenter offset at the subpixel level. The solution is obtained by applying a generalized M-estimator, a robust regression algorithm, to an arbitrary sinogram from an axial scanning geometry. Numerical experiments were conducted on both simulated phantom data and measured data from a tungsten wire. The results show that the proposed method estimates and tunes the isocenter offset with high accuracy, which in turn significantly improves the quality of the final images, particularly their spatial resolution.
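The core Fourier shift idea can be sketched on a 1-D pair of profiles: a spatial shift appears as a linear phase ramp in the cross-spectrum, and the slope of that ramp yields the shift at subpixel precision. The paper fits such relations robustly with a generalized M-estimator over an axial sinogram; the ordinary least-squares fit and all parameters below are illustrative stand-ins.

```python
import numpy as np

def estimate_shift(p, q):
    """Estimate d such that q(s) ~= p(s - d), to subpixel precision, from the
    linear phase ramp exp(-2*pi*i*f*d) in the cross-spectrum of p and q."""
    F, G = np.fft.rfft(p), np.fft.rfft(q)
    cross = G * np.conj(F)                    # phase is -2*pi*f*d
    freqs = np.fft.rfftfreq(len(p))
    phase = np.unwrap(np.angle(cross))
    keep = freqs < 0.25                       # trust only well-sampled low freqs
    slope = np.polyfit(freqs[keep], phase[keep], 1)[0]
    return -slope / (2 * np.pi)

# Demo: shift a narrow Gaussian profile by a known subpixel amount in Fourier
# space, then recover the shift from the phase ramp.
n = 256
s = np.arange(n)
p = np.exp(-0.5 * ((s - 100) / 2.0) ** 2)
d_true = 3.37
ramp = np.exp(-2j * np.pi * np.fft.fftfreq(n) * d_true)
q = np.real(np.fft.ifft(np.fft.fft(p) * ramp))
d_est = estimate_shift(p, q)
```

In the sinogram setting, the profiles being compared are opposing views related by the detector's isocenter offset, and the robust regression replaces the plain least-squares fit to tolerate outlier channels.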
Human epidermal growth factor receptor 2 (HER2), a transmembrane tyrosine kinase receptor encoded by the ERBB2 gene on chromosome 17q12, is a predictive and prognostic biomarker in invasive breast cancer (BC). Approximately 20% of BCs are HER2-positive as a result of ERBB2 gene amplification and overexpression of the HER2 protein. Quantification of HER2, which conventionally requires manual counting of gene signals, is performed routinely on all invasive BCs to assist clinical decision making on prognosis and treatment for HER2-positive patients. We propose an automated system to quantify HER2 gene status from chromogenic in situ hybridization (CISH) whole-slide images (WSIs) of invasive BC. The proposed method selects untruncated, nonoverlapping single nuclei from the cancer regions using color unmixing and machine learning techniques. HER2 and chromosome enumeration probe 17 (CEP17) signals are then detected based on RGB intensity and counted per nucleus. Finally, the HER2-to-CEP17 signal ratio is calculated to determine the HER2 amplification status following the ASCO/CAP 2018 guidelines. The proposed method reduces the labor and time required for quantification. In our experiments, the correlation coefficient between the proposed automatic CISH quantification and manual enumeration by a pathologist was 0.98, and p-values greater than 0.05 from a one-sided paired t-test indicated no statistically significant difference between the proposed and reference methods. The method was validated on WSIs scanned by two different scanners. These experiments demonstrate the capability of the proposed system.
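The final ratio-based decision can be sketched as below; the thresholds paraphrase the published ASCO/CAP 2018 dual-probe groups (Group 1 positive, Groups 2 to 4 needing additional workup, Group 5 negative), and the per-nucleus counts are hypothetical, not drawn from the paper.

```python
def her2_status(ratio, mean_her2):
    """Simplified ASCO/CAP 2018 dual-probe ISH rule from the HER2/CEP17
    ratio and mean HER2 copies per nucleus (workup groups collapsed)."""
    if ratio >= 2.0 and mean_her2 >= 4.0:
        return "positive"              # Group 1
    if ratio < 2.0 and mean_her2 < 4.0:
        return "negative"              # Group 5
    return "additional workup"         # Groups 2-4

# Hypothetical per-nucleus signal counts from the detection step.
her2_counts = [5, 6, 7, 8]
cep17_counts = [2, 2, 2, 2]
ratio = sum(her2_counts) / sum(cep17_counts)      # HER2-to-CEP17 ratio
mean_her2 = sum(her2_counts) / len(her2_counts)   # mean HER2 copies/nucleus
status = her2_status(ratio, mean_her2)
```

In clinical practice the workup groups also trigger concurrent IHC review, which this sketch omits.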
Digital screening and diagnosis from cytology slides can be aided by capturing multiple focal planes. However, with conventional methods, the large file sizes of high-resolution whole-slide images grow linearly with the number of focal planes acquired, creating significant data storage and bandwidth requirements for cytology virtual slides. We investigated whether a sequence of focal planes contains sufficient redundancy to compress virtual slides efficiently across focal planes by applying a commonly available video compression standard, high-efficiency video coding (HEVC). Using an adaptive algorithm that applied compression to achieve a target image quality, we found that the compression ratio of HEVC exceeded that of JPEG and JPEG2000 compression while maintaining a comparable level of image quality. These results suggest an alternative method for the efficient storage and transfer of whole-slide images containing multiple focal planes, expanding the utility of this rapidly evolving imaging technology into cytology.
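The adaptive target-quality idea can be sketched as a search over a codec quality knob; the toy quality model below stands in for actual HEVC encode/decode calls, which the paper performs with a real codec, and the function names are hypothetical.

```python
def adaptive_qp(quality_at_qp, target_quality, qp_min=0, qp_max=51):
    """Binary-search the largest integer QP (coarsest quantization, hence
    smallest file) whose decoded quality still meets the target."""
    best = qp_min
    lo, hi = qp_min, qp_max
    while lo <= hi:
        mid = (lo + hi) // 2
        if quality_at_qp(mid) >= target_quality:
            best = mid          # acceptable quality; try a coarser QP
            lo = mid + 1
        else:
            hi = mid - 1        # too lossy; back off to a finer QP
    return best

# Toy monotone quality model: decoded PSNR falls as QP rises.
toy_psnr = lambda qp: 60.0 - 0.5 * qp
qp = adaptive_qp(toy_psnr, target_quality=40.0)
```

In the real pipeline, `quality_at_qp` would encode the stack of focal planes as a video sequence, decode it, and measure image quality against the original, so each probe of the search is one encode/decode cycle.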