This PDF file contains the front matter associated with SPIE Proceedings Volume 12930, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
3D printing in Health Care Facilities (HCFs) has evolved from a set of experimental techniques and situational engineering applications employed at leading academic institutions to a relatively mature but expanding field with well-defined workflows and recognition by major medical societies. This project introduces the term ‘Final Anatomic Representation’ to refer to the final surface mesh files used in patient care. It also introduces the term ‘Patient Specific Realization’ to characterize how the Final Anatomic Representation is used, for example in the creation of a 3D PDF, a virtual reality display with shared experiences, augmented reality including procedure simulation, or 3D printed parts. This project focuses on 3D printing in HCFs and covers a wide scope of use cases with literature support. Many intended uses have progressed to guideline support for appropriateness; these are organized by patient presentation or clinical scenario. One benefit of using clinical scenarios is that direct feedback can be translated from the engineering of 3D printed parts to the data generated from those parts for use in the medical value equation. Continuing that feedback loop, established value then supports guidelines for patient care such as clinical appropriateness, and those guidelines can be applied to realize the added value for future patients who present with the same clinical scenarios.
The identification of pathologies in CT Angiography (CTA) is a laborious process. Even with the use of advanced post-processing techniques such as Maximum Intensity Projection (MIP) and Volume Rendering (VR), the analysis of the head and neck vasculature is a challenging task due to interference from the surrounding osseous anatomy. To address these issues, we introduce an Artificial Intelligence (AI) reconstruction system, supported by a 3D convolutional neural network trained to automate CTA reconstruction in healthcare settings. In this study, we demonstrate a Deep Learning (DL) solution for automatically segmenting skeletal structures, calcified plaque, and arterial vessels within CTA images. The DL segmentation models perform accurately across different anatomies, scans, and reconstruction settings and allow superior visualization of vascular anatomy and pathology compared to conventional techniques. These models have shown remarkable performance with a mean Dice score of 0.985 for bone structures. This high score, attained on an independent validation dataset kept separate during training, reflects the model's strength and potential for reliable application in real-world settings.
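The mean Dice score reported above is the standard overlap metric for segmentation quality. A minimal sketch of how it is computed on binary masks (toy arrays, not the study's data):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / denom

# Toy example: two overlapping 3D "bone" masks
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True  # 8 voxels
b = np.zeros((4, 4, 4), dtype=bool); b[1:3, 1:3, 2:4] = True  # 8 voxels, 4 overlap
score = dice_score(a, b)  # 2*4 / (8+8) = 0.5
```

A score of 0.985, as reported, means the predicted and reference bone masks overlap almost voxel for voxel.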
The recently introduced Photon Counting CT (PCCT) offers major advances in spatial resolution and material discrimination compared to conventional multi-detector CT. We investigate whether these new capabilities may enable accurate in vivo quantification of the trabecular microstructure of human bone. Human femoral bone was imaged using reference HR-pQCT (isotropic 60 μm voxels) and PCCT operated in a High Resolution mode (HR, 80 μm in-plane voxel size, 200 μm slice thickness) and in a Calcium-selective mode (CA, isotropic 390 μm voxels). 468 spherical Regions-of-Interest (ROIs) of 5 mm diameter were placed at corresponding locations in the HR-pQCT and PCCT volumes. The bone voxels of HR-pQCT and CA PCCT ROIs were segmented (binarized) using global Otsu thresholding; local Bernsen segmentation was used for HR PCCT. Trabecular thickness (TbTh), spacing (TbSp), number (TbN), and bone volume fraction (BV/TV) were measured in the binarized ROIs. The performance of PCCT morphometrics was evaluated in terms of correlation coefficient and numerical agreement with HR-pQCT. For ROIs with mean TbTh ≥ 250 μm (approaching the nominal resolution of HR PCCT), the average trabecular measurements obtained from HR PCCT achieved excellent correlations with the reference HR-pQCT: 0.88 for BV/TV, 0.89 for TbTh, 0.81 for TbSp, and 0.78 for TbN. For ROIs with mean TbTh of 200–250 μm, the correlations were slightly worse, ranging from 0.61 for TbTh to 0.84 for BV/TV. The spatial resolution of CA PCCT in its current implementation is insufficient for microarchitectural measurements, but the material discrimination capability appears to enable accurate estimation of BV/TV (correlation of 0.89 with HR-pQCT). The results suggest that the introduction of PCCT may enable microstructural evaluation of the trabecular bone of the lumbar spine and hip, which are inaccessible to current in vivo high-resolution bone imaging technologies.
The findings of this work will inform the development of clinical indications for PCCT trabecular bone assessment.
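Bone volume fraction (BV/TV) is the simplest of the morphometrics listed above: bone voxels divided by total voxels inside the ROI. A minimal sketch on a synthetic spherical ROI (illustrative only; the study's Otsu/Bernsen segmentation and ROI placement are more involved):

```python
import numpy as np

def bv_tv(binary_bone: np.ndarray, roi_mask: np.ndarray) -> float:
    """Bone volume fraction: bone voxels / total voxels inside the ROI."""
    roi = roi_mask.astype(bool)
    bone = np.logical_and(binary_bone.astype(bool), roi)
    return bone.sum() / roi.sum()

# Spherical ROI of radius 10 voxels in a 32^3 volume
zz, yy, xx = np.mgrid[:32, :32, :32]
roi = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 10 ** 2

# Synthetic "trabeculae": every third plane of voxels marked as bone
bone = np.zeros((32, 32, 32), dtype=bool)
bone[::3, :, :] = True

fraction = bv_tv(bone, roi)  # roughly one third of the ROI volume
```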
We present a method to generate random synthetic trabecular bone microstructures sufficiently diverse for modeling a dataset of human femur bones. We further demonstrate that using a random forest regressor, we can also generate synthetic bones with prespecified microstructure metric values. This tunability allows the user to generate synthetic datasets with arbitrary distributions of microstructure metrics that can be useful for modeling trabecular bone in other anatomical sites or disease states. Virtual imaging studies can be applied to simulate high resolution CT image data and used for developing new texture-based models for the evaluation of bone health.
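One way to realize the tunability described above, as a hedged sketch: learn a forward map from generator parameters to a microstructure metric with a random forest, then search candidate parameter sets for one whose predicted metric matches a prespecified target. The two-parameter generator and the stand-in metric relation below are hypothetical, not the paper's actual generator:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical generator parameters (e.g. seed density, smoothing) and the
# microstructure metric (e.g. BV/TV) measured on each generated volume.
params = rng.uniform(0.0, 1.0, size=(500, 2))
metric = 0.1 + 0.5 * params[:, 0] + 0.2 * params[:, 1] ** 2  # stand-in relation

# Learn params -> metric, then invert by searching a dense candidate set
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(params, metric)

target = 0.35  # prespecified metric value we want the synthetic bone to have
candidates = rng.uniform(0.0, 1.0, size=(5000, 2))
predicted = model.predict(candidates)
best = candidates[np.argmin(np.abs(predicted - target))]

# "Generate" with the selected parameters; here we just evaluate the true relation
achieved = 0.1 + 0.5 * best[0] + 0.2 * best[1] ** 2
```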
Micro-CT imaging enables noninvasive and longitudinal assessment of mouse lung pathology in genetically engineered lung cancer models, which is crucial for evaluating the effectiveness of potential therapeutics. However, manual lung analysis is time-consuming, and an automated workflow is needed. We present a strategy to optimize a deep learning-based workflow for lung tumor analysis using limited annotations. A 2D UNet model (M1) was trained for chest cavity segmentation using an existing dataset with lung, heart, and vasculature segmentations from wild-type mice (n = 10) and chest cavity segmentations of mice with lung tumors (n = 5). M1 then generated chest cavity segmentations for 20 additional tumor-burdened mice. Next, non-rigid registration aligned wild-type segmentations with tumor-burdened lung scans (n = 25) using the chest cavity mask predicted by M1. Subsequently, M1 was fine-tuned, and a heart segmentation model (M2) was trained with 10 wild-type and 25 tumor-burdened lung scans. Heart segmentation was then subtracted from the chest segmentation, and a threshold-based algorithm (-1000 to -300 a.u.) was applied to reveal functional lung volume. Finally, tumor segmentation was estimated by subtracting functional lung and heart volumes from chest cavity volume in a cohort of tumor-burdened mice. The resulting workflow provides “chest”, “heart”, “functional lung” and “tumor plus vasculature” segmentations for quantification and visualization. The models generate segmentations in approximately 13 seconds per mouse, with high accuracy (Dice ratios: 0.96 for chest cavity, 0.90 for heart). This workflow enables longitudinal monitoring of tumor progression, supporting applications in oncology drug discovery.
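The threshold-based step described above (subtract the heart from the chest cavity, then window the intensities to keep aerated lung) can be sketched as follows; the toy volume and intensity values are illustrative, not the study's data:

```python
import numpy as np

def functional_lung_mask(volume, chest_mask, heart_mask, lo=-1000, hi=-300):
    """Voxels inside the chest cavity but outside the heart whose intensity
    falls in the aerated-lung window [lo, hi]."""
    region = np.logical_and(chest_mask, np.logical_not(heart_mask))
    window = np.logical_and(volume >= lo, volume <= hi)
    return np.logical_and(region, window)

# Toy volume: mostly air-like lung (-900) with one dense block (40)
vol = np.full((10, 10, 10), -900.0)
vol[4:6, 4:6, 4:6] = 40.0  # dense "tumor plus vasculature" region
chest = np.ones_like(vol, dtype=bool)
heart = np.zeros_like(vol, dtype=bool)

lung = functional_lung_mask(vol, chest, heart)
# Remainder of the cavity is the estimated "tumor plus vasculature" class
tumor_plus = np.logical_and(chest, np.logical_not(np.logical_or(lung, heart)))
```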
Non-contrast, cardiac CT Calcium Score (CTCS) images provide a low-cost cardiovascular disease screening exam to guide therapeutics. We are extending the standard Agatston score to include cardiovascular risk assessments from features of epicardial adipose tissue, pericoronary adipose tissue, heart size, and more, which are currently extracted from Coronary CT Angiography (CCTA) images. To aid such determinations, we developed a deep-learning method to synthesize Virtual CT Angiography (VCTA) images from CTCS images. We retrospectively collected 256 patients who underwent CCTA and CTCS from our hospitals (MacKay and UH). Training on 205 patients from UH, we used the contrastive, unpaired translation method to create VCTA images. Testing on 51 patients from MacKay, we generated VCTA images that compared favorably to the matched CCTA images, with enhanced coronaries and a ventricular cavity well delineated from surrounding tissues (epicardial adipose tissue and myocardium). The automated segmentation of myocardium and left-ventricle cavity in VCTA showed strong agreement with the measurements obtained from CCTA. The measured percent volume differences between VCTA and CCTA segmentations were 2±8% for the myocardium and 5±10% for the left-ventricle cavity. Manually segmented coronary arteries from VCTA and CTCS (with guidance from registered CCTA) aligned well. Centerline displacements were within 50% of coronary artery diameter (4 mm). Pericoronary adipose tissue measurements using the axial disk method showed excellent agreement between measurements from VCTA ROIs and manual segmentations (e.g., average HU differences were typically <3 HU). These promising results suggest that VCTA can be used to add assessments indicative of cardiovascular risk from CTCS images.
Lymphatics are crucial in maintaining cardiovascular health and facilitating immune surveillance, yet their significance is often overlooked in medical practice. One notable consequence of cancer treatment, affecting a growing population of survivors, is cancer-acquired lymphedema (LE), a prevalent and incurable condition diagnosed by increased tissue volumes. A recent diagnostic finding, using Near-Infrared Fluorescence (NIRF) lymphatic imaging, indicates that dermal backflow, the retrograde flow of lymphatic fluid from collecting lymphatic vessels into the lymphatic capillaries, is predictive of LE. Dermal backflow contributes to the development of irreversible tissue changes associated with LE, including tissue swelling, accumulation of subcutaneous adipose tissue, and ultimately fibrosis. Evidence suggests that early intervention, prior to tissue swelling, may ameliorate LE; unfortunately, diagnostic methods for detecting early lymphatic dysfunction and monitoring the effectiveness of early interventions on the onset of LE are limited. In this work, we build a dedicated, quantitative NIRF lymphatic imaging system to assess dermal backflow and the impact of early physiotherapy on the progression of LE in head and neck cancer survivors. This system integrates NIRF and RGB-D stereo camera hardware and software for image acquisition, 3D rendering, stereo calibration, registration, and visualization of dermal backflow. Additionally, we develop software solutions to automate the segmentation and quantification of lymphatic dysfunction over complex 3D surface profiles. Our preliminary results demonstrate the accurate reconstruction of 3D models with a NIRF texture overlay using data from our NIRF and RGB-D stereo camera device. Furthermore, dermal backflow segmentation was automated in 2D NIRF images and 3D reconstructions of clinically relevant surfaces and then incorporated into the process of dermal backflow quantification.
Generative models have been used to combat medical image scarcity by generating new samples to increase the size of the training data. The hypothesis of this paper is that Denoising Diffusion Probabilistic Models (DDPMs) have the ability to synthesize high-quality flow images and potentially obviate the need to rely on time-consuming CFD simulations. As a first step, in this paper we concentrate on data generation at only the peak flow rate of the simulation and propose a DDPM that, with high accuracy, can generate velocity fields for many unseen flow rates in a fixed in vitro phantom geometry with rigid walls modeling a vascular stenosis.
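For context, the forward (noising) process that a DDPM learns to invert has the closed form x_t = √(ᾱ_t)·x_0 + √(1−ᾱ_t)·ε. A minimal numpy sketch using the linear β schedule from the original DDPM formulation (the "velocity field" here is a random stand-in, not the paper's CFD data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule and cumulative alpha-bar (standard DDPM choices)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, noise):
    """Forward diffusion: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

# A stand-in 2D "velocity field" slice
x0 = rng.standard_normal((16, 16))
eps = rng.standard_normal((16, 16))

x_early = q_sample(x0, 10, eps)    # mostly signal at small t
x_late = q_sample(x0, T - 1, eps)  # almost pure noise at t = T-1
```

Training the reverse (denoising) network conditioned on flow rate is the substantial part the paper addresses; the forward process above is fixed and analytic.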
High-Speed-Angiography (HSA) 1000 fps imaging was successfully used previously to visualize contrast media/blood flow in neurovascular anatomies. In this work we explore its usage in cardiovascular anatomies in a swine animal model. A 5 French catheter was guided into the right coronary artery of a swine, followed by the injection of iodine contrast through a computer-controlled injector at a controlled rate of 40 ml/min. The injection process was captured using high-speed angiography at a rate of 1000 fps. The noise in the images was reduced using a custom-built machine-learning model consisting of long short-term memory networks. From the noise-reduced images, velocity profiles of contrast/blood flow through the artery were calculated using the Horn-Schunck optical flow (OF) method. From the high-speed coronary angiography (HSCA) images, the bolus of contrast could be visually tracked with ease as it traversed from the catheter tip through the artery. The imaging technique's high temporal resolution effectively minimized motion artifacts resulting from the heart's activity. The OF results of the contrast injection show velocities in the artery ranging from 20 to 40 cm/s. The results demonstrate the potential of 1000 fps HSCA in cardiovascular imaging. The combined high spatial and temporal resolution offered by this technique allows for the derivation of velocity profiles throughout the artery's structure, including regions distal and proximal to stenoses. This information can potentially be used to determine the need for stenosis treatment. Further investigations are warranted to expand our understanding of the applications of HSCA in cardiovascular research and clinical practice.
Epicardial Adipose Tissue (EAT) volume has been associated with risk of cardiovascular events, but manual annotation is time-consuming and only performed on gated Computed Tomography (CT). We developed a Deep Learning (DL) model to segment EAT from gated and ungated CT, then evaluated the association between EAT volume and death or Myocardial Infarction (MI). We included 7712 patients from three sites, two with ungated CT and one using gated CT. Of those, 500 patients from one site with ungated CT were used for model training and validation and 3,701 patients from the remaining two sites were used for external testing. The threshold for abnormal EAT volume (≥144 mL) was derived in the internal population based on Youden's index. DL EAT measurements were obtained in <2 seconds compared to approximately 15 minutes for expert annotations. Spearman correlation between DL and expert reader was excellent on an external subset of N=100 gated (0.94, p<0.001) and N=100 ungated (0.91, p<0.001) studies. During a median follow-up of 3.1 years (IQR 2.1–4.0), 306 (8.3%) patients experienced death or MI in the external testing populations. Elevated EAT volume was associated with an increased risk of death or MI for gated (hazard ratio [HR] 1.72, 95% CI 1.11-2.67) and ungated CT (HR 1.57, 95% CI 1.20-2.07), with no significant difference in risk (interaction p-value 0.692). EAT volume measurements provide similar risk stratification from gated and ungated CT. These measurements could be obtained on chest CT performed for a large variety of indications, potentially improving risk stratification.
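Youden's index, used above to derive the abnormal-EAT threshold, picks the cutoff maximizing sensitivity + specificity − 1. A small sketch on made-up volumes and outcomes (not the study's cohort):

```python
import numpy as np

def youden_threshold(values, events):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1,
    scanning each observed value as a candidate cutoff (>= means 'positive')."""
    values = np.asarray(values, float)
    events = np.asarray(events, bool)
    best_j, best_t = -1.0, None
    for t in np.unique(values):
        pred = values >= t
        sens = np.logical_and(pred, events).sum() / events.sum()
        spec = np.logical_and(~pred, ~events).sum() / (~events).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Toy EAT volumes (mL): events cluster at the higher volumes
vols = np.array([80, 95, 110, 120, 130, 150, 160, 170, 190, 210], float)
evts = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1], bool)
thresh, j = youden_threshold(vols, evts)  # perfect separation at 160
```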
Coronary calcium Agatston score and Epicardial Adipose Tissue (EAT) volume, as measured from CT Calcium Score (CTCS) images, are known risk factors for Major Adverse Cardiovascular Events (MACE). Here, we present a greatly expanded analysis using Coronary Artery Calcification (CAC) features, which more thoroughly capture the pathophysiology of atherosclerosis, and EAT features, including HU thought to reflect inflammation, a harbinger of atherosclerosis. A MACE-enriched dataset (2316 patients, 13.6% MACE) was divided into balanced training/testing (70/30). We employed manually segmented CACs and automatically segmented EAT using DeepFat. Calcium-omics and fat-omics features were crafted to capture pathophysiology. Elastic-net was employed for feature reduction, and a Cox proportional hazards model was used to design a novel calcium-fat-omics model. Baseline Agatston and EAT volume models yielded two-year AUC training/testing results of (72.7%/68.2%) and (60.7%/55.6%), respectively. Our novel comprehensive analyses with some readily available clinical features gave improved results: calcium-omics (82.6%/72.2%), fat-omics (76.7%/71.7%), and calcium-fat-omics (83.7%/73.6%). In Kaplan-Meier survival analysis, the calcium-fat-omics model greatly improved risk stratification as compared to the standard Agatston model with five risk intervals, suggesting improvement for personalized medicine.
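The Kaplan-Meier analysis mentioned above rests on the product-limit estimator: each observed event time multiplies the running survival probability by (1 − events/at-risk). A minimal sketch on toy follow-up data (not the study's cohort):

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate at each distinct event time.
    `event` is 1 for an observed event (e.g. MACE), 0 for censoring."""
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    times = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in times:
        at_risk = (time >= t).sum()                  # still under observation
        d = ((time == t) & (event == 1)).sum()       # events at this time
        s *= 1.0 - d / at_risk
        surv.append(s)
    return times, np.array(surv)

# Toy follow-up: 6 patients; event=0 rows are censored observations
t = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 5.0])
e = np.array([1,   1,   0,   1,   0,   1])
times, surv = kaplan_meier(t, e)  # 5/6, 2/3, 4/9, 0
```

Stratifying patients into risk intervals and comparing their Kaplan-Meier curves is what "improved risk stratification" quantifies in the abstract.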
Predicting Major Adverse Cardiovascular Events (MACE) accurately is crucial for implementing personalized medicine interventions effectively. Recent research has highlighted the significance of thoracic fat deposits, specifically Epicardial Adipose Tissue (EAT) and Paracardial Adipose Tissue (PAT), in predicting MACE. Their proximity to the coronary arteries and potential role in stimulating inflammation and atherosclerosis development contribute to their predictive utility. In this study, we developed a MACE prediction model based on a Cox proportional hazards model with elastic net regularization, incorporating hand-crafted image features derived from EAT and PAT in non-contrast, CT Calcium Score (CTCS) exams. We constructed and collected morphological, intensity, and spatial features from manually corrected, deep learning-based adipose segmentation. To highlight the influence of imaging features, our preliminary study utilized a MACE-enriched cohort of 400 individuals (56% MACE) from a CLARIFY study of the University Hospitals of Cleveland. We divided the cohort into training (80%) and held-out testing (20%). We obtained c-index (training/testing) results of (0.66/0.69) for the EAT-omics and (0.64/0.67) for the PAT-omics models, both much better than the traditional EAT volume model (0.53/0.53). Notably, we identified high-risk features, including negative HU skewness in EAT, likely an indicator of fatty inflammation. Similar measurements in PAT did not show this association. The improved discrimination with EAT and its connection to inflammation markers is consistent with its direct vascular communication with the myocardium and coronary vasculature. As PAT is outside the pericardial sac, it does not have direct vascular communication. These promising preliminary findings suggest that an AI adipose analysis can be a useful add-on to improve MACE prediction from CTCS exams.
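The c-index reported above (Harrell's concordance index) is the fraction of usable patient pairs that the model orders correctly by predicted risk: 0.5 is chance, 1.0 is perfect. A small O(n²) sketch on toy data:

```python
import numpy as np

def c_index(risk, time, event):
    """Harrell's concordance index: among usable pairs, the fraction where
    the patient with the earlier observed event has the higher risk."""
    n = len(time)
    concordant, usable = 0.0, 0
    for i in range(n):
        for j in range(n):
            # pair is usable if i has an observed event before j's time
            if event[i] == 1 and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5  # ties count half
    return concordant / usable

time = np.array([2.0, 4.0, 6.0, 8.0])
event = np.array([1, 1, 1, 0])          # last patient censored
risk = np.array([0.9, 0.7, 0.4, 0.1])   # higher risk = earlier event
ci = c_index(risk, time, event)         # 1.0: perfectly concordant
```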
Whole Heart Segmentation (WHS) aims to extract individual heart substructures and is a critical step for the diagnosis, treatment planning, and evaluation of various cardiac diseases. In this study, we examine the potential of Deep Learning (DL) neural networks to segment eight heart substructures from Computed Tomography (CT) scans. The four heart chambers, myocardium, aorta, pulmonary artery, and left atrial appendage were manually annotated in 211 cardiac CTA exams and inspected by clinical experts. Those exams were used to train a multi-class 3D DL segmentation model. We investigated the impact of different network architectures, including UNet and its variants, CE-UNet and CE-A-UNet, with different input patch sizes and resolutions. A test dataset comprising 51 fully annotated exams was used to evaluate model performance. Our findings indicate that, compared to UNet, neither CE-UNet nor CE-A-UNet shows superior performance. Moreover, the model with a larger physical input patch size and coarser pixel resolution tends to achieve higher performance. The average Dice score across all substructures was 0.91, which exceeds the current state-of-the-art.
Image Processing, Detection, Segmentation, Registration, and Analysis
Patients with Diabetes Mellitus (DM) have an increased risk of Major Adverse Cardiac Events (MACE), commonly stratified via the Agatston score. In this study, we investigated Coronary Artery Calcification (CAC) patterns in patients with DM, to understand its impact on cardiovascular health. By individually segmenting CACs from over 25,000 Computed Tomography Calcium Score (CTCS) images and integrating clinical data, we created a propensity-matched cohort to isolate the effect of DM on calcification patterns by controlling for confounding variables such as age, gender, BMI, and medication use. We hand-crafted a novel set of 67 calcium-omics imaging features capturing the distribution, shape, density, and more of individual CACs and aggregated CAC features per artery. Diabetic patients, compared to nondiabetic, exhibited significantly higher Agatston scores (p < 0.0005) and larger volume scores (p = 0.0018). Interestingly, diabetic patients had fewer calcified arteries (p < 0.0005) and lower density calcifications than nondiabetic patients. Although previous studies have reported that DM leads to higher Agatston scores, to our knowledge, no studies have reported that this is driven by high-volume, low-density calcifications present in only one to two arteries. These findings suggest a distinct phenotype indicative of the continuing development of new lesions in affected arteries, possibly contributing to the increased incidence of MACE. This exploration of diabetes-related CAC patterns enhances our understanding of the mechanisms of atherosclerotic cardiovascular disease in DM, emphasizing the need for targeted interventions and redefined cardiovascular risk models for this vulnerable population.
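For reference, the Agatston score that anchors these analyses is computed per lesion per slice: plaque area at ≥130 HU, weighted by peak density (weight 1 for 130-199 HU, 2 for 200-299, 3 for 300-399, 4 for ≥400). A sketch on a toy slice (pixel size and values illustrative):

```python
import numpy as np
from scipy.ndimage import label

def agatston_slice(hu, pixel_area_mm2, min_area_mm2=1.0):
    """Agatston contribution of one axial slice: for each connected lesion
    with HU >= 130 and area >= min_area_mm2, add area (mm^2) x density weight."""
    mask = hu >= 130
    lesions, n = label(mask)  # connected-component labeling
    score = 0.0
    for k in range(1, n + 1):
        voxels = lesions == k
        area = voxels.sum() * pixel_area_mm2
        if area < min_area_mm2:
            continue  # tiny specks are excluded
        peak = hu[voxels].max()
        if peak >= 400:
            w = 4
        elif peak >= 300:
            w = 3
        elif peak >= 200:
            w = 2
        else:
            w = 1
        score += area * w
    return score

# Toy slice: one 4-pixel lesion peaking at 320 HU, pixels of 0.5 mm^2
sl = np.zeros((16, 16))
sl[5:7, 5:7] = [[150, 200], [250, 320]]
s = agatston_slice(sl, pixel_area_mm2=0.5)  # 2.0 mm^2 * weight 3 = 6.0
```

The calcium-omics features described above go beyond this single scalar by retaining each lesion's shape, density, and per-artery distribution.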
This paper presents a deep learning approach to automated segmentation of cardiac structures in 5D (3D + Time + Energy) Photon-Counting micro-CT (PCCT) imaging sets. We have acquired, reconstructed, and fully segmented a preclinical dataset of cardiac micro-PCCT scans in APOE mouse models. These APOE genotypes serve as models of varying degrees of risk of Alzheimer’s disease and cardiovascular disease. The dataset of user-guided segmentations served as the training data for a deep learning 3D UNet model capable of segmenting the four primary cardiac chambers, the aorta, pulmonary artery, inferior and superior vena cava, myocardium, and the pulmonary tree. Experimental results demonstrate the effectiveness of the proposed methodology in achieving reliable and efficient cardiac segmentation. We demonstrate the difference in performance when using single-energy PCCT images versus decomposed iodine maps as input. We achieved an average Dice score of 0.799 for the network trained on single-energy images and 0.756 for the network trained using iodine maps. User-guided segmentations took approximately 45 minutes/mouse while CNN segmentation took less than one second on a system with a single RTX 5000 GPU. This novel deep learning-based cardiac segmentation approach holds significant promise for advancing phenotypical analysis in mouse models of cardiovascular disease, offering a reliable and time-efficient solution for researchers working with photon-counting micro-CT imaging data.
This manuscript describes an image-based scheme for automatic segmentation and measurement of thrombosis. Biologists inject drugs that can cause thrombosis in mice and use a Confocal Laser Scanning Microscope (CLSM) to observe changes in blood vessels to understand the mechanism of thrombosis. However, it is difficult to segment the thrombus region in CLSM images because the thrombus region is very similar to the background. Therefore, computer vision-based methods are used to analyze thrombosis and assist biologists. A previous method used the difference between a preset reference frame (fixed frame) and the current frame to locate the thrombus region. However, this method did not take into account that the thrombus always grows inside the blood vessels, resulting in mis-segmented thrombus regions. Therefore, we use the anatomical structure relationships of the mouse to increase the accuracy of thrombus segmentation. We use the difference between the current frame and a reference frame to segment the thrombus region. The blood vessel, which is a representative anatomical structure in the CLSM image, is found using Otsu-based thresholding and is used to remove false positive thrombus regions. The remaining thrombus region is used to calculate the size, the centroid coordinate, and the growth rate of the thrombus region. We created ground truth for the thrombus regions to validate the proposed method. Experimental results showed that the Dice value of the proposed method was 0.76 ± 0.13.
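The pipeline above (frame differencing plus an Otsu-derived vessel mask to reject false positives) can be sketched as follows; the toy frames are illustrative, and the paper's reference-frame handling and measurements are more involved:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's global threshold: maximize between-class variance."""
    hist, edges = np.histogram(img, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    total = hist.sum()
    sum_all = (hist * centers).sum()
    best_t, best_var = centers[0], -1.0
    w0, sum0 = 0.0, 0.0
    for i in range(bins - 1):
        w0 += hist[i]; sum0 += hist[i] * centers[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def thrombus_mask(reference, current):
    """Frame difference restricted to the vessel: new bright signal inside
    the Otsu-thresholded vessel region of the reference frame."""
    vessel = reference > otsu_threshold(reference)
    diff = current.astype(float) - reference.astype(float)
    grown = diff > otsu_threshold(diff)
    return np.logical_and(grown, vessel)

# Toy frames: a bright vessel band; the thrombus brightens part of it,
# and a flicker artifact brightens the background outside the vessel
ref = np.zeros((20, 20)); ref[8:12, :] = 100.0
cur = ref.copy()
cur[8:12, 5:9] += 80.0    # thrombus growth inside the vessel (kept)
cur[2:4, 2:4] += 80.0     # artifact outside the vessel (rejected)
mask = thrombus_mask(ref, cur)
```

Size is then `mask.sum()`, the centroid is the mean of the mask's coordinates, and growth rate follows from size differences across frames.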
Catheter tubes and lines are one of the most common abnormal findings on a chest x-ray. Misplaced catheters can cause serious complications, such as pneumothorax, cardiac perforation, or thrombosis, and for this reason, assessment of catheter position is of utmost importance. In order to prevent these problems, radiologists usually examine chest x-rays to evaluate catheter positions after insertion and throughout intensive care. However, this process is both time-consuming and prone to human error. Efficient and dependable automated interpretations have the potential to lower the expenses of surgical procedures, lessen the burden on radiologists, and enhance the level of care for patients. To address this challenge, we have investigated the task of accurate segmentation of catheter tubes and lines in chest x-rays using deep learning models. In this work, we utilized transfer learning and transformer-based networks via two different models: a U-Net++-based model with ImageNet pre-training and an EfficientNet encoder, which leverages diverse visual features in ImageNet to improve segmentation accuracy, and a transformer-based U-Net architecture (TransUNet), chosen for its capability to handle long-range dependencies in complex medical image segmentation tasks. Our experiments reveal the effectiveness of the U-Net++-based model in handling noisy and artifact-laden images and TransUNet's potential for capturing complex spatial features. We compare both models using the Dice coefficient as the evaluation metric and determine that U-Net++ outperforms TransUNet in terms of these segmentation metrics. Our aim is to achieve more robust and reliable catheter tube detection in chest x-rays, ultimately enhancing clinical decision-making and patient care in critical healthcare settings.
Federated Learning (FL) is a promising machine learning approach for developing a data-driven global model from collaborative local models across multiple institutions. However, the heterogeneity of medical imaging data is one of the challenges within FL. This heterogeneity is caused by variation in imaging scanner protocols across institutions, which may result in weight shift among local models and deterioration in the predictive accuracy of the global model. Prevailing approaches apply different FL averaging techniques to enhance the performance of the global model while ignoring the distinct imaging features of the local domain. In this work, we address both local and global model weight shift by introducing multiscale amplitude harmonization of the imaging in the local models using Haar and harmonic wavelets. First, we tackle local model weight shift by transforming the image feature space into a multiscale frequency space using multiscale-based harmonization, which aims to achieve a harmonized image feature space across local models. Second, based on the harmonized image feature space, a weighted regularization term is applied to the local models, effectively mitigating weight shifts within these models. This weighted regularization also helps manage global model shift when the optimized local models are aggregated. We evaluate the proposed method using the publicly available histopathological datasets MoNuSAC2018 and TNBC for nuclei segmentation, and the Camelyon17 dataset for tumor tissue classification. The average testing accuracies are 96.55% and 92.47% for classification of tumor tissue, while Dice coefficients are 84.33% and 84.46% for segmentation of nuclei, with Haar and harmonic multiscale-based harmonization, respectively. The comparison results for nuclei segmentation and tumor tissue classification on histopathological data show that our proposed methods perform better than state-of-the-art FL methods.
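The first step above, transforming image features into a multiscale frequency space with Haar wavelets, can be sketched with a one-level 2D Haar transform, followed by a simple amplitude-matching step on the low-frequency subband. This is only an illustration of the idea: the `harmonize_amplitude` helper and its reference statistics are hypothetical stand-ins, not the paper's cross-site harmonization procedure:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar wavelet transform.
    Returns the approximation (LL) and detail (LH, HL, HH) subbands."""
    img = np.asarray(img, dtype=float)
    # Rows: average and difference of adjacent pixel pairs.
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Columns: repeat the average/difference on the row-filtered outputs.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def harmonize_amplitude(ll, ref_mean, ref_std, eps=1e-7):
    """Match the low-frequency subband to reference statistics -- a toy
    stand-in for amplitude harmonization across local models."""
    return (ll - ll.mean()) / (ll.std() + eps) * ref_std + ref_mean

img = np.arange(16.0).reshape(4, 4)      # a tiny synthetic image patch
ll, lh, hl, hh = haar2d(img)
harmonized = harmonize_amplitude(ll, ref_mean=0.5, ref_std=0.1)
```

In an FL setting, each site would harmonize its own subbands toward shared reference statistics before local training, so that local weight updates are driven less by scanner-specific intensity differences.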
Whole brain segmentation with Magnetic Resonance Imaging (MRI) enables the non-invasive measurement of brain regions, including Total Intracranial Volume (TICV) and Posterior Fossa Volume (PFV). Extending existing whole brain segmentation methodology to incorporate intracranial measurements offers a more comprehensive analysis of brain structures. Despite this potential, generalizing deep learning techniques to intracranial measurements faces data availability constraints, as few manually annotated atlases include both whole brain and TICV/PFV labels. In this paper, we enhance the hierarchical transformer UNesT for whole brain segmentation so that it segments the whole brain into 133 classes and estimates TICV/PFV simultaneously. To address data scarcity, the model is first pretrained on 4859 T1-weighted (T1w) 3D volumes sourced from eight different sites. These volumes are processed through a multi-atlas segmentation pipeline for label generation, although TICV/PFV labels are unavailable at this stage. Subsequently, the model is finetuned with 45 T1w 3D volumes from the Open Access Series of Imaging Studies (OASIS), where both the 133 whole brain classes and TICV/PFV labels are available. We evaluate our method with the Dice Similarity Coefficient (DSC) and show that our model achieves precise TICV/PFV estimation while maintaining comparable performance on the 132 brain regions.
The precise placement of catheter tubes and lines is crucial for providing optimal care to critically ill patients. However, the challenge of mispositioning these tubes persists. The timely detection and correction of such errors are extremely important, especially considering the increased demand for these interventions, as seen during the COVID-19 pandemic. Unfortunately, manual diagnosis is prone to error, particularly under stressful conditions, highlighting the necessity for automated solutions. This research addresses this challenge by utilizing deep learning techniques to automatically detect and classify the positions of endotracheal tubes (ETTs) in chest x-ray images. Our approach builds upon recent advancements in deep learning for medical image analysis, providing a sophisticated solution to a critical healthcare challenge. The proposed model achieves remarkable performance, with area under the ROC curve (AUC) scores ranging from 0.961 to 0.993 and accuracy values ranging from 0.961 to 0.999. These results emphasize the effectiveness of the model in accurately classifying ETT positions, highlighting its potential clinical utility. Through this study, we introduce an innovative application of AI in medical diagnostics, with implications for advancing healthcare practices.
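The AUC values reported above have a useful probabilistic reading: the chance that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal sketch of that rank-based computation, on hypothetical classifier scores (not the study's data):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive is scored above a randomly chosen negative."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Compare every positive score against every negative score; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical classifier scores for six chest x-rays (1 = mispositioned ETT).
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
auc = roc_auc(scores, labels)  # 8 of 9 positive/negative pairs correctly ranked
```

This O(P·N) pairwise form is fine for small sets; production metrics libraries use a sort-based equivalent.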
Convolutional Neural Network (CNN)-based models using Computed Tomography (CT) images classify Chronic Obstructive Pulmonary Disease (COPD) patients with high accuracy, but studies have used a variety of input images, and it remains unclear which input images are optimal, particularly in a milder COPD cohort. We propose a novel approach using 2D airway-optimized topological multi-planar reformat (airway-optimized tMPR) images as well as novel 3D fusion methods, and compare the performance of these models with various established 2D/3D CNN-based methods in a population-based mild COPD cohort. Participants from the CanCOLD study were evaluated. We implemented several 2D/3D models adapted from the literature. Existing CNN-based models were trained using 2D collages of axial/coronal/sagittal slices, and colored and binary airway images. 3D models consisting of 15 axial inspiratory/expiratory slices were selected, and input and output combination methods were investigated. For the proposed models, 2D airway-optimized tMPR images were constructed using cut-surface renderings to convey shape and interior/contextual information. 3D output fusion of axial/coronal/sagittal images, as well as output fusion of the axial images and the 3D airway tree, was also investigated. Finally, the output fusion of the 2D airway-optimized tMPR method and the combined 3D lung model was investigated. A total of 742 participants were used for training/validation and 309 for testing. The 2D and 3D methods adapted from the literature had accuracy ranging from 61% to 72% in the mild COPD cohort. The 2D airway-optimized tMPR model achieved 73% accuracy. The proposed 3D model combining axial/coronal/sagittal images had an accuracy of 75%. The proposed models combining the outputs of 2D colored airways with the combined 3D inspiratory images, and with the 3D collage of axial/coronal/sagittal images, achieved 74% and 73% accuracy, respectively.
However, the output fusion of the airway-optimized tMPR model and the 3D lung model combining axial/coronal/sagittal images reached the highest accuracy of 78%. While the CNN model with 2D airway/lung-optimized images improved performance with reduced computational resources compared to the proposed 3D models and the other published CNN-based models, the combination of this 2D method with the 3D CNN model combining axial/coronal/sagittal images achieved the highest performance in this mild cohort.
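Output (late) fusion, as used throughout the comparisons above, typically means averaging the per-model class probabilities and taking the argmax. A minimal sketch with equal weights and hypothetical two-class logits (no-COPD / COPD) for three subjects; the weighting scheme and numbers are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax, shifted for numerical stability."""
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

def fuse_outputs(logits_2d, logits_3d, w2d=0.5):
    """Late (output-level) fusion: weighted average of the per-model
    class probabilities, then argmax for the fused prediction."""
    p = w2d * softmax(logits_2d) + (1.0 - w2d) * softmax(logits_3d)
    return p, p.argmax(axis=-1)

# Hypothetical logits from a 2D tMPR model and a 3D axial/coronal/sagittal model.
logits_2d = np.array([[2.0, 0.5], [0.2, 1.5], [1.0, 1.1]])
logits_3d = np.array([[1.8, 0.3], [0.5, 2.0], [0.4, 1.6]])
probs, preds = fuse_outputs(logits_2d, logits_3d)
```

Averaging probabilities (rather than logits or hard votes) lets a confident model outvote an uncertain one while keeping the fused scores interpretable as probabilities.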
Brain region segmentation and morphometry in mouse models of Alzheimer's Disease (AD) risk allow us to understand how various factors affect the brain. Photon-Counting Detector (PCD) micro-CT can provide faster brain imaging than MRI and superior contrast and spatial resolution to Energy-Integrating Detector (EID) micro-CT. This paper demonstrates a PCD micro-CT-based approach for mouse brain imaging, segmentation, and morphometry. We extracted and stained the brains of 26 mice from three genotypes (APOE22HN, APOE33HN, APOE44HN). We scanned these brains with PCD and EID micro-CT, performed hybrid (PCD and EID) iterative reconstruction, and used the Small Animal Multivariate Brain Analysis (SAMBA) tool to segment the brains via registration to our new PCD CT mouse brain atlas. We used the outputs of SAMBA to run region-based and voxel-based comparisons by genotype and sex. Together, PCD and EID scanning take approximately five hours and produce images with a voxel size of 22 μm, which is faster than prior MRI protocols that produce images with a voxel size above 40 μm. PCD images from hybrid iterative reconstruction have minimal artifacts and higher spatial resolution and contrast than EID images. Qualitative and quantitative analyses confirmed that our PCD atlas is similar to the prior MRI atlas and that it successfully transfers labels to PCD brains in SAMBA. Male and female mice showed significant differences in the relative size of 26 brain regions. APOE22HN brains were larger than APOE44HN brains in clusters from the hippocampus. This study successfully establishes a PCD micro-CT approach for mouse brain analysis that can be used for future AD research.
In the present research, the effectiveness of large-scale Augmented Granger Causality (lsAGC) as a tool for gauging brain network connectivity was examined to differentiate between marijuana users and typical controls using resting-state functional Magnetic Resonance Imaging (fMRI). The relationship between marijuana consumption and alterations in brain network connectivity is well documented in the scientific literature; this study probes how accurately lsAGC can discern these changes. The technique integrates dimension reduction with augmentation of the source time-series in a time-series prediction model, which helps estimate the directed causal relationships among fMRI time-series. As a multivariate approach, lsAGC uncovers the connections of the inherent dynamic system while considering all other time-series. A dataset of 60 adults with an ADHD diagnosis during childhood, drawn from the Addiction Connectome Preprocessed Initiative (ACPI), was used in the study. The brain connections assessed by lsAGC were used as classification attributes. A Graph Attention Network (GAT) was chosen to carry out the classification task, particularly for its ability to harness graph-based data and recognize intricate interactions between brain regions, making it appropriate for fMRI-based brain connectivity data. Performance was analyzed using five-fold cross-validation. The average accuracy achieved by the correlation coefficient method was roughly 52.98% with a standard deviation of 1.65, whereas the lsAGC approach yielded an average accuracy of 61.47% with a standard deviation of 1.44. A random-guess method yielded an average accuracy of about 47.05% with a standard deviation of around 6.25.
The study indicates that lsAGC, when paired with a Graph Attention Network, has the potential to serve as a novel biomarker for identifying marijuana users, offering a superior and more consistent classification strategy than traditional functional connectivity techniques and the random-guess baseline. The suggested method enhances the body of knowledge in neuroimaging-based classification and emphasizes the necessity of considering directed causal connections in brain network connectivity analysis when studying marijuana's effects on the brain.
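The Granger-causality idea underlying lsAGC is that x "causes" y if past values of x improve the prediction of y beyond what y's own past provides. lsAGC itself is multivariate with dimension reduction and series augmentation; the bivariate, lag-1 sketch below only illustrates the core residual-variance comparison on synthetic data:

```python
import numpy as np

def granger_gain(x, y, lag=1):
    """Granger-style influence of x on y: log reduction in residual variance
    when past x is added to an autoregressive model of y. (lsAGC is
    multivariate with dimension reduction; this bivariate version only
    illustrates the underlying idea.)"""
    y_t = y[lag:]
    A_restricted = np.column_stack([y[:-lag], np.ones(len(y_t))])
    A_full = np.column_stack([y[:-lag], x[:-lag], np.ones(len(y_t))])
    r_res = y_t - A_restricted @ np.linalg.lstsq(A_restricted, y_t, rcond=None)[0]
    r_full = y_t - A_full @ np.linalg.lstsq(A_full, y_t, rcond=None)[0]
    return np.log(r_res.var() / r_full.var())  # > 0 means x helps predict y

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):              # y is driven by past x
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
gain_xy = granger_gain(x, y)         # clearly positive: x -> y
gain_yx = granger_gain(y, x)         # near zero: no y -> x influence
```

The asymmetry between `gain_xy` and `gain_yx` is what makes the measure directed, which is exactly the property the abstract argues plain correlation-based connectivity lacks.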
Brain connectivity is usually analyzed based on graph theory and pinning control theory. Previous studies suggested that the topological properties of structural and functional brain networks may be altered in association with neurodegenerative diseases. To better understand and characterize these alterations, we introduce a new approach, robustness of network controllability, to evaluate network robustness and identify the critical nodes whose removal maximally destroys the network's functionality. These alterations are due to external or internal changes in the network. Understanding and describing these interactions at the level of large-scale brain circuitry may be a significant step towards unraveling dementia disease evolution. In this study, we analyze structural and functional brain networks of healthy controls and of MCI and AD patients, revealing the connection between network robustness and architecture as well as the differences between patient groups. We determine the critical and driver nodes of these networks as the key components for robustness of network controllability. Our results suggest that, for both functional and structural connectivity, healthy controls have more critical nodes than the AD and MCI networks, and that these critical nodes appear clustered in almost all networks. Our findings provide useful information for determining disease evolution in dementia under the aspects of controllability and robustness.
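The "critical node" notion above, a node whose removal maximally degrades the network, can be sketched with a simple proxy: remove each node in turn and measure the size of the largest remaining connected component. This is an illustration of the removal-and-measure loop, not the paper's controllability-based criterion; the 7-node barbell graph is hypothetical:

```python
import numpy as np

def largest_component(adj, removed=frozenset()):
    """Size of the largest connected component after removing the given nodes."""
    n = len(adj)
    seen, best = set(removed), 0
    for start in range(n):
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:                     # depth-first traversal of one component
            u = stack.pop()
            size += 1
            for v in range(n):
                if adj[u][v] and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

def most_critical_node(adj):
    """Node whose removal most shrinks the largest component -- a simple
    proxy for the critical nodes of a robustness analysis."""
    return min(range(len(adj)), key=lambda k: largest_component(adj, frozenset({k})))

# Two triangles joined by a bridge node (node 3): a toy 7-node brain graph.
n = 7
adj = np.zeros((n, n), dtype=int)
for u, v in [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (5, 6), (4, 6)]:
    adj[u][v] = adj[v][u] = 1
critical = most_critical_node(adj)       # the bridge node
```

Removing the bridge splits the graph into two components of three nodes each, so it is the unique most critical node under this proxy; controllability-based criteria refine the same removal-and-evaluate scheme with a dynamical objective.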
Sleep disturbances are commonly reported among patients with Alzheimer's Disease (AD). Further, the disruption of subcortical areas such as the Basal Forebrain (BF) and its constituent Nucleus Basalis of Meynert (NBM), which play an important role in maintaining wakefulness or alertness (also known as vigilance), occurs early in AD. In this study, we delineate vigilance-linked fMRI patterns in an aging population and determine how these patterns relate to subcortical integrity and cognition. We used fMRI data from the Vanderbilt Memory and Aging Project dataset, consisting of 49 MCI patients and 75 Healthy Controls (HC). Since external measures of vigilance are not present during fMRI, we used a data-driven technique for extracting vigilance information directly from the fMRI data. With this approach, we derived subject-specific spatial maps reflecting a whole-brain activity pattern that is correlated with vigilance. We first assessed the relationships between cognitive measures (subject memory composite and executive function scores) and structural measures (BF and NBM volumes obtained from subject-specific segmentation methods) using Pearson correlations. BF and NBM volumes were found to be significantly correlated with the memory composite in MCI subjects and with executive function in HCs. We then performed a mediation analysis to evaluate how NBM volume may mediate fMRI-derived vigilance effects on memory composite scores in MCI subjects. fMRI vigilance activity and the memory composite were significantly associated in the hippocampus, posterior cingulate cortex, and anterior cingulate cortex, regions involved in the default-mode and salience networks. These results suggest that cognitive decline in AD may be linked with both subcortical structural changes and vigilance-related fMRI signals, opening new directions for potential functional biomarkers in pathological aging populations.
Magnetic Resonance Imaging (MRI): Methods Development, MRI Quantitation
The long acquisition time required for high-resolution Magnetic Resonance Imaging (MRI) leads to patient discomfort, increased likelihood of voluntary and involuntary movements, and reduced throughput in imaging centers. This study proposed a novel method that leverages MRI physics to incorporate data consistency during the training of a conditional diffusion probabilistic model, which we refer to as the data consistency-guided conditional diffusion probabilistic model (DC-CDPM). This model aimed to reconstruct high-resolution contrast-enhanced T1W MRI from partially sampled data. The DC-CDPM utilized the conjugate gradient optimization method to minimize the data consistency loss between reconstructed MRI images and the fully sampled ground-truth MRI images. A diffusion probabilistic model conditioned on the optimization's output was then trained to reconstruct the fully sampled MRI. A publicly available dataset of 230 post-surgery patients with different brain tumors was used to train the model. An equidistant under-sampling scheme was implemented to simulate four different under-sampling levels. Qualitative and quantitative comparisons were made between the DC-CDPM and an otherwise identical CDPM that was not conditioned on the optimization output. Qualitatively, the DC-CDPM reconstructed images closer to the fully sampled references than the CDPM did, and the image profile along a tumor indicated better performance of the DC-CDPM. Quantitatively, the DC-CDPM outperformed the CDPM in four of six metrics and performed consistently across the different under-sampling levels. Our method could enable brain imaging with substantially lower acquisition time while achieving image quality similar to that of fully sampled MRI images acquired over a long acquisition time.
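The data-consistency constraint at the heart of the method above says: wherever k-space was actually measured, the reconstruction's Fourier transform must agree with the measurements. The DC-CDPM enforces this via a conjugate-gradient loss; the sketch below shows the simpler hard-projection form of the same constraint, with synthetic data and the equidistant under-sampling the abstract mentions:

```python
import numpy as np

def enforce_data_consistency(recon, kspace_acquired, mask):
    """Replace the reconstruction's k-space values at sampled locations with
    the acquired measurements (hard data consistency). The paper instead
    minimizes a data-consistency loss with conjugate gradients; this
    projection step illustrates the same physical constraint."""
    k = np.fft.fft2(recon)
    k[mask] = kspace_acquired[mask]
    return np.fft.ifft2(k).real

rng = np.random.default_rng(1)
truth = rng.standard_normal((8, 8))      # stand-in for the unknown image
full_k = np.fft.fft2(truth)              # "acquired" k-space measurements
mask = np.zeros((8, 8), dtype=bool)
mask[::2, :] = True                      # equidistant under-sampling pattern
recon0 = np.zeros((8, 8))                # crude initial reconstruction
recon1 = enforce_data_consistency(recon0, full_k, mask)
```

After the projection, the reconstruction exactly matches the measurements on the sampled lines and is strictly closer to the ground truth than the initial estimate; a learned model then has to supply only the missing k-space content.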
Stroke is a global health concern, positioning itself as one of the chief culprits behind both disability and mortality worldwide. There is currently no treatment for ischemic stroke besides reperfusion therapy. To evaluate new treatment options, researchers conduct studies on rat models of ischemic stroke using the same MRI approach as in patients. MR images and parametric maps are used to define the ischemic core, the area at risk of infarction, and the final lesion. Historically, this process has relied heavily on manual segmentation by experts, which not only consumes a significant amount of time but also often lacks the desired level of reliability. This drawback establishes an urgent requirement for robust, dependable, and automated tools to stimulate progress in stroke research. Addressing this pressing need, we introduce a novel pipeline that automates lesion segmentation in a rat model of ischemic stroke. This solution combines pre-processing, optimal thresholding, and the state-of-the-art UNet deep learning method. To our knowledge, we are the first to propose automatic segmentation of these regions from T2, DWI, and PWI MR imaging. The integrated approach of optimal thresholding and UNet employed in this pipeline delivers high-quality results. We evaluated performance on 58 rats using four measures of segmentation quality, as well as correlation curves between lesion sizes obtained by manual versus automatic segmentation. With this robust tool, the segmentation of abnormalities in the rat model is made both efficient and precise, saving valuable time and resources. Therefore, our results hold potential to propel advancements in stroke research and stimulate the development of pioneering treatment strategies. Our code and data (with manual annotations) are available online.
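The abstract does not say which "optimal thresholding" criterion the pipeline uses; Otsu's method, which picks the threshold maximizing between-class variance, is one common choice and serves as an illustrative sketch here, applied to a synthetic bright-lesion image (hypothetical data):

```python
import numpy as np

def otsu_threshold(img, bins=64):
    """Otsu's method: choose the threshold that maximizes between-class
    variance of the intensity histogram -- one common realization of an
    'optimal thresholding' step."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_var = edges[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()      # class weights below/above split
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0  # class means
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, edges[k]
    return best_t

# Synthetic image: dim background with a bright square "lesion" (toy T2 map).
rng = np.random.default_rng(2)
img = rng.normal(0.2, 0.05, (32, 32))
img[8:16, 8:16] = rng.normal(0.8, 0.05, (8, 8))
t = otsu_threshold(img)
mask = img > t
```

In a pipeline like the one described, such a threshold gives a cheap candidate mask that the UNet stage can refine or be trained against.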
Diffusion tensor cardiac magnetic resonance (DT-CMR) is a method capable of providing non-invasive measurements of myocardial microstructure. Image registration is essential to correct image shifts due to intra- and inter-breath-hold motion and imperfect cardiac triggering. Registration is challenging in DT-CMR due to the low signal-to-noise ratio and the varying contrasts induced by the diffusion encoding in the myocardium and surrounding organs. Traditional deformable registration corrects through-plane motion but risks destroying texture information, while rigid registration inefficiently discards frames with local deformation. In this study, we explored the possibility of deep learning-based deformable registration for DT-CMR. Based on noise suppression using low-rank features and diffusion-encoding suppression using a variational autoencoder-decoder, a B-spline-based registration network extracted the displacement fields while maintaining the texture features of DT-CMR. In this way, our method improved frame utilization, reduced manual cropping, and increased computational speed.
Preclinical, Clinical Imaging, and Co-Clinical Imaging
Accurately predicting the clinical outcome of patients with aneurysmal subarachnoid hemorrhage (aSAH) presents notable challenges. This study sought to develop and assess a Computer-Aided Detection (CAD) scheme employing a deep-learning classification architecture, utilizing brain Computed Tomography (CT) images to forecast aSAH patients' prognosis. A retrospective dataset encompassing 60 aSAH patients was collated, each with two CT images: one acquired upon admission and one acquired 10 to 14 days after admission. An existing CAD scheme was utilized for preprocessing and data curation based on the presence of blood clot in the cisternal spaces. Two pre-trained architectures, DenseNet-121 and VGG16, were chosen as convolutional bases for feature extraction. A Convolutional Block Attention Module (CBAM) was introduced atop the pre-trained architecture to enhance focus learning. Employing five-fold cross-validation, the developed prediction model assessed three clinical outcomes following aSAH, and its performance was evaluated using multiple metrics. A comparison was conducted to analyze the impact of CBAM. The prediction model trained using CT images acquired at admission demonstrated higher accuracy in predicting short-term clinical outcomes. Conversely, the model trained using CT images acquired at 10 to 14 days accurately predicted long-term clinical outcomes. Notably, for short-term outcomes, high sensitivities (0.87 and 0.83) were achieved from the first scan, while sensitivities of 0.65 and 0.75 were achieved from the last scan, showcasing the viability of predicting the prognosis of aSAH patients using novel deep learning-based quantitative image markers. The study demonstrated the potential of integrating deep-learning architectures with attention mechanisms to optimize predictive capabilities in identifying clinical complications among patients with aSAH.
This study investigated the application of VivoVist™, a high-contrast micro-CT contrast agent, in spectral Photon-Counting (PC) micro-CT imaging in mouse models. With a long blood half-life, superior concentration, and reduced toxicity, VivoVist, composed of barium (Ba)-based nanoparticles, offers a cost-effective solution for enhancing Computed Tomography (CT) imaging. To evaluate its efficacy, we employed an in-house developed spectral micro-CT with a photon-counting detector. VivoVist was administered through retro-orbital injection in a non-tumor-bearing C57BL/6 mouse and in two mice with MOC2 buccal tumors, with scans taken at various post-injection intervals. We used a multi-channel iterative reconstruction algorithm to provide multi-energy tomographic images with a voxel size of 125 microns, or 75 microns for high-resolution scans, and performed post-reconstruction spectral decomposition with water, calcium (Ca), iodine (I), and barium (Ba) as bases. Our results revealed effective separation of Ba from I-based contrast agents with minimal cross-contamination and superior contrast enhancement for VivoVist at 39 keV. We also observed VivoVist's potential in delineating vasculature in the brain and its decreasing concentration in the blood over time post-injection, with increased uptake in the liver and spleen. We further explored the simultaneous use of VivoVist and liposomal iodinated nanoparticles in a cancer study involving radiation therapy. Our findings reveal that VivoVist, combined with radiation therapy, did not significantly increase liposomal iodine accumulation within head and neck squamous cell carcinoma tumors. In conclusion, our work confirms VivoVist's promising role in enhancing PCCT imaging and its potential in studying combination therapy, warranting further investigation into its applications in diagnostics and radiotherapy.
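Spectral decomposition with water, Ca, I, and Ba as bases amounts, per voxel, to solving a small linear system: measured multi-energy attenuation equals a basis matrix times material fractions. A minimal least-squares sketch; the attenuation coefficients below are illustrative placeholder numbers, not measured values, and real pipelines add regularization and non-negativity constraints:

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) of each basis material
# in four energy bins -- illustrative numbers only, not measured values.
#              water   Ca      I       Ba
A = np.array([[0.25,  0.80,  2.10,  1.90],   # bin 1 (lowest energy)
              [0.21,  0.55,  1.40,  2.60],   # bin 2 (above the Ba K-edge)
              [0.19,  0.40,  0.90,  1.60],   # bin 3
              [0.17,  0.30,  0.60,  1.00]])  # bin 4

def decompose(measured_mu):
    """Per-voxel spectral decomposition: solve A @ fractions ≈ measured
    attenuation in the least-squares sense."""
    frac, *_ = np.linalg.lstsq(A, measured_mu, rcond=None)
    return frac

# A voxel that is mostly water with some barium contrast agent.
true_frac = np.array([0.9, 0.0, 0.0, 0.1])
measured = A @ true_frac
est = decompose(measured)
```

The Ba column's non-monotonic energy dependence (its K-edge jump between bins 1 and 2) is what makes Ba separable from I, whose attenuation falls monotonically across the bins, mirroring the Ba/I separation reported above.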
18F-Fluorodeoxyglucose positron emission tomography (FDG-PET) imaging is a valuable diagnostic tool in oncology with a wide range of clinical applications for cancer diagnosis, staging, and monitoring treatment response. Accurate tumor segmentation from these images is vital for understanding the biochemical and physiological alterations within the tumors. End-to-end deep learning approaches enable rapid and reproducible tumor identification and extraction, surpassing manual and semi-automatic methods. Compared to other organs, intestinal tumor segmentation poses a significant challenge due to the intestine's complex anatomical shape and acute non-malignant findings. This study investigates the impact of training data homogeneity on the segmentation of intestinal tumors using Convolutional Neural Networks (CNNs). To achieve this, we propose an organ-based approach in which the training data are limited to the small intestine region, and we compare the results with those of a model trained on whole-body PET/CT data. In the whole-body approach, tumor segmentation predictions for the intestine are extracted from the results obtained by training on the whole-body data. Quantitative results show that the organ-based approach outperforms the whole-body method in segmentation of intestinal tumors: the whole-body and organ-based approaches yielded Dice scores (mean±std) of 0.63±0.30 and 0.78±0.21, respectively, with a p-value less than 0.0001. The lesion-level analysis yielded F1 scores of 0.79 for the whole-body approach and 0.86 for the organ-based approach.
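The lesion-level F1 score quoted above is computed from counts of matched and unmatched lesions rather than voxels: precision from true and false detections, recall from true detections and misses. A minimal sketch with hypothetical lesion counts (not the study's data):

```python
def f1_score(tp, fp, fn):
    """Lesion-level F1: harmonic mean of precision and recall computed from
    matched (tp), spurious (fp), and missed (fn) lesion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 43 lesions detected, 7 false positives, 9 missed.
f1 = f1_score(tp=43, fp=7, fn=9)
```

Because it counts whole lesions, this metric penalizes a missed small tumor as heavily as a missed large one, complementing the voxel-weighted Dice score.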
Chronic respiratory diseases affect 10% of the adult population and account for the third leading cause of death. As diagnosis and monitoring of such diseases are typically performed based on functional metrics, x-ray Phase-Contrast Imaging (PCI) has recently been proposed as a method capable of capturing tissue microstructure, particularly in early stages of disease. In this study, we aim to design and develop an inflatable murine lung phantom that can house an ex vivo murine lung and support inflation of the tissue sample. The phantom consists of two sections, the phantom casing and the vacuum system, which are respectively responsible for encasing the tissue sample and inducing inflation. A lung sample with all lobes, trachea, and aorta intact is obtained from an adult mouse and placed within the phantom casing. X-ray intensity images are taken for two lung tissue samples pre- and post-inflation, and the width and height of the left and right lobes are measured. Results show discernible left and right lobes of each tissue sample, with an average 9 μm increase in width from pre- to post-inflation. Qualitative assessment shows an appreciable increase in the size of both lobes in photos and intensity images from pre- to post-inflation, with an additional visual color change from red to pink and a loss of intensity in the x-ray images. Thus, we have introduced a murine lung phantom with thorough design, construction, and assembly methods, demonstrated its effectiveness in x-ray imaging, and confirmed its capability to inflate a complete mouse lung.
We performed a retrospective study to characterize hypoplastic S1 and its correlation with early degenerative disc disease and with disc and disc-space abnormalities on routine MRI. We studied magnetic resonance images of 41 patients with posterior-segment S1 hypoplasia and 47 controls. The Anterior-Posterior (A-P) diameter of the superior plate of S1 and the inferior plate of L5 was measured in each patient of the two groups, and the mean difference between the diameters was compared. Age at presentation at the time of imaging was calculated and is reported. There is a significant mean difference between the A-P diameters in the two groups (Student's t-test; p < 0.05). There is also a significant difference in age at presentation between the two groups. Hypoplastic S1 is a strong predictor of an earlier presentation of degenerative disc disease. These findings may have implications for early intervention in this population, which may stabilize or mitigate symptoms and improve quality of life.
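The group comparison above is a two-sample Student's t-test; the pooled-variance t statistic can be sketched in a few lines (the diameter values below are hypothetical, not the study's measurements):

```python
import math

def students_t(x, y):
    """Two-sample Student's t statistic with pooled variance."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)  # pooled variance
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

# Hypothetical A-P diameters (mm), for illustration only
controls   = [30.0, 31.0, 29.0, 30.0]
hypoplasia = [27.0, 28.0, 26.0, 27.0]
t = students_t(controls, hypoplasia)
print(round(t, 3))  # -> 5.196
```

The p-value then follows from the t distribution with nx+ny-2 degrees of freedom.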
Hyperspectral Imaging (HSI) is an emerging imaging modality in medical applications, especially for intraoperative image guidance. A surgical microscope improves surgeons' visualization of fine details during surgery, and the combination of HSI with a surgical microscope can provide a powerful tool for surgical guidance. However, acquiring high-resolution hyperspectral images entails long integration times and large image files, which can be a burden in intraoperative applications. Super-resolution reconstruction allows acquisition of low-resolution hyperspectral images from which high-resolution HSI is generated. In this work, we developed a hyperspectral surgical microscope and employed our unsupervised super-resolution neural network, which generated high-resolution hyperspectral images with fine textures and the spectral characteristics of tissues. The proposed method can reduce acquisition time and the storage space taken up by hyperspectral images without compromising image quality, which will facilitate the adoption of hyperspectral imaging technology in intraoperative image guidance.
Agile development of reliable and accurate segmentation models during an infectious disease outbreak has the potential to reduce the need for already-strained human expertise. Global research and data-sharing efforts during the COVID-19 pandemic have shown how rapidly Deep-Learning (DL) models can be developed when public datasets are available for training. However, these efforts have been rare, usually limited by the unavailability of Computed Tomography (CT) imaging datasets from patients in the clinical setting. In the absence of human data, animal models faithful to human disease are used to investigate the imaging phenotype of high-consequence and emerging pathogens. As simultaneous access to both human and Nonhuman Primate (NHP) data for the same respiratory infection is unusual, we were interested in whether the inclusion of NHP data might enhance DL image segmentation of lung lesions associated with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. Thus, we set out to evaluate DL performance and generalizability to a human test set. We found that combining human and NHP data and utilizing pretrained NHP models to initialize model training outperformed a model trained solely on human CT data. By studying the interaction between human and NHP CT imaging in developing these models, we can assess the potential value of NHP datasets for known or novel viruses that emerge in settings where medical imaging capacity is limited. Understanding and leveraging NHP datasets to improve the agility and quality of model development capabilities could better prepare us to respond to disease outbreaks in the human population.
Novel Molecular and Functional Imaging Technologies
Emerging cancer therapies, such as photothermal ablation therapy and metal-mediated radiation therapy, utilize low concentrations of Gold Nanoparticles (GNPs). For safe and effective treatment, the spatial distribution and concentration of GNPs must be known; however, current clinical imaging modalities cannot simultaneously image both at the concentrations relevant to these therapies. This study presents a novel metal-mapping imaging modality, X-ray Fluorescence Emission Tomography (XFET), that addresses this need by directly detecting x-ray fluorescence emitted from GNPs after interaction with an x-ray pencil beam. XFET is compared to Computed Tomography (CT) for trace metal mapping using a realistic XFET Monte Carlo simulation and idealized, approximately dose-matched CT acquisitions. The Monte Carlo XFET simulation was performed on a digital MOBY mouse phantom containing gold concentrations ranging from 0.12% to 4% by weight in the kidneys, hind-leg tumor, and other organs. Contrast-to-Noise Ratios (CNRs) of the gold-containing kidneys and tumor were compared between XFET and CT images. XFET produced superior CNR values (CNRs = 24.5, 21.6, 3.4) compared to CT images obtained with both energy-integrating (CNRs = 4.4, 4.6, 1.5) and photon-counting (CNRs = 6.5, 7.7, 2.0) detection systems, for average CNR improvements of 315% and 175%, respectively. Because the gold concentrations imaged here are modeled after a previous preclinical study, this work shows XFET's feasibility for metal mapping in future preclinical applications and warrants further study to demonstrate proof of benefit and to quantify XFET's detection limit.
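The CNR figures above follow the usual definition, mean-signal difference over background noise; a sketch with simulated pixel values (the noise sigma and contrast level are assumptions, not the study's numbers):

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean difference| over background noise."""
    return abs(roi.mean() - background.mean()) / background.std(ddof=1)

rng = np.random.default_rng(0)
bg  = rng.normal(0.0, 5.0, 10_000)     # background region, sigma = 5
roi = rng.normal(50.0, 5.0, 10_000)    # gold-containing region, +50 contrast
print(f"CNR ~ {cnr(roi, bg):.1f}")     # roughly 50 / 5 = 10
```

With real images the two arrays would be pixel values drawn from matched regions of interest in the reconstructed slice.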
Our goal is the development of an innovative brain PET with higher effective sensitivity (8×) and higher spatial resolution than current advanced brain-PET systems, through implementation of advanced ultrafast SiPM/readout systems coupled to thin monolithic crystals arranged in an "onion ring" geometry with small air gaps between the rings, enabling accurate tracking of Compton Scatter (CS) events followed by photoelectric absorption (PE) events, forming "triplets" (PE + CS→PE). We performed Monte Carlo simulations of four concentric rings with diameters of 250, 270, 290, and 310 mm and an axial length of 508 mm, built from monolithic 3-mm-thick LYSO thin-slab detector modules. The brain was simulated by a water sphere containing F-18. We considered only true-coincidence (PE + PE) and triplet (PE + CS→PE) events. For triplets, the back-to-forward scatter ratio is 0.26. The triplet-to-true-coincidence event ratio is 0.30; including triplets in addition to true-coincidence events therefore increases sensitivity by ~30%. Because the point of first interaction is well defined, improved spatial resolution is also anticipated.
Pencil-beam X-ray Luminescence Computed Tomography (XLCT) is a developing hybrid molecular imaging modality combining the merits of both x-ray imaging (high spatial resolution) and optical imaging (high sensitivity). Narrow x-ray beam-based XLCT imaging has shown promise for high spatial resolution and high molecular sensitivity, but so far there has been no quantitative study of XLCT imaging for x-ray-excitable nanophosphor targets in deep tissue. In this study, we performed, for the first time, a quantitative study of reconstructed nanophosphor target concentrations through phantom experiments. We upgraded our XLCT imaging system by mounting four optical fiber cables to increase the efficiency of collecting x-ray-induced optical photons. We also used a piece of scintillator crystal to monitor the x-ray pencil beam's intensity, to automatically sense the phantom boundary, and to perform parallel-beam CT imaging simultaneously, which can be used to verify the true locations of the reconstructed XLCT targets. We scanned a cylindrical agar phantom containing twelve targets filled with three different nanophosphor concentrations (2.5 mg/ml, 5 mg/ml, and 10 mg/ml) and reconstructed XLCT images with the Filtered Back-Projection (FBP) algorithm. Quantitative analysis of the phantom experimental results with different numbers of optical fiber cables found a reconstructed signal ratio of 1:2.17:3.55, which is close to the ground-truth target concentration ratio of 1:2:4.
In this study, we explore the potential of large-scale Granger Causality (lsGC) estimates of brain network connectivity as a biomarker for classifying marijuana users versus typical controls using resting-state functional Magnetic Resonance Imaging (fMRI). It is well established in the literature that marijuana use is associated with alterations in brain network connectivity, and we investigate whether lsGC can effectively capture such changes. The lsGC method, a multivariate approach based on dimension reduction and predictive time-series modeling, estimates directed causal relationships among fMRI time series while accounting for the interdependence of time series within the underlying dynamic system. We employ a dataset of 60 adult subjects with a childhood diagnosis of ADHD from the Addiction Connectome Preprocessed Initiative (ACPI) database. Brain connections estimated using lsGC are extracted as features for classification, and a Graph Attention Neural Network (GAT) is used to accomplish the classification task. The GAT model is specifically chosen for its ability to leverage graph-based data and capture complex interactions between brain regions, making it well suited to fMRI-based brain connectivity data. To assess performance, we employ five-fold cross-validation. The mean accuracy for the correlation-coefficient method is approximately 53.78% (standard deviation 4.80), while the mean accuracy for our lsGC approach is approximately 64.89% (standard deviation 1.10). The findings suggest that lsGC, in conjunction with a Graph Attention Neural Network, holds promise as a potential biomarker for identifying marijuana users, providing a more effective and reliable classification approach than conventional functional connectivity measures.
The proposed methodology offers a valuable contribution to neuroimaging-based classification studies and highlights the importance of considering directed causal relationships in brain network connectivity analysis when investigating the impact of marijuana use on the brain.
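lsGC extends pairwise Granger causality to high-dimensional fMRI via dimension reduction; the core idea, variance reduction from adding another series' past to an autoregressive model, can be sketched with a bivariate toy (this is an illustration of the principle, not the lsGC implementation):

```python
import numpy as np

def granger_gain(x, y, lag=1):
    """Log ratio of residual variances: how much the past of x improves an
    autoregressive prediction of y (> 0 means x helps predict y)."""
    Y = y[lag:]
    own  = np.column_stack([y[lag - 1:-1], np.ones(len(Y))])
    full = np.column_stack([y[lag - 1:-1], x[lag - 1:-1], np.ones(len(Y))])
    r_own  = Y - own  @ np.linalg.lstsq(own,  Y, rcond=None)[0]
    r_full = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    return float(np.log(r_own.var() / r_full.var()))

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()   # y is driven by past x
print(granger_gain(x, y) > granger_gain(y, x))   # -> True (x -> y, not y -> x)
```

In the multivariate setting the predictors are the principal components of all other regions' time series, which is what makes the approach tractable for hundreds of parcels.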
The progression of obesity can be influenced by lipid metabolism and alterations in fatty acid levels. This study utilized Raman techniques to analyze the impact of High-Fat Diet (HFD) consumption on White Adipose Tissue (WAT) in an animal model (mice). Statistical examination of the Raman spectra indicated a substantial increase in unsaturated lipid levels in Visceral WAT (VWAT) fat pads under a high-fat diet. The VWAT tissues were analyzed and mapped using a targeted Raman image analysis method employing Direct Classical Least Squares (DCLS) approaches to characterize lipid species such as ω-3 and ω-6 fatty acids. The analysis showed higher concentrations of ω-3, ω-6, cholesterol, and triglycerides in adipose tissues from the high-fat diet group compared to the Low-Fat Diet (LFD) group. The study demonstrated that Raman spectroscopy and microscopy, as a reliable and non-invasive technique, offer important molecular-level insight into lipid-species remodeling and the spatial distribution of adipose tissues during a high-fat diet.
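DCLS unmixing treats each measured spectrum as a linear combination of known reference spectra and solves for the component weights by least squares; a toy sketch with made-up spectra (the four-channel "references" are placeholders for full Raman spectra):

```python
import numpy as np

# Made-up reference Raman spectra (rows) for two lipid components
ref = np.array([[0.0, 1.0, 0.2, 0.0],    # "omega-3-like" reference
                [0.5, 0.0, 0.0, 1.0]])   # "omega-6-like" reference

mixture = 2.0 * ref[0] + 3.0 * ref[1]    # synthetic mixed-pixel spectrum

# DCLS: least-squares projection of the pixel spectrum onto the references
conc, *_ = np.linalg.lstsq(ref.T, mixture, rcond=None)
print(np.round(conc, 6))                 # -> [2. 3.]
```

Applying the same solve to every pixel of a hyperspectral Raman map yields one concentration image per reference component.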
Here we report on the development of non-hygroscopic and lead-free perovskite Cs3Cu2I5 (CCI) scintillator thin films for high-resolution x-ray imaging. The investigation is motivated by the bulk CCI crystal (density 4.53 g/cm³), which has been reported to exhibit an ultrahigh light yield of 98,000 photons/MeV with reduced afterglow. Thin films were fabricated using two different techniques: hot-wall evaporation and casting of CCI-polymer composite materials. Films fabricated by both techniques demonstrated high atmospheric stability. Independent of the fabrication method, the x-ray radioluminescence of undoped CCI films showed a bright blue emission at 450 nm from self-trapped exciton emission, whereas Tl-doped CCI films showed emission at 500 nm. Importantly, upon x-ray irradiation, a 100 μm thick CCI film exhibited a relatively high light output of 140% of that of Kodak Min-R 2000 film. The Modulation Transfer Function (MTF(f)), measured using 70 kVp x-ray images of a tungsten slit phantom, reached approximately 5 lp/mm at 10% MTF for a 105 μm thick vapor-deposited CCI film as well as for an 800 μm thick CCI-polymer composite film. The high air stability, nontoxicity, and high radioluminescence intensity with reduced afterglow make CCI a potential replacement material for high-resolution, high-speed x-ray imaging.
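The standard route from a slit image to MTF(f) is the Fourier transform of the line-spread function; a minimal sketch, assuming a Gaussian LSF and a pixel pitch that are illustrative values, not the measured detector response:

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_mm):
    """MTF as the normalized magnitude of the FFT of the line-spread function."""
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                  # normalize to 1 at f = 0
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_mm)  # cycles/mm (lp/mm)
    return freqs, mtf

# Assumed Gaussian LSF, sigma = 50 um, sampled at 10 um pixels
x = (np.arange(256) - 128) * 0.01                  # position in mm
lsf = np.exp(-0.5 * (x / 0.05) ** 2)
freqs, mtf = mtf_from_lsf(lsf, 0.01)
f10 = freqs[np.argmin(np.abs(mtf - 0.10))]         # frequency nearest 10% MTF
print(f"10% MTF at ~{f10:.1f} lp/mm")              # ~7 lp/mm for this sigma
```

In practice the LSF would be extracted from the slit image (with background and finite-slit-width corrections) before the transform.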
Photon Counting Detectors (PCDs) have emerged as a transformative technology in CT and micro-CT imaging, offering enhanced contrast resolution and quantitative material separation in a single scan, a notable advancement over traditional energy-integrating detectors. The unique properties of bismuth tungstate (Bi2WO6) nanoparticles (NPs) hold promise for many applications, including contrast-enhanced CT imaging and photothermal therapy, especially in addressing tumor hypoxia. However, despite these promising traits, the performance of Photon-Counting CT (PCCT) imaging using Bi2WO6 NPs has not been fully explored. Our study bridges this gap through both simulations and real experiments. Using iterative PCCT reconstruction, we achieved significant noise reduction, from a noise standard deviation of up to 786 Hounsfield Units (HU) down to 54 HU, enabling material decomposition. The dual K-edge of Bi2WO6, coupled with its precise 2:1 bismuth-to-tungsten ratio, offers a unique, quantifiable signature for PCCT imaging: the enhancement of Bi2WO6 remains largely constant over the diagnostic x-ray range (std dev: 1.24 HU/mg/mL over 25-91 keV energy thresholds, 125 kVp spectrum; iodine std dev: 11.62 HU/mg/mL). Improved separation of contrast material from intrinsic tissues promises to enhance all facets of clinical CT, including new avenues for radiation dose and metal artifact reduction. Potential new clinical applications include targeted radiation therapy, where Bi2WO6 NPs could intensify treatment efficacy and optimize chemotherapeutic delivery.
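In its simplest form, the material decomposition enabled by multi-bin photon-counting data reduces to solving a linear system relating per-bin measurements to basis-material concentrations; a two-bin, two-material sketch with illustrative (not measured) attenuation coefficients:

```python
import numpy as np

# Rows: energy bins (low, high); columns: basis materials (Bi-like, W-like).
# These attenuation values are illustrative placeholders, not measured data.
A = np.array([[12.0, 10.0],
              [ 4.0,  5.5]])

true = np.array([2.0, 1.0])       # concentrations with a 2:1 Bi:W ratio
m = A @ true                      # simulated bin measurements
decomp = np.linalg.solve(A, m)    # recover basis-material concentrations
print(decomp)                     # -> [2. 1.]
```

Real decompositions invert this per voxel, typically with more bins than materials and a least-squares or statistically weighted solve.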
Understanding detailed hemodynamics is critical in the treatment of aneurysms and other vascular diseases; however, traditional Digital Subtraction Angiography (DSA) does not provide detailed quantitative flow information. Instead, 1000 fps High-Speed Angiography (HSA) can be used for high-temporal-resolution visualization and evaluation of detailed blood flow patterns and velocity distributions. In the treatment of aneurysms, flow diverter expansion and positioning play a critical role in the resulting hemodynamics and in optimal patient outcomes. Patient-specific aneurysm phantom imaging was performed with a CdTe photon-counting detector (Aries, Varex). Treatment was done with a Pipeline Flex Embolization Device on a 3D-printed fusiform aneurysm phantom. The untreated aneurysm and two treatment stent expansions and positions were imaged, and velocity calculations were generated using Optical Flow (OF). Pre- and post-treatment HSA image sequences with different stent positions were then compared and evaluated using OF, and differences in flow patterns due to changes in stent placement were identified and quantified with OF velocimetry. The post-treatment velocities within the aneurysm showed significant flow reduction, and differences in stent placement resulted in substantial changes in velocities. Peak velocities in the aneurysm dome were lower with the widened stent placement than with the narrowed placement, and both were lower than in the untreated aneurysm; quantitatively, the widened stent clearly diverted flow away from the aneurysm more effectively, with decreased velocity in the aneurysm dome. Providing this information in-clinic can help improve treatment and patient outcomes.
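Optical-flow velocimetry estimates frame-to-frame displacement of the contrast pattern; as a crude 1D stand-in for the dense OF computation, the shift of a contrast profile between consecutive frames can be recovered from the cross-correlation peak (profiles and frame rate below are illustrative):

```python
import numpy as np

def frame_shift(f0, f1):
    """Displacement (pixels) between two 1D frames from the
    cross-correlation peak, a crude stand-in for optical flow."""
    corr = np.correlate(f1, f0, mode="full")
    return np.argmax(corr) - (len(f0) - 1)

x = np.arange(64)
f0 = np.exp(-0.5 * ((x - 20) / 3.0) ** 2)   # contrast bolus profile, frame 0
f1 = np.exp(-0.5 * ((x - 23) / 3.0) ** 2)   # same bolus, 3 px downstream
dx = frame_shift(f0, f1)
print(dx)   # -> 3 px/frame; at 1000 fps this corresponds to 3000 px/s
```

Dense optical flow generalizes this to per-pixel 2D displacement fields, which scale to velocities via the frame rate and pixel size.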
Optical Imaging and Optical Coherence Tomography (OCT)
In assessing Endothelial Cell Density (ECD), a critical measure of corneal health, eye bank technicians rely on manual methods that are time-consuming and potentially inconsistent, typically analyzing only 100-300 of the nearly 1,000 captured endothelial cells per image. We introduce a self-supervised vision transformer model that accurately segments 100-1,263 cells per image and calculates ECDs, with a mean difference of 9.74% and 87% alignment with eye-bank-determined ECD. Integrated into a robust software editor, our system offers an efficient approach to ECD analysis, presenting a significant value proposition for eye banks.
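Once cells are segmented, ECD itself is simple arithmetic, cell count divided by analyzed area; the numbers below are illustrative only:

```python
# Hypothetical counts, not eye-bank data: 450 segmented cells
# within an analyzed endothelial region of 0.15 mm^2
cells, area_mm2 = 450, 0.15
ecd = cells / area_mm2        # endothelial cell density, cells/mm^2
print(ecd)                    # -> 3000.0
```

Analyzing more of the captured cells, as the model above enables, shrinks the sampling error of this ratio.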
This study investigates the effectiveness of applying fractal analysis to pre-operative MRI images for prediction of glioma IDH mutation status. IDH mutation has been shown to confer prognostic and therapeutic benefits to patients, so predicting it before surgery can provide useful information for planning proper treatment. This study utilized the UCSF-PDGM dataset from The Cancer Imaging Archive. We used a modified box-counting method to compute the Fractal Dimension (FD) of segmented tumor regions in pre- and post-contrast T1-weighted MRI. The results showed that FD provided clear differentiation between tumor grades, with higher FD correlated with higher tumor grade. Additionally, FD demonstrated clear separation between IDH-wildtype and IDH-mutated tumors. Enhanced differentiation based on FD was observed with post-contrast T1-weighted images. Significant p-values from the Wilcoxon rank-sum test validated the potential of fractal analysis. The Area Under the ROC Curve (AUC) for IDH mutation prediction reached 0.88 for both pre- and post-contrast T1-weighted images. In conclusion, this study shows that fractal analysis is a promising technique for glioma IDH mutation prediction. Future work will include studies using more advanced MRI contrasts as well as combinations of multi-parametric images.
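Box counting estimates FD as the slope of log N(s) versus log(1/s) over a range of box sizes; a minimal 2D sketch (the filled square is a sanity check with known dimension 2, not the modified method used in the study):

```python
import numpy as np

def box_count_fd(mask):
    """Box-counting fractal dimension of a binary 2D mask:
    slope of log N(s) vs log(1/s) over dyadic box sizes s."""
    n = mask.shape[0]
    sizes, counts = [], []
    s = n
    while s >= 1:
        # count boxes of side s containing at least one foreground pixel
        trim = n - n % s
        view = mask[:trim, :trim].reshape(trim // s, s, trim // s, s)
        counts.append(np.count_nonzero(view.any(axis=(1, 3))))
        sizes.append(s)
        s //= 2
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

# Sanity check: a filled square has dimension 2
square = np.ones((256, 256), dtype=bool)
print(round(box_count_fd(square), 2))  # -> 2.0
```

For tumor masks, FD between 1 and 2 per slice (or up to 3 in 3D) quantifies boundary irregularity, the feature exploited above.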
Surface Plasmon Resonance (SPR) microscopy was used for imaging of inhomogeneous liquid mixtures and surface features. This study is based on the fact that refractive index changes, resulting from mixture inhomogeneity or surface imperfections, can be captured with a camera after a polarized light beam passes through a layer of the mixture and the plasmonic interface. Unlike conventional SPR sensing, where data are averaged over the sensing area, the SPR microscopy technique produces an intensity variation in the SPR response at each pixel of the image, offering spatial mapping of any given sensing area. We use a technique that removes the background light entirely, so that only the SPR-converted light, whose intensity depends on the local refractive index, is detected.
Quantitative analysis of Diffusion-Weighted Magnetic Resonance Imaging (DW-MRI) has been explored for many clinical applications since its development. In particular, the Intravoxel Incoherent Motion (IVIM) model for DW-MRI has been commonly utilized in various organs. However, due to excessive noise, the IVIM parameters obtained from pixel-wise biexponential fitting are often overestimated and unreliable. In this study, we propose a kernelized total-difference-based curve-fitting method to estimate the IVIM parameters. Both simulated and real DW-MRI data were used to evaluate the performance of the proposed method, and the results were compared with those obtained by two existing methods: the Trust-Region Reflective (TRR) algorithm and Bayesian Probability (BP). Our simulation results showed that the proposed method outperformed both the TRR and BP methods in terms of root-mean-square error. Moreover, the proposed method could preserve small details in the estimated IVIM parametric images. The experimental results showed that, compared to the TRR method, both the proposed method and the BP method could reduce the over-estimation of the pseudo-diffusion coefficient and improve the quality of IVIM parametric images. The kernelized total-difference-based curve-fitting method has the potential to improve the reliability of IVIM parametric imaging.
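For reference, the IVIM biexponential model and a simple segmented fit (D and f from the high-b monoexponential tail, then a grid search for D*) can be sketched as follows; this is the classic baseline, not the proposed kernelized method, and all parameter values are illustrative:

```python
import numpy as np

def ivim_signal(b, f, Dstar, D):
    """IVIM biexponential: S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)."""
    return f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D)

def segmented_fit(b, s, b_thresh=200.0):
    """Two-step fit: D and f from the high-b monoexponential tail,
    then D* by a coarse grid search over the full curve."""
    hi = b >= b_thresh                          # pseudo-diffusion ~ decayed here
    slope, intercept = np.polyfit(b[hi], np.log(s[hi]), 1)
    D, f = -slope, 1.0 - np.exp(intercept)
    grid = np.linspace(1e-3, 0.1, 500)          # candidate D* values (mm^2/s)
    errs = [np.sum((ivim_signal(b, f, ds, D) - s) ** 2) for ds in grid]
    return f, float(grid[int(np.argmin(errs))]), D

# Noise-free toy voxel: f = 0.10, D* = 0.02 mm^2/s, D = 0.001 mm^2/s
b = np.array([0.0, 10, 20, 50, 100, 200, 400, 600, 800])
s = ivim_signal(b, 0.10, 0.02, 0.001)
f_hat, ds_hat, d_hat = segmented_fit(b, s)
print(round(f_hat, 3), round(ds_hat, 3), round(d_hat, 4))
```

With realistic noise levels the per-pixel estimates of D* in particular become unstable, which is the motivation for the regularized fitting proposed above.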
Bronchoscopy is an effective minimally invasive procedure for detecting early-stage lung cancer lesions along the airway walls. Related to this point, Narrow-Band Imaging (NBI) has been shown to be an especially effective bronchoscopic modality for bronchial lesion detection. Unfortunately, NBI bronchoscopy tends to be overly tedious for the physician to use routinely for early lung cancer detection, because of a lack of effective tools for facilitating an exam of the complex airway tree. In addition, no tools exist for documenting an airway exam. To address this problem, we propose an interactive graphical analysis system that offers: 1) navigational guidance to facilitate efficient bronchoscopic airway exams; 2) real-time lesion detection to enable automatic analysis of the incoming video; and 3) visualization tools to provide a comprehensive assessment of an airway exam. Using a series of NBI airway exam videos from lung cancer patients, our system demonstrated the potential ability to detect and localize lesions in real-time as the physician performs a systematic guided bronchoscopic navigation through the airways. The system also enables more efficient documentation of findings than current clinical practice. In particular, a profile is produced for each detected lesion comprising all video frames depicting the lesion. In addition, the profile is associated with an airway path generated from the patient’s 3D chest CT scan to provide airway location information. Associated navigational instructions enable the physician to reach the lesion during follow-up examinations. Lastly, our system summarized an entire NBI airway exam using only 2% of the video frames to denote lesions and excluded the remaining 98% of a video depicting normal findings.
Several Magnetic Resonance Imaging (MRI) sequences are acquired for diagnosis and treatment planning. MRI with excellent soft-tissue contrast is desired for post-processing algorithms such as tumor segmentation; however, their performance drops markedly under variations in medical imaging protocols or missing information. This study proposed a co-training deep learning algorithm for segmenting Vestibular Schwannoma (VS) tumors and the cochlea, trained on both contrast-enhanced T1W (ceT1W) and high-resolution T2W (hrT2W) MRI sequences. Our model utilized content- and style-matching mechanisms to infuse informative features from the network trained with the full set of modalities into the network trained with a missing modality. The model was trained using the publicly available Vestibular-Schwannoma-SEG dataset, which consists of 242 patients with ceT1W and hrT2W MRI sequences, split into two non-overlapping groups: training (n=210) and testing (n=32). Three metrics were reported: Dice Score (DCS), Relative Volume Error (RVE), and Area Under the Receiver Operating Characteristic curve (AUC-ROC). Our method had superior performance for tumor segmentation compared with the baseline, with (DCS, RVE, AUC-ROC) of (0.89, 3.57, 0.96) and (0.94, 3.10, 0.97) when ceT1W and hrT2W were missing, respectively. Similar performance was observed for segmenting the cochlea when hrT2W was missing, with (DCS, RVE, AUC-ROC) of (0.68, 14.06, 0.80). Our model is robust against missing sequences, which are common in clinical settings, and could benefit clinical centers with missing data or different imaging protocols.
Global brain-wide signals in functional magnetic resonance imaging (fMRI) are influenced by temporal variations in vigilance, peripheral physiological processes, head motion, and other potential neuronal and non-neuronal sources. These effects are challenging to disentangle as fluctuations in vigilance and peripheral physiology are difficult to detect with fMRI alone. In this study, we leveraged multimodal neuroimaging data (simultaneous fMRI, EEG, respiratory, and cardiac recordings) to investigate the ability of dimensionality reduction techniques to separate influences of vigilance, physiology, and other global effects in fMRI. Our study included resting-state fMRI from 30 subjects, parcellated into 317 brain regions. Two different methods, temporal independent component analysis (tICA) and a fully connected autoencoder, were used to project the atlas-based data into a lower dimensional latent space. The correlation of each latent component with the EEG alpha/theta power ratio (a marker of vigilance), physiological signals (respiratory volume and heart rate), and the global fMRI signal was computed. LASSO regression was additionally employed to reconstruct the alpha/theta ratio from the latent components. Our results showed that tICA, but not the autoencoder, was able to disentangle a vigilance-related component from other global effects. Both the vigilance and global components exhibited a moderate relationship with physiological activity. Therefore, tICA is useful for isolating vigilance-related influences in fMRI, which may aid in discovering novel clinical biomarkers linked to vigilance dysregulation as well as assist in explaining intersubject variability due to in-scanner state.
Traditional analysis of stimulation-evoked cortical activity relies mainly on manual detection of bright spots of a characteristic size, which are regarded as active cells. However, there is limited research on automatic detection of active cortical cells in optical imaging from in vivo experiments, where the data are very noisy. To address the laborious and difficult annotation work, we propose a novel weakly supervised approach for detecting active cells across the temporal frames. In contrast to prevalent detection methods on common datasets, we formulate cell activation detection as a classification problem. We combine clustering and a deep neural network, with minimal user indication on the Maximum Intensity Projection (MIP) of the time-lapse optical image sequence, to realize an unsupervised classification model. The proposed approach achieves comparable performance on our optical image sequences, in which activation changes instantly from frame to frame and cells are marked with fluorescent indicators. Although considerable noise is introduced during in vivo imaging, our algorithm is designed to accurately and effectively generate statistics on cell activation without requiring any prior training data preparation. This feature makes it particularly valuable for analyzing cell responses to psychopharmacological stimulation in subsequent analyses.
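As a toy illustration of the unsupervised-classification idea (not the authors' clustering-plus-network pipeline), the sketch below splits MIP pixel intensities into two classes with Otsu's method; the two intensity distributions are synthetic stand-ins for dim background and bright fluorescently marked active cells.

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Unsupervised two-class split of intensities (Otsu's method):
    pick the threshold that maximizes between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0
        m1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

rng = np.random.default_rng(0)
# toy MIP intensities: 900 background pixels plus 100 bright active-cell pixels
mip = np.concatenate([rng.normal(0.2, 0.05, 900), rng.normal(0.8, 0.05, 100)])
t = otsu_threshold(mip)
active = int((mip > t).sum())
```

The threshold lands in the valley between the two modes, so the bright active-cell pixels are recovered without any labeled training data.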
Alzheimer’s Disease (AD) is a progressive neurodegenerative disorder leading to cognitive decline. [18F]-Fluoro-deoxyglucose positron emission tomography ([18F]-FDG PET) is used to monitor brain metabolism, aiding in the diagnosis and assessment of AD over time. However, the feasibility of multi-time point [18F]-FDG PET scans for diagnosis is limited due to radiation exposure, cost, and patient burden. To address this, we have developed a predictive image-to-image translation (I2I) model to forecast future [18F]-FDG PET scans using baseline and year-one data. The proposed model employs a convolutional neural network architecture with long short-term memory and was trained on [18F]-FDG PET data from 161 individuals from the Alzheimer’s Disease Neuroimaging Initiative. Our I2I network showed high accuracy in predicting year-two [18F]-FDG PET scans, with a mean absolute error of 0.031 and a structural similarity index of 0.961. Furthermore, the model successfully predicted PET scans up to seven years post-baseline. Notably, the predicted [18F]-FDG PET signal in an AD-susceptible meta-region was highly accurate for individuals with mild cognitive impairment across years. In contrast, a linear model was sufficient for predicting brain metabolism in cognitively normal and dementia subjects. In conclusion, both the I2I network and the linear model could offer valuable prognostic insights, guiding early intervention strategies to preemptively address anticipated declines in brain metabolism and potentially to monitor treatment effects.
High Speed Angiography (HSA) at 1000 fps is a novel interventional-imaging technique that was previously used to visualize changes in vascular flow details before and after flow-diverter treatment of cerebral aneurysms in in-vitro 3D printed models.1 In this first pre-clinical work, we demonstrate the use of the HSA technique during flow-diverter treatment of in-vivo rabbit aneurysm models. An aneurysm was created in the right common carotid artery of each of two rabbits using previously published elastase aneurysm-creation methods.2 A 5 French catheter was inserted into the femoral artery and moved to the aneurysm location under the guidance of standard-speed 10 fps Flat Panel Detector (FPD) fluoroscopy. Following this, a flow diverter stent was placed in the parent vessel covering the aneurysm neck and diverting the flow away from the aneurysm. HSA was performed before and after placement of the flow diverter using a 1000 fps CdTe photon-counting detector (Aries, Varex). The detector was mounted on a motorized changer and was used with a commercial x-ray c-arm system. During these procedures Omnipaque iodinated contrast was injected into the aneurysm area using a computer-controlled injector at a steady rate of 50 ml/min or 70 ml/min depending on the rabbit to visualize blood flow detail. The contrast injection and x-ray image acquisition were synchronized manually. The x-ray image acquisition was for a duration of one second, from which 300 ms was used for velocity analysis during systole. Detailed differences in flow patterns in the Region of Interest (ROI) between pre and post flow-diverter deployment were visualized at the high frame rates. The Optical Flow (OF) method for velocity calculation was performed upon the acquired 1000 fps HSA image sequences to provide quantitative evaluation of flow.
White Matter Hyperintensities (WMH) are associated with stroke and cognitive decline in cerebral Small Vessel Diseases (cSVDs). Cerebral Autosomal Dominant Arteriopathy with Subcortical Infarcts and Leukoencephalopathy (CADASIL) is a genetic form of cSVD with observed pathology in small cerebral arteries. CADASIL is mostly studied through MRI lesions such as WMH, but these do not give insight into the disease processes happening in the small vessels. Characterizing the functioning of these vessels in vivo using 7T MRI can give us more insight into the disease processes in cSVDs. In our previous work, small vessel function was found to be impaired in CADASIL and was associated with baseline WMH. Here we studied whether impaired small vessel function on 7T MRI was associated with WMH volume change in CADASIL. CADASIL patients (n=23) were recruited through the ZOOM@SVDs study, a prospective observational cohort study with two-year follow-up. Participants underwent a 3T brain MRI at baseline and follow-up to quantify WMHs. 7T brain MRI was performed at baseline to assess small vessel function. WMH volume change was defined as follow-up WMH minus baseline WMH. The potential relations between small vessel function and baseline WMH or WMH change were assessed using linear regression. WMH volume increased by 0.5% of intracranial volume (P<0.001) after two-year follow-up. As reported in previous work by our group, an inverse relationship was found between baseline WMH and small vessel function in terms of mean blood flow velocity within the perforating arteries of the semioval center. In this work, we found no significant associations between small vessel dysfunction and longitudinal WMH change. Further studies assessing the association between small vessel dysfunction and white matter injury markers at a voxelwise level might give us more understanding of the role of small vessel dysfunction in cSVD pathology.
Organoids are self-organized 3D cell clusters that closely mimic the architecture and function of in vivo tissues and organs. Quantification of organoid morphology helps in studying organ development, drug discovery, and toxicity assessment. Recent microscopy techniques provide a potent tool to acquire organoid morphology features, but manual image analysis remains a labor- and time-intensive process. Thus, this paper proposes a comprehensive pipeline for microscopy analysis that leverages the Segment Anything Model (SAM) to precisely demarcate individual organoids. Additionally, we introduce a set of morphological properties, including perimeter, area, radius, non-smoothness, and non-circularity, allowing researchers to analyze organoid structures quantitatively and automatically. To validate the effectiveness of our approach, we conducted tests on bright-field images of human induced Pluripotent Stem Cell (iPSC)-derived Neural-Epithelial (NE) organoids. The results obtained from our automatic pipeline closely align with manual organoid detection and measurement, showcasing the capability of our proposed method in accelerating organoid morphology analysis.
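The morphological properties listed above are straightforward to compute from a binary organoid mask. The sketch below is a minimal illustration with a rasterized disc standing in for a segmented organoid; the crude 4-neighbor perimeter estimate and the non-circularity definition 1 − 4πA/P² (0 for a perfect circle) are illustrative assumptions, not necessarily the paper's exact definitions.

```python
import numpy as np

def morphology(mask):
    """Area, perimeter estimate, equivalent radius, and non-circularity
    from a binary mask (non-circularity = 1 - 4*pi*A/P^2)."""
    area = int(mask.sum())
    # crude perimeter: pixels inside the mask with at least one 4-neighbor outside
    padded = np.pad(mask, 1)
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:])
    perim = int(((mask == 1) & (neigh < 4)).sum())
    radius = float(np.sqrt(area / np.pi))
    noncirc = 1 - 4 * np.pi * area / perim ** 2
    return area, perim, radius, noncirc

# rasterize a disc of radius 40 as a stand-in for a segmented organoid
yy, xx = np.mgrid[:101, :101]
disc = ((yy - 50) ** 2 + (xx - 50) ** 2 <= 40 ** 2).astype(int)
area, perim, radius, noncirc = morphology(disc)
```

For real masks, a contour-length perimeter (e.g. from a marching-squares boundary) is less biased than the pixel-count estimate used here.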
This study examines the potential of Self-Supervised Learning (SSL) for segmenting the aorta and coronary arteries in Coronary Computed Tomography Angiography (CCTA) volumes to facilitate automated Pericoronary Adipose Tissue (PCAT) analysis. Utilizing 83 CCTA volumes, we explored the efficacy of SSL in a limited-dataset environment. Forty-nine CCTA volumes were designated for SSL and supervised learning, while the remaining 34 formed a held-out test set. The Deep Learning (DL) model’s encoder was pretrained on unlabeled CCTA volumes during SSL and subsequently fine-tuned on labeled volumes in the supervised learning phase. This process enabled the DL model to learn feature representations without extensive annotations. The segmentation performance was assessed by varying the percentage of the 49 CCTA volumes used in supervised learning. With SSL, the model demonstrated consistently higher segmentation performance than non-pretrained (random) weights, achieving a Dice of 0.866 with only 15 labeled volumes (30%) compared to a Dice of 0.864 with the 44 labeled volumes (90%) required by random weights. Additionally, we segmented PCAT, finding no significant differences in mean Hounsfield Unit (HU) attenuation between ground truth and predictions. The mean attenuations of PCAT-LAD and PCAT-RCA for ground truth were -79.97 HU (SD = 9.54) and -85.47 HU (SD = 8.41), respectively, indicating no statistically significant differences when compared to the predicted values of -80.11 HU (SD = 9.35) for PCAT-LAD and -86.19 HU (SD = 8.36) for PCAT-RCA. These findings suggest that in-domain SSL pretraining required about 66% fewer labeled volumes for comparable segmentation performance, offering a more efficient approach to leveraging limited datasets for DL applications in medical imaging.
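The Dice coefficient used to compare segmentation performance can be computed directly from binary masks; the sketch below uses toy rectangular masks rather than CCTA segmentations.

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 for perfect overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

truth = np.zeros((64, 64), dtype=int); truth[16:48, 16:48] = 1  # 32x32 square
pred = np.zeros((64, 64), dtype=int);  pred[20:48, 16:48] = 1   # shifted 4 rows
print(round(dice(pred, truth), 3))  # → 0.933
```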
The purpose of this study is to reduce radiation exposure in PET imaging while preserving high-quality clinical PET images. We propose the PET Consistency Model (PET-CM), an efficient diffusion-model-based approach, to estimate full-dose PET images from low-dose PETs. PET-CM delivers synthetic images of comparable quality to state-of-the-art diffusion-based methods but with significantly higher efficiency. The process involves adding Gaussian noise to full-dose PETs through a forward diffusion process and then using a PET U-shaped network (PET-Unet) for denoising in a reverse diffusion process, conditioned on corresponding low-dose PETs. In experiments denoising one-eighth-dose images to full-dose images, PET-CM achieved an MAE of 1.321±0.134%, a PSNR of 33.587±0.674 dB, an SSIM of 0.960±0.008, and an NCC of 0.967±0.011. In scenarios denoising one-quarter-dose images to full-dose images, PET-CM further showcased its capability with an MAE of 1.123±0.112%, a PSNR of 35.851±0.871 dB, an SSIM of 0.975±0.003, and an NCC of 0.990±0.003.
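The forward diffusion step described above has a standard closed form: given a noise schedule β_t, x_t = √(ᾱ_t)·x_0 + √(1−ᾱ_t)·ε with ᾱ_t = Π_s(1−β_s). The sketch below applies it to a random patch standing in for a full-dose PET image; the linear schedule and sizes are illustrative assumptions, not PET-CM's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # assumed linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)       # cumulative signal retention

def q_sample(x0, t, noise):
    """Closed-form forward diffusion: x_t = sqrt(a_bar_t)*x0 + sqrt(1-a_bar_t)*eps."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

x0 = rng.standard_normal((8, 8))          # stand-in for a full-dose PET patch
eps = rng.standard_normal((8, 8))
x_mid = q_sample(x0, 500, eps)            # partially noised
x_end = q_sample(x0, T - 1, eps)          # essentially pure noise
```

The reverse process trains a network (here PET-Unet, conditioned on the low-dose image) to undo these steps; a consistency model collapses many reverse steps into few, which is the efficiency gain the abstract claims.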
We use computer vision to accelerate the discovery of antiparasitic drug candidates. We trained supervised and semi-supervised deep learning models for identifying images in which the natural product extracts being screened as drug candidates have effectively impacted nematode development. We have developed a novel dataset comprising 12,800 images, consisting of 4,640 labeled and 8,160 unlabeled nematode images. We report the performance of a variety of deep neural networks and loss functions in this application and show that DenseNet provides an accuracy of 86%. We also extended the approach to a semi-supervised learning methodology, using high-confidence pseudo-labels from unlabeled data to augment the training set iteratively. This semi-supervised method allows for the use of unlabeled data and contributes to enhanced test classification performance.
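The pseudo-labeling step can be sketched as a confidence filter over model outputs on unlabeled images; the probabilities and the 0.9 threshold below are toy values, not the paper's settings.

```python
import numpy as np

def pseudo_label(probs, threshold=0.9):
    """Keep only unlabeled samples whose maximum class probability clears
    the confidence threshold; return their indices and hard labels."""
    probs = np.asarray(probs)
    conf = probs.max(axis=1)
    keep = np.where(conf >= threshold)[0]
    return keep, probs[keep].argmax(axis=1)

# toy model outputs for 5 unlabeled nematode images (2 classes)
probs = [[0.97, 0.03], [0.55, 0.45], [0.10, 0.90], [0.85, 0.15], [0.02, 0.98]]
idx, labels = pseudo_label(probs, threshold=0.9)
print(idx.tolist(), labels.tolist())  # → [0, 2, 4] [0, 1, 1]
```

In the iterative scheme, these high-confidence samples are added to the training set with their pseudo-labels and the model is retrained.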
This study aims to enhance the resolution of Magnetic Resonance Imaging (MRI) using a cutting-edge diffusion probabilistic Deep Learning (DL) technique, addressing the challenges posed by long image acquisition times and limited scanning dimensions. In this research, we propose a novel approach utilizing a probabilistic DL model to synthesize High-Resolution MRI (HR-MRI) images from Low-Resolution (LR) inputs. The proposed model consists of two main steps. In the forward process, Gaussian noise is systematically introduced to LR images through a Markov chain. In the reverse process, a U-Net model is trained using a loss function based on Kullback-Leibler divergence, which maximizes the likelihood of producing ground truth images. We assess the effectiveness of our method on T2-FLAIR images from 120 patients in the public BraTS2020 T2-FLAIR database. To gauge performance, we compare our approach with a clinical bicubic model (referred to as Bicubic) and Conditional Generative Adversarial Networks (CGAN). On the BraTS2020 dataset, our framework enhances the Peak Signal-to-Noise Ratio (PSNR) of LR images by 7%, whereas CGAN results in a 3% reduction. The corresponding Multi-Scale Structural Similarity (MSSIM) values for the proposed method and CGAN are 0.972±0.017 and 0.966±0.024, respectively. In this study, we have examined the potential of a diffusion probabilistic DL framework to elevate MRI image resolution. Our proposed method demonstrates the capability to generate high-quality HR images while avoiding issues such as mode collapse, which are commonly observed in CGAN-based approaches that struggle to learn multimodal distributions. This framework has the potential to significantly reduce MRI acquisition times for HR imaging, thereby mitigating the risk of motion artifacts and crosstalk.
Organ segmentation is a crucial task in various medical imaging applications. Many deep learning models have been developed for this task, but they are slow and require substantial computational resources. To address this, attention mechanisms are used to locate important objects of interest within medical images, allowing a model to segment them accurately even in the presence of noise or artifacts. By paying attention to specific anatomical regions, the model becomes better at segmentation. Medical images carry unique features in the form of anatomical information, which makes them different from natural images. Unfortunately, most deep learning methods either ignore this information or do not use it effectively and explicitly. Combining natural intelligence with artificial intelligence, known as hybrid intelligence, has shown promising results in medical image segmentation, making models more robust and able to perform well in challenging situations. In this paper, we propose several methods and models to find attention regions in medical images for deep learning-based segmentation via non-deep-learning methods. We developed these models and trained them using hybrid intelligence concepts. To evaluate their performance, we tested the models on unique test data and analyzed metrics including the false-negative quotient and false-positive quotient. Our findings demonstrate that object shape and layout variations can be explicitly learned to create computational models that are suitable for each anatomic object. This work opens new possibilities for advancements in medical image segmentation and analysis.
Thoracic Insufficiency Syndrome (TIS) is a rare condition that results in restricted lung growth and impaired respiratory function. Investigation of the impact of scoliotic spinal curve on regional respiratory function in individuals with TIS is important to elucidate the underlying mechanisms behind restricted respiratory function and to optimize effective treatment approaches. However, there are currently no suitable parameters for quantifying pulmonary respiratory function that demonstrate a strong correlation with scoliotic spinal curve. A new study of the relationship between scoliotic spinal curve and diaphragm motion is proposed in this work to uncover how spinal scoliosis impacts respiration, providing new insights into the specific mechanisms for respiratory dysfunction. The diaphragm was delineated at End Inspiration (EI) and End Expiration (EE) time points in reconstructed 4D images via dynamic MRI and was divided into left and right hemi-diaphragms. To facilitate the regional description of motion, we partitioned each hemi-diaphragm into 13 distinct regions and computed the velocity and curvature for each of these regions. An analysis was conducted on 26 cases with Main Thoracic Curves (MTC), including 15 cases with right-sided MTC (MTC-R) and 11 cases with left-sided MTC (MTC-L). T-testing comparing the MTC-R group with the MTC-L group revealed the impact of spinal curve sidedness on the motion of the left hemi-diaphragm. The velocity cloud maps exhibited a restriction of left diaphragmatic motion due to leftward spinal curve. Furthermore, correlation analysis demonstrated a significant influence of major curve angles (TCA and LCA) on hemi-diaphragm velocities in specific regions. Such findings improve our understanding of the pathophysiological mechanisms that lead to abnormal respiratory function in TIS.
Accurate segmentation of the pericardium from Coronary Artery Calcium Scoring (CACS) scans is of prime importance in many emerging clinical-science and technology applications. To this end, we propose an attention-based convolutional neural network for accurate detection and segmentation of the pericardium from non-contrast CT scans of the coronary artery (CT-CA). This is of paramount importance in clinical routine for diagnosis, prognosis, and risk assessment of Cardio-Vascular Disease (CVD), the leading cause of mortality worldwide, as it enables quick, reliable, and accurate classification and quantification of cardiac fat. This is in clear contrast to manual and analytical approaches, which are not only time-consuming and laborious but also highly prone to error. Our novel framework is a customized CNN based on a 3-D encoder-decoder architecture with attention blocks coupled with a context encoding block, and the deep learning model has leveraged a few hundred CACS stacks for training, validation, and out-of-sample testing. Through extensive experimentation, optimization, and hyperparameter tuning, followed by a comprehensive validation of results, we have achieved a state-of-the-art, clinically acceptable Dice score of 0.94, along with a miss-rate (false negative rate) of 6% and a fall-out (false positive rate) of 0.5%. Our results indicate that this approach holds promise for reliable and precise biomarker-based cardiac risk stratification.
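The miss-rate and fall-out reported above are the false negative and false positive rates, computable directly from the confusion counts of two binary masks; the tiny masks below are illustrative only.

```python
import numpy as np

def miss_rate_fall_out(pred, truth):
    """False negative rate (miss-rate) and false positive rate (fall-out)
    computed from binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    return fn / (fn + tp), fp / (fp + tn)

truth = np.array([[1, 1, 0, 0], [1, 1, 0, 0]])  # 4 positive pixels
pred  = np.array([[1, 0, 0, 0], [1, 1, 1, 0]])  # 1 missed, 1 spurious
fnr, fpr = miss_rate_fall_out(pred, truth)
print(fnr, fpr)  # → 0.25 0.25
```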
Brain metastases are a serious form of brain cancer that can significantly shorten a patient's life expectancy. Accurately detecting and tracking the volume of metastatic lesions is critical for patient prognosis. While transformer methods have been shown to be effective in natural images, they require large and annotated datasets to achieve state-of-the-art performance. Convolutional Neural Networks (CNNs), on the other hand, are easier to train and can achieve high performance even with smaller datasets, making them suitable for medical imaging data. Recently, the ConvNeXt architecture was proposed as a way to modernize the standard CNN by mirroring transformer blocks. In this work, we propose MLP-UNEXT, a hybrid architecture that combines CNNs and Multi-Layer Perceptrons (MLPs) for segmenting brain tumor metastases on MRI scans. We show that MLP-UNEXT achieves state-of-the-art performance on the BRATS METS dataset, outperforming both CNN and transformer methods. MLP-UNEXT also demonstrates faster training and inference speed, lower computational complexity, and higher data-efficiency than other methods. We believe that MLP-UNEXT is a promising new approach for brain metastasis segmentation since it is fast, efficient, and accurate.
In response to the critical need for timely and precise detection of lung lesions, we explored an innovative active learning approach for optimally selecting training data for deep-learning segmentation of computed tomography scans from nonhuman primates. Our guiding hypothesis was that by maximizing the information within a training set—accomplished by choosing images uniformly distributed in n-dimensional radiomic feature space—we may attain similar or superior segmentation results to random dataset selection, despite the use of fewer labeled images. To test this hypothesis, we compared segmentation models trained on different subsets of the available training data. Subsets that maximized the diversity among datasets (i.e., diverse data) were compared with subsets that minimized diversity among datasets (i.e., concentrated data) and randomly chosen subsets (i.e., random data). A two-tiered feature-selection technique was used to reduce the radiomic feature space to reliable, relevant, and non-redundant features. We generated learning curves to assess the model performance as a function of the number of training dataset samples. We found that models trained on uniformly distributed data consistently outperformed those trained on concentrated data, achieving higher median test Dice scores with less variance. These results suggest that active learning and intelligent selection of data that are diverse and uniformly distributed within a radiomic feature space can significantly enhance segmentation model performance. This improvement has substantial implications for optimizing lung lesion characterization, disease management, and evaluation of treatments and underscores the potential benefit of active learning and intelligent data selection in medical imaging segmentation tasks.
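One simple way to realize a "uniformly distributed in feature space" selection is greedy farthest-point sampling over the radiomic feature vectors; this is a stand-in illustrating the idea, not necessarily the paper's exact selection algorithm, and the feature matrix below is random toy data.

```python
import numpy as np

def farthest_point_sample(features, k, seed=0):
    """Greedy diverse-subset selection: start from a random sample, then
    repeatedly add the sample farthest from the current selection,
    yielding a near-uniform cover of the feature space."""
    rng = np.random.default_rng(seed)
    n = len(features)
    chosen = [int(rng.integers(n))]
    dists = np.linalg.norm(features - features[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(dists.argmax())                 # farthest remaining sample
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return chosen

rng = np.random.default_rng(1)
feats = rng.random((200, 6))    # toy: 200 scans, 6 reduced radiomic features
subset = farthest_point_sample(feats, 20)
```

Minimizing diversity (the "concentrated" condition) is the mirror image: repeatedly add the sample closest to the current selection.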
Autosomal Dominant Polycystic Kidney Disease (ADPKD) presents a significant clinical challenge, demanding precise and efficient diagnostic tools. In this context, this research addresses the imperative need for accurate diagnosis of ADPKD through the refinement of deep learning models, specifically UNet++ and UNet3+, for precise segmentation of renal structures and cysts in T2W MRI images. By incorporating residual staging, switch normalization, and Concatenated Skip Connections (CSC), our proposed models, rUNet++ and rUNet3+, aim to enhance feature fusion, extraction, and segmentation accuracy. Utilizing a dataset of 760 MRI images from 95 patients, we trained, validated, and tested the models, assigning images exclusively to each set to eliminate bias. The ground truth was established through a semi-automated technique. Evaluation metrics, including Dice Similarity Coefficient (DSC) and Mean Surface Distance (MSD), demonstrated improved performance for rUNet++ and rUNet3+, with the latter exhibiting the highest test minimum DSCs for kidneys and the former for the cysts. The proposed models achieved average DSC scores of 0.95±0.02 and 0.94±0.03 respectively for kidneys and 0.88±0.04 and 0.86±0.05 for cysts. The clinical significance of these findings lies in the enhanced precision of total kidney volume quantification, a vital biomarker for ADPKD diagnosis. This automated approach not only streamlines the diagnostic process but also reduces manual involvement, addressing a crucial aspect in the clinical workflow. By presenting a modified deep learning architecture, this research contributes to advancing the technological landscape in medical imaging, offering a promising avenue for improving the clinical management of ADPKD through accurate and efficient segmentation.
Recent studies have highlighted the significance of Epicardial Adipose Tissue (EAT) in the development of Heart Failure (HF). Rather than using simple EAT volume, we predicted HF from pathophysiologically inspired EAT features opportunistically extracted from low-cost (no-cost at our institution) CT Calcium Score (CTCS) images. We segmented EAT using our deep learning algorithm, DeepFat, and collected 42 hand-crafted features (fat-omics), such as volume, spatial, thickness, and HU value distribution features, where HU is thought to be an indicator of inflammation. We included readily available clinical features (e.g., age, sex, and BMI). We used a large database of HF-enriched patients (N=1,988, HF: 5.13%) and a Cox proportional hazards model with elastic-net feature reduction, evaluated with an 80%/20% training/testing split. High-risk features (e.g., mean EAT thickness, EAT mean HU, and smoking) were identified using univariate analyses. Fat-omics plus clinical features predicted HF with a c-index (training/testing) of 78.1/72.7, exceeding results for BMI alone, EAT volume, sac volume, and clinical features. Importantly, the combined model (fat-omics + clinical features) gave better stratification of patients into low- and high-risk groups using Kaplan-Meier plots, with an NRI of 0.11 compared to the model using clinical features alone. A univariate model based on the Agatston score gave training/testing c-indices of 62.7/62.9, indicating that the fat and clinical features from CTCS images are more effective at predicting HF than traditional calcium scoring. The combined model also showcases that the location and intensity of EAT buildup are significant factors in predicting the risk of HF onset and can change the relative importance of clinical features such as smoking status and sex.
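The c-index used to evaluate the Cox model is the fraction of comparable subject pairs that the risk scores rank correctly. A minimal Harrell-style implementation on toy follow-up data (not the study's cohort):

```python
import numpy as np

def c_index(risk, time, event):
    """Harrell's concordance index: over pairs where subject i has an observed
    event before subject j's time, count pairs ranked correctly by risk;
    ties in risk count 0.5."""
    n_conc, n_comp = 0.0, 0
    for i in range(len(time)):
        for j in range(len(time)):
            if event[i] and time[i] < time[j]:   # comparable pair
                n_comp += 1
                if risk[i] > risk[j]:
                    n_conc += 1.0
                elif risk[i] == risk[j]:
                    n_conc += 0.5
    return n_conc / n_comp

time  = np.array([2.0, 4.0, 6.0, 8.0])  # toy follow-up years
event = np.array([1, 1, 0, 1])          # 1 = HF event observed, 0 = censored
risk  = np.array([0.9, 0.3, 0.2, 0.4])  # model risk scores
print(c_index(risk, time, event))  # → 0.8
```

A c-index of 0.5 is chance-level ranking and 1.0 is perfect, so the reported 72.7 on testing corresponds to 0.727 on this scale.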
In this work, we aimed to develop a deep-learning algorithm for segmentation of cardiac Magnetic Resonance Images (MRI) to facilitate contouring of the Left Ventricle (LV), Right Ventricle (RV), and Myocardium (Myo). We proposed a Shifting Block Partition Multilayer Perceptron (SBP-MLP) network built upon a symmetric U-shaped encoder-decoder network. We evaluated the proposed network on a public cardiac MRI dataset, the ACDC training dataset. The network performance was quantitatively evaluated using Hausdorff Distance (HD), Mean Surface Distance (MSD), and Residual Mean Square Distance (RMSD), as well as Dice similarity coefficient, sensitivity, and precision. The performance of the proposed network was compared with two other state-of-the-art networks, dynamic UNet and Swin-UNetr. Our proposed network achieved HD = 1.521±0.090 mm, MSD = 0.287±0.080 mm, and RMSD = 0.738±0.315 mm, as well as Dice = 0.948±0.020, precision = 0.946±0.017, and sensitivity = 0.951±0.027. The proposed network showed statistically significant improvement compared to the Swin-UNetr and dynamic UNet algorithms across most metrics for the three structures. The SBP-MLP showed superior segmentation performance, as evidenced by a higher Dice score and lower HD relative to competing methods. Overall, the proposed SBP-MLP demonstrates comparable or superior performance to competing methods. This robust method has the potential for implementation in clinical workflows for cardiac segmentation and analysis.
Dual-Energy CT (DECT) plays an important role in quantitative imaging applications due to its capability for material differentiation. Nevertheless, material decomposition is highly sensitive to noise due to the large condition number of the linear system. To address this, iterative decomposition methods employ regularization terms to enforce noise suppression on the decomposed images. However, these conventional techniques rely on handcrafted image priors and have limited capability to characterize the material image distribution. In recent years, deep learning-based methods have been proposed for better distribution-learning performance and high computational efficiency. Diffusion models are emerging generative approaches that show strong performance in medical image synthesis and translation. In this work, we propose an image-domain material decomposition method for DECT using a conditional Denoising Diffusion Probabilistic Model (DDPM). Preliminary results show its superiority and potential in quantitative imaging tasks for DECT.
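The noise sensitivity mentioned above follows from the conditioning of the per-pixel decomposition system. A toy demonstration with a hypothetical 2x2 attenuation matrix (the numbers below are illustrative, not calibrated DECT coefficients):

```python
import numpy as np

# Hypothetical attenuation matrix: rows = (low, high) energy bins,
# columns = (water, iodine) basis materials. Values are illustrative only.
A = np.array([[0.25, 4.9],
              [0.20, 2.9]])

# The nearly collinear columns give a large condition number,
# so small measurement noise is strongly amplified in the solution.
print(np.linalg.cond(A))  # >> 1, i.e. ill-conditioned

rng = np.random.default_rng(0)
true_density = np.array([1.0, 0.01])       # water, iodine (arbitrary units)
clean = A @ true_density
noisy = clean + rng.normal(0, 1e-3, 2)     # small measurement noise
est = np.linalg.solve(A, noisy)
print(est)  # decomposition error is much larger than the input noise
```

Regularized iterative methods, and the learned prior of a conditional DDPM, both aim to suppress exactly this amplified noise in the decomposed images.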
The advantage of proton therapy over photon therapy lies in the Bragg peak effect, which allows protons to deposit most of their energy precisely at the tumor site, minimizing damage to surrounding healthy tissue. Despite this, the standard approach to clinical treatment planning does not fully consider the differences in biological effectiveness between protons and photons. Currently, a uniform Relative Biological Effectiveness (RBE) value of 1.1 relative to photons is used in clinical settings, despite evidence that proton RBE can vary significantly. This variation underscores the need for more refined proton therapy treatment planning that accounts for the variable RBE. A critical parameter in assessing the RBE of proton therapy is the Dose-Averaged Linear Energy Transfer (LETd), which is instrumental in optimizing proton treatment plans. Accurate LETd distribution calculations require complex physical models and the implementation of sophisticated Monte Carlo (MC) simulation software. These simulations are both computationally intensive and time-consuming. To address these challenges, we propose a Deep Learning (DL)-based framework aimed at predicting the LETd distribution map from the dose distribution map. This framework utilizes Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), and Normalized Cross Correlation (NCC) to measure discrepancies between MC-derived LETd and the LETd maps generated by our model. Our approach has shown promise in producing synthetic LETd maps from dose maps, potentially enhancing proton therapy planning through the provision of precise LETd information. This development could significantly contribute to more effective and individualized proton therapy treatments, optimizing therapeutic outcomes while further minimizing harm to healthy tissue.
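The three discrepancy measures named above (MAE, PSNR, NCC) have standard definitions that can be sketched in a few lines of NumPy; this is an illustrative implementation on toy arrays, not the authors' evaluation code:

```python
import numpy as np

def mae(x, y):
    """Mean Absolute Error."""
    return np.abs(x - y).mean()

def psnr(x, y, data_range):
    """Peak Signal-to-Noise Ratio in dB for a given dynamic range."""
    mse = ((x - y) ** 2).mean()
    return 10 * np.log10(data_range ** 2 / mse)

def ncc(x, y):
    """Normalized Cross Correlation of zero-mean, unit-std signals."""
    xz = (x - x.mean()) / x.std()
    yz = (y - y.mean()) / y.std()
    return (xz * yz).mean()

ref = np.linspace(0, 1, 100).reshape(10, 10)   # stand-in for an MC LETd map
pred = ref + 0.01                              # prediction with uniform bias
print(mae(ref, pred))        # 0.01
print(psnr(ref, pred, 1.0))  # 40.0 dB
print(ncc(ref, pred))        # 1.0: a constant bias does not affect NCC
```

Note the complementarity: MAE and PSNR penalize the bias, while NCC only measures structural agreement, which is why reporting all three is informative.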
The quality of brain MRI volumes is often compromised by motion artifacts arising from intricate respiratory patterns and involuntary head movements, manifesting as blurring and ghosting that markedly degrade imaging quality. In this study, we introduce a 3D deep learning framework to restore brain MR volumes afflicted by motion artifacts. The framework integrates a densely connected 3D U-Net architecture with Generative Adversarial Network (GAN)-informed training and a novel volumetric reconstruction loss function tailored to the 3D GAN to enhance volume quality. Our methodology is substantiated through comprehensive experimentation on a diverse set of motion artifact-affected MR volumes. After motion correction, the generated high-quality MR volumes exhibit volumetric signatures comparable to those of motion-free MR volumes. This underscores the significant potential of this 3D deep learning system to aid in the rectification of motion artifacts in brain MR volumes, highlighting a promising avenue for advanced clinical applications.
Mapping information from photographic images to volumetric medical imaging scans is essential for linking image spaces with physical environments, such as in image-guided surgery. Current methods for accurate mapping of photographic images to Computed Tomography (CT) images can be computationally intensive and/or require specialized hardware. For general-purpose 3-D mapping of bulk specimens in histological processing, a cost-effective solution is necessary. Here, we compare the integration of a commercial 3-D camera and cell phone imaging with a surface registration pipeline. Using surgical implants and chuck-eye steak as phantoms, we obtain 3-D CT reconstructions and sets of photographic images from two sources: Canfield Imaging's H1 camera and an iPhone 14 Pro. We perform surface reconstruction from the photographic images using commercial tools and open-source code for Neural Radiance Fields (NeRF), respectively. We complete surface registration of the reconstructed surfaces with the Iterative Closest Point (ICP) method. Manually placed landmarks were identified at three locations on each of the surfaces. Registration of the Canfield surfaces for the three objects yields landmark distance errors of 1.747, 3.932, and 1.692 mm, while registration of the respective iPhone camera surfaces yields errors of 1.222, 2.061, and 5.155 mm. Photographic imaging of an organ sample prior to tissue sectioning provides a low-cost alternative for establishing correspondence between histological samples and 3-D anatomical samples.
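The core of each ICP iteration is a closed-form rigid alignment of matched point sets (the Kabsch solution). A minimal sketch with known correspondences and noise-free toy points; full ICP wraps this step in a loop with nearest-neighbor matching:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst
    (Kabsch algorithm). Correspondences are assumed known here."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # sign correction guarantees a proper rotation (det R = +1)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(1)
dst = rng.random((6, 3))                    # toy "CT surface" landmarks
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
src = (dst - 0.5) @ R_true.T + 0.2          # rotated + translated copy
R, t = rigid_align(src, dst)
err = np.linalg.norm((src @ R.T + t) - dst, axis=1).mean()
print(err)  # ~0 for noise-free corresponding landmarks
```

The landmark distance errors reported in the abstract are this same mean point-to-point distance, evaluated at manually placed landmarks after registration.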
Extracellular deposits of amyloid-β (Aβ) aggregates are pathological hallmarks of Alzheimer’s disease (AD). In our previous work, we showed that an amyloid-targeted liposomal gadolinium (Gd) contrast agent, ADx-001, demonstrated dose-related varying performance (accuracy 50%-100%) for in vivo MRI-based detection of amyloid plaques in a mouse model of AD. The goal of this study was to determine whether nano-radiomics (radiomic analysis of nanoparticle contrast-enhanced images) could improve performance in differentiating amyloid-positive transgenic (TG) APP/PSEN1 mice from age-matched amyloid-negative Wild Type (WT) mice. Nanoparticle contrast-enhanced MRI (nCE-MRI) was performed using a T1w-SE sequence in WT (amyloid-negative) and TG APP/PSEN1 (amyloid-positive) mice. The effects of ADx-001 dose and plaque burden on the performance of radiomics were determined. nCE-MRI was performed at three ADx-001 dose levels (0.10, 0.15, 0.20 mmol Gd/kg) in mice with high plaque burden and at a single dose level (0.20 mmol Gd/kg) in mice with low plaque burden. Following semi-automatic registration and segmentation of a brain atlas on mouse MR images, two sets of radiomic features (RFs), including the RFs recommended by the Image Biomarker Standardization Initiative, were calculated and evaluated for their performance in classifying TG and WT mice. Linear and nonlinear classifiers using RFs were examined to improve model performance. 5-fold cross-validation was performed to confirm the accuracy of group separation. Nano-radiomic analysis in mice with high plaque burden achieved excellent classification performance in terms of accuracy, sensitivity, and specificity, with one universal classifier for all dose levels of ADx-001. In comparison, the conventional MR metric of signal enhancement demonstrated dose-related varying performance, with suboptimal accuracy (≤0.7) at lower dose levels. In mice with low plaque burden, radiomic analysis outperformed the conventional MR metric for detection of amyloid pathology. In conclusion, nano-radiomics exhibited excellent performance for early detection and amyloid burden classification in a mouse model of Alzheimer’s disease.
[18F]fluorodeoxyglucose (FDG) Positron Emission Tomography (PET) has emerged as a crucial tool in identifying the epileptic focus, especially in cases where Magnetic Resonance Imaging (MRI) diagnosis yields indeterminate results. FDG PET provides metabolic information on glucose uptake and helps identify abnormal areas that are not easily found through MRI. However, the effectiveness of FDG PET-based assessment and diagnosis depends on the selection of a healthy control group. This control group typically consists of healthy individuals matched to epilepsy patients in age, gender, and other characteristics, providing normal FDG PET data that serve as a reference to enhance the accuracy and reliability of the epilepsy diagnosis. However, significant challenges arise when a healthy PET control group is unattainable. Yaakub et al. previously introduced a Pix2PixGAN-based method for MRI-to-PET translation. This method used paired MRI and FDG PET scans from healthy individuals for training, and produced pseudo-normal FDG PET images from patient MRIs that were subsequently used for lesion detection. However, this approach requires a large amount of high-quality, paired MRI and PET images from healthy control subjects, which may not always be available. In this study, we investigated unsupervised learning methods for unpaired MRI-to-PET translation to generate pseudo-normal FDG PET for epileptic focus localization. Two deep learning methods, CycleGAN and SynDiff, were employed, and we found that the diffusion-based method achieved improved performance in accurately localizing the epileptic focus.
Magnetic Resonance Elastography (MRE) is a noninvasive method for quantitatively assessing the viscoelastic properties of tissues, such as the brain. MRE has been successfully used to measure material properties and diagnose diseases based on the difference in mechanical properties between diseased and normal tissue. However, MRE is still an emerging technology that is not part of routine clinical imaging like structural Magnetic Resonance Imaging (MRI), and the acquisition equipment is not widely available. Thus, MRE data are challenging to collect, although interest in the technique is increasing. In this study, we explore using structural MRI images to synthesize the MRE-derived material properties of the human brain. We use deep networks that employ both MRI and Diffusion Tensor Imaging (DTI) to explore the best input images for MRE image synthesis. This work is the first study to report on the feasibility of MRE synthesis from structural MRI and DTI.
Anisotropic Low-Resolution (LR) Magnetic Resonance (MR) images are fast to obtain but hinder automated processing. We propose to use Denoising Diffusion Probabilistic Models (DDPMs) to super-resolve these 2D-acquired LR MR slices. This paper introduces AniRes2D, a novel approach combining a DDPM with residual prediction for 2D Super-Resolution (SR). Results demonstrate that AniRes2D outperforms several other DDPM-based models in quantitative metrics, visual quality, and out-of-domain evaluation. We use a trained AniRes2D model to super-resolve 3D volumes slice by slice, achieving competitive quantitative results and reduced skull aliasing compared to a recent state-of-the-art self-supervised 3D super-resolution method. Furthermore, we explored the use of Noise Conditioning Augmentation (NCA) as an alternative augmentation technique for DDPM-based SR models, but found that it reduced performance. Our findings contribute valuable insights to the application of DDPMs for SR of anisotropic MR images.
During surgery in delicate regions, differentiation between nerve and surrounding tissue is crucial. Hyperspectral Imaging (HSI) techniques can enhance the contrast between tissue types beyond what the human eye can differentiate. Whereas an RGB image captures three bands within the visible light range (approximately 400 nm to 700 nm), HSI acquires many narrow bands across a wide wavelength spectrum, highlighting regions of an image that differ spectrally. We developed a workflow to distinguish nerve tissue from similar-appearing tissues such as fat, bone, and muscle. Our workflow uses Spectral Angle Mapper (SAM) and endmember selection, and it is robust to different environments and lighting conditions. We validated the workflow on two samples of human tissue, imaged with a compact HSI system covering 400 to 1700 nm. On these two samples, we achieved Intersection over Union (IoU) segmentation scores of 84.15% and 76.73%, respectively. We showed that our workflow identifies nerve segments that are not easily seen in RGB images. The method is fast, does not rely on special hardware, and can be applied in real time. This hyperspectral imaging and nerve detection approach may provide a powerful tool for image-guided surgery.
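The Spectral Angle Mapper at the heart of the workflow scores each pixel spectrum by its angle to a reference endmember spectrum, which makes it insensitive to overall illumination scaling. An illustrative sketch with made-up four-band spectra (the endmember values below are hypothetical):

```python
import numpy as np

def spectral_angle(pixel, endmember):
    """Angle (radians) between a pixel spectrum and a reference
    endmember; a small angle suggests the same material."""
    cos = np.dot(pixel, endmember) / (
        np.linalg.norm(pixel) * np.linalg.norm(endmember))
    return np.arccos(np.clip(cos, -1.0, 1.0))

nerve = np.array([0.2, 0.5, 0.9, 0.4])   # hypothetical nerve endmember
fat   = np.array([0.8, 0.6, 0.3, 0.2])   # hypothetical fat endmember
pixel = 2.5 * nerve                       # brighter pixel, same spectral shape

print(spectral_angle(pixel, nerve))  # ~0: SAM is scaling-invariant
print(spectral_angle(pixel, fat))    # large angle: different material
```

Classifying each pixel to the endmember with the smallest angle (below a chosen threshold) yields the segmentation whose IoU is reported above.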
Dual-Energy CT (DECT) has risen to prominence as a valuable instrument in diagnostic imaging, with a range of clinical applications. Contrast DECT (C-DECT) is particularly useful in clinical practice because it generates iodine density maps, which can benefit radiation oncologists in the treatment planning process. However, DECT scanners are not widely available in radiation therapy centers. Moreover, side effects of iodine agents preclude DECT iodine contrast imaging for some patients. The purpose of this work is to generate synthetic C-DECT images from non-contrast Single-Energy CT (SECT) via a Deep Learning (DL) method. Images from 108 head-and-neck cancer patients were retrospectively investigated; all patients were scanned with both non-contrast SECT and contrast DECT protocols. A conditional Denoising Diffusion Probabilistic Model (DDPM) was implemented to generate synthetic High-energy CT (H-CT) and Low-energy CT (L-CT). Training and application datasets were strictly separated: 100 patients’ data were used for training and the remaining eight patients’ data for application. The performance of the proposed method was evaluated with three quantitative metrics: Mean Absolute Error (MAE), Structural Similarity Index (SSIM), and Peak Signal-to-Noise Ratio (PSNR). For H-CT and L-CT, MAE was 19.15±2.23 HU and 23.34±3.45 HU, SSIM was 0.74±0.13 and 0.75±0.19, and PSNR was 28.13±2.83 dB and 28.18±3.55 dB, respectively. This approach holds potential significance for radiation therapy facilities lacking DECT scanners, as well as for patients who are not suitable candidates for iodine agent injection.
Deep neural networks have achieved unprecedented success in diagnosing patients with Alzheimer’s Disease from their MRI scans. Unfortunately, the decisions made by these complex nonlinear architectures are difficult to interpret. Heatmap methods were introduced to visualize what trained deep learning models have learned, but very few quantitative comparisons have been conducted so far to determine which approaches most accurately represent the patterns learned by a model. In this work, we propose to use autoencoders to fuse the maps generated by different heatmap methods into a more reliable brain map. We establish that combining the heatmaps produced by Layer-wise Relevance Propagation, Integrated Gradients, and Guided Grad-CAM for a CNN trained on 502 T1 MRI scans provided by the Alzheimer’s Disease Neuroimaging Initiative produces brain maps that better capture the Alzheimer’s Disease effects reported in a large independent meta-analysis of 77 voxel-based morphometry studies. These results suggest that our nonlinear map fusion is a promising approach for taking advantage of the great variety of recently published heatmap methods, producing a map with robust feature representation and less noise.
Ischemic Myocardial Scarring (IMS) may lead to progressive myocardial dysfunction and life-threatening arrhythmias. While Convolutional Neural Networks (CNNs) have advanced IMS classification with their ability to automate feature learning and capture spatial hierarchies, complexity in tuning, performance variability, and poor explainability hinder their application. To address these concerns, we propose a novel Dynamic-threshold Template Matching (DTM) method and combine it with an Autodidactic Enhancement Algorithm (AEA) to make accurate, high-speed IMS classifications that maintain transparency. We studied the application of DTM, with and without AEA, on cardiac MR images from 151 patients with IMS resulting from prior myocardial infarction and 128 controls with no evidence of IMS on cardiac MRI. The algorithm was benchmarked against a custom CNN on an external testing dataset, considering accuracy, sensitivity, specificity, F1-score, Area Under the Receiver Operating Characteristic curve (AUROC), and runtime. IMS classification with the CNN yielded 84.7% accuracy, 83.0% sensitivity, 85.3% specificity, 73.6% F1-score, and 0.899 AUROC. DTM yielded 86.0%, 78.0%, 88.7%, 73.6%, and 0.810 on the same metrics, demonstrating comparable performance. With AEA included, the corresponding results were 86.0%, 79.7%, 88.1%, 74.0%, and 0.830. While the CNN took 134 seconds to run, DTM completed in about 21 seconds, and DTM with AEA completed in under 18 seconds. These results indicate that DTM runs at high speed compared to the CNN, while AEA further accelerates it without compromising classification performance. Our results demonstrate that both DTM and AEA can be effective tools for accurate, high-speed IMS classification without relying on a black box. We anticipate that spatial focusing of DTM and AEA will provide even better IMS classification performance, potentially positioning these methods as a viable alternative to CNNs, especially in applications where transparency is of paramount importance.
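The transparent core of template matching is a normalized cross-correlation score map; a dynamic threshold can then be derived from the map's own statistics. This sketch illustrates the general technique on a toy image, not the authors' DTM/AEA implementation:

```python
import numpy as np

def ncc_match(image, template):
    """Zero-mean normalized cross-correlation of a template at every
    valid offset; returns the score map (each score in [-1, 1])."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    H, W = image.shape
    out = np.zeros((H - th + 1, W - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i+th, j:j+tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * tn
            out[i, j] = (p * t).sum() / denom if denom > 0 else 0.0
    return out

img = np.zeros((8, 8))
img[2:5, 3:6] = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]     # embedded pattern
tpl = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float)
scores = ncc_match(img, tpl)

# A dynamic threshold could be set from the map itself, e.g. mean + k*std,
# rather than a fixed cutoff; offsets above it count as detections.
print(np.unravel_index(scores.argmax(), scores.shape))  # (2, 3)
```

Unlike a CNN, every detection here is traceable to an explicit correlation against a known template, which is the transparency argument made above.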
Multi-site diffusion MRI data are often acquired on different scanners and with distinct protocols. Differences in hardware and acquisition result in data that contain site-dependent information, which confounds connectome analyses aiming to combine such multi-site data. We propose a data-driven solution that isolates site-invariant information while maintaining relevant features of the connectome. We construct a latent space that is uncorrelated with the imaging site and highly correlated with patient age and a connectome summary measure; here, we focus on network modularity. The proposed model is a conditional variational autoencoder with three additional prediction tasks: one for patient age, and two for modularity, trained exclusively on data from each site. This model enables us to 1) isolate site-invariant biological features, 2) learn site context, and 3) re-inject site context and project biological features to desired site domains. We tested these hypotheses by projecting 77 connectomes from two studies and protocols (the Vanderbilt Memory and Aging Project (VMAP) and Biomarkers of Cognitive Decline Among Normal Individuals (BIOCARD)) to a common site. We find that the resulting modularity dataset has statistically similar means (p-value ≤ 0.05) across sites. In addition, we fit a linear model to the joint dataset and find that positive correlations between age and modularity were preserved.
We are microscopically imaging and analyzing human vagus nerve (VN) anatomy to create the first-ever VN connectome to support modeling of neuromodulation therapies. Although micro-CT and MRI roughly identify vagus nerve anatomy, they lack the spatial resolution required to identify small fascicle splitting and merging and perineurium boundaries. We developed 3D serial block-face Microscopy with Ultraviolet Surface Excitation (3D-MUSE), with 0.9-µm in-plane resolution and 3-µm cut thickness. 3D-MUSE is ideal for VN imaging, capturing large myelinated fibers, connective sheaths, fascicle dynamics, and nerve bundle tractography. Each 3-mm 3D-MUSE ROI generates approximately 1,000 grayscale images, necessitating automatic segmentation: over 50 hours were spent manually annotating fascicles, perineurium, and epineurium in every 20th image, yielding 50 annotated images. We trained three types of multi-class deep learning segmentation models. First, 25 annotated images were used to train a 2D U-Net and an Attention U-Net. Second, we trained a Vision Transformer (ViT) using self-supervised learning with 200 unlabeled images before refining the ViT-initialized weights of a U-Net Transformer with the 25 training images and labels. Third, we created pseudo-3D images by concatenating each annotated image with images ±k slices away (k=1, 10), and trained a 2D U-Net similarly. All models were tested on 25 held-out images and evaluated using Dice. While all trained models performed comparably, the 2D U-Net trained on pseudo-3D images demonstrated the highest Dice value (0.936). With sample-based training, one obtains very promising segmentation and nerve fiber tractography results on thousands of images; additional training on more samples could yield excellent results.
Semantic segmentation plays an important role in enhancing diagnostic accuracy from clinical angiographic images. We analyzed 800 cerebral Digital Subtraction Angiography (DSA) images from 40 patients with Idiopathic Intracranial Hypertension (IIH) and Venous Sinus Stenosis (VSS) using the Segment Anything Model (SAM) with point and box prompting, and MedSAM with box prompting. Despite complexities in the pre-stent images, SAM consistently performed well. In comparison to expert-delineated segmentations, SAM yielded favorable results, with a Dice Similarity Coefficient (DSC) of 0.91 and an Intersection over Union (IoU) of 0.84 for post-stent images, indicating SAM’s robust capability in segmenting these images. Enhanced post-stent contrast opacification boosted SAM’s segmentation performance in DSA images, indicating contrast’s critical role in post-stent imaging. Our study demonstrates the potential utility of out-of-the-box foundation models, SAM and MedSAM, in medical image analysis, a step towards advanced segmentation tools in clinical settings.
Radiomics has been widely recognized for its effectiveness in decoding tumor phenotypes through the extraction of quantitative imaging features. However, the robustness of radiomic methods for non-invasively estimating clinically relevant biomarkers remains largely untested. In this study, we propose the Cascaded Data Processing Network (CDPNet), a radiomic feature learning method to predict tumor molecular status from medical images. We apply CDPNet to an epigenetic case, specifically the estimation of O6-methylguanine-DNA-methyltransferase (MGMT) promoter methylation from Magnetic Resonance Imaging (MRI) scans of glioblastoma patients. CDPNet has three components: 1) Principal Component Analysis (PCA), 2) Fisher Linear Discriminant (FLD), and 3) a combination of hashing and blockwise histograms. The architecture capitalizes on PCA to reconstruct input image patches, followed by FLD to extract discriminative filter banks, and finally binary hashing and a blockwise histogram module for indexing, pooling, and feature generation. To validate the effectiveness of CDPNet, we conducted an exhaustive evaluation on a comprehensive retrospective cohort of 484 IDH-wildtype glioblastoma patients with pre-operative multi-parametric MRI scans (T1, T1-Gd, T2, and T2-FLAIR). The prediction of MGMT promoter methylation status was cast as a binary classification problem. The developed model underwent rigorous training via 10-fold cross-validation on a discovery cohort of 446 patients. Subsequently, the model’s performance was evaluated on a distinct and previously unseen replication cohort of 38 patients. Our method achieved an accuracy of 70.11% and an area under the curve of 0.71 (95% CI: 0.65 - 0.74).
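The PCA stage of such cascaded feature learners can be sketched as learning convolutional filters from the principal directions of mean-removed image patches. This is a generic PCA-filter-bank sketch in the spirit of the first CDPNet component, not the authors' exact code:

```python
import numpy as np

def pca_filter_bank(images, patch=5, n_filters=4):
    """Learn filters as the leading principal directions of
    mean-removed image patches (generic PCA filter-bank sketch)."""
    rows = []
    for img in images:
        H, W = img.shape
        for i in range(H - patch + 1):
            for j in range(W - patch + 1):
                p = img[i:i+patch, j:j+patch].ravel()
                rows.append(p - p.mean())       # remove per-patch mean
    X = np.array(rows)                          # (num_patches, patch*patch)
    # right singular vectors of X give the principal patch directions
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_filters].reshape(n_filters, patch, patch)

rng = np.random.default_rng(0)
imgs = [rng.random((16, 16)) for _ in range(3)]  # stand-ins for MRI patches
filters = pca_filter_bank(imgs)
print(filters.shape)   # (4, 5, 5): four learned 5x5 filters
```

Convolving images with such filters, then hashing and histogramming the responses blockwise, produces the fixed-length feature vectors fed to the final classifier.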
Deep learning algorithms using Magnetic Resonance (MR) images have demonstrated state-of-the-art performance in the automated segmentation of Multiple Sclerosis (MS) lesions. Despite their success, these algorithms may fail to generalize across sites or scanners, leading to domain generalization errors. Few-shot or one-shot domain adaptation is an option for reducing the generalization error using limited labeled data from the target domain. However, this approach may not yield satisfactory performance due to the limited data available for adaptation. In this paper, we aim to address this issue by integrating one-shot adaptation data with harmonized training data that includes labels. Our method synthesizes new training data with a contrast similar to that of the test domain, through a process referred to as “contrast harmonization” in MRI. Our experiments show that combining one-shot adaptation data with harmonized training data outperformed the use of either data source alone. Domain adaptation using only harmonized training data achieved comparable or even better performance than one-shot adaptation. In addition, all adaptations required only light fine-tuning of two to five epochs for convergence.
Three-dimensional (3D) rendering of biomedical volumes has become essential for faster comprehension of anatomy, better communication with patients, surgical planning, and training. However, depending on the algorithm, level of detail, volume size, and transfer function, rendering can be quite slow. A multi-target optimization method, voxelization, can be applied to enhance biomedical volume rendering through empty-space skipping, optimized maximum intensity calculation, and advanced Woodcock tracking. Empirical results indicate that voxelization can increase the performance of Direct Volume Rendering (DVR) by up to ten times, Monte Carlo Path Tracing (MCPT) by five times, and Maximum Intensity Projection (MIP) by two times relative to the original speed. In this study, we investigate the influence of the 3D fractal dimension of rendered volumes on rendering speed, and the optimal super-voxel size used in the voxelization process to guarantee the best voxelized performance of DVR, MCPT, and MIP. 3D fractal dimensions are calculated for five common transfer functions applied to Cone-Beam Computed Tomography (CBCT) scans of exotic animals and human extremities (postmortem). Preliminary findings suggest that volumes rendered with similar transfer functions have comparable 3D fractal dimensions and, moreover, that there is a statistically significant relationship between DVR and MCPT speed and the 3D fractal dimension. Furthermore, structures with higher 3D fractal dimension require smaller super-voxel sizes for empty-space skipping, whereas optimized maximum intensity calculation and advanced Woodcock tracking are independent of the 3D fractal dimension. This research encourages further exploration of structural complexity for 3D rendering optimization of biomedical volumes.
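A standard way to estimate the 3D fractal dimension of a binary volume is box counting: cover the volume with boxes of decreasing size and fit the slope of log(occupied boxes) against log(box size). A minimal sketch on a toy volume (the general box-counting recipe, not necessarily the authors' estimator):

```python
import numpy as np

def box_counting_dimension(volume, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary 3D volume by box
    counting: dimension = -slope of log(count) vs log(box size)."""
    counts = []
    for s in sizes:
        n = volume.shape[0] // s
        v = volume[:n*s, :n*s, :n*s]
        # group voxels into s*s*s boxes; a box counts if any voxel is set
        blocks = v.reshape(n, s, n, s, n, s).any(axis=(1, 3, 5))
        counts.append(blocks.sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a filled cube is a 3-dimensional object
vol = np.ones((16, 16, 16), bool)
print(box_counting_dimension(vol))  # 3.0
```

Thin, branching anatomy produces non-integer dimensions between 2 and 3, which is the structural-complexity measure related above to rendering speed and optimal super-voxel size.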
Animal models are pivotal in disease research and the advancement of therapeutic methods. The translation of results from these models to clinical applications is enhanced by employing technologies that are consistent for both humans and animals, such as Magnetic Resonance Imaging (MRI), which offers the advantage of longitudinal disease evaluation without compromising animal welfare. However, current animal MRI techniques predominantly employ 2D acquisitions due to constraints related to organ size, scan duration, image quality, and hardware limitations. While 3D acquisitions are feasible, they are constrained by longer scan times and ethical considerations related to extended sedation periods. This study evaluates the efficacy of SMORE, a self-supervised deep learning super-resolution approach, in enhancing the through-plane resolution of anisotropic 2D MRI scans to isotropic resolution. SMORE accomplishes this by self-training on high-resolution in-plane data, thereby eliminating domain discrepancies between the input data and external training sets. The approach is tested on mouse MRI scans acquired across a range of through-plane resolutions. Experimental results show that SMORE substantially outperforms traditional interpolation methods. Additionally, we find that pre-training offers a promising approach to reducing processing time without compromising performance.
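The core self-supervision trick described above is that paired training data can be manufactured from the scan itself: high-resolution in-plane rows are degraded (e.g. by block averaging, mimicking thick-slice sampling) to obtain low-resolution/high-resolution pairs, so no external training set is needed. A minimal sketch of that pair construction, with a tiny hypothetical "slice" standing in for real MRI data (the actual SMORE degradation model and network are not given in the abstract):

```python
def block_average(signal, factor):
    """Simulate thick-slice (low through-plane resolution) sampling by
    averaging groups of consecutive high-resolution samples."""
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal) - factor + 1, factor)]

def make_training_pairs(hr_slice, factor):
    """Build (low-res, high-res) row pairs from a single high-resolution
    in-plane slice; a super-resolution network trained on these pairs can
    then be applied along the anisotropic through-plane direction."""
    return [(block_average(row, factor), row) for row in hr_slice]

# hypothetical 2-row slice with 2x simulated anisotropy
slice_hr = [[1.0, 1.0, 3.0, 3.0], [2.0, 4.0, 6.0, 8.0]]
pairs = make_training_pairs(slice_hr, factor=2)
```

Each pair keeps the original row as the target, so the degradation applied in-plane matches the blurring the network must later undo through-plane.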
Brain networks can be naturally divided into clusters or communities in which the dynamics of the nodes follow similar trajectories in phase space. This process, known as cluster synchronization, reflects intragroup rather than intergroup features. Fractional calculus generalizes ordinary differentiation and integration to arbitrary non-integer order; it can be thought of as a smooth interpolation between different orders of differentiation and integration, providing the ability to probe the dynamics of a system from many different viewpoints. Fractional calculus has proven an excellent tool for describing memory in many processes and may be more accurate for modeling brain processes than traditional integer-order approaches. We apply the concept of cluster synchronization to fractional-order structural brain networks ranging from healthy controls to Alzheimer's disease (AD) subjects and determine whether cluster synchronization can be achieved in these networks. We observe hypersynchronization only in the AD structural networks and propose that this could serve as an excellent non-invasive biomarker for tracking disease evolution and deciding upon therapeutic interventions.
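To make the "smooth interpolation between orders" concrete: a standard discretization of a fractional derivative of order α is the Grünwald-Letnikov scheme, whose binomial weights reduce exactly to a backward difference when α = 1 and carry an ever-longer memory tail for non-integer α. A minimal sketch (this is the generic GL scheme, not necessarily the discretization used in the paper):

```python
def gl_fractional_derivative(samples, alpha, h):
    """Grünwald-Letnikov approximation of the order-alpha fractional
    derivative at the last sample point:

        D^alpha f(t_n) ~ h**(-alpha) * sum_k c_k * f(t_{n-k})

    with recursive binomial weights c_0 = 1, c_k = c_{k-1}*(1-(alpha+1)/k).
    Every past sample contributes, which is how the scheme encodes memory.
    """
    n = len(samples) - 1
    c = 1.0
    acc = samples[n]                      # k = 0 term
    for k in range(1, n + 1):
        c *= 1.0 - (alpha + 1.0) / k      # next binomial weight
        acc += c * samples[n - k]
    return acc / h ** alpha

# For alpha = 1 the weights collapse to (1, -1, 0, 0, ...), i.e. an
# ordinary backward difference: D^1 of f(t) = t is 1.
h = 0.01
f = [h * i for i in range(101)]
d1 = gl_fractional_derivative(f, 1.0, h)
```

Sweeping α between 0 and 2 in this scheme is precisely the "different viewpoints of the dynamics" the abstract refers to: α near 1 behaves like a classical velocity, while fractional α weights the node's entire history.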
The segmentation of pulmonary arteries and veins in computed tomography scans is crucial for the diagnosis and assessment of pulmonary diseases. This paper discusses the challenges in segmenting these vascular structures, such as the classification of terminal pulmonary vessels relying on information from distant root vessels, and the complex branches and crossings of arteriovenous vessels. To address these difficulties, we introduce a fully automatic segmentation method built from multiple 3D residual U-blocks, a semantic embedding module, and a semantic perception module (SPM). The 3D residual U-blocks extract multi-scale features under a high receptive field, the semantic embedding module embeds semantic information to help the network exploit the anatomical parallelism of the pulmonary arteries and bronchi, and the SPM perceives semantic information and decodes it into classification results for pulmonary arteries and veins. Our approach was evaluated on a dataset of 57 lung CT scans and demonstrated competitive performance compared to existing medical image segmentation models.
Identifying the major blood vessels during laparoscopic surgeries is important to prevent vessel injuries that could complicate the procedure. Current mitigation strategies involve the use of fluorescence or contrast dyes, but these present challenges such as patient preparation time, potential adverse reactions, and the need for specialized imaging modalities. In this study, we explore the potential of Near InfraRed (NIR) bands for dye-free major blood vessel identification, the generation of a False-RGB image from NIR bands that closely resembles the RGB image of tissues, and the enhancement of this image using a proposed contrast enhancement technique. Ten multispectral images in the NIR spectrum were captured, and a False-RGB image was generated using the 702 nm, 821 nm, and 833 nm bands as the red, green, and blue channels, respectively. The contrast enhancement algorithm successfully increased the vein contrast by an average gain of 1.5, as measured by the Michelson contrast ratio.
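The two computable pieces here are the channel mapping and the evaluation metric. A minimal sketch of both, with hypothetical intensity values standing in for real NIR data (the paper's enhancement algorithm itself is not described in the abstract; the Michelson contrast is the standard definition (I_max - I_min) / (I_max + I_min)):

```python
def michelson_contrast(vessel, background):
    """Michelson contrast between mean vessel and background intensities."""
    i_max, i_min = max(vessel, background), min(vessel, background)
    return (i_max - i_min) / (i_max + i_min)

def false_rgb(band_702, band_821, band_833):
    """Compose a False-RGB image by mapping the 702 nm, 821 nm and 833 nm
    NIR bands onto the red, green and blue channels (one scalar per pixel
    per band, pixels given as flat lists)."""
    return [list(px) for px in zip(band_702, band_821, band_833)]

# hypothetical two-pixel example
img = false_rgb([0.9, 0.2], [0.8, 0.3], [0.7, 0.4])

# contrast gain = contrast after enhancement / contrast before
gain = michelson_contrast(0.75, 0.25) / michelson_contrast(0.55, 0.45)
```

With these illustrative intensities the gain is (0.5 / 0.1) = 5; the paper's reported average gain of 1.5 refers to its own enhancement algorithm on real images.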
Gastrointestinal (GI) tract endoscopy plays a pivotal role in the detection of a spectrum of malignancies, including superficial lesions and vascular irregularities. While conventional White Light Imaging (WLI) delivers clear GI tract imagery, it often lacks the capability to adequately enhance the visualization of vascular structures, which is essential for precise disease diagnosis. Although Narrow Band Imaging (NBI) enhances the visualization of superficial vessels, its availability is not universal across endoscopy systems. In contexts where advanced imaging techniques like NBI are absent, enhancing visualization under white light illumination holds promise for improving diagnostic accuracy. This paper proposes an approach based on approximate spectral color estimation, which relies on the relative proportions of the red, green, and blue (RGB) components of a spectral color to infer that spectral component from an RGB image. By applying a composite of three spectral estimates to the RGB channels, we generate pseudo-colored images that accentuate structural details. Enhanced images using diverse spectral estimate combinations were captured from two patients under both WLI and NBI and analyzed for the visualization of various GI tract structures. The enhanced images show clear improvement over the original image for the same region. Comparing enhanced images captured under the two light sources shows relatively higher improvement under WLI than under NBI. The findings underscore that our proposed method generates a spectrum of distinctly colored images, in contrast to the predominantly monochromatic images yielded by NBI. This empowers clinicians to opt for their preferred color combinations, in turn simplifying the diagnostic process.
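The abstract gives the idea (infer a spectral component from the RGB proportions of a target spectral color, then map three such estimates onto the output channels) but not the exact formula. One plausible realisation, offered purely as a sketch with hypothetical target colors: score each pixel by how closely its RGB proportions match the target's proportions, scaled by pixel intensity.

```python
def spectral_estimate(pixel, target):
    """Hypothetical proportion-based estimate of one spectral component.

    Compares the pixel's normalised RGB proportions to the target spectral
    colour's proportions (L1 similarity), scaled by the pixel's intensity.
    The authors' exact estimation formula is not given in the abstract.
    """
    s_p = sum(pixel) or 1.0
    s_t = sum(target)
    p = [c / s_p for c in pixel]
    t = [c / s_t for c in target]
    intensity = sum(pixel) / 3.0
    similarity = 1.0 - 0.5 * sum(abs(a - b) for a, b in zip(p, t))
    return intensity * similarity

def pseudo_color(pixel, targets):
    """Composite of three spectral estimates mapped onto R, G, B."""
    return [spectral_estimate(pixel, t) for t in targets]

# three hypothetical target spectral colours, as RGB proportions
targets = [(1.0, 0.2, 0.1), (0.3, 1.0, 0.3), (0.1, 0.2, 1.0)]
out = pseudo_color((0.8, 0.4, 0.2), targets)
```

A reddish pixel scores highest on the red-dominant target and lowest on the blue-dominant one, so different target triplets yield the "spectrum of distinctly colored images" the abstract describes.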
Image-to-image translation techniques can be used to synthesize brain image modalities that provide complementary information about the organ. This image-generation task is often performed with Generative Adversarial Networks (GANs), which are computationally expensive. This study focuses on the synthesis of three-plane slices of fractional anisotropy maps from T1-weighted Magnetic Resonance Images using a simplified GAN-based architecture that significantly reduces the number of parameters involved. Brain magnetic resonance images from 194 cognitively normal subjects from the ADNI database were used. The proposed GAN architecture was compared against two state-of-the-art networks, pix2pix and CycleGAN. Using almost 70% fewer parameters than pix2pix, the proposed method showed competitive results in mean PSNR (20.21 ± 1.38) and SSIM (0.65 ± 0.07) when compared to pix2pix (PSNR: 20.46 ± 1.46, SSIM: 0.66 ± 0.07), outperforming the quality metrics achieved by CycleGAN (PSNR: 18.65 ± 1.31, SSIM: 0.61 ± 0.08). By using a simplified GAN-based architecture that highlights the potential of parameter reduction through stacked convolutions, the presented model is competitive at generating three-plane fractional anisotropy maps from T1-weighted images when compared with state-of-the-art methods.
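The PSNR figures quoted above follow the standard definition 10·log10(MAX² / MSE). A minimal sketch with toy image data (the paper's evaluation pipeline and data range are not specified in the abstract; MAX = 1.0 is assumed here):

```python
import math

def psnr(reference, generated, data_range=1.0):
    """Peak signal-to-noise ratio between two images given as flat lists
    of floats in [0, data_range]. Higher is better; identical images
    yield +inf."""
    mse = sum((a - b) ** 2
              for a, b in zip(reference, generated)) / len(reference)
    if mse == 0:
        return float('inf')
    return 10.0 * math.log10(data_range ** 2 / mse)

# toy 4-pixel example: MSE = 0.005, so PSNR = 10*log10(200) ~ 23.01 dB
ref = [0.0, 0.5, 1.0, 0.25]
gen = [0.1, 0.5, 0.9, 0.25]
score = psnr(ref, gen)
```

On this scale, the ~1.5 dB gap between the proposed method/pix2pix (~20.2-20.5) and CycleGAN (~18.7) corresponds to roughly 40% higher mean squared error for CycleGAN.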
Publisher's Note: This paper, originally published on 2 April 2024, was replaced with a corrected/revised version on 8 May 2024. If you downloaded the original PDF but are unable to access the revision, please contact SPIE Digital Library Customer Service for assistance.