Significance: Traditional diffuse optical tomography (DOT) reconstructions are hampered by image artifacts arising from factors such as DOT sources being closer to shallow lesions, poor optode-tissue coupling, tissue heterogeneity, and large high-contrast lesions lacking information in deeper regions (known as the shadowing effect). Addressing these challenges is crucial for improving the quality of DOT images and obtaining robust lesion diagnosis. Aim: We address the limitations of current DOT image reconstruction by introducing an attention-based U-Net (APU-Net) model to enhance the image quality of DOT reconstruction, ultimately improving lesion diagnostic accuracy. Approach: We designed an APU-Net model incorporating a contextual transformer attention module to enhance DOT reconstruction. The model was trained on simulation and phantom data, focusing on challenges such as artifact-induced distortions and lesion-shadowing effects. The model was then evaluated on clinical data. Results: Transitioning from simulation and phantom data to clinical patients' data, our APU-Net model effectively reduced artifacts, with an average artifact contrast decrease of 26.83%, and improved image quality. In addition, statistical analyses revealed significant contrast improvements in the depth profile, with average contrast increases of 20.28% and 45.31% for the second and third target layers, respectively. These results highlight the efficacy of our approach in breast cancer diagnosis. Conclusions: The APU-Net model improves the image quality of DOT reconstruction by reducing DOT image artifacts and improving the target depth profile.
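The abstract does not spell out the APU-Net architecture; as a rough illustration of the general idea of adding attention to a U-Net for DOT image enhancement, the PyTorch sketch below uses a simple additive attention gate on a skip connection. All layer sizes, the attention mechanism, and the `TinyAttentionUNet` name are hypothetical placeholders, not the published model or its contextual transformer module.

```python
# Minimal sketch of an attention-augmented U-Net for DOT image enhancement.
# Hypothetical layer sizes; NOT the published APU-Net architecture.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions with ReLU, the usual U-Net building block.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class AttentionGate(nn.Module):
    """Simple additive attention gate that re-weights skip-connection features."""
    def __init__(self, channels):
        super().__init__()
        self.psi = nn.Sequential(
            nn.Conv2d(channels * 2, channels, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1), nn.Sigmoid(),
        )

    def forward(self, skip, gating):
        attn = self.psi(torch.cat([skip, gating], dim=1))  # B x 1 x H x W weights
        return skip * attn

class TinyAttentionUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.gate = AttentionGate(16)
        self.dec1 = conv_block(32, 16)
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)               # full-resolution features
        e2 = self.enc2(self.pool(e1))   # downsampled features
        u = self.up(e2)                 # upsample back to skip resolution
        e1 = self.gate(e1, u)           # attention-weighted skip connection
        return self.out(self.dec1(torch.cat([e1, u], dim=1)))

# Example: enhance a 64x64 reconstructed absorption map (single channel).
model = TinyAttentionUNet()
enhanced = model(torch.randn(1, 1, 64, 64))
print(enhanced.shape)  # torch.Size([1, 1, 64, 64])
```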
KEYWORDS: Polyps, Optical coherence tomography, In vivo imaging, Deep learning, Visual process modeling, Tumor growth modeling, Error control coding, Resection, Endoscopy, Visualization
We present the development of an optical coherence tomography (OCT) catheter designed for in vivo subsurface imaging during colonoscopy, along with the results of a clinical pilot study involving 36 subjects to assess its ability to characterize colorectal polyps in real time. High-resolution cross-sectional OCT imaging of polyp microstructure revealed distinct morphological structures that correlated with histological findings, including tubular adenoma, tubulovillous adenoma, sessile serrated polyps, and cancer. To enhance the in vivo diagnostic capabilities, we integrated a Vision Transformer (ViT)-based deep learning classifier to differentiate between cancerous and complex benign polyps, and achieved 100% accuracy for 5 test cases. Our findings suggest that the OCT catheter combined with deep learning complements standard-of-care imaging and has the potential to enhance real-time polyp characterization and improve clinical decision-making.
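As a rough sketch of how a ViT-based binary classifier for OCT B-scans could be set up, the snippet below uses torchvision's stock ViT-B/16 with a two-class head; the input size, channel handling, and head are assumptions, not the classifier reported above.

```python
# Sketch of a ViT-based binary classifier for OCT B-scans (cancer vs. complex benign),
# using torchvision's stock ViT-B/16. Preprocessing and head size are assumptions.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

model = vit_b_16(weights=None)                                  # or ImageNet-pretrained weights
model.heads.head = nn.Linear(model.heads.head.in_features, 2)   # two classes

# OCT B-scans are grayscale; repeat the channel to fit the 3-channel 224x224 input.
bscan = torch.randn(1, 1, 224, 224)
logits = model(bscan.repeat(1, 3, 1, 1))
prob_cancer = torch.softmax(logits, dim=1)[0, 1]
print(float(prob_cancer))
```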
KEYWORDS: Histograms, Tumor growth modeling, Image classification, Breast cancer, Feature extraction, Breast, Data modeling, Image restoration, Deep learning, Education and training
Significance: Ultrasound (US)-guided diffuse optical tomography (DOT) has demonstrated great potential for breast cancer diagnosis, in which real-time or near real-time diagnosis with high accuracy is desired. Aim: We aim to use US-guided DOT to achieve an automated, fast, and accurate classification of breast lesions. Approach: We propose a two-stage classification strategy with deep learning. In the first stage, US images and histograms created from DOT perturbation measurements are combined to predict benign lesions. The non-benign, suspicious lesions are then passed to the second stage, which combines US image features, DOT histogram features, and 3D DOT reconstructed images for the final diagnosis. Results: The first stage alone identified 73.0% of benign cases without image reconstruction. In distinguishing between benign and malignant breast lesions in patient data, the two-stage classification approach achieved an area under the receiver operating characteristic curve of 0.946, outperforming the diagnoses of all single-modality models and of a single-stage classification model that combines all US image, DOT histogram, and imaging features. Conclusions: The proposed two-stage classification strategy achieves better classification accuracy than single-modality-only models and a single-stage classification model that combines all features. It can potentially distinguish breast cancers from benign lesions in near real time.
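The sketch below illustrates only the two-stage decision flow on synthetic features: stage 1 rules out likely-benign lesions from US and DOT-histogram features, and only suspicious cases proceed to stage 2, which also uses reconstruction-derived features. The classifiers, feature dimensions, and the 0.5 thresholds are illustrative assumptions, not the trained models or operating points from the study.

```python
# Schematic of the two-stage decision flow on synthetic feature vectors.
# Feature names, dimensions, and thresholds are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
us_hist_feats = rng.normal(size=(n, 20))   # stage 1: US + DOT histogram features
recon_feats = rng.normal(size=(n, 10))     # stage 2 extras: 3D DOT reconstruction features
labels = rng.integers(0, 2, size=n)        # 0 = benign, 1 = malignant (synthetic)

stage1 = LogisticRegression(max_iter=1000).fit(us_hist_feats, labels)
stage2 = LogisticRegression(max_iter=1000).fit(
    np.hstack([us_hist_feats, recon_feats]), labels)

def classify(us_hist, recon):
    """Stage 1 screens out likely-benign lesions; suspicious ones go to stage 2."""
    p1 = stage1.predict_proba(us_hist.reshape(1, -1))[0, 1]
    if p1 < 0.5:                           # confidently benign: stop early, skip reconstruction
        return "benign (stage 1)"
    p2 = stage2.predict_proba(
        np.hstack([us_hist, recon]).reshape(1, -1))[0, 1]
    return "malignant" if p2 >= 0.5 else "benign (stage 2)"

print(classify(us_hist_feats[0], recon_feats[0]))
```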
Ultrasound (US)-guided diffuse optical tomography (DOT) has demonstrated potential for breast cancer diagnosis. Previous diagnostic strategies all require image reconstruction, which hinders real-time diagnosis. In this study, we propose a deep learning approach that combines DOT frequency-domain measurement data and co-registered US images to classify breast lesions. The combined deep learning model achieved an area under the receiver operating characteristic curve (AUC) of 0.886 in distinguishing between benign and malignant breast lesions in patient data without reconstructing images.
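One plausible way to fuse the two inputs is a two-branch network, with a small CNN for the co-registered US image and an MLP for the DOT frequency-domain measurement vector, concatenated before a classification head. The sketch below uses placeholder layer sizes and a placeholder measurement count; it is not the network reported in the study.

```python
# Sketch of a two-branch fusion classifier: a CNN branch for the co-registered US image
# and an MLP branch for the DOT frequency-domain measurement vector.
# All layer sizes and the measurement count are placeholders.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, n_meas=252):
        super().__init__()
        self.us_branch = nn.Sequential(          # US image branch
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.dot_branch = nn.Sequential(         # DOT measurement branch (amplitude/phase)
            nn.Linear(n_meas, 64), nn.ReLU(), nn.Linear(64, 16), nn.ReLU(),
        )
        self.head = nn.Linear(16 + 16, 2)        # benign vs. malignant

    def forward(self, us_img, dot_meas):
        return self.head(torch.cat([self.us_branch(us_img),
                                    self.dot_branch(dot_meas)], dim=1))

model = FusionClassifier()
logits = model(torch.randn(1, 1, 128, 128), torch.randn(1, 252))
print(logits.shape)  # torch.Size([1, 2])
```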
Significance: “Difference imaging,” which reconstructs target optical properties using measurements with and without target information, is often used in diffuse optical tomography (DOT) in vivo imaging. However, taking additional reference measurements is time consuming, and mismatches between the target medium and the reference medium can cause inaccurate reconstruction.Aim: We aim to streamline the data acquisition and mitigate the mismatch problems in DOT difference imaging using a deep learning-based approach to generate data from target measurements only.Approach: We train an artificial neural network to output data for difference imaging from target measurements only. The model is trained and validated on simulation data and tested with simulations, phantom experiments, and clinical data from 56 patients with breast lesions.Results: The proposed method has comparable performance to the traditional approach using measurements without mismatch between the target side and the reference side, and it outperforms the traditional approach using measurements when there is a mismatch. It also improves the target-to-artifact ratio and lesion localization in patient data.Conclusions: The proposed method can simplify the data acquisition procedure, mitigate mismatch problems, and improve reconstructed image quality in DOT difference imaging.
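Conceptually, the trained network maps a target-only measurement vector to lesion-free, reference-like data, whose difference from the target data then feeds the usual difference-imaging reconstruction. The sketch below shows that mapping with placeholder layer widths and measurement count, not the trained model from the paper.

```python
# Minimal sketch: a fully connected network that maps target-only measurements to
# reference-like measurements for difference imaging. Sizes are placeholders.
import torch
import torch.nn as nn

n_meas = 252  # placeholder number of source-detector measurements (amplitude + phase)

ref_predictor = nn.Sequential(
    nn.Linear(n_meas, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, n_meas),                 # predicted lesion-free "reference" data
)

target_meas = torch.randn(1, n_meas)
predicted_reference = ref_predictor(target_meas)
perturbation = target_meas - predicted_reference   # input to difference-imaging reconstruction
print(perturbation.shape)
```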
We present initial results of OCT imaging of human fallopian tubes obtained with miniature OCT catheters. Two OCT catheters were fabricated to image the fallopian tube from the outside and from the inside. The catheter used to image from the outside has an outer diameter of 3.8 mm, a lateral resolution of ~10 um, and an axial resolution of 6 um; special attention was paid to the fimbriated end. The smaller catheter used to image the inner mucosa layer has an outer diameter of 1.5 mm. 3D structures of normal and malignant human fallopian tubes were revealed.
In this study, we propose to combine a miniaturized optical coherence tomography (OCT) catheter with a residual neural network (ResNet)-based deep learning model to differentiate normal from cancerous colorectal tissue in fresh ex vivo specimens. The OCT catheter has an outer diameter of 3.8 mm, a lateral resolution of ~10 um, and an axial resolution of 6 um. A customized ResNet-based neural network was trained on both benchtop and catheter images. An AUC of 0.97 was achieved in distinguishing between normal and cancerous colorectal tissue when testing on the remaining catheter images.
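A minimal version of a ResNet-based OCT classifier can be assembled from torchvision's stock ResNet-18, as sketched below; the customized architecture and training details from the study are not reproduced.

```python
# Sketch of a ResNet-based binary classifier for catheter OCT images using torchvision's
# stock ResNet-18; the customized network in the study is not reproduced here.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)                   # or ImageNet-pretrained weights
model.fc = nn.Linear(model.fc.in_features, 2)    # normal vs. cancerous

oct_image = torch.randn(1, 1, 224, 224)          # grayscale OCT frame
logits = model(oct_image.repeat(1, 3, 1, 1))     # repeat channel for the 3-channel stem
print(torch.softmax(logits, dim=1))
```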
Significance: In general, image reconstruction methods used in diffuse optical tomography (DOT) are based on the diffusion approximation, and they consider the breast tissue as a homogeneous, semi-infinite medium. However, the semi-infinite medium assumption used in DOT reconstruction is not valid when the chest wall is underneath the breast tissue.
Aim: We aim to reduce the chest wall’s effect on the estimated average optical properties of breast tissue and obtain an accurate forward model for DOT reconstruction.
Approach: We propose a deep learning-based neural network approach in which a convolutional neural network (CNN) is trained to simultaneously obtain accurate optical property values for both the breast tissue and the chest wall.
Results: The CNN model shows great promise in reducing errors in estimating the optical properties of the breast tissue in the presence of a shallow chest wall. For patient data, the CNN model predicted the breast tissue optical absorption coefficient, which was independent of chest wall depth.
Conclusions: Our proposed method can be readily used in DOT and diffuse spectroscopy measurements to improve the accuracy of estimated tissue optical properties.
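As an illustration of the approach, the sketch below maps frequency-domain reflectance curves to the absorption and reduced scattering coefficients of a two-layer (breast plus chest wall) medium with a small 1D CNN. The input format, architecture, and output set are assumptions for illustration only, not the trained network described above.

```python
# Sketch of a 1D CNN that maps frequency-domain reflectance measurements to the optical
# properties of a two-layer (breast + chest wall) medium. Sizes and outputs are placeholders.
import torch
import torch.nn as nn

class TwoLayerPropertyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, 5, padding=2), nn.ReLU(),   # 2 channels: log-amplitude, phase
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 4)   # [mua_breast, musp_breast, mua_chestwall, musp_chestwall]

    def forward(self, x):
        return self.head(self.features(x))

model = TwoLayerPropertyNet()
measurements = torch.randn(1, 2, 64)   # 64 source-detector / frequency samples (placeholder)
mua_b, musp_b, mua_cw, musp_cw = model(measurements)[0]
print(float(mua_b), float(musp_b))
```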
A machine learning (ML) model with physical constraints is introduced to perform diffuse optical tomography (DOT) reconstruction. Here, for the first time, we combine ultrasound-guided DOT with ML to facilitate DOT reconstruction. Our method has two key components: (i) an unsupervised auto-encoder with transfer learning is adopted for clinical data without a ground truth, and (ii) physical constraints are implemented to achieve accurate reconstruction. Both qualitative and quantitative results demonstrate that the accuracy of the proposed method surpasses that of the existing model. In a phantom study, compared with the Born conjugate gradient descent (CGD) reconstruction method, the ML method improves the reconstructed maximum absorption coefficient by 18.3% on a high-contrast phantom and by 61.3% on a low-contrast phantom, with an improved depth distribution of the absorption maps. In a clinical study, better contrast was obtained from a treated breast cancer imaged pre- and post-treatment.
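One way to read the combination of an unsupervised auto-encoder and physical constraints is sketched below: an encoder network maps measurements to an absorption image, a fixed linearized forward operator acts as the physics "decoder", and a penalty encodes a physical prior, so no ground-truth images are needed. The sensitivity matrix, sizes, and penalty weights are placeholders, and the transfer-learning step is omitted; this is not the published model.

```python
# Sketch of a label-free, physics-constrained auto-encoder idea for DOT reconstruction.
# The sensitivity matrix, sizes, and penalty weights are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_meas, n_vox = 252, 1000
W = torch.randn(n_meas, n_vox) * 0.01          # placeholder Born-type sensitivity matrix

encoder = nn.Sequential(                        # measurements -> absorption image
    nn.Linear(n_meas, 256), nn.ReLU(),
    nn.Linear(256, n_vox),
)

measured = torch.randn(8, n_meas)               # batch of synthetic perturbation data
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(100):
    opt.zero_grad()
    delta_mua = encoder(measured)               # "encoded" absorption change
    predicted = delta_mua @ W.T                 # physics decoder: forward projection
    data_fit = torch.mean((predicted - measured) ** 2)
    nonneg = torch.mean(torch.relu(-delta_mua) ** 2)   # physical prior: non-negative absorption
    loss = data_fit + 10.0 * nonneg
    loss.backward()
    opt.step()

print(float(loss))
```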
In this study, we propose to combine a miniaturized optical coherence tomography (OCT) catheter with pattern recognition (PR) OCT to differentiate normal from neoplastic colorectal tissue in real time. The OCT catheter has a lateral resolution of 17.15 um and an axial resolution of 6 um. The PR-OCT system is trained with RetinaNet for pattern recognition tasks. Our method leverages recent advances in object detection, which localizes and classifies diagnostic features in real time, and integration with an endoscope, which promises future in vivo studies. According to our previous reports, a sensitivity of 100% and a specificity of 99.7% can be reached.
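Setting up a RetinaNet detector is straightforward with torchvision, as sketched below; the class definitions, image size, and training pipeline used in PR-OCT are assumptions not reproduced here.

```python
# Sketch of a RetinaNet detector for OCT feature detection with torchvision; the class
# count, image size, and training pipeline used in PR-OCT are not reproduced.
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

# num_classes includes background; "2" (background + one diagnostic pattern) is a placeholder.
model = retinanet_resnet50_fpn(weights=None, weights_backbone=None, num_classes=2)
model.eval()

oct_frame = torch.randn(3, 480, 640)            # OCT B-scan replicated to 3 channels
with torch.no_grad():
    detections = model([oct_frame])             # list of dicts: boxes, labels, scores
print(detections[0]["boxes"].shape)
```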
In this ex vivo study, we report the first use of texture features and computer vision-based image features acquired from en face scattering coefficient maps to diagnose colorectal diseases. From these maps, texture features were extracted using a gray-level co-occurrence matrix algorithm, and computer vision-based image features were derived using a scale-invariant feature transform algorithm. Twenty-five features were obtained, and thirty-three patients were recruited. Machine learning models were trained using an optimal feature set. The trained models achieved 94.7% sensitivity and 94.0% specificity in differentiating abnormal from normal tissue, and 86.9% sensitivity and 85.0% specificity in distinguishing adenomatous polyp from cancer.
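The two feature families can be computed with standard libraries, as sketched below using scikit-image for the gray-level co-occurrence matrix (GLCM) properties and OpenCV for SIFT; the specific 25-feature set, quantization, and aggregation used in the study are not reproduced.

```python
# Sketch of extracting GLCM texture features (scikit-image) and SIFT keypoints (OpenCV)
# from an en face scattering-coefficient map. The study's exact 25 features are not reproduced.
import numpy as np
import cv2
from skimage.feature import graycomatrix, graycoprops

# Placeholder map: scattering coefficients scaled to 8-bit gray levels.
scatter_map = (np.random.rand(256, 256) * 255).astype(np.uint8)

# GLCM texture features at one pixel distance and four orientations.
glcm = graycomatrix(scatter_map, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)
texture = {prop: graycoprops(glcm, prop).mean()
           for prop in ("contrast", "homogeneity", "energy", "correlation")}

# SIFT keypoints/descriptors as computer-vision features (e.g., keypoint count, descriptor stats).
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(scatter_map, None)

print(texture, len(keypoints))
```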
Ultrasound (US)-guided diffuse optical tomography (DOT) has demonstrated potential value for screening and treatment monitoring of breast cancers. However, in clinical cases, the chest wall, poor probe-tissue contact, and tissue heterogeneity can create image artifacts, causing misinterpretation of lesion images. In the current work, realistic and flexible three-dimensional numerical breast phantoms were generated using the Virtual Imaging Clinical Trials for Regulatory Evaluation (VICTRE) tools developed by the U.S. Food and Drug Administration (FDA). By selecting physical attributes and tissue optical properties, the VICTRE breast phantoms were, for the first time, adopted in DOT for in silico studies. Monte Carlo simulations were conducted to generate the forward data. Edge artifacts (hot spots on the edges of the regions of interest) were found in the reconstructed images when there was a mismatch between the lesion-side breast and the contralateral reference-side breast. We propose a fully automated, connected-components-analysis-based algorithm that removes these edge artifacts and improves lesion reconstruction.
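The core idea of the artifact-removal step can be illustrated with a few lines of SciPy: label bright connected components in the reconstructed map and suppress those touching the boundary of the region of interest. The threshold, border rule, and fill value below are illustrative choices, not the exact published algorithm.

```python
# Sketch of connected-components-based edge-artifact removal: label hot spots and discard
# components touching the ROI boundary. Threshold and fill rule are illustrative only.
import numpy as np
from scipy import ndimage

recon = np.random.rand(64, 64) * 0.05          # placeholder reconstructed absorption map
mask = recon > 0.7 * recon.max()               # candidate "hot spot" voxels
labels, n = ndimage.label(mask)                # connected components

border_labels = np.unique(np.concatenate([
    labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
cleaned = recon.copy()
for lab in border_labels:
    if lab != 0:                               # 0 is background
        cleaned[labels == lab] = recon.mean()  # suppress edge-touching components

print(n, "components found,", int(len(border_labels) - (0 in border_labels)), "removed")
```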
A multi-spectral, portable, hand-held, LED-based spatial frequency domain imaging (SFDI) system was used for ex vivo imaging of pretreatment and post-treatment human colon and rectal tissues. Freshly excised human colon and rectal tissue samples were imaged with the hand-held SFDI probe at nine wavelengths extending from the visible to the NIR (660-950 nm). Important tumor biomarkers such as hemoglobin, scatter amplitude, scatter spectral slope, and water and lipid content were quantitatively extracted from the SFDI absorption and scattering images. Significant differences were observed in both the absorption and scattering distributions of normal, tumor, and polyp tissue, as well as between pretreated and post-treated tumors.
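Once absorption coefficients are available at several wavelengths, chromophore concentrations follow from a linear Beer's-law fit; the sketch below solves for oxy- and deoxyhemoglobin (water and lipid would simply add columns to the same system). The extinction-coefficient and absorption values are rough placeholders for illustration, not calibrated data.

```python
# Sketch of recovering chromophore concentrations from multi-wavelength SFDI absorption
# values via a linear Beer's-law fit. All numbers below are rough placeholders.
import numpy as np

wavelengths = np.array([660, 730, 850, 950])     # nm, a subset of the 9 SFDI wavelengths
# Rows: wavelengths; columns: [HbO2, Hb] extinction coefficients (placeholder magnitudes).
E = np.array([[0.08, 0.75],
              [0.10, 0.30],
              [0.27, 0.18],
              [0.30, 0.17]])
mua = np.array([0.012, 0.010, 0.013, 0.015])     # measured absorption, 1/mm (placeholder)

# Least-squares solve mua = E @ c for concentrations c = [HbO2, Hb].
conc, *_ = np.linalg.lstsq(E, mua, rcond=None)
hbo2, hb = conc
total_hb = hbo2 + hb
so2 = hbo2 / total_hb                            # oxygen saturation estimate
print(total_hb, so2)
```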