Endoscopic optical coherence tomography (OCT) is increasingly used in endoluminal imaging because of its high scanning speed and near-cellular spatial resolution. Scanning of the endoscopic probe is implemented mechanically to achieve circumferential rotation and axial pullback. However, this scanning suffers from nonuniform rotational distortion (NURD) caused by mechanical friction between the rotating probe and the protective sheath, irregular motor rotation, and other factors. Correcting NURD is a prerequisite for endoscopic OCT imaging and its functional extensions, such as angiography and elastography. Previous methods require time-consuming feature tracking or cross-correlation calculations and thus sacrifice temporal resolution. In this work, we propose a cross-attention learning method to accelerate NURD correction in endoscopic OCT. Our method is inspired by the recent success of the self-attention mechanism in natural language processing and computer vision. By leveraging its ability to model long-range dependencies, we can directly obtain the correlation between OCT A-lines at any distance, thus accelerating the NURD correction. We develop an end-to-end stacked cross-attention network and design three types of optimization constraints. We compare our method with two traditional feature-based methods and a CNN-based method on two publicly available endoscopic OCT datasets and a private dataset collected on our home-built endoscopic OCT system. Our method achieves a ~3× speedup, reaching real-time performance (26±3 fps), with superior correction quality.
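To illustrate the core mechanism, the following is a minimal cross-attention sketch in PyTorch, where queries come from the distorted frame and keys/values from a reference frame, so the attention weights directly encode A-line-to-A-line correlation at any distance. The layer sizes, tensor shapes, and frame-pairing scheme are illustrative assumptions, not the paper's actual network.

import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        # Queries from the current frame; keys/values from a reference frame.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, cur, ref):
        # cur, ref: (batch, n_alines, dim) -- per-A-line feature embeddings
        out, weights = self.attn(query=cur, key=ref, value=ref)
        # weights acts as an A-line correlation map between the two frames
        return self.norm(cur + out), weights

cur = torch.randn(1, 512, 64)   # 512 A-lines of the distorted frame (assumed size)
ref = torch.randn(1, 512, 64)   # 512 A-lines of a reference frame
feat, corr = CrossAttention()(cur, ref)
print(feat.shape, corr.shape)   # (1, 512, 64), (1, 512, 512)

In the paper's setting, several such blocks would be stacked end-to-end and trained against the described optimization constraints; this sketch shows only the correlation-modeling step that replaces explicit cross-correlation search.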
Injection of fluorescent dye is a safety concern in fluorescein angiography (FA). This has led to cautious use of this clinical diagnostic modality in certain populations (e.g., children and patients with allergies). In recent years, non-invasive functional imaging of fundus blood flow by computational means, such as OCT angiography, has become an active area of ophthalmic research. Deep learning-based prediction of FA from color fundus images is another emerging approach, which exploits the nonlinear, high-dimensional mapping capability of deep neural networks to establish the relationship between these two imaging modalities. Most such studies use a small publicly available dataset and rely on algorithm design to improve prediction accuracy. However, the limited performance of these methods has attracted little attention and raises doubts about the viability of the approach. Here, we show that prediction accuracy can be significantly improved by simply expanding the training dataset by a factor of ~10, without introducing new algorithms. While this result is expected given the data-driven nature of the model, it suggests that developing such deep learning-based prediction requires a more diverse approach rather than a focus on algorithmic improvements alone.
Deep learning boosts the performance of automatic OCT segmentation, which is a prerequisite for standardized diagnostic and therapeutic procedures. However, training a deep neural network requires laborious data labeling, and the trained models only work well on data from the same manufacturer, imaging protocol, and region of interest. Here we propose a novel learning method to reduce labeling costs. By labeling and training on a single image, we achieved segmentation accuracy comparable to that of a U-Net model trained on ~25 to 50 labeled images. This reduction in labeling costs could significantly improve the flexibility and generalization of deep learning-based OCT segmentation.
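As one plausible realization of single-image training (the abstract does not specify the method), heavy random augmentation of the one labeled B-scan can stand in for a larger labeled set. The tiny network, augmentations, and hyperparameters below are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny fully-convolutional stand-in for a segmentation net (illustrative only).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 4, 1),   # 4 = assumed number of retinal layer classes
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# The single labeled B-scan; random tensors here as placeholders.
image = torch.randn(1, 512, 512)          # (channels, depth, width)
label = torch.randint(0, 4, (512, 512))   # per-pixel layer class

def random_augment(img, lbl):
    # Horizontal flip and random crop as a stand-in for a richer pipeline.
    if torch.rand(1) < 0.5:
        img, lbl = img.flip(-1), lbl.flip(-1)
    i = torch.randint(0, 64, (1,)).item()
    return img[..., i:i+448], lbl[..., i:i+448]

for step in range(2000):
    x, y = random_augment(image, label)
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    opt.zero_grad(); loss.backward(); opt.step()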
By taking advantage of the inherent flexibility of low-cost 3D printing materials, we achieved an optical focus tuning accuracy of ~5 µm with a novel structural design, which shrinks the mechanical displacement by a factor of ~11 through a seesaw-like component. Combined with built-in flashlight illumination and an off-the-shelf smartphone lens, the total manufacturing cost of our smartphone-based microscope is less than 4 USD. We demonstrated the capability of this design by imaging thick biological specimens, and, owing to its portability and flexibility, further applied the device to monitoring cultured VX2 tumor cells.
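The seesaw reduction can be checked with simple lever arithmetic; the coarse input step size below is an assumed value chosen to match the reported numbers, not a measured specification.

# Back-of-the-envelope check of the seesaw displacement reduction.
lever_ratio = 11        # reported displacement reduction factor
input_step_um = 55      # assumed coarse adjustment step at the long arm
focus_step_um = input_step_um / lever_ratio
print(f"focus step ~ {focus_step_um:.1f} um")  # ~5 um, matching the reported accuracy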
Significance: Reducing the bit depth is an effective approach to lower the cost of an optical coherence tomography (OCT) imaging device and increase the transmission efficiency in data acquisition and telemedicine. However, a low bit depth will lead to the degradation of the detection sensitivity, thus reducing the signal-to-noise ratio (SNR) of OCT images.
Aim: We propose using deep learning to reconstruct high-SNR OCT images from low bit-depth acquisitions.
Approach: The feasibility of our approach is evaluated by applying it to data quantized to 3 to 8 bits from native 12-bit interference fringes. We employ a pixel-to-pixel generative adversarial network (pix2pixGAN) architecture for the low-to-high bit-depth OCT image translation.
Results: Extensive qualitative and quantitative results show that our method significantly improves the SNR of low bit-depth OCT images. The adopted pix2pixGAN is superior to other possible deep learning and compressed sensing solutions.
Conclusions: Our work demonstrates that the proper integration of OCT and deep learning could benefit the development of healthcare in low-resource settings.
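A sketch of how the low bit-depth inputs can be generated from native 12-bit fringes; uniform rounding is an assumed quantizer, and the pix2pixGAN itself follows the standard published architecture rather than anything shown here.

import numpy as np

def quantize(fringe_12bit, bits):
    # Requantize native 12-bit interference fringes to a lower bit depth
    # by uniform rounding (the exact quantizer used is an assumption).
    levels = 2 ** bits
    x = fringe_12bit.astype(np.float64) / 4095.0   # normalize the 12-bit range
    return np.round(x * (levels - 1)) / (levels - 1)

fringes = np.random.randint(0, 4096, size=(1024,))  # placeholder 12-bit A-line
low = quantize(fringes, bits=3)  # 3- to 8-bit inputs evaluated in the paper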
We propose to use 2D filters in different transform domains, including the Fourier, wavelet, and nonsubsampled contourlet domains, to eliminate this kind of noise in OCT angiography images. Using image entropy and vessel density as metrics to evaluate noise-elimination performance, we found that filtering after the nonsubsampled contourlet transform (NSCT) was the best choice among these approaches. For vessel preservation, wavelet-domain filtering has the advantage of maintaining the signal-to-noise ratio, while NSCT filtering preserves structural similarity to the greatest extent.
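As a minimal example of the simplest of the three compared approaches, the sketch below masks a band of spectral components in the 2D Fourier domain; the mask geometry is an illustrative assumption (the truncated abstract does not specify the noise pattern), and the NSCT variant would require a dedicated toolbox.

import numpy as np

def fourier_domain_filter(img, half_width=2):
    # Suppress a band of spectral components along one frequency axis,
    # a common way to remove directional noise in angiography images.
    # The masked band geometry here is an illustrative assumption.
    F = np.fft.fftshift(np.fft.fft2(img))
    cy = F.shape[0] // 2
    mask = np.ones_like(F)
    mask[cy - half_width : cy + half_width + 1, :] = 0
    mask[cy, F.shape[1] // 2] = 1   # keep the DC component
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F * mask)))

img = np.random.rand(256, 256)      # placeholder en face angiogram
clean = fourier_domain_filter(img)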
Using optical coherence tomography angiography, we measured blood flow in the lateral-wall vessels of the mouse cochlea, directly through the bone, in mice with and without sympathetic neuronal function. We present in vivo imaging of blood flow and mechanical vibration in mice subjected to 30 min of loud sound. Loud sound caused a reduction in blood flow; in mice with superior cervical ganglion ablation, this reduction was partially ameliorated. These results demonstrate that sympathetic innervation likely plays a role in the pathological decrease in blood flow observed in the lateral-wall vessels in response to loud sound.
Using a commercially available 200 kHz swept-source laser, we demonstrated high-resolution, wide-field angiographic imaging of the human retina. Retinal areas of 8 mm × 8 mm and 10 mm × 6 mm were imaged in a single scan within 4 seconds. By montaging four 10 mm × 6 mm scans, we obtained 10 mm × 20 mm wide-field OCT angiography images.
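The montage coverage follows from simple tiling arithmetic; the per-scan overlap below is inferred from the reported numbers, not stated in the abstract.

# Tiling arithmetic for the wide-field montage (overlap value inferred).
n_scans, scan_h_mm, mosaic_h_mm = 4, 6.0, 20.0
overlap_mm = (n_scans * scan_h_mm - mosaic_h_mm) / (n_scans - 1)
print(f"implied overlap per adjacent pair: {overlap_mm:.2f} mm")  # ~1.33 mm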