Analysis of cell behavior in videos of fluorescence imagery using uncertainty-based Bayesian transformer
13 March 2024
Abstract
Convolutional neural networks (CNNs) have significantly advanced the analysis of cellular movements. However, CNN-based networks suffer from information loss caused by the intrinsic characteristics of convolution operators, which degrades the performance of cell segmentation and tracking. Researchers have proposed consecutive CNNs to overcome these limitations, although such models remain at a preliminary stage. In this study, we present a novel approach that utilizes cumulative CNNs to segment and track cells in fluorescence videos. Our method incorporates a state-of-the-art Vision Transformer (ViT) and a Bayesian network to improve accuracy and performance. By leveraging the ViT architecture together with the Bayesian network, we aim to mitigate information loss and enhance the precision of cell segmentation and tracking.
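The abstract does not describe the model internals, so as a hedged illustration only: one standard way to obtain Bayesian-style uncertainty from a deep network is Monte Carlo dropout, where dropout stays active at inference and the spread over repeated stochastic passes approximates per-pixel uncertainty. The toy sketch below demonstrates that idea on a handful of "fluorescence intensity" values; all names and the tiny one-layer "network" are hypothetical and not the authors' implementation.

```python
import math
import random
import statistics

def dropout(x, p=0.5):
    # Bernoulli dropout, kept ON at inference time (the MC-dropout trick);
    # surviving values are rescaled by 1/(1-p) as in inverted dropout.
    return [0.0 if random.random() < p else v / (1 - p) for v in x]

def forward(pixels, weights, p=0.5):
    # Toy "network": dropout followed by a per-pixel sigmoid score,
    # standing in for a segmentation head's foreground probability.
    dropped = dropout(pixels, p)
    return [1.0 / (1.0 + math.exp(-(w * v))) for v, w in zip(dropped, weights)]

def mc_predict(pixels, weights, n_samples=100):
    # Several stochastic forward passes: the per-pixel mean is the
    # segmentation score, the standard deviation is the uncertainty.
    runs = [forward(pixels, weights) for _ in range(n_samples)]
    means = [statistics.mean(col) for col in zip(*runs)]
    stds = [statistics.stdev(col) for col in zip(*runs)]
    return means, stds

random.seed(0)
pixels = [0.1, 2.5, -1.0, 3.0]   # toy fluorescence intensities
weights = [1.0, 1.0, 1.0, 1.0]
mean, unc = mc_predict(pixels, weights)
```

In a full pipeline, pixels with high `unc` could be flagged as unreliable before the tracking stage, which is the role uncertainty plays in the approach sketched by the abstract.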
Conference Presentation
© (2024) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Kyungsu Lee, Moon Hwan Lee, and Jae Youn Hwang "Analysis of cell behavior in videos of fluorescence imagery using uncertainty-based Bayesian transformer", Proc. SPIE PC12846, Imaging, Manipulation, and Analysis of Biomolecules, Cells, and Tissues XXII, PC128460C (13 March 2024); https://doi.org/10.1117/12.3002073
KEYWORDS
Fluorescence
Transformers
Video
Deep learning
Convolutional neural networks
Fluorescence intensity
Image segmentation