Breast cancer cell analysis has traditionally focused on cell and intracellular organelle morphology. Recent research has demonstrated that organelle topology-based cancer cell classification is considerably more accurate when handcrafted feature extraction and machine learning classifiers are applied to fluorescent confocal microscopy images. However, feature extraction and classification with this methodology require manual segmentation and computational rendering of organelles. Herein, we employ convolutional neural networks (CNNs) and Gradient-weighted Class Activation Mapping (GradCAM) for fast, end-to-end classification and visual interpretation of confocal fluorescence microscopy images based on spatial organelle features. First, raw 3D images are filtered and preprocessed into 2D image patches for the CNN. To replicate feature analysis of the surface-surface contact area, a marginal intermediate fusion CNN is implemented to classify each patch. GradCAM is then applied post hoc to generate a heatmap of the regions most influential for each patch's classification. The heatmap patches are then stitched back together according to the locations from which their source patches were extracted, yielding an overall heatmap of the entire microscopy image. Furthermore, finer-grained heatmaps are obtained by using patch overlap and weighting during the initial patch preprocessing. On a dataset of six breast cancer cell lines, this methodology achieved a classification accuracy of 95.7% while also visualizing the regions indicative of particular cancer cell lines. These findings demonstrate the efficacy of deep learning and GradCAM for fast, interpretable organelle-based cancer cell classification.
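The patch-based GradCAM reconstruction described above can be illustrated with a short sketch. The snippet below assumes a trained single-channel 2D CNN classifier (`model`) and a reference to its last convolutional layer (`target_layer`); the patch size, stride, and overlap weighting are illustrative choices rather than the settings used in the study, and the marginal intermediate fusion architecture itself is not reproduced here.

```python
# Sketch: overlapping patch extraction, per-patch Grad-CAM, and weighted
# reconstruction of a full-image heatmap. Patch size/stride are assumptions.
import numpy as np
import torch
import torch.nn.functional as F

def grad_cam(model, patch, target_layer, class_idx):
    """Grad-CAM heatmap for a single patch tensor of shape (1, C, H, W)."""
    activations, gradients = [], []
    fwd = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    bwd = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    logits = model(patch)
    model.zero_grad()
    logits[0, class_idx].backward()
    fwd.remove(); bwd.remove()

    acts, grads = activations[0], gradients[0]            # (1, K, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)         # channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=patch.shape[-2:], mode="bilinear", align_corners=False)
    return cam[0, 0].detach().cpu().numpy()

def full_image_heatmap(model, image, target_layer, patch=224, stride=112):
    """Classify overlapping patches of `image` (H, W) and accumulate their
    Grad-CAM maps, weighted by overlap count, into one full-size heatmap."""
    H, W = image.shape
    heat = np.zeros((H, W), dtype=np.float32)
    count = np.zeros((H, W), dtype=np.float32)

    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            tile = np.ascontiguousarray(image[y:y + patch, x:x + patch])
            t = torch.from_numpy(tile).float()[None, None]     # (1, 1, patch, patch)
            with torch.no_grad():
                pred = model(t).argmax(dim=1).item()
            cam = grad_cam(model, t, target_layer, pred)
            heat[y:y + patch, x:x + patch] += cam
            count[y:y + patch, x:x + patch] += 1.0

    return heat / np.maximum(count, 1e-6)                      # overlap weighting
```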
Significance: Functional near-infrared spectroscopy (fNIRS), a well-established neuroimaging technique, enables monitoring of cortical activation while subjects remain unconstrained. However, motion artifacts are a common type of noise that can hamper the interpretation of fNIRS data. The methods currently proposed to mitigate motion artifacts in fNIRS data still depend on expert knowledge and post hoc parameter tuning.
Aim: Here, we report a deep learning method for removing motion artifacts from fNIRS data while remaining assumption free. To the best of our knowledge, this is the first investigation to report the use of a denoising autoencoder (DAE) architecture for motion artifact removal.
Approach: To facilitate the training of this deep learning architecture, we (i) designed a specific loss function and (ii) generated data to mimic the properties of recorded fNIRS sequences.
Results: The DAE model outperformed conventional methods in lowering residual motion artifacts, decreasing mean squared error, and increasing computational efficiency.
Conclusion: Overall, this work demonstrates the potential of deep learning models for accurate and fast motion artifact removal in fNIRS data.
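As a concrete illustration of the Aim and Approach above, the sketch below shows a small 1-D convolutional denoising autoencoder trained on synthetic data with a composite loss. The network layout, the synthetic artifact model (random spikes plus baseline drift), and the loss weighting are assumptions for illustration only, not the architecture, data generator, or loss function reported in the study.

```python
# Sketch: 1-D denoising autoencoder for fixed-length fNIRS windows (L samples).
import torch
import torch.nn as nn

class DAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, 9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, 9, stride=2, padding=4), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, 9, stride=2, padding=4, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 9, stride=2, padding=4, output_padding=1),
        )

    def forward(self, x):              # x: (batch, 1, L)
        return self.decoder(self.encoder(x))

def synthetic_batch(batch=32, L=512):
    """Generate clean hemodynamic-like signals and corrupt them with
    spike and baseline-drift artifacts (a simple stand-in for recorded fNIRS)."""
    t = torch.linspace(0, 4 * torch.pi, L)
    clean = torch.sin(t)[None, None, :] * torch.rand(batch, 1, 1)
    noisy = clean + 0.05 * torch.randn(batch, 1, L)
    spikes = (torch.rand(batch, 1, L) < 0.01).float() * 2.0 * torch.randn(batch, 1, L)
    drift = torch.cumsum(spikes.abs() * 0.1, dim=-1)    # slow baseline shift after spikes
    return noisy + spikes + drift, clean

def loss_fn(pred, clean):
    """MSE plus a first-difference term penalizing residual high-frequency artifacts."""
    mse = nn.functional.mse_loss(pred, clean)
    diff = nn.functional.mse_loss(pred[..., 1:] - pred[..., :-1],
                                  clean[..., 1:] - clean[..., :-1])
    return mse + 0.5 * diff

model = DAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    noisy, clean = synthetic_batch()
    loss = loss_fn(model(noisy), clean)
    opt.zero_grad(); loss.backward(); opt.step()
```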
This paper introduces a generative adversarial network (GAN) for low-dose CT (LDCT) simulation, the inverse process of network-based LDCT denoising. Within our GAN framework, the generator is an encoder-decoder network with a shortcut connection that produces realistic noisy LDCT images. To ensure satisfactory results, a conditional batch normalization layer is incorporated into the bottleneck between the encoder and the decoder. Once the model is trained, a Gaussian noise generator serves as the latent variable controlling the noise in the generated CT images. Using the Mayo Low-Dose CT Challenge dataset, the proposed network was trained on image patches and then produced full-size LDCT images with different noise distributions at various noise levels. The network-generated LDCT images can be used to test the robustness of current LDCT denoising models and to support other imaging tasks such as optimizing patient radiation dose and evaluating model observers.
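A minimal sketch of the conditioning mechanism is given below: a conditional batch normalization layer whose affine parameters are predicted from a Gaussian latent vector, placed at the bottleneck of an encoder-decoder generator with an input shortcut. Layer sizes, the latent dimensionality, and the way the latent enters the network are illustrative assumptions rather than the paper's exact design; the discriminator and adversarial training loop are omitted.

```python
# Sketch: noise-level-conditioned generator for low-dose CT simulation.
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """BatchNorm whose scale and shift are predicted from a conditioning vector z."""
    def __init__(self, channels, z_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels, affine=False)
        self.gamma = nn.Linear(z_dim, channels)
        self.beta = nn.Linear(z_dim, channels)

    def forward(self, x, z):
        h = self.bn(x)
        g = self.gamma(z).unsqueeze(-1).unsqueeze(-1)
        b = self.beta(z).unsqueeze(-1).unsqueeze(-1)
        return g * h + b

class Generator(nn.Module):
    """Encoder-decoder mapping a normal-dose patch plus a Gaussian latent z to a
    simulated low-dose patch; the input shortcut keeps anatomy intact."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.cbn = ConditionalBatchNorm2d(64, z_dim)       # bottleneck conditioning
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, ndct, z):
        h = self.encoder(ndct)
        h = self.cbn(h, z)
        noise = self.decoder(h)
        return ndct + noise          # shortcut: the generator only models the noise

# At inference, z ~ N(0, sigma^2 I) controls the simulated noise level.
g = Generator()
ndct = torch.randn(4, 1, 64, 64)     # placeholder normal-dose patches
z = 0.5 * torch.randn(4, 64)         # larger sigma -> noisier simulated LDCT
ldct_sim = g(ndct, z)
```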
Over the past few years, deep neural networks have made significant progress in denoising low-dose CT images. A trained denoising network, however, may not generalize well across dose levels, because the noise distribution is dose dependent. In practice, a trained network must be re-trained before it can be applied to a new dose level, which limits the clinical applicability of deep neural networks. This article introduces a deep learning approach that avoids such re-training by relying on a transfer learning strategy. More precisely, the transfer learning framework uses a progressive denoising model in which an elementary neural network serves as a basic denoising unit. The basic units are cascaded so that the output of one unit becomes the input to the next, progressively refining the result, and the final denoised image is a linear combination of the outputs of the individual units. To demonstrate this transfer learning approach, a basic CNN unit is trained on the Mayo low-dose CT dataset. The linear combination parameters of the successive denoising units are then trained on a different image dataset, the MGH low-dose CT dataset, which contains CT images acquired at four different dose levels. Compared with a commercial iterative reconstruction approach, the transfer learning framework delivered substantially better denoising performance.
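The cascading and linear-combination idea can be sketched as follows, assuming a small residual CNN as the basic denoising unit. The unit architecture, number of stages, and optimizer settings are placeholders; only the overall structure (a frozen unit applied repeatedly, with trainable linear weights over the stage outputs) reflects the framework described above.

```python
# Sketch: progressive denoising with a frozen basic unit and trainable
# linear combination weights adapted to a new dose level.
import torch
import torch.nn as nn

class BasicUnit(nn.Module):
    """Elementary residual denoising CNN, trained once on a source dataset
    (e.g. the Mayo low-dose CT data) and then frozen."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return x - self.net(x)       # predict and subtract the noise

class ProgressiveDenoiser(nn.Module):
    """Cascade the frozen unit and combine the stage outputs linearly; only the
    combination weights are trained on the target dose level."""
    def __init__(self, unit, n_stages=4):
        super().__init__()
        self.unit = unit
        for p in self.unit.parameters():
            p.requires_grad = False
        self.weights = nn.Parameter(torch.full((n_stages,), 1.0 / n_stages))

    def forward(self, x):
        outputs = []
        for _ in range(len(self.weights)):
            x = self.unit(x)                          # output of one unit feeds the next
            outputs.append(x)
        stacked = torch.stack(outputs, dim=0)          # (stages, B, 1, H, W)
        w = self.weights.view(-1, 1, 1, 1, 1)
        return (w * stacked).sum(dim=0)                # linear combination of stage outputs

# Adapting to a new dose level fits only the handful of linear weights.
unit = BasicUnit()                                     # assume pre-trained weights are loaded here
model = ProgressiveDenoiser(unit, n_stages=4)
opt = torch.optim.Adam([model.weights], lr=1e-2)
```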