Computer-aided diagnosis has been widely used for breast ultrasound images, and many deep learning-based models have emerged. However, the datasets used for breast ultrasound classification suffer from class imbalance, which limits the accuracy of breast cancer classification. In this work, we propose a novel dual-branch network (DBNet) to alleviate the imbalance problem and improve classification accuracy. DBNet consists of a conventional learning branch and a re-balancing branch in parallel, which take uniformly sampled data and reverse-sampled data as inputs, respectively. Both branches adopt ResNet-18 to extract features and share all weights except those of the last residual block; they also share a single classifier. The cross-entropy loss of each branch is calculated from its output logits and the corresponding ground-truth labels, and the total loss of DBNet is a linear weighted sum of the two branches' losses. To evaluate DBNet, we conducted breast cancer classification on a dataset of 6309 ultrasound images with malignant nodules and 3527 ultrasound images with benign nodules, using ResNet-18 and the bilateral-branch network (BBN) as baselines. The results demonstrate that DBNet achieves an accuracy of 0.854, outperforming ResNet-18 and BBN by 2.7% and 1.1%, respectively.
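The sampling strategies and combined loss described above can be sketched as follows. The class counts come from the abstract's dataset; the weighting factor `alpha` and all function names are illustrative assumptions, not details from the paper:

```python
import numpy as np

# Class counts from the abstract: 6309 malignant, 3527 benign images.
counts = np.array([6309.0, 3527.0])

# Conventional (uniform/instance-balanced) sampling: p_i proportional to n_i.
p_conv = counts / counts.sum()

# Reversed sampling for the re-balancing branch: p_i proportional to 1/n_i,
# so the minority (benign) class is drawn more often.
p_rev = (1.0 / counts) / (1.0 / counts).sum()

def total_loss(loss_conv, loss_rebal, alpha):
    """Linear weighted sum of the two branches' cross-entropy losses.
    The value of alpha is an assumption; the paper only states the form."""
    return alpha * loss_conv + (1.0 - alpha) * loss_rebal
```

With `alpha = 0.5` both branches contribute equally; a schedule that shifts weight from the conventional branch to the re-balancing branch over training is a common variant of this design.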
Breast cancer is the second leading cause of cancer-related death in women. Ultrasound imaging has been widely used for the early detection of breast cancer because of its superior ability to image dense breast tissue and its lack of ionizing radiation. However, ultrasound imaging heavily depends on the practitioner's experience and is therefore relatively subjective. In this work, we proposed a novel multi-scale view-based convolutional neural network (MSV-CNN) to assist doctors in diagnosis and improve classification accuracy. MSV-CNN takes as input the full image, the region of interest (ROI), and a tumor region twice the size of the ROI. It adopts three complementary branches to learn multi-scale features from the different views. The sub-networks in all branches have the same structure but different parameters. The outputs of the three branches are concatenated and fused by a fully connected layer for automated nodule classification. To assess the performance of the proposed network, we performed breast ultrasound classification on a dataset containing 1560 images with benign nodules and 5367 images with malignant nodules. Furthermore, ResNet-18 models trained on the individual views were used as baselines. Experimental results showed that MSV-CNN achieved an average classification accuracy of 0.907. This preliminary study demonstrated that the proposed method is effective in the discrimination of breast nodules.
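The three input views can be extracted from an annotated ROI as sketched below; this is a minimal illustration assuming an axis-aligned `(x, y, w, h)` bounding box, with function and variable names of our own invention:

```python
import numpy as np

def multiscale_views(image, roi):
    """Return the three views fed to the branches: the full image, the ROI
    crop, and a context crop twice the ROI size centered on the ROI
    (clipped to the image bounds)."""
    x, y, w, h = roi
    H, W = image.shape[:2]
    roi_crop = image[y:y + h, x:x + w]
    # Double the ROI around its center for the 2x context view.
    cx, cy = x + w // 2, y + h // 2
    x2, y2 = max(cx - w, 0), max(cy - h, 0)
    ctx_crop = image[y2:min(cy + h, H), x2:min(cx + w, W)]
    return image, roi_crop, ctx_crop
```

In practice each view would then be resized to the fixed input resolution expected by its branch's sub-network before being passed through the three parallel branches.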
Hepatocellular carcinoma (HCC) is the second leading cause of cancer-related death worldwide. Its high probability of metastasis makes the prognosis very poor, even after potentially curative treatment. Detecting highly metastatic HCC would allow the development of effective approaches to reduce HCC mortality. The mechanism of HCC metastasis has been studied using gene profiling analysis, which indicated that HCCs with different metastatic capabilities are differentiable. However, analyzing gene expression levels with conventional methods is time-consuming and complex. To distinguish HCC with different metastatic capabilities, we proposed a deep learning-based method using microscopic images in animal models. In this study, we adopted convolutional neural networks (CNNs) to learn deep features of microscopic images and classify each image as low-metastatic or high-metastatic HCC. We evaluated the proposed classification method on a dataset containing 1920 white-light microscopic images of frozen sections from three tumor-bearing mice injected with HCC-LM3 (high metastasis) tumor cells and another three tumor-bearing mice injected with SMMC-7721 (low metastasis) tumor cells. Experimental results show that our method achieved an average accuracy of 0.85. This preliminary study demonstrated that our deep learning method has the potential to be applied to microscopic images for classifying HCC metastatic capability in animal models.
Fluorescence molecular tomography (FMT) is a promising imaging technique for preclinical research. However, the complexity of the radiative transfer equation (RTE) and the ill-posedness of the inverse problem limit the effectiveness of FMT reconstruction. In this research, we proposed a novel method for FMT reconstruction (DGMM) based on a deep convolutional neural network (DCNN), a gated recurrent unit (GRU), and a multilayer perceptron (MLP). Instead of establishing photon transport models and solving the inverse problem, the proposed method directly fits the nonlinear relationship between the fluorescence intensity at the boundary and the fluorescent source in biological tissue. In detail, DGMM consists of three stages: in the first stage, each measured optical intensity image is encoded into a feature vector by a transferred VGG16 model; in the second stage, all encoded feature vectors are fused into one feature vector by a GRU-based network; in the last stage, the fused feature vector is used to reconstruct the fluorescent sources by an MLP model. To evaluate the performance of the proposed method, a 3D digital mouse was used to generate FMT Monte Carlo simulation samples. In quantitative analysis, the results demonstrated that the DGMM method has performance comparable to conventional methods in locating the tumor position. To the best of our knowledge, this is the first study to employ DCNN-based methods for FMT reconstruction, which holds great potential for improving the reconstruction quality of FMT.
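The second stage, fusing a variable-length sequence of encoded feature vectors into one vector, can be sketched with a minimal GRU cell. This is a from-scratch NumPy illustration of the standard GRU recurrence, not the paper's implementation; all dimensions, weight initializations, and names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell operating on 1-D feature vectors."""
    def __init__(self, d_in, d_h):
        s = 0.1
        self.Wz = rng.normal(0, s, (d_h, d_in + d_h))  # update gate
        self.Wr = rng.normal(0, s, (d_h, d_in + d_h))  # reset gate
        self.Wh = rng.normal(0, s, (d_h, d_in + d_h))  # candidate state

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)
        r = sigmoid(self.Wr @ xh)
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

def fuse(vectors, cell, d_h):
    """Fold a sequence of encoded feature vectors into one fused vector."""
    h = np.zeros(d_h)
    for v in vectors:
        h = cell.step(v, h)
    return h

cell = GRUCell(8, 16)
fused = fuse([rng.normal(size=8) for _ in range(5)], cell, 16)
```

The final hidden state `fused` plays the role of the single fused feature vector that the MLP stage would decode into a source distribution.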
The development of bioluminescence tomography (BLT) has enabled quantitative three-dimensional (3D) whole-body imaging and the non-invasive study of the biological behavior of cancer. However, the ill-posed nature of BLT reconstruction limits the quality of the reconstructed result. In this work, we proposed a bilateral weight Laplace (BWL) method that uses a non-local Laplace regularization to improve the imaging quality of BLT reconstruction. The non-local Laplace regularization combines a spatial weight and a range weight to penalize the neighborhood variance of the reconstructed source density in both the spatial and range domains. To evaluate the performance of the BWL method, both a dual-source BLT reconstruction experiment on simulation data and an in vivo BLT reconstruction in an orthotopic glioma mouse model were designed. Furthermore, the fast iterative shrinkage/threshold (FIST) method and the Laplace method were used for comparison with BWL. Both the dual-source experiment and the in vivo experiment demonstrate that the reconstruction results of the BWL method provide more accurate tumor position (BCE = 0.37 mm in the dual-source experiment) and better tumor morphological information.
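The bilateral weighting idea, combining a spatial term and a range term, can be sketched as below. This mirrors the structure of a bilateral filter weight; the Gaussian form, parameter names, and bandwidths are assumptions for illustration rather than the paper's exact formulation:

```python
import numpy as np

def bilateral_weight(pos_i, pos_j, rho_i, rho_j, sigma_s, sigma_r):
    """Weight between voxels i and j for a non-local Laplace term:
    a spatial Gaussian on the voxel distance multiplied by a range
    Gaussian on the difference of reconstructed source densities."""
    spatial = np.exp(-np.sum((pos_i - pos_j) ** 2) / (2.0 * sigma_s ** 2))
    range_w = np.exp(-((rho_i - rho_j) ** 2) / (2.0 * sigma_r ** 2))
    return spatial * range_w
```

A regularizer built from such weights penalizes density differences between nearby voxels with similar values more strongly, which is what preserves source morphology while still smoothing the reconstruction.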
The high sensitivity and low cost of fluorescence imaging make fluorescence molecular tomography (FMT) a powerful noninvasive technique for visualizing tracer distribution. With the development of targeted fluorescent tracers, FMT has been widely used to localize tumors. However, visualizing the probe distribution in the tumor and its surrounding region is still a challenge for FMT reconstruction. In this study, we proposed a novel nonlocal total variation (NLTV) regularization method based on structural prior information. To build the NLTV regularization term, we consider the first-order difference between each voxel and its four nearest neighbors. Furthermore, we assume that the variance of fluorescence intensity between any two voxels is nonlinearly and inversely correlated with their Gaussian distance, and we adopt the Gaussian distance between two voxels as the weight of the first-order difference. The split Bregman method was applied to solve the resulting optimization problem. To evaluate the robustness and feasibility of the proposed method, we designed numerical simulation experiments and in vivo experiments on xenograft orthotopic glioma models. Ex vivo fluorescence images of cryoslicing specimens were regarded as the gold standard for probe distribution in biological tissue. The results demonstrated that the proposed method recovers the morphology of the tracer distribution more accurately than the fast iterated shrinkage (FIS) method, the split Bregman-resolved TV (SBRTV) regularization method, and the Gaussian weighted Laplace prior (GWLP) regularization method. These results demonstrate the potential of our method for in vivo visualization of tracer distribution in xenograft orthotopic glioma models.
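At the core of each split Bregman iteration for a TV-type regularizer is a soft-thresholding (shrinkage) step applied to the (weighted) first-order differences, alternating with a quadratic data-fit subproblem. A minimal sketch of that shrinkage operator, assuming the standard scalar form:

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding operator: shrink(v, t) = sign(v) * max(|v| - t, 0).
    In split Bregman, v holds the current difference variables plus the
    Bregman variables, and t is the regularization-to-penalty ratio."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```

Elements smaller than the threshold are set exactly to zero, which is what makes the TV term promote piecewise-constant reconstructions.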
The ability to image neural activity quickly and at single-neuron resolution makes light-sheet fluorescence microscopy (LSFM) a powerful imaging technique for studying functional neural connectivity. State-of-the-art LSFM imaging systems can record the neuronal activity of the entire brain of small animals, such as zebrafish or C. elegans, at single-neuron resolution. However, stimulated and spontaneous movements of the animal brain result in inconsistent neuron positions during recording. Registering the acquired large-scale images with conventional methods is time-consuming. In this work, we address the problem of fast registration of neuron positions in stacks of LSFM images, which is necessary for registering brain structures and activities. To achieve fast registration of neural activity, we present a rigid registration architecture implemented on a graphics processing unit (GPU). In this approach, the image stacks were preprocessed on the GPU by mean stretching to reduce the computational effort. The current image was registered to the previous image stack, which was taken as the reference. A fast Fourier transform (FFT) algorithm was used to calculate the shift of the image stack. The image registration calculations were performed in different threads, while the preparation functionality was refactored and called only once by the master thread. We implemented our registration algorithm on an NVIDIA Quadro K4200 GPU under the Compute Unified Device Architecture (CUDA) programming environment. The experimental results showed that registration of a full high-resolution brain image can be completed within 550 ms. Our approach also has the potential to be used for other dynamic image registration tasks in biomedical applications.
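The FFT-based shift calculation described above can be illustrated with phase correlation on a 2D frame: the normalized cross-power spectrum of the reference and current images has an inverse FFT that peaks at the translation between them. This NumPy sketch stands in for the CUDA implementation; the function name and wrap-around handling are our own:

```python
import numpy as np

def fft_shift_estimate(ref, img):
    """Estimate the rigid translation taking ref to img via phase
    correlation (inverse FFT of the normalized cross-power spectrum)."""
    F, G = np.fft.fft2(ref), np.fft.fft2(img)
    cross = np.conj(F) * G
    cross /= np.abs(cross) + 1e-12          # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks beyond half the image size back to negative offsets.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```

On the GPU the same pipeline maps onto cuFFT calls plus a parallel argmax reduction, which is what makes per-stack registration fast enough for the reported sub-second timing.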