In this paper, we consider the problem of Coronavirus disease (COVID-19) diagnosis from chest X-ray images in a multi-class classification scenario, where the ultimate goal is to distinguish among Healthy, non-COVID Pneumonia, and COVID-19 infection cases from their chest X-ray manifestations. In particular, we establish the use of a rapid, non-invasive, and cost-effective X-ray-based method as a key diagnosis and screening tool for COVID-19 at early and intermediate stages of the disease. To this end, we propose CoroNet, a deep learning framework built upon a two-stage learning methodology: 1) an autoencoder to extract the infected regions in the chest X-ray manifestations of COVID-19 and other pneumonia-like diseases, and 2) a deep convolutional neural network for the multi-class classification. We utilize this tailored deep architecture to extract the features specific to each class and perform automatic diagnosis and classification. The unsupervised part of the proposed framework aids proper identification of the disease given the scarcity of quality COVID-19 datasets and, at the same time, facilitates exploiting the large X-ray datasets readily available for Healthy and non-COVID Pneumonia cases. Our numerical investigations demonstrate that the proposed framework outperforms state-of-the-art methods for COVID-19 identification while employing approximately ten times fewer training parameters than existing methodologies. Furthermore, we make use of attribution maps, an explainable artificial intelligence tool, to interpret the diagnoses offered by the network. We have made the code of our proposed CoroNet framework publicly available to the research community.
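As a rough illustration of the two-stage idea described above, the following PyTorch sketch pairs an unsupervised convolutional autoencoder with a small classification head that reuses the learned encoder. All layer sizes, names, and the overall depth are illustrative assumptions, not the paper's actual CoroNet architecture.

# Minimal sketch of a two-stage pipeline in the spirit of the abstract:
# stage 1 learns an unsupervised autoencoder; stage 2 reuses its encoder
# for three-way classification. Layer sizes are illustrative only.
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    """Stage 1: unsupervised reconstruction, trainable with an MSE loss
    on all available X-rays, including large non-COVID datasets."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class Classifier(nn.Module):
    """Stage 2: Healthy / non-COVID Pneumonia / COVID-19 classifier,
    trainable with a cross-entropy loss on the labeled images."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder  # reuse the stage-1 features
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 3))

    def forward(self, x):
        return self.head(self.encoder(x))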
The study of neural circuit connectivity is an essential step toward a better understanding of how the nervous system functions. With recent improvements in imaging techniques, high-resolution, high-volume images are being generated that require automated segmentation. We present a pixel-wise classification method based on the Bayesian SegNet architecture. We carried out multi-class segmentation on serial section Transmission Electron Microscopy (ssTEM) images of the Drosophila third instar larva ventral nerve cord, labeling four classes: neuron membrane, neuron intracellular space, mitochondria, and glia/extracellular space. Bayesian SegNet was trained on 256 ssTEM images of 256 × 256 pixels and tested on 64 different ssTEM images of the same size from the same serial stack. Due to high class imbalance, we used a class-balanced version of Bayesian SegNet that re-weights each class based on its relative frequency. We achieved an overall accuracy of 93% and a mean class accuracy of 88% for pixel-wise segmentation using this encoder-decoder approach. Evaluating the segmentation results with similarity metrics, we obtained an SSIM of 0.994 and a Dice coefficient of 0.886. Additionally, we applied the network trained on the 256 Drosophila third instar larva ssTEM images to multi-class labeling of the ISBI 2012 challenge ssTEM dataset.
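The frequency-based class re-weighting can be illustrated with a short sketch. The snippet below uses median-frequency balancing, one common choice for such weights; the abstract does not specify the exact formula, so the formula, the function name, and the layout of the labels array are assumptions.

# Hedged sketch of class-balanced weighting by relative pixel frequency
# (median-frequency balancing, a common choice; not necessarily the
# paper's exact formula). `labels` is an (N, H, W) array of class ids.
import numpy as np

def class_weights(labels, num_classes=4):
    counts = np.bincount(labels.ravel(), minlength=num_classes).astype(float)
    freq = counts / counts.sum()      # relative frequency of each class
    return np.median(freq) / freq     # rare classes receive large weights

These per-class weights would then scale the cross-entropy loss so that frequent classes (e.g., intracellular space) do not dominate training.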
The cardiothoracic ratio (CTR) is a widely used radiographic index to assess heart size on chest X-rays (CXRs). Recent studies have suggested that a two-dimensional CTR may also contain clinical information about heart function. However, manual measurement of such indices is both subjective and time-consuming. This study proposes a fast algorithm to automatically estimate CTR indices from CXRs. The algorithm has three main steps: 1) model-based lung segmentation, 2) estimation of heart boundaries from lung contours, and 3) computation of cardiothoracic indices from the estimated boundaries. We extended a previously employed lung detection algorithm to automatically estimate heart boundaries without using ground-truth heart markings. We used two datasets: a publicly available dataset with 247 images and a clinical dataset with 167 studies from Geisinger Health System. The models of the lung fields are learned from both datasets, and the lung regions in a given test image are estimated by registering the learned models to the patient's CXR. The heart region is then estimated by applying the Harris operator to the segmented lung fields to detect the corner points corresponding to the heart boundaries. The algorithm calculates three indices: CTR1D, CTR2D, and the cardiothoracic area ratio (CTAR). The method was tested on 103 clinical CXRs, achieving average error rates of 7.9%, 25.5%, and 26.4% for CTR1D, CTR2D, and CTAR, respectively. The proposed method outperforms previous CTR estimation methods without using any heart templates. It can have important clinical implications, as it provides a fast and accurate estimate of cardiothoracic indices.
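To make the index computation concrete, here is a minimal sketch of CTR1D from binary masks, assuming the segmentation and Harris-based boundary detection have already produced `lung_mask` and `heart_mask`. Approximating the thoracic width by the widest horizontal extent of the segmented lung fields is an assumption on our part, not a detail from the abstract.

# Illustrative CTR1D: maximal horizontal cardiac width divided by the
# maximal thoracic width, both read off binary masks of equal shape.
import numpy as np

def ctr_1d(lung_mask, heart_mask):
    def max_width(mask):
        rows = np.where(mask.any(axis=1))[0]          # rows touching the mask
        # widest left-to-right extent over all such rows (inclusive)
        return max(np.ptp(np.where(mask[r])[0]) + 1 for r in rows)
    return max_width(heart_mask) / max_width(lung_mask)

CTR2D and CTAR would follow the same pattern, replacing the width measurements with area measurements over the two masks.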
Accurate segmentation of lung fields on chest radiographs is the primary step in computer-aided detection of conditions such as lung cancer and tuberculosis, since the size, shape, and texture of the lung fields are key parameters in chest X-ray (CXR) based diagnosis of lung disease. Although many methods have been proposed for this problem, lung field segmentation remains a challenge. In recent years, deep learning has shown state-of-the-art performance in many visual tasks such as object detection, image classification, and semantic image segmentation. In this study, we propose a deep convolutional neural network (CNN) framework for segmentation of lung fields. The algorithm was developed and tested on 167 clinical posterior-anterior (PA) CXR images collected retrospectively from the picture archiving and communication system (PACS) of Geisinger Health System. The proposed multi-scale network is composed of five convolutional and two fully connected layers. The framework achieved an intersection-over-union (IoU) of 0.96 on the testing dataset compared to manual segmentation, outperforming state-of-the-art registration-based segmentation by a significant margin. To our knowledge, this is the first deep-learning-based study of lung field segmentation on CXR images developed on a heterogeneous clinical dataset. The results suggest that convolutional neural networks can be employed reliably for lung field segmentation.
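For reference, the IoU score reported above can be computed as in the short sketch below, assuming `pred` and `target` are boolean lung masks of equal shape.

# IoU between a predicted and a manual lung mask: the size of their
# overlap divided by the size of their union.
import numpy as np

def iou(pred, target):
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union else 1.0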
Adipose tissue has been associated with adverse consequences of obesity. Total adipose tissue (TAT) is divided into subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT). VAT, located inside the abdominal cavity, is a major factor in the classic obesity-related pathologies. Since direct measurement of visceral and subcutaneous fat is not trivial, surrogate metrics such as waist circumference (WC) and body mass index (BMI) are used in clinical settings to quantify obesity. Abdominal fat can be assessed effectively using CT or MRI, but manual fat segmentation is subjective and time-consuming; hence, an automatic and accurate quantification tool for abdominal fat is needed. The goal of this study is to extract TAT, VAT, and SAT from abdominal CT in a fully automated, unsupervised fashion using energy minimization techniques. We applied a four-step framework consisting of 1) initial body contour estimation, 2) approximation of the body contour, 3) estimation of the inner abdominal contour using the Greedy Snakes algorithm, and 4) voting, to segment the subcutaneous and visceral fat. We validated our algorithm on 952 clinical abdominal CT images (from 476 patients with a very wide BMI range) collected from various radiology departments of Geisinger Health System. To our knowledge, this is the first study of its kind on such a large and diverse clinical dataset. Our algorithm obtained a 3.4% error for VAT segmentation compared to manual segmentation. These personalized and accurate measurements of fat can complement traditional population-health-driven obesity metrics such as BMI and WC.
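A minimal sketch of how the final labeling step could assign fat voxels to VAT or SAT, assuming the inner abdominal contour from step 3 has been rasterized into a boolean mask. The Hounsfield window of roughly -190 to -30 HU for adipose tissue is a commonly used range, not a value taken from the abstract, and the function name is hypothetical.

# Label fat pixels on one CT slice: adipose-range voxels inside the
# inner abdominal contour become VAT, those outside become SAT.
import numpy as np

def label_fat(ct_hu, inner_mask, lo=-190, hi=-30):
    """ct_hu: 2D slice in Hounsfield units; inner_mask: boolean mask of
    the abdominal cavity enclosed by the inner contour."""
    fat = (ct_hu >= lo) & (ct_hu <= hi)
    vat = fat & inner_mask     # fat inside the abdominal cavity
    sat = fat & ~inner_mask    # fat between the body and inner contours
    return vat, sat

TAT is then simply the union (or sum) of the VAT and SAT masks.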