The coronavirus pandemic, also known as the COVID-19 pandemic, had led to tens of millions of cases and over half a million deaths as of August 2020. Chest CT is an important imaging tool for evaluating the severity of lung involvement, which often correlates with the severity of the disease. Quantitative analysis of CT lung images requires localizing the infected area on the image, i.e., identifying the region of interest (ROI). In this study, we propose automatic ROI identification based on a recent feature selection method, the concrete autoencoder, which learns the parameters of concrete distributions from the given data to choose pixels from the images. To improve the discriminative power of these features, we propose a discriminative concrete autoencoder (DCA) that adds a classification head to the network; this head performs the image classification. We conducted a study with 30 CT image sets from 15 COVID-19-positive and 15 COVID-19-negative cases. When the DCA was used to select the pixels of the suspected area, the classification accuracy was 76.27% on these image sets; without DCA feature selection, a traditional neural network achieved 69.41% on the same sets. Hence, the proposed DCA can detect significant features that identify the COVID-19-infected area of the lung. Future work will focus on collecting more data and designing the area-selection layer to support group selection.
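As a minimal sketch of the selection mechanism the abstract describes, the snippet below samples a relaxed one-hot weight vector from a concrete (Gumbel-softmax) distribution over flattened pixels and uses it to softly select one pixel. The logit values, temperature, and patch size are illustrative assumptions, not details from the paper; in the actual method the logits are learned end to end and the temperature is annealed during training.

```python
import numpy as np

def concrete_select(logits, temperature, rng):
    """Sample a relaxed one-hot selection vector from a concrete
    (Gumbel-softmax) distribution over input pixels."""
    # Gumbel(0, 1) noise via the inverse-CDF trick
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    z = (logits + gumbel) / temperature
    z = z - z.max()          # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()       # softmax: weights sum to 1

rng = np.random.default_rng(0)
logits = np.zeros(8)         # hypothetical learnable per-pixel scores
w = concrete_select(logits, temperature=0.1, rng=rng)

x = rng.normal(size=8)       # a flattened image patch (illustrative)
selected = w @ x             # soft selection of (approximately) one pixel
```

At low temperature the weight vector concentrates on a single index, so the selector approaches a hard pixel choice while remaining differentiable for gradient-based training.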
Gliomas are a highly heterogeneous set of tumors that grow within the substance of the brain and often intermix with normal brain tissue. Because of their histologic complexity and irregular shapes, multiparametric magnetic resonance imaging (MRI) is used to accurately diagnose brain tumors and their subregions. Current practice requires physicians to manually segment these regions on large image datasets, a time-consuming and complicated task, especially given the large variations among tumor regions. Automatic segmentation of brain tumors in multimodal MRI holds great potential for developing effective treatment plans and improving the brain tumor radiotherapy workflow. Despite continuous investigation of DL-based brain tumor segmentation, the irregular shapes and histologic complexity of brain tumors remain a major challenge for developing an effective automatic segmentation method. In this study, we develop a novel context U-net with deep supervision to segment both the whole brain tumor and its subregions. The context module is formed by an inception-like structure to extract richer information about the brain tumors. Deep supervision in the encoder path is achieved by summing the segmentation outputs at different levels of the network. We evaluated our method on the Brain Tumor Segmentation Challenge (BraTS) 2019 training dataset, using 80% for training and the remaining 20% for performance testing. Our method achieved Dice similarity coefficients (DSC) of 0.8693, 0.8013, and 0.7782 for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET), respectively. These results suggest that the proposed network could be used to segment brain tumors and their subregions to facilitate the brain tumor radiotherapy workflow.
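The Dice similarity coefficient used to report the results above is a standard overlap metric between a predicted mask and a ground-truth mask. The following is a generic sketch of that formula, not code from the paper; the example masks are arbitrary.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy 2x3 masks: 2 overlapping voxels, 3 foreground voxels each
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(a, b)  # 2*2 / (3 + 3) = 0.6667
```

The small epsilon keeps the score defined when both masks are empty; for multi-class outputs such as WT, TC, and ET, the metric is computed per label and averaged.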
The aim of this study is to identify ultrasound texture features that characterize radiation-associated acute breast toxicity in women following radiotherapy for breast cancer. We investigated a series of sonographic features derived from the gray-level co-occurrence matrix (GLCM), a second-order statistical method of texture analysis. These features were tested in a pilot study of 42 post-radiotherapy patients, with a mean follow-up time of 6 weeks. Each participant underwent an ultrasound study in which scans were performed on both breasts, generating a total of 42 post-irradiation and 42 contralateral normal-breast exams. The irradiated breasts were graded as either mild (n=27) or severe (n=15). After specifying the region of interest on the B-mode images and computing the sonographic features according to the severity grading, we observed statistically significant differences in the quantification of
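As a minimal numpy sketch of the second-order statistics the abstract refers to, the code below builds a normalized GLCM for a single pixel offset and computes two common Haralick-style features, contrast and homogeneity. The toy image, offset, and number of gray levels are illustrative assumptions; a production analysis would typically use scikit-image's `graycomatrix`/`graycoprops`.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy)."""
    m = np.zeros((levels, levels))
    h, w = image.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[image[i, j], image[i + dy, j + dx]] += 1
    return m / m.sum()

def contrast(p):
    """Sum of (i-j)^2 * p(i,j): local intensity variation."""
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

def homogeneity(p):
    """Sum of p(i,j) / (1 + |i-j|): closeness to the diagonal."""
    i, j = np.indices(p.shape)
    return (p / (1.0 + np.abs(i - j))).sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])   # toy quantized B-mode ROI
p = glcm(img)                    # 12 horizontal pixel pairs
c = contrast(p)                  # 7/12 for this toy image
```

In a study such as this one, features like contrast and homogeneity would be computed over the ROI of each breast exam and then compared between toxicity grades.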