The Coronavirus Disease 2019 (COVID-19) pandemic, which has affected the world since 2020, has generated great research interest in how to aid medical staff in triage, diagnosis, and prognosis. This work proposes an automated segmentation model for Computed Tomography (CT) scans that segments the lungs and COVID-19-related lung findings simultaneously. Manual segmentation is a time-consuming and complex task, especially when applied to high-resolution CT scans, resulting in a lack of gold-standard annotations. Using data provided by the Brazilian RadVid19 initiative, which supplied over a hundred annotated High-Resolution CT (HRCT) scans, we analyze the performance of three convolutional neural networks for segmenting the lungs and COVID-19 findings: a 3D UNet architecture; a modified (2D) EfficientDet architecture; and 3D and 2D variations of the MobileNetV3 architecture. Our method achieved first place in the RadVid19 challenge, among 13 other competitors' submissions. Additionally, we evaluate the best-performing model from the challenge on four public CT datasets, comparing our results against other related works and studying the effects of using different annotations in training and testing. On testing, our best method achieved 3D Dice scores of up to 0.98 for the lungs and 0.73 for findings, reaching state-of-the-art performance on public data.
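The abstract above reports segmentation quality as 3D Dice scores. As a point of reference, a minimal sketch of the standard Dice coefficient over binary volumes (the toy masks here are illustrative, not from the paper's data):

```python
import numpy as np

def dice_3d(pred, target, eps=1e-7):
    """3D Dice coefficient between two binary volumes: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two 8-voxel masks overlapping in 4 voxels.
a = np.zeros((4, 4, 4), dtype=np.uint8)
b = np.zeros((4, 4, 4), dtype=np.uint8)
a[1:3, 1:3, 1:3] = 1
b[1:3, 1:3, 2:4] = 1
print(round(dice_3d(a, b), 2))  # → 0.5
```

A Dice of 0.98 for the lung class therefore means near-perfect voxel overlap with the reference annotation, while 0.73 for findings reflects the harder, smaller, and more irregular target.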
Surface electromyography (sEMG) is the most common technology used in gesture recognition for hand prosthesis control. Machine learning models based on Convolutional Neural Networks (CNNs) that classify gestures from sEMG signals can reach high accuracy. However, common changes in the conditions of use of the prosthesis, such as electrode shifts, can drastically degrade these metrics, requiring the model to be retrained. When such a model is originally trained on data from subjects (the source domain) other than the prosthesis user (the target domain), domain adaptation must be employed. To reduce the time spent on retraining, only a small amount of data from the target domain should be used. A relevant factor is that, for the prosthesis to be economically viable, the computational effort required to solve this problem must be supported by common, inexpensive hardware. In this work, the CapgMyo sEMG dataset was used to fine-tune a 2D CNN-based model on an edge device. Inter-subject gesture recognition is performed, where the source domain comprises data from 17 of the 18 subjects in the dataset, while the target domain consists of data from the remaining subject. During fine-tuning, only the classifier layer was retrained, while all other layers were frozen. The fine-tuning was performed on a common Raspberry Pi 3. Results show that the computational power of the device is sufficient to achieve good accuracy with a small amount of data collected from the subject.
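The key idea, retraining only the classifier layer on top of frozen feature-extraction layers, can be illustrated without the paper's actual CNN. The sketch below is a simplified stand-in (numpy only, random synthetic data in place of CapgMyo sEMG windows): a fixed random projection plays the role of the frozen pretrained layers, and only the softmax classifier weights are updated by gradient descent, which is what makes on-device fine-tuning cheap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen feature extractor: stands in for the pretrained CNN layers,
# whose weights are NOT updated during fine-tuning.
W_frozen = rng.standard_normal((16, 8))

def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)  # fixed ReLU features

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical target-domain calibration set: 30 windows, 3 gestures.
# Labels follow a linear rule on the frozen features, so the classifier
# layer alone can fit them.
X = rng.standard_normal((30, 16))
feats = extract_features(X)
y = (feats @ rng.standard_normal((8, 3))).argmax(axis=1)

# Trainable classifier layer: the only weights retrained on target data.
W_clf = np.zeros((8, 3))
for _ in range(300):
    grad = feats.T @ (softmax(feats @ W_clf) - np.eye(3)[y]) / len(y)
    W_clf -= 0.1 * grad

acc = (softmax(feats @ W_clf).argmax(axis=1) == y).mean()
print(f"training accuracy after classifier-only fine-tuning: {acc:.2f}")
```

Because gradients flow only through the small classifier matrix, each update costs a few thousand multiply-adds here, which is the property that lets the retraining fit on a Raspberry Pi 3 class device.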
Current techniques for predicting Alzheimer's disease at an early stage explore the structural information of T1-weighted MR images. Among these, deep convolutional neural networks (CNNs) are the most promising, having been successfully applied to a variety of medical imaging problems. However, most works on Alzheimer's disease tackle only the binary classification problem, i.e., distinguishing Normal Controls from Alzheimer's Disease patients. Only a few works address the multiclass problem, namely, classifying patients into one of three groups: Normal Control (NC), Alzheimer's Disease (AD), or Mild Cognitive Impairment (MCI). In this paper, our primary goal is to tackle the 3-class AD classification problem using T1-weighted MRI and a 2D CNN approach. We use the first two layers of ResNet34 as a feature extractor and then train a classifier on 64 × 64 patches from coronal 2D MRI slices. Our extended-2D CNN proposal exploits the volumetric information of the MRI by using non-consecutive 2D slices as input channels of the CNN, while maintaining the low computational cost associated with a 2D approach. The proposed model, trained and tested on images from the ADNI dataset, achieved an accuracy of 68.6% on the multiclass problem, outperforming state-of-the-art AD classification methods, including 3D-CNN-based ones.
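The "extended-2D" input described above amounts to stacking a few spaced-apart slices of the volume as the channels of a single 2D input. A minimal sketch of that sampling step (the function name, slice count, and stride are illustrative assumptions, not the paper's exact values):

```python
import numpy as np

def extended_2d_input(volume, center, stride, n_slices=3):
    """Stack non-consecutive coronal slices as channels of one 2D input.

    volume   : (slices, H, W) array
    center   : index of the middle slice
    stride   : gap between the stacked slices (stride=1 gives consecutive slices)
    """
    offsets = (np.arange(n_slices) - n_slices // 2) * stride
    return volume[center + offsets]  # shape: (n_slices, H, W), channels-first

# Hypothetical volume: 60 coronal slices of 64 x 64 patches.
vol = np.random.rand(60, 64, 64)
x = extended_2d_input(vol, center=30, stride=5)
print(x.shape)  # → (3, 64, 64)
```

A 2D CNN then treats the three slices like the RGB channels of an ordinary image, so the network sees some through-plane context at essentially the cost of a 2D forward pass.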