Subarachnoid hemorrhage (SAH) detection is a critical and challenging problem that has long troubled clinical residents. With the rise of deep learning, SAH detection has made significant breakthroughs over the past ten years. However, performance degrades markedly on imbalanced data, a weakness for which deep learning models have drawn persistent criticism. In this study, we present a DenseNet-LSTM network with Class-Balanced Loss and a transfer learning strategy to address SAH detection on an extremely imbalanced dataset. Compared with previous work, the proposed framework not only effectively integrates greyscale features and spatial information from consecutive CT scans, but also employs Class-Balanced Loss and transfer learning to alleviate the adverse effects of class imbalance and to broaden feature diversity, respectively, on a dataset with an extreme scarcity of SAH cases that mimics the actual situation of emergency departments. Comprehensive experiments are conducted on a dataset consisting of 2,519 cases without hemorrhage and only 33 cases with SAH. Experimental results demonstrate a remarkable improvement in the F-measure of SAH detection: the DenseNet121 backbone gained roughly 33% after transfer learning, and on this basis, adding the Class-Balanced Loss and the LSTM structure further increased the F-measure by 6.1% and 2.7%, respectively.
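As a rough illustration of the re-weighting idea behind Class-Balanced Loss, the PyTorch sketch below computes a cross-entropy re-weighted by the effective number of samples per class (following Cui et al., "Class-Balanced Loss Based on Effective Number of Samples"). The beta value, function name, and two-class setup are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a class-balanced cross-entropy loss; the two-class
# counts (no-hemorrhage vs. SAH) and beta are illustrative assumptions.
import torch
import torch.nn.functional as F

def class_balanced_ce(logits, targets, samples_per_class, beta=0.9999):
    """Cross-entropy weighted by the effective number of samples per class.

    logits:            (batch, num_classes) raw network outputs
    targets:           (batch,) integer class labels
    samples_per_class: per-class training counts, e.g. [2519, 33]
    beta:              hyperparameter controlling re-weighting strength
    """
    counts = torch.as_tensor(samples_per_class, dtype=torch.float32,
                             device=logits.device)
    # Effective number of samples: E_n = (1 - beta^n) / (1 - beta)
    effective_num = 1.0 - torch.pow(beta, counts)
    weights = (1.0 - beta) / effective_num
    # Normalize so the weights sum to the number of classes
    weights = weights / weights.sum() * len(counts)
    return F.cross_entropy(logits, targets, weight=weights)

# Example on the imbalance ratio reported above (2,519 negatives vs. 33 SAH)
logits = torch.randn(8, 2)
targets = torch.randint(0, 2, (8,))
loss = class_balanced_ce(logits, targets, samples_per_class=[2519, 33])
```

The rare class (SAH) receives a much larger weight than the majority class, which counteracts the tendency of the model to ignore it during training.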
This paper presents deep learning-based segmentation of multiple organ regions from non-contrast CT volumes. We also report the usefulness of fine-tuning with a small number of training data for multi-organ region segmentation. In medical image analysis systems, it is vital to recognize patient-specific anatomical structures in medical images such as CT volumes. We previously studied a multi-organ region segmentation method for contrast-enhanced abdominal CT volumes using a 3D U-Net. Since non-contrast CT volumes are also commonly used in clinical practice, segmenting multi-organ regions from non-contrast CT volumes is likewise important for medical image analysis systems. In this study, we extract multi-organ regions from non-contrast CT volumes using a 3D U-Net and a small number of training data. We perform fine-tuning from a pre-trained model obtained in our previous studies: the pre-trained 3D U-Net model is trained on a large number of contrast-enhanced CT volumes, and fine-tuning is then performed using a small number of non-contrast CT volumes. The experimental results showed that the fine-tuned 3D U-Net model could extract multi-organ regions from non-contrast CT volumes. The proposed training scheme using fine-tuning is useful for segmenting multi-organ regions with a small number of training data.
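For readers unfamiliar with this training scheme, the PyTorch sketch below shows the general shape of such a fine-tuning step: load weights pre-trained on contrast-enhanced CT, then continue training on the small non-contrast set with a small learning rate. The checkpoint path, organ count, optimizer settings, and data loader are illustrative assumptions, and a stand-in module replaces the actual 3D U-Net from the paper.

```python
# Minimal sketch of fine-tuning a pre-trained segmentation model; all names
# and hyperparameters are placeholders, not details taken from the paper.
import torch
import torch.nn as nn

NUM_ORGANS = 8  # placeholder number of target organ labels

# Stand-in for the 3D U-Net used in the previous (contrast-enhanced) studies.
model = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv3d(16, NUM_ORGANS, kernel_size=1),
)

# Step 1: load weights pre-trained on many contrast-enhanced CT volumes
# (hypothetical checkpoint path).
model.load_state_dict(torch.load("unet3d_contrast_pretrained.pth",
                                 map_location="cpu"))

# Step 2: fine-tune with a small learning rate on the few non-contrast volumes.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(20):
    # noncontrast_loader (assumed) yields pairs of
    # volume: (batch, 1, D, H, W) float tensor and
    # label:  (batch, D, H, W) integer organ-index map.
    for volume, label in noncontrast_loader:
        optimizer.zero_grad()
        logits = model(volume)             # (batch, NUM_ORGANS, D, H, W)
        loss = criterion(logits, label)
        loss.backward()
        optimizer.step()
```

Keeping the pre-trained weights as the starting point lets the network reuse anatomical features learned from the large contrast-enhanced dataset, so only a small number of non-contrast volumes is needed to adapt it.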