Malaria, caused by Plasmodium parasites, continues to be a major burden on global health. Plasmodium falciparum (P. falciparum) and Plasmodium vivax (P. vivax) pose the greatest health threat among the five Plasmodium species that infect humans. Microscopy examination is considered the gold standard for malaria diagnosis, but it requires a significant amount of time and expertise. In particular, automated and accurate detection of P. vivax is difficult due to its lower parasitemia levels compared with P. falciparum. In this work, we develop a rapid and robust diagnosis system for the automated detection of P. vivax parasites using a cascaded YOLO model. The system consists of a YOLOv2 model and a classifier for hard-negative mining. Results from 2567 thin blood smear images of 171 patients show that the cascaded YOLO model improves mean average precision by about 8% compared with a conventional YOLOv2 model.
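The cascade's second stage is trained on hard negatives, i.e., stage-1 detections that match no ground-truth parasite. A minimal sketch of that mining step, under the assumption of axis-aligned boxes and a standard IoU matching threshold (the function names and the 0.5 threshold are illustrative, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def mine_hard_negatives(detections, ground_truth, iou_thresh=0.5):
    """Split stage-1 detections into matched detections and hard negatives.

    Detections that overlap no ground-truth parasite above `iou_thresh`
    are false positives; these become the negative training patches for
    the stage-2 classifier of the cascade.
    """
    positives, hard_negatives = [], []
    for det in detections:
        if any(iou(det, gt) >= iou_thresh for gt in ground_truth):
            positives.append(det)
        else:
            hard_negatives.append(det)
    return positives, hard_negatives
```

At inference time, the same stage-2 classifier re-scores the detector's proposals and suppresses the false positives that the detector alone cannot reject.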
Convolutional neural networks (CNNs) have become the architecture of choice for visual recognition tasks. However, these models are often perceived as black boxes because their learned behavior on the underlying task is poorly understood. This lack of transparency is a serious drawback, particularly in applications involving medical screening and diagnosis, since poorly understood model behavior could adversely impact subsequent clinical decision-making. Recently, researchers have begun working on this issue, and several methods have been proposed to visualize and understand the behavior of these models. We highlight the advantages of visualizing and understanding the weights, saliencies, class activation maps, and region-of-interest localizations in customized CNNs applied to the challenge of classifying parasitized and uninfected cells to aid in malaria screening. We provide an explanation for the models’ classification decisions. We characterize, evaluate, and statistically validate the performance of different customized CNNs, keeping every training subject’s data separate from the validation set.
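Of the visualization techniques listed, class activation maps (CAMs) are the most compact to state: the map for a class is the weighted sum of the last convolutional layer's feature maps, weighted by that class's final fully connected weights. A minimal NumPy sketch, assuming a CNN with global average pooling before a single fully connected layer (the array shapes and normalization are illustrative):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Compute a class activation map from the last convolutional layer.

    feature_maps: (K, H, W) array -- the K feature maps feeding
                  global average pooling.
    fc_weights:   (num_classes, K) array -- final fully connected weights.
    Returns an (H, W) map, scaled to [0, 1], highlighting the regions
    most responsible for the score of `class_idx` (e.g. "parasitized").
    """
    weights = fc_weights[class_idx]                    # (K,)
    cam = np.tensordot(weights, feature_maps, axes=1)  # (H, W)
    cam -= cam.min()                                   # rescale to [0, 1]
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

Upsampled to the input resolution and overlaid on the cell image, such a map indicates whether the model's decision is driven by the stained parasite region rather than background artifacts.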
Automated image analysis of slides of thin blood smears can assist with early diagnosis of many diseases. Automated detection and segmentation of red blood cells (RBCs) are prerequisites for any subsequent quantitative high-throughput screening analysis, since manual characterization of the cells is a time-consuming and error-prone task. Overlapping cell regions introduce considerable challenges for detection and segmentation techniques. We propose a novel algorithm that can successfully detect and segment overlapping cells in microscopic images of stained thin blood smears. The algorithm consists of three steps. In the first step, the input image is binarized to obtain the binary mask of the image. The second step accomplishes reliable cell center localization using adaptive mean-shift clustering; we employ a novel technique to choose an appropriate bandwidth for the mean-shift algorithm. In the third step, each cell is segmented by estimating its boundary with a Gradient Vector Flow (GVF)-driven snake algorithm. We compare the experimental results of our methodology with the state of the art and evaluate the cell segmentation results against those produced manually. The method is systematically tested on a dataset acquired at the Chittagong Medical College Hospital in Bangladesh. The overall evaluation of the proposed cell segmentation method, based on one-to-one cell matching on the aforementioned dataset, resulted in 98% precision, 93% recall, and a 95% F1-score.
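The second step, mean-shift cell center localization, can be sketched as follows: each foreground point is iteratively shifted to the mean of its neighbors within the kernel bandwidth, and converged modes are merged into cell centers. This is a minimal flat-kernel version in NumPy; the paper selects the bandwidth adaptively, whereas here it is a fixed parameter, and the function names and merge threshold are illustrative:

```python
import numpy as np

def mean_shift_centers(points, bandwidth, n_iter=50, merge_dist=1.0):
    """Locate cluster centers (candidate cell centers) by flat-kernel mean shift.

    points:    (N, 2) array of foreground pixel coordinates from the
               binary mask.
    bandwidth: kernel radius; chosen adaptively in the paper's pipeline,
               fixed here for simplicity.
    """
    modes = points.astype(float).copy()
    for _ in range(n_iter):
        for i, m in enumerate(modes):
            # Shift each mode to the mean of the points within the bandwidth.
            neighbors = points[np.linalg.norm(points - m, axis=1) <= bandwidth]
            modes[i] = neighbors.mean(axis=0)
    # Merge converged modes closer than merge_dist into single centers.
    centers = []
    for m in modes:
        if not any(np.linalg.norm(m - c) < merge_dist for c in centers):
            centers.append(m)
    return np.array(centers)
```

Each recovered center then seeds one GVF snake in the third step, which lets touching or overlapping cells be delineated separately.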