With the ever-growing incidence of skin cancer and limited healthcare resources, a reliable computer-assisted diagnostic system is needed to assist dermatologists in lesion diagnosis. Skin lesion segmentation on dermoscopic images can be an efficient tool for discriminating between benign and malignant skin lesions. The dermoscopic images in public skin lesion datasets are collected from various sources around the world, and the color of lesions in dermoscopic images can depend strongly on the light source. In this work, we provide new insight into the effect of color constancy algorithms on skin lesion segmentation with a deep learning algorithm. We pre-process the ISIC 2017 Challenge segmentation dataset using different color constancy algorithms and study the effect on a popular semantic segmentation algorithm, i.e. Fully Convolutional Networks. We evaluate the results with two metrics, the Dice Similarity Coefficient and the Jaccard Similarity Index. Overall, our experiments showed improvements in semantic segmentation of skin lesions when images were pre-processed with color constancy algorithms. Further, we investigated the effect of these algorithms on different types of lesions (naevi, melanoma and seborrhoeic keratosis). We found that pre-processing with color constancy algorithms improved the segmentation results on naevi and seborrhoeic keratosis, but not melanoma. Future work will investigate an adaptive color constancy algorithm that could further improve the segmentation results.
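As an illustration of this kind of pre-processing, one of the simplest color constancy algorithms is Gray World, which assumes the average scene color is achromatic and rescales each channel accordingly. The sketch below is a minimal pure-Python illustration, not the paper's implementation, and the sample image is hypothetical:

```python
def gray_world(pixels):
    """Gray World color constancy: scale each RGB channel so its mean
    matches the mean gray level, discounting the illuminant color.
    `pixels` is a flat list of (r, g, b) tuples with values in 0..255."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]   # per-channel means
    gray = sum(means) / 3.0                                     # target gray level
    gains = [gray / m if m else 1.0 for m in means]             # channel gains
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]

# A reddish color cast: the red channel mean is double the others.
img = [(200, 100, 100), (200, 100, 100)]
print(gray_world(img))  # → [(133, 133, 133), (133, 133, 133)]
```

After correction the cast is removed; the actual algorithms compared in the paper (and their parameters) are not reproduced here.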
The skin is the largest organ in the body. The high prevalence of skin diseases and the scarcity of dermatologists, the experts in diagnosing and managing skin diseases, make CAD (computer-aided diagnosis) of skin disease an important field of research. Many patients present with a skin lesion of concern, seeking to determine whether it is benign or malignant. Lesion diagnosis is currently performed by dermatologists taking a history and examining the lesion and the entire body surface with the aid of a dermatoscope. Automatic lesion segmentation and computer-assisted evaluation of the symmetry or asymmetry of structures and colors may classify a lesion as likely benign or likely malignant. We explored a deep learning method called Deep Extreme Cut (DEXTR) and used the Faster-RCNN-InceptionV2 network to determine the extreme points (left-most, right-most, top and bottom pixels) of a lesion. We trained on the ISIC 2017 challenge images and achieved a Jaccard index of 82.2% on the ISIC 2017 test set and 85.8% on the PH2 dataset. The proposed method outperformed the winning algorithm of the competition by 5.7% on the Jaccard index.
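The extreme points that DEXTR consumes are simply the left-most, right-most, top-most and bottom-most lesion pixels; given a binary mask they can be read off directly. The sketch below is illustrative (the mask and (row, col) coordinate convention are assumptions, not taken from the paper):

```python
def extreme_points(mask):
    """Return the (left-most, right-most, top, bottom) pixels of a binary
    mask as (row, col) coordinates. `mask` is a list of lists of 0/1."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    return (min(coords, key=lambda p: p[1]),   # left-most  (smallest column)
            max(coords, key=lambda p: p[1]),   # right-most (largest column)
            min(coords, key=lambda p: p[0]),   # top        (smallest row)
            max(coords, key=lambda p: p[0]))   # bottom     (largest row)

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
left, right, top, bottom = extreme_points(mask)
```

In DEXTR these four points (predicted here by Faster-RCNN-InceptionV2 rather than clicked by a user) are encoded as an extra input channel to guide the segmentation network.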
With recent advances in the field of deep learning, the use of convolutional neural networks (CNNs) in medical imaging has become very encouraging. The aim of our paper is to propose a patch-based CNN method for automated mass detection in full-field digital mammograms (FFDM). In addition to evaluating CNNs pretrained on the ImageNet dataset, we investigate the use of transfer learning for domain adaptation: the CNN is first trained on a large public database of digitized mammograms (the CBIS-DDSM dataset), and the model is then transferred to and tested on a smaller database of digital mammograms (the INbreast dataset). We evaluate three widely used CNNs (VGG16, ResNet50 and InceptionV3) and show that InceptionV3 obtains the best performance in classifying mass and non-mass breast regions on CBIS-DDSM. We further show the benefit of domain adaptation between the CBIS-DDSM (digitized) and INbreast (digital) datasets using the InceptionV3 CNN. Mass detection is evaluated with a fivefold cross-validation strategy using free-response operating characteristic curves. Results show that transfer learning from CBIS-DDSM obtains substantially higher performance, with a best true positive rate (TPR) of 0.98 ± 0.02 at 1.67 false positives per image (FPI), compared with transfer learning from ImageNet, with a TPR of 0.91 ± 0.07 at 2.1 FPI. In addition, the proposed framework improves upon the mass detection results reported in the literature on the INbreast database in terms of both TPR and FPI.
Multistage processing in automated breast ultrasound lesion recognition is dependent on the performance of the prior stages. To improve the current state of the art, we propose end-to-end deep learning approaches using fully convolutional networks (FCNs), namely FCN-AlexNet, FCN-32s, FCN-16s and FCN-8s, for semantic segmentation of breast lesions. We use models pretrained on ImageNet and transfer learning to overcome the issue of data deficiency. We evaluate our results on two datasets comprising a total of 113 malignant and 356 benign lesions. To assess performance, we conduct fivefold cross-validation using the following split: 70% for training, 10% for validation, and 20% for testing. The results showed that our proposed method performed better on benign lesions, with a top mean Dice score of 0.7626 with FCN-16s, than on malignant lesions, with a top mean Dice score of 0.5484 with FCN-8s. When considering the number of images with a Dice score > 0.5, 89.6% of the benign lesions were successfully segmented and correctly recognised, whereas 60.6% of the malignant lesions were successfully segmented and correctly recognised. We conclude the paper by addressing the future challenges of the work.
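For reference, the mean Dice scores above average the Dice Similarity Coefficient over test images. On flattened binary masks, the Dice and Jaccard overlap measures used throughout these papers can be computed as follows (a generic sketch with made-up masks, not code from the paper):

```python
def dice(pred, truth):
    """Dice Similarity Coefficient, 2|A∩B| / (|A| + |B|), on flat 0/1 masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

def jaccard(pred, truth):
    """Jaccard Similarity Index, |A∩B| / |A∪B|; equals Dice / (2 - Dice)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

pred  = [1, 1, 1, 0]   # hypothetical predicted mask, flattened
truth = [0, 1, 1, 1]   # hypothetical ground-truth mask
print(dice(pred, truth), jaccard(pred, truth))  # → 0.666... 0.5
```

The identity Jaccard = Dice / (2 − Dice) means the two metrics rank segmentations identically; reporting both, as these papers do, aids comparison across the literature.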
Manual identification of capillaries in transverse muscle sections is laborious and time-consuming. Although the process of classifying a structure as a capillary is facilitated by (immuno)histochemical staining methods, human judgement is still required in a significant number of cases, mainly because not all capillaries stain equally strongly: they may have an elongated appearance, and/or there may be staining artefacts that would lead to a false identification of a capillary. Here we propose two automated methods of capillary detection: a novel image processing approach and an existing machine learning approach previously used to detect nuclei-shaped objects. The robustness of the proposed methods was tested on two sets of differently stained muscle sections. On average, the image processing approach scored a true positive rate of 0.817 and a harmonic mean (F1 measure) of 0.804, whilst the machine learning approach scored a true positive rate of 0.843 and an F1 measure of 0.846. Both proposed methods are thus able to mimic most of the manual capillary detection, but further improvements are required for practical applications.
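The two summary statistics used above follow directly from the raw detection counts; the counts in the sketch below are hypothetical, chosen only to illustrate the calculation:

```python
def tpr_f1(tp, fp, fn):
    """True positive rate (recall) and F1 measure (harmonic mean of
    precision and recall) from raw detection counts."""
    tpr = tp / (tp + fn)                       # fraction of real capillaries found
    precision = tp / (tp + fp)                 # fraction of detections that are real
    f1 = 2 * precision * tpr / (precision + tpr)
    return tpr, f1

# Hypothetical counts: 80 capillaries detected, 15 false detections, 20 missed.
tpr, f1 = tpr_f1(tp=80, fp=15, fn=20)
print(round(tpr, 3), round(f1, 3))  # → 0.8 0.821
```

Note that true negatives are undefined for object detection (there is no fixed set of "non-capillary" locations), which is why TPR and F1, rather than accuracy, are the natural metrics here.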
Existing methods for automated breast ultrasound lesion detection and recognition tend to be based on multi-stage processing, such as preprocessing, filtering/denoising, segmentation and classification, where the performance of each stage depends on the prior stages. To improve the current state of the art, we propose an end-to-end breast ultrasound lesion detection and recognition method using a deep learning approach. We implemented a popular semantic segmentation framework, i.e. the Fully Convolutional Network (FCN-AlexNet), for our experiment. To overcome data deficiency, we used a model pre-trained on ImageNet and transfer learning. We validated our results on two datasets comprising a total of 113 malignant and 356 benign lesions. We assessed the performance of the model using the following split: 70% for training, 10% for validation, and 20% for testing. The results show that our proposed method performed better on benign lesions, with a Dice score of 0.6879, than on malignant lesions, with a Dice score of 0.5525. When considering the number of images with a Dice score > 0.5, 79% of the benign lesions were successfully segmented and correctly recognised, while 65% of the malignant lesions were successfully segmented and correctly recognised. This paper provides the first end-to-end solution for breast ultrasound lesion recognition. The future challenges for the proposed approach are to obtain additional datasets and to customise the deep learning framework to improve the accuracy of this method.
Breast cancer is a threat to women worldwide. Manual delineation of breast ultrasound lesions is time-consuming and operator-dependent, while computer segmentation of ultrasound breast lesions can be a challenging task due to ill-defined lesion boundaries and speckle noise in ultrasound images. The main contribution of this paper is to compare the performance of a computer classifier on manual delineation and computer segmentation in classifying malignant and benign lesions. In this paper, we implement computer segmentation using a multifractal approach on a database consisting of 120 images (50 malignant and 70 benign lesions). The computer segmentation results are compared with the manual delineation using the Jaccard Similarity Index (JSI), giving an average JSI of 0.5010 (±0.2088) for malignant lesions and 0.6787 (±0.1290) for benign lesions. These results indicate lower agreement on malignant lesions, due to their irregular shapes, and higher agreement on benign lesions with regular shapes. Further, we extract shape descriptors for the lesions and, using logistic regression with 10-fold cross-validation, compute the classification rates for manual delineation and computer segmentation. Computer segmentation produced a sensitivity of 0.780 and a specificity of 0.871, whereas manual delineation produced a sensitivity of 0.520 and a specificity of 0.800. The results show no clear difference between manual delineation and computer segmentation on benign lesions, but computer segmentation of malignant lesions gives better accuracy for the computer classifier.
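Sensitivity and specificity as used here are the per-class accuracies of the classifier (malignant and benign respectively) and follow from the confusion matrix; the counts below are hypothetical, chosen only to illustrate the calculation:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN), the fraction of malignant lesions
    correctly classified; specificity = TN/(TN+FP), the fraction of
    benign lesions correctly classified."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts for a 120-lesion test set
# (50 malignant, 70 benign).
sens, spec = sensitivity_specificity(tp=39, fn=11, tn=61, fp=9)
print(sens, round(spec, 3))  # → 0.78 0.871
```

Reporting both quantities matters clinically: a high sensitivity limits missed cancers, while a high specificity limits unnecessary biopsies of benign lesions.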
Automatic segmentation of anatomic structures in magnetic resonance thigh scans can be a challenging task due to the potential lack of precisely defined muscle boundaries and intensity inhomogeneity, or bias field, across an image. In this paper, we demonstrate a combined framework of atlas construction and image registration methods to propagate the desired regions of interest (ROIs) between an atlas image and target MRI thigh scans for segmentation of the quadriceps muscles, femur cortical layer and bone marrow. The proposed system employs a semi-automatic segmentation method on an initial image in one dataset (from a series of images). The segmented initial image is then used as an atlas image to automate the segmentation of the other images in the MRI scan (3-D space). The processes include ROI labelling, atlas construction and registration, and a morphological transform that establishes corresponding pixels (in terms of feature and intensity values) between the atlas (template) image and the target image, based on the prior atlas information and non-rigid image registration methods.
The American College of Radiology introduced a classification standard, the Breast Imaging Reporting and Data System (BI-RADS), to standardize the reporting of ultrasound findings, clarify their interpretation, and facilitate communication between clinicians. The effective use of new technologies to support healthcare initiatives is important, and current research is moving towards implementing computer tools in the diagnostic process. Initially, a detailed study was carried out to evaluate the performance of two commonly used appearance-based classification algorithms, based on Principal Component Analysis (PCA) and two-dimensional linear discriminant analysis (2D-LDA). The study showed that these two appearance-based approaches are not capable of handling the classification of ultrasound breast lesions. We therefore further investigated a popular feature-based classifier, the Support Vector Machine (SVM). A pre-processing step before feature-based classification is feature extraction, which involves shape, texture and edge descriptors for the region of interest (ROI); the input to the SVM classifier comes from fully automated ROI detection. We achieved success rates of 0.550 with PCA, 0.500 with 2D-LDA, and 0.931 with SVM. The best feature combination for SVM classification is shape, texture and edge descriptors together, with a sensitivity of 0.840 and a specificity of 0.968. This paper briefly reviews the background to the project and then details the ongoing research. In conclusion, we discuss the contributions, limitations, and future plans of our work.
The PERFORMS self-assessment scheme measures individuals' skills in identifying key mammographic features on sets of known cases. One benefit is that it allows radiologists' skills to be trained, based on their data from the scheme. Consequently, a new strategy is introduced to provide revision training based on the mammographic features that a radiologist has had difficulty with in these sets. Doing this requires a large number of cases to provide dynamic, unique, and up-to-date training modules for each individual. We propose GIMI (Generic Infrastructure in Medical Informatics) middleware as the solution to harvest cases from distributed grid servers. The GIMI middleware enables existing and legacy data to support healthcare delivery, research, and training; it is technology-agnostic, data-agnostic, and has a security policy. The trainee examines each case, indicates the location of regions of interest, and completes an evaluation form recording mammographic feature labelling, diagnosis, and decisions. For feedback, the trainee can choose immediate feedback after examining each case or batch feedback after examining a number of cases. All the trainees' results are recorded in a database which also contains their trainee profiles, and a full report can be prepared for each trainee after they have completed their training. This project demonstrates the practicality of a grid-based individualised training strategy and its efficacy in generating dynamic training modules within the coverage of the GIMI middleware. The advantages and limitations of the approach are discussed together with future plans.
Effective use of new technologies to support healthcare initiatives is important, and current research is moving towards implementing secure grid-enabled healthcare provision. In the UK, a large-scale collaborative research project (GIMI: Generic Infrastructures for Medical Informatics), concerned with the development of a secure IT infrastructure to support very widespread medical research across the country, is underway. In the UK there are some 109 breast screening centers and a growing number of individuals (circa 650) nationally performing approximately 1.5 million screening examinations per year. At the same time, there is a serious and ongoing national workforce issue in screening, which has seen a loss of consultant mammographers and a growth in specially trained technologists and other non-radiologists. There is thus a need to offer effective and efficient mammographic training so as to maintain high levels of screening skill. Consequently, a grid-based system has been proposed which has the benefit of offering very large volumes of training cases that mammographers can access anytime and anywhere. A database of screening cases, spread geographically across three university systems, is used as a test set of known cases. The GIMI mammography training system first audits these cases to ensure that they are appropriately described and annotated. Subsequently, the cases are used for training in the grid-based system that has been developed. This paper briefly reviews the background to the project and then details the ongoing research. In conclusion, we discuss the contributions, limitations, and future plans of such a grid-based approach.
We propose a novel approach to fully automatic lesion boundary detection in ultrasound breast images. The novelty of the proposed work lies in the complete automation of the manual process of initial region-of-interest (ROI) labelling and in the procedure adopted for the subsequent lesion boundary detection. Histogram equalization is first used to pre-process the images, followed by hybrid filtering and multifractal analysis stages. Subsequently, a single-valued thresholding segmentation stage and a rule-based approach are used to identify the lesion ROI and the point of interest that serves as the seed point. Next, starting from this point, an isotropic Gaussian function is applied to the inverted original ultrasound image. The lesion area is then separated from the background by a thresholding segmentation stage, and the initial boundary is detected via edge detection. Finally, to further improve and refine the initial boundary, we use a state-of-the-art active contour method, the gradient vector flow (GVF) snake model. We provide results, including judgements from expert radiologists on 360 ultrasound images, showing that the final boundary detected by the proposed method is highly accurate. We compare the proposed method with two existing state-of-the-art methods, namely the radial gradient index (RGI) filtering technique of Drukker et al. and the local mean technique proposed by Yap et al., demonstrating the proposed method's robustness and accuracy.
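The seed-centred Gaussian weighting and single-valued thresholding steps can be illustrated in a few lines. The image, seed, sigma and threshold values below are all hypothetical, and this sketch omits the preceding filtering stages and the GVF refinement:

```python
import math

def gaussian_weighted_threshold(img, seed, sigma, t):
    """Sketch of one pipeline step: weight the inverted image by an
    isotropic Gaussian centred on the seed point, then apply a single-valued
    threshold to separate the lesion (1) from the background (0).
    `img` is a list of rows of grayscale values in 0..255."""
    sr, sc = seed
    out = []
    for r, row in enumerate(img):
        out_row = []
        for c, v in enumerate(row):
            inv = 255 - v  # invert: lesions appear dark in ultrasound
            w = math.exp(-((r - sr) ** 2 + (c - sc) ** 2) / (2 * sigma ** 2))
            out_row.append(1 if inv * w > t else 0)
        out.append(out_row)
    return out

# Tiny hypothetical image with a dark "lesion" pixel at the seed point.
img = [[200, 200, 200], [200, 20, 200], [200, 200, 200]]
print(gaussian_weighted_threshold(img, seed=(1, 1), sigma=1.0, t=100))
# → [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
```

The Gaussian weighting suppresses bright-inverted regions far from the seed, so the threshold isolates a connected region around the point of interest before the edge detection and snake refinement stages take over.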