Radish is one of the major horticultural crops in Korea, accounting for a substantial share of the country's vegetable cultivation area. One of the most destructive and economically damaging diseases of radish is Fusarium wilt. It is a vascular disease that causes chlorosis, necrosis, and abscission of the leaves and discoloration of the vascular elements in roots, stems, and petioles, ultimately killing the infected plant.1 Managing and controlling Fusarium wilt of radish is challenging for several reasons; for instance, its pathogen is soil-inhabiting, and the disease often spreads rapidly, resulting in substantial harvest losses. Early detection could help contain the disease and minimize the damage. However, manual inspection is inaccurate, inefficient, and time-consuming. Therefore, an automated, fast, and precise surveillance system for detecting Fusarium wilt of radish is needed.
Remote sensing permits the acquisition and recording of information about agricultural produce and the environment. Satellite- and aircraft-based systems have been the two major remote sensing platforms. Satellite-based remote sensing has been widely studied and applied but suffers from insufficient information due to low-resolution images, inaccurate (or poor quality) information due to local weather conditions, and high system cost.2,3 Aircraft-based remote sensing platforms are often equipped with multiple sensors or cameras, providing high-quality information or images; however, such systems are still costly and hard to operate.4 Alternatively, unmanned aerial vehicles (UAVs) are remote-controlled aircraft that offer ad hoc remote sensing of the surface at relatively low altitudes.5 Due to rapid advances in sensing, control, and positioning techniques, UAVs are now capable of acquiring high-spatial-resolution surface images at a low operational cost. With their greater capability and availability as well as cost-effectiveness, the applications of UAVs are rapidly growing,6 including traffic monitoring,7 forest fire monitoring,8 and search and rescue operations.9 UAVs also have great potential for improving agriculture.10,11 They can not only facilitate obtaining crop and field information in a timely manner but also assist farmers in improving crop management and farm planning.
Computer and information technologies can process and analyze the information or images obtained by remote sensing to monitor and assess farming conditions, e.g., crop health, crop yield, and harvest time. Several computer systems have been developed to improve agriculture, for example, for plant disease detection,12–15 quality inspection of agricultural products,16 and vegetable classification.17 These systems were mainly built on standard computer vision and machine learning methods such as the support vector machine (SVM). Deep learning is a new paradigm of machine learning that has recently proved useful for several applications, for instance, image recognition,18–21 speech recognition,22 and drug discovery.23 In particular, the technique provides an efficient and effective means of handling large-scale datasets as well as discovering intrinsic feature representations of those datasets.
In this study, we propose a systematic approach that combines UAVs with computerized methods to detect Fusarium wilt of radish. Images of radish fields are obtained from UAVs at low altitudes. State-of-the-art computational methods, including deep learning, are utilized to process and analyze the radish images. The rest of this paper is organized as follows. In Secs. 2 and 3, we describe the data acquisition, image processing, and classification procedures for detecting Fusarium wilt of radish. In Sec. 4, the performance of our approach in detecting Fusarium wilt of radish is presented. In Sec. 5, we conclude with a summary of our findings and perspectives on future directions.
Images of radish fields were acquired in Hongchun-gun and Jungsun-gun, Kangwon-do, Korea, from July to September 2016. A commercial UAV (Phantom 4, DJI Co., Ltd.), equipped with a 12-megapixel RGB camera, was used to obtain the field images at low altitudes above ground level. Each image was stored at 72 dpi. In total, 139 images were acquired. Figure 1 shows example images of radish fields acquired from the UAV.
Two types of datasets were constructed. The first dataset (dataset1) includes three distinct regions of radish fields: radish, mulching film, and bare ground (Fig. 1). Each image was manually reviewed, and the regions of interest (ROIs) corresponding to radish, bare ground, and mulching film were selected. In total, 1734 ROIs were selected from the 139 images: 634 radish ROIs, 580 bare ground ROIs, and 520 mulching film ROIs. This dataset is used for radish field classification (Sec. 3.1) and segmentation (Sec. 3.2). The second dataset (dataset2) contains ROIs for healthy radish and Fusarium wilt of radish (Fig. 1). While acquiring the images of radish fields, we first identified the infected regions. Afterward, the images were further examined with prior knowledge of the infected regions, and the ROIs corresponding to Fusarium wilt of radish and healthy radish were manually identified and delineated. In total, 1158 healthy radish ROIs and 904 Fusarium wilt of radish ROIs were selected from the 139 images. This dataset is used for detecting Fusarium wilt of radish (Sec. 3.3).
An overview of the proposed method is shown in Fig. 2. First, we conduct radish field classification using a softmax classifier. This classification aims at identifying the class label of the radish, bare ground, and mulching film ROIs provided in dataset1. Second, we perform segmentation of a whole radish field: the field is partitioned into a number of disjoint regions, and their class labels are determined by the radish field classifier. Finally, a convolutional neural network (CNN) model is built for classifying Fusarium wilt of radish using dataset2. Fusarium wilt classification is applied only to the radish regions preidentified in the radish field segmentation step.
Radish Field Classification
We extract texture- and color-based features from the radish, bare ground, and mulching film regions (dataset1). Texture-based features are extracted using the local binary pattern (LBP).24 Color-based features are extracted by applying color-space conversion and an autoencoder (AE).25 After concatenating these two feature sets, a two-stage feature selection method is applied to choose the most discriminative features, i.e., the subset of features that is the most informative and useful in classifying radish field regions. Utilizing the discriminative features, a softmax classifier is constructed for radish field classification. The trained softmax classifier provides the probability that a region belongs to each class label, and the class label with the highest probability is assigned to each region. Figure 3 shows the entire process of radish field classification.
Texture feature extraction
LBP can provide texture descriptors that are invariant to rotation and illumination changes at low computational cost. Given a (center) pixel in an image, LBP examines its P neighboring pixels, regularly distributed on a circle of radius R around the center, and generates a binary pattern code as follows:
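As a concrete illustration, the thresholding step of LBP can be sketched in a few lines of Python. This is a minimal numpy version: the nearest-neighbor sampling of the circular neighborhood is a simplification of the bilinear interpolation that full LBP implementations typically use.

```python
import numpy as np

def lbp_code(image, r, c, P=8, R=1):
    """Basic LBP code for the pixel at (r, c): threshold the P neighbors
    sampled on a circle of radius R against the center intensity and pack
    the comparison results into a P-bit binary pattern."""
    center = image[r, c]
    code = 0
    for p in range(P):
        theta = 2 * np.pi * p / P
        # nearest-neighbor sampling of the circular neighborhood
        rr = int(round(r - R * np.sin(theta)))
        cc = int(round(c + R * np.cos(theta)))
        if image[rr, cc] >= center:
            code |= 1 << p
    return code
```

For a flat patch, every neighbor equals the center, so all bits are set; in general, the resulting codes are histogrammed over a region to form the texture descriptor.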
Color feature extraction
Radish field images are initially obtained in RGB (red, green, and blue) color space and converted into the HSV (hue, saturation, and value) and CIELAB (lightness, green–red, blue–yellow) color spaces. Histograms are built on the hue, a*, and b* channels (256 bins or features per histogram). These three color histograms are then concatenated into one, yielding 768 color features. The color histogram features are further processed using an AE.25 An AE is an unsupervised learning technique, typically used for dimensionality reduction. It consists of input and output layers of the same dimensionality and one or more hidden layers, and it learns an approximation/representation of the input. Because the hidden layers have smaller dimensionality than the input and output layers, they learn a compressed representation of the input (encoding), i.e., they extract meaningful features from it. Finally, applying a two-stacked AE26 to the color histogram features, we obtain a reduced, compressed feature representation (Fig. 4).
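The histogram-concatenation step can be sketched as follows. This is a minimal numpy version; scaling the channels to [0, 1] and normalizing each histogram are assumptions of this sketch, not details stated in the text.

```python
import numpy as np

def color_histogram_features(hue, a, b, bins=256):
    """Concatenate 256-bin histograms of the hue (HSV) and a*/b* (CIELAB)
    channels into a single 768-dimensional color feature vector.
    Channels are assumed to be scaled to [0, 1]."""
    feats = []
    for ch in (hue, a, b):
        hist, _ = np.histogram(ch, bins=bins, range=(0.0, 1.0))
        feats.append(hist / ch.size)  # normalize so each histogram sums to 1
    return np.concatenate(feats)
```

The resulting 768-dimensional vectors are what the two-stacked AE then compresses into the final color features.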
We perform a two-stage feature selection to choose the most discriminative features for radish field classification. In the first stage, the Wilcoxon rank-sum test is used to select statistically significant features for classification. In the second stage, an RF with 50 trees and the out-of-bag (OOB) scheme is adopted to estimate the importance of the features. RF is one of the standard machine learning algorithms for classification and regression. It constructs multiple decision trees using bootstrap aggregating, combining classification models built from randomly generated training datasets and random selections of features. The OOB error is a measure of prediction error based on random subsampling of the training dataset. To assess feature importance, each feature is permuted and the OOB error is computed again; the difference in OOB errors before and after the permutation becomes the importance of that feature. Only features whose importance exceeds a threshold are considered. In total, 1770 features are selected from 703,404 features (Sec. 3.1.2).
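A compact sketch of the two-stage selection is shown below for two classes, using scipy and scikit-learn. Note two simplifications: the significance level of 0.05 is an assumption, and scikit-learn's impurity-based feature importances stand in for the paper's OOB permutation importance.

```python
import numpy as np
from scipy.stats import ranksums
from sklearn.ensemble import RandomForestClassifier

def select_features(X, y, alpha=0.05, n_trees=50):
    """Two-stage feature selection sketch (two-class case, labels 0/1):
    (1) keep features whose class-wise distributions differ significantly
        under the Wilcoxon rank-sum test;
    (2) rank the survivors by random-forest feature importance."""
    stage1 = [j for j in range(X.shape[1])
              if ranksums(X[y == 0, j], X[y == 1, j]).pvalue < alpha]
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    rf.fit(X[:, stage1], y)
    order = np.argsort(rf.feature_importances_)[::-1]
    return [stage1[i] for i in order]  # survivors, most important first
```

In the three-class setting of the paper, the rank-sum test would be applied per class pair (or replaced by a multigroup test), but the overall filter-then-rank structure is the same.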
A softmax classifier is a generalization of the logistic function that can be used for multiclass classification. In artificial neural network-based classifiers, it is mainly adopted as the final classification layer. Given a feature vector x, the softmax classifier outputs the probability of each class label as follows:
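The softmax computation can be sketched in numpy as follows, where W and b denote the learned weights and biases of the final classification layer:

```python
import numpy as np

def softmax_probs(x, W, b):
    """Softmax classifier sketch: the class scores W @ x + b are
    exponentiated and normalized so the outputs form a probability
    distribution over the class labels."""
    z = W @ x + b
    z = z - z.max()  # subtract the max score for numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict(x, W, b):
    """Assign the class label with the highest probability."""
    return int(np.argmax(softmax_probs(x, W, b)))
```

Subtracting the maximum score before exponentiation leaves the probabilities unchanged but prevents overflow for large scores.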
Radish Field Segmentation
The whole radish field is segmented into radish, bare ground, and mulching film regions using the radish field classifier (Sec. 3.1). The radish field classifier is built on ROIs, i.e., texture- and color-based features are extracted from ROIs and class labels are assigned to them. To apply the radish field classifier to a whole radish field image, we first identify a number of distinct regions and then conduct radish field classification. K-means clustering is adopted to divide a whole radish field image into a number of disjoint regions. After converting the color space of a radish field image from RGB into the HSV and CIELAB color spaces, K-means clustering is performed on the hue (HSV), a*, and b* (CIELAB) channels for several choices of K (5, 10, 15, and 20, among other values). For each of the resultant clusters, the texture- and color-based features (Secs. 3.1.1 and 3.1.2) are extracted, and the radish field classifier (Sec. 3.1.3) assigns a class label (radish, bare ground, or mulching film) to each of the disjoint regions. Figure 5 shows the procedure of radish field segmentation.
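The cluster-then-classify procedure can be sketched as follows; `classify_region` is a hypothetical stand-in for the trained radish field classifier of Sec. 3.1, mapping a cluster's pixel mask to one of the three class labels:

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_field(hue, a, b, classify_region, K=10):
    """Segmentation sketch: cluster the pixels on the (hue, a*, b*)
    channels with K-means, then assign one class label per cluster
    via the supplied region classifier."""
    pixels = np.stack([hue.ravel(), a.ravel(), b.ravel()], axis=1)
    clusters = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(pixels)
    clusters = clusters.reshape(hue.shape)
    label_map = np.empty(hue.shape, dtype=object)
    for k in np.unique(clusters):
        mask = clusters == k
        label_map[mask] = classify_region(mask)  # one label per cluster
    return label_map
```

Because labels are assigned per cluster rather than per pixel, any pixels grouped into the wrong cluster inherit that cluster's label, which is the failure mode discussed in Sec. 4.2.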
Fusarium Wilt of Radish Detection
We employ a CNN model to detect Fusarium wilt of radish. Radish regions are identified via radish field segmentation (Sec. 3.2). By sliding a fixed-size rectangular window over the identified radish regions with a step of 50 pixels, the CNN model determines the disease status of each window.
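The window-sliding step can be sketched as follows; the window size used here is a placeholder (the text does not state the exact size), while the 50-pixel stride matches the text:

```python
def sliding_windows(height, width, win=200, step=50):
    """Yield the top-left corners of fixed-size windows stepped over an
    image of the given height and width. Windows that would extend past
    the image boundary are skipped."""
    for r in range(0, height - win + 1, step):
        for c in range(0, width - win + 1, step):
            yield r, c
```

Each window would then be cropped, resized to the CNN input size, and classified as healthy or infected; windows falling outside the segmented radish regions are skipped.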
Convolutional neural network
A VGG-A network27 is adopted to distinguish Fusarium wilt of radish from healthy radish. VGG networks have been successfully applied to image recognition.19 The network consists of eight convolutional layers and two fully connected layers (Table 1). The original VGG-A network takes 224 × 224 images as input; in this study, resized radish image patches (Sec. 3.3.2) are fed to the network instead.
Table 1. Architecture of the VGG-A network (type, patch size/stride, and output size/features per layer).
Our CNN model is trained using dataset2 (1158 healthy radish ROIs and 904 Fusarium wilt of radish ROIs). Each ROI is drawn on a whole radish or on the infected area of a radish. For each ROI, an image patch is generated by drawing the smallest rectangular window encompassing the ROI. All image patches are resized to a fixed RGB size that is about the average size of the Fusarium wilt of radish ROIs. 20% of the training dataset is randomly selected and held out as the validation dataset, which is used for tuning the learning rate. In training, we set the batch size to 90 and the momentum to 0.9. The learning rate is initially set to 0.01; whenever the error rate on the validation set reaches a plateau, the learning rate is decreased by a factor of 10. This is performed three times, i.e., the learning rate gradually decreases to 0.001. The training runs for 100 epochs. The detailed training steps are available in Ref. 27.
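A plateau-driven schedule of this kind can be sketched as follows; the patience of three epochs is an assumption of this sketch, as the text specifies only the initial rate, the reduction factor, and the number of reductions:

```python
def plateau_schedule(val_errors, lr0=0.01, factor=10.0, patience=3, max_drops=3):
    """Learning-rate schedule sketch: starting from lr0, divide the rate
    by `factor` whenever the validation error fails to improve for
    `patience` consecutive epochs, up to `max_drops` reductions.
    Returns the learning rate used at each epoch."""
    lr, drops, best, since = lr0, 0, float("inf"), 0
    rates = []
    for err in val_errors:
        if err < best:
            best, since = err, 0       # validation error improved
        else:
            since += 1                 # plateau continues
            if since >= patience and drops < max_drops:
                lr /= factor           # decay the learning rate
                drops, since = drops + 1, 0
        rates.append(lr)
    return rates
```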
For training our CNN model, the NVIDIA DIGITS 5 toolbox with the Caffe framework was used. The experiments were performed on a Linux machine running Ubuntu 14.04, with an Intel Core i7-5930K processor, three NVIDIA Titan X GPUs (12 GB and 3072 CUDA cores each), and 64 GB of DDR4 RAM.
Comparison to standard machine learning
We compared the classification performance of our CNN model to RF, a standard machine learning algorithm. An RF with 50 trees is trained and tested for detecting Fusarium wilt of radish. Intensity- and texture-based features are extracted. The intensity-based features are the mean and standard deviation of each channel in the HSV (hue, saturation, and value) color space, generating six features. The texture-based features are extracted using the local directional derivative pattern28 with three neighboring topologies, generating 54 features. In total, 60 features are obtained. By adopting the two-stage feature selection method (Sec. 3.1.3), 57 features are selected.
We assess the performance of the proposed methods (radish field classification, radish field segmentation, and Fusarium wilt of radish classification) via K-fold cross-validation (K = 3). K-fold cross-validation divides the entire dataset into K roughly equal-sized disjoint subsets. Two subsets are used to train the proposed methods, and the remaining subset is used to evaluate their performance. This is repeated K times with differing choices of the held-out subset.
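The fold construction can be sketched as follows (a simple index-striping split; a practical evaluation would shuffle the indices first):

```python
def kfold_indices(n, k=3):
    """Split indices 0..n-1 into k roughly equal disjoint folds. Each fold
    serves once as the test set while the remaining folds train; two
    training subsets plus one held-out subset corresponds to k = 3."""
    folds = [list(range(i, n, k)) for i in range(k)]
    splits = []
    for t in range(k):
        test = folds[t]
        train = [i for f in range(k) if f != t for i in folds[f]]
        splits.append((train, test))
    return splits
```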
For radish field classification, the confusion matrix is computed to assess the ability of our model to distinguish the differing areas of radish fields (radish, bare ground, and mulching film). Each entry of the confusion matrix counts the ROIs of one true class that are assigned to a particular predicted class.
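A minimal confusion-matrix computation consistent with this definition:

```python
import numpy as np

def confusion_matrix(truth, pred, labels):
    """Confusion matrix sketch: entry (i, j) counts samples whose true
    class is labels[i] and whose predicted class is labels[j]."""
    index = {lab: i for i, lab in enumerate(labels)}
    cm = np.zeros((len(labels), len(labels)), dtype=int)
    for t, p in zip(truth, pred):
        cm[index[t], index[p]] += 1
    return cm
```

The diagonal holds the correctly classified counts, so per-class accuracy is the diagonal entry divided by its row sum.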
The pixel-level segmentation accuracy29 (PSA) is adopted to evaluate the radish field segmentation performance. PSA measures the fraction of pixels whose predicted class label agrees with the ground-truth label.
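Under this definition, PSA reduces to a per-pixel match rate:

```python
import numpy as np

def pixel_segmentation_accuracy(pred, truth):
    """PSA sketch: the fraction of pixels whose predicted class label
    matches the ground-truth label."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    return float((pred == truth).mean())
```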
PSA is measured for differing choices of K in K-means clustering (K = 5, 10, 15, and 20, among other values) to examine the effect of the number, and hence the size, of the clusters on segmentation performance.
To examine the performance of Fusarium wilt of radish classification, the confusion matrix is computed. In addition, the true-positive rate (TPR; the rate of Fusarium wilt of radish ROIs correctly classified as Fusarium wilt of radish), true-negative rate (TNR; the rate of healthy radish ROIs correctly classified as healthy radish), and accuracy (the rate of Fusarium wilt of radish and healthy radish ROIs correctly classified as labeled) are measured.
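The three measures follow directly from the confusion-matrix counts; for example, the CNN row of Table 4 gives TPR = 788/(788 + 116) ≈ 87.2%, matching the value reported in Sec. 4.3:

```python
def detection_metrics(tp, fn, tn, fp):
    """TPR, TNR, and accuracy as defined in the text, computed from the
    counts of true/false positives (Fusarium wilt) and true/false
    negatives (healthy radish)."""
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + tn + fp)
    return tpr, tnr, acc
```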
Experimental Results and Discussion
Radish Field Classification Results
Using dataset1 (634 radish ROIs, 580 bare ground ROIs, and 520 mulching film ROIs), radish field classification was performed. Table 2 describes the classification results. The experimental results suggest that our model could determine the class label of the radish, bare ground, and mulching film ROIs with high accuracy. It is notable that we obtained the highest classification performance (99.7%) for radish ROIs. Only a small fraction of the bare ground and mulching film ROIs were misclassified.
Table 2. Radish field classification performance (true class vs. predicted class: radish, bare ground, and mulching film).
Radish Field Segmentation Results
Radish field segmentation results (PSA) with differing choices of the number of clusters K in K-means clustering are shown in Table 3. As K increases, the overall PSA first increases but then gradually decreases. This may be ascribable to the size of the clusters: as K increases, the size of each cluster decreases, and the features computed from the smaller clusters may not be robust enough to support accurate classification. High PSA was achieved for radish, bare ground, and mulching film. The experimental results suggest that an intermediate value of K is optimal for radish field segmentation.
Table 3. Radish field segmentation performance (PSA per class and overall for each choice of K: radish, bare ground, and mulching film).
Figure 6 shows representative segmentation results. Regions corresponding to radish, bare ground, and mulching film are, in general, correctly classified as labeled. However, misclassified regions are also observed (Fig. 7). These include withered radishes that are mainly brown in color. Due to their similarity in color to bare ground, these regions were clustered together with bare ground by K-means clustering; that is, the error is caused not by the radish field classifier but by the clustering method, since, as described in Sec. 3.2, K-means clustering is based on three color channels only.
Fusarium Wilt of Radish Classification Results
In Table 4, we demonstrate the performance of Fusarium wilt of radish classification. The classification accuracy was measured via K-fold cross-validation (K = 3) on dataset2 (1158 healthy radish ROIs and 904 Fusarium wilt of radish ROIs). We first performed radish field segmentation and discarded ROIs that contain only a small portion of radish. Then, our CNN model distinguished Fusarium wilt of radish from healthy radish, achieving an accuracy of 93.3%; TPR and TNR were 87.2% and 98.0%, respectively. We note that the image patches may include regions of differing class labels due to the image patch generation process (Sec. 3.3.2); for example, Fusarium wilt of radish ROIs could include part of a healthy radish. This may have an adverse effect on the performance of Fusarium wilt of radish classification. A finer training strategy utilizing the exact regions would aid in improving the overall performance of our method.
Table 4. Fusarium wilt of radish classification performance (left two columns: CNN; right two columns: RF).

| True class/predicted class | Fusarium wilt of radish (CNN) | Healthy radish (CNN) | Fusarium wilt of radish (RF) | Healthy radish (RF) |
| --- | --- | --- | --- | --- |
| Fusarium wilt of radish | 788 | 116 | 751 | 153 |
Furthermore, the performance of our CNN model was superior to that of the standard machine learning algorithm. Using RF, 82.9% accuracy, 83.1% TPR, and 82.8% TNR were obtained in detecting Fusarium wilt of radish (Table 4). This confirms that the CNN model improves upon the standard machine learning scheme.
In addition, we repeated the above experiment with varying sizes of the image patches fed to our CNN model. When the image patches were resized to 120 × 120 and 280 × 280, the performance of our CNN model remained consistent (Table 5). The results indicate that our method is insensitive to the input image size.
Table 5. Fusarium wilt of radish classification performance with varying image sizes.

| True class/predicted class | Fusarium wilt of radish (120×120) | Healthy radish (120×120) | Fusarium wilt of radish (280×280) | Healthy radish (280×280) |
| --- | --- | --- | --- | --- |
| Fusarium wilt of radish | 794 | 110 | 771 | 133 |
Figure 8 shows the detection results for Fusarium wilt of radish. Regions with Fusarium wilt of radish are marked with red circles [Fig. 8(a)], and the region-by-region detection results are provided in Fig. 8(b). The size of the sliding window is about the average size of the Fusarium wilt of radish ROIs. Since the ROIs were drawn on a whole radish or on the infected area of a radish, our method was able to detect the individual infected areas. Overall, the regions of healthy radish and moderate Fusarium wilt of radish were successfully detected by our method. However, regions of severe Fusarium wilt of radish were often missed (Fig. 9). This is mainly due to segmentation failure, i.e., the K-means clustering issue mentioned in Sec. 4.2.
We have demonstrated an approach that utilizes UAVs and computational techniques to identify Fusarium wilt of radish. Deep learning, in particular, was able to detect Fusarium wilt of radish with high accuracy. The capability to detect Fusarium wilt of radish from UAVs may offer great potential for reducing the effort and cost of managing and preventing the disease as well as for improving crop yield. Our method can be applied to other crops and plants, since Fusarium wilt is a common vascular disease of many plants, including tomato, tobacco, and banana.
This study has several limitations. First, the performance of our methods was evaluated via cross-validation; a validation study on an extended dataset would further ensure their robustness. Second, only RGB images were considered. Infrared images are often employed for crop monitoring and management, and developing a methodology that combines RGB and infrared images may further improve the performance of our methods. Third, severe Fusarium wilt of radish was often missed; advances in segmentation methods would lead to improved detection accuracy. Last, the severity of Fusarium wilt of radish was not considered, although the plan for controlling and preventing the disease may differ depending on the level of severity. Further studies will be conducted to tackle these limitations, to improve the accuracy and robustness of the detection, and to facilitate efficient and effective monitoring and prevention of Fusarium wilt of radish.
This work was supported by Korea Institute of Planning and Evaluation for Technology in Food, Agriculture, Forestry, and Fisheries (IPET) through Agri-Bio Industry Technology Development Program, funded by Ministry of Agriculture, Food, and Rural Affairs (MAFRA) (316033-04-2-SB030).
Jin Gwan Ha received his BS degree in computer science in 2015 from Sejong University, Seoul, South Korea. He is currently pursuing his MS degree in computer science and engineering from Sejong University, Seoul, South Korea. He joined Computer Vision Pattern Recognition Laboratory (CVPR Lab) at the end of 2013. His current research interests include computer vision and artificial intelligence.
Hyeonjoon Moon is a professor and chairman in the Department of Computer Science and Engineering at Sejong University. He received his BS degree in electronics and computer science and engineering from Korea University in 1990. He received his MS degree and PhD in electrical and computer engineering from the State University of New York at Buffalo in 1992 and 1999, respectively. His current research interests include image processing, biometrics, artificial intelligence, and machine learning.
Jin Tae Kwak is an assistant professor in the Department of Computer Science and Engineering at Sejong University, Seoul, South Korea. He received his BS degree in electrical engineering from Korea University, Seoul, in 2005, his MS degree in electrical and computer engineering from Purdue University, Indiana, USA, in 2007, and his PhD in computer science from the University of Illinois at Urbana-Champaign, Illinois, USA, in 2012. His current research interests include medical imaging, machine learning, pattern recognition, deep learning, and image processing.
Syed Ibrahim Hassan received his BS degree in computer science in 2015 from Quaid-E-Awam University of Engineering Science and Technology Sindh, Pakistan. He is currently pursuing his MS degree in computer science and engineering from Sejong University, Seoul, South Korea. He joined CVPR Lab in September 2016. His current research interests include computer vision, deep learning, and natural language processing.
Minh Dang received his BS degree in information systems in 2016 from the University of Information Technology, HCMC, Vietnam. He is currently pursuing his MS degree in computer science and engineering from Sejong University, Seoul, South Korea. He joined CVPR Lab at the beginning of 2017. His current research interests include computer vision, data mining, and natural language processing.
O New Lee is a researcher in the Department of Bioresource Engineering at Sejong University. She received her BS degree in the Department of Horticultural Science from Korea University, Seoul, South Korea, in 1997. She received her MS degree and PhD from the Department of Agricultural and Environmental Biology at the University of Tokyo in 2000 and 2004, respectively. Her current research interests include plant breeding, marker-assisted selection, and genome-wide association studies.
Han Yong Park is an associate professor in the Department of Bioresource Engineering at Sejong University, which he joined in 2011. He received his BS degree in the Department of Horticultural Science from Yeungnam University in 1989. He received his MS degree and PhD from the Department of Horticultural Science at Seoul National University in 1992 and 1995, respectively. He was a senior researcher, as a recognized breeder of Raphanus sativus, at the vegetable breeding laboratories of the renowned seed companies Hungnong, Seminis, and Monsanto in Korea for over 20 years. He registered 27 radish varieties for variety registration and protection in Korea and has experience exporting these seeds to Japan and China.