1. Introduction

1.1. Research Motivation

Radish is one of the major horticultural crops in Korea, occupying of the entire vegetable cultivation area. One of the most destructive and economically damaging diseases of radish is Fusarium wilt of radish. It is a vascular disease that causes chlorosis, necrosis, and abscission of leaves and discoloration of the vascular elements in roots, stems, and petioles, leading to death of the infected plant.1 Management and control of Fusarium wilt of radish are challenging for several reasons; for instance, its pathogen is soil-inhabiting. Rapid spread of the disease is often observed, resulting in substantial harvest losses. Early detection could aid in preventing the spread of the disease and minimizing the damage. However, manual inspection is inaccurate, inefficient, and time-consuming. Therefore, an automated, fast, and precise surveillance system for detecting Fusarium wilt of radish is needed.

Remote sensing permits the acquisition and recording of information on agricultural produce and the environment. Satellite- and aircraft-based technologies have been the two major remote sensing platforms. Satellite-based remote sensing has been widely studied and applied but suffers from insufficient information due to low-resolution images, inaccurate (or poor-quality) information due to local weather conditions, and a high system cost.2,3 Aircraft-based remote sensing systems are often equipped with multiple sensors or cameras, providing high-quality information or images. However, such systems are still costly and hard to operate.4

Alternatively, unmanned aerial vehicles (UAVs) are remote-controlled aircraft that offer ad hoc remote sensing of the surface at relatively low altitudes.5 Due to rapid advances in sensing, control, and positioning techniques, UAVs are now capable of acquiring high spatial resolution surface images at a low operational cost. With this greater capability and availability as well as cost-effectiveness, the applications of UAVs are rapidly growing,6 including traffic monitoring,7 forest fire monitoring,8 and search and rescue operations.9 UAVs also have great potential for improving agriculture.10,11 They can not only facilitate obtaining crop and field information in a timely manner but also assist farmers in improving crop management and farm planning.

Computer and information technologies can process and analyze the information or images obtained by remote sensing to monitor and assess farming conditions, e.g., crop health, crop yield, and harvest time. Several computer systems have been developed for improving agriculture, for example, plant disease detection,12–15 quality inspection of agricultural products,16 and vegetable classification.17 These systems were mainly developed based on standard computer vision and machine learning methods such as the support vector machine (SVM). Deep learning is a new paradigm of machine learning. It has recently proved to be useful for several applications, for instance, image recognition,18–21 speech recognition,22 and drug discovery.23 In particular, the technique provides an efficient and effective means of handling large-scale datasets as well as discovering intrinsic feature representations of the datasets.

In this study, we propose a systematic approach that combines UAVs with computerized methods to detect Fusarium wilt of radish. Images of radish fields are obtained from UAVs at low altitudes.
The state-of-the-art computer methods, including deep learning, are utilized to process and analyze the radish images. The rest of this paper is organized as follows. In Secs. 2 and 3, we describe the data acquisition, image processing, and classification procedures for detecting Fusarium wilt of radish. In Sec. 4, the performance of our approach in detecting Fusarium wilt of radish is presented. In Sec. 5, we conclude with a summary of our findings and perspectives on future directions.

2. Datasets

2.1. Image Acquisition

Images of radish fields were acquired in Hongchun-gun and Jungsun-gun, Kangwon-do, Korea, from July to September 2016. A commercial UAV (Phantom 4, DJI Co., Ltd.), equipped with an RGB camera (12 megapixels), was used to obtain the field images at altitudes of above ground level. Each image has a spatial dimension of with 72 dpi. In total, 139 images were obtained. Figure 1 shows example images of radish fields acquired from the UAV.

2.2. Dataset Preparation

Two types of datasets were constructed. The first dataset (dataset1) includes three distinctive regions of radish fields: radish, mulching film, and bare ground (Fig. 1). Each image was manually reviewed, and the regions of interest (ROIs) corresponding to radish, bare ground, and mulching film were selected. In total, 1734 ROIs were selected from the 139 images: 634 ROIs (average size of , ranging from to ) for radish, 580 ROIs (average size of , ranging from to ) for bare ground, and 520 ROIs (average size of , ranging from to ) for mulching film regions. This dataset is used for radish field classification (Sec. 3.1) and segmentation (Sec. 3.2). The second dataset (dataset2) contains ROIs for healthy radish and Fusarium wilt of radish (Fig. 1). During image acquisition, the infected regions were first identified in the field. Afterward, the images were further examined with this prior knowledge of the infected regions, and the ROIs corresponding to Fusarium wilt of radish and healthy radish were manually identified and delineated. In total, 1158 ROIs (average size of , ranging from to ) for healthy radish and 904 ROIs (average size of , ranging from to ) for Fusarium wilt of radish were selected from the 139 images. This dataset is used for detecting Fusarium wilt of radish (Sec. 3.3).

3. Methodology

An overview of the proposed method is shown in Fig. 2. First, we conduct radish field classification using a softmax classifier. The classification of the radish field aims at identifying the class label of the respective radish, bare ground, and mulching film ROIs. The ROIs are provided in dataset1. Second, we perform the segmentation of a whole radish field. The whole radish field is partitioned into a number of disjoint regions, and their class labels are determined by the radish field classifier. Finally, a convolutional neural network (CNN) model is built for classifying Fusarium wilt of radish using dataset2. Fusarium wilt classification is only applied to the regions of radish that were preidentified in the radish field segmentation step.

3.1. Radish Field Classification

We extract texture- and color-based features from the radish, bare ground, and mulching film regions (dataset1). Texture-based features are extracted using the local binary pattern (LBP).24 Color-based features are extracted by applying color-space conversion and an autoencoder (AE).25 After concatenating these two feature sets, a two-stage feature selection method is applied to choose the most discriminative features.
The most discriminative features are the subset of features that are the most informative and useful in classifying the radish field. Utilizing the discriminative features, a softmax classifier is constructed for radish field classification. The trained softmax classifier provides the probability that a region belongs to each class label, and the class label with the highest probability is assigned to the region. Figure 3 shows the entire process of radish field classification.

3.1.1. Texture feature extraction

LBP provides texture descriptors that are invariant to rotation and illumination changes at low computational cost. Given a (center) pixel in an image, LBP examines its $P$ neighboring pixels (a set of regularly distributed pixels on a circle) within a radius $R$ and generates a binary pattern code as follows:

$$\mathrm{LBP}_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c)\,2^{p}, \qquad (1)$$

where $s(x)$ is 1 if $x \ge 0$ and 0 if $x < 0$, and $g_c$ and $g_p$ represent the gray level of the center pixel and its neighborhood pixels, respectively. To achieve rotational invariance, binary pattern codes are generated by

$$\mathrm{LBP}^{\mathrm{ri}}_{P,R} = \min\{\mathrm{ROR}(\mathrm{LBP}_{P,R}, i) \mid i = 0, 1, \ldots, P-1\}, \qquad (2)$$

where $\mathrm{ROR}(x, i)$ is computed by a circular bitwise right shift operation; namely, all binary pattern codes that map to each other under the bitwise operation are regarded as one identical pattern. LBP features are computed on a gray-scale image using three neighboring topologies , generating 703,404 features.

3.1.2. Color feature extraction

Radish field images are initially obtained in the RGB (red, green, and blue) color space and converted into the HSV (hue, saturation, and value) and CIE L*a*b* (lightness, green–red, blue–yellow) color spaces. Histograms are built on the hue, a*, and b* channels (256 bins or features per histogram). We then concatenate these three color histograms into one, generating 768 color features. The color histogram features are further processed by adopting an AE.25 An AE is an unsupervised learning technique, typically used for dimensionality reduction. It consists of input and output layers (of the same dimensionality) and hidden layer(s), and it tries to learn an approximation/representation of the input. The dimensionality of the hidden layers is smaller than that of the input and output layers; thus, the hidden layers learn a compressed representation of the input (encoding), i.e., they extract meaningful features from the input. Finally, applying a two-stacked AE26 on the color histogram features, we obtain reduced and compressed features (Fig. 4; ).

3.1.3. Feature selection

We perform a two-stage feature selection to choose the most discriminative features for radish field classification. In the first stage, the Wilcoxon rank-sum test is used to select statistically significant features for classification (). In the second stage, random forests (RF) with 50 trees and an out-of-bag (OOB) scheme are adopted to estimate the importance of the features. RF is one of the standard machine learning algorithms for classification and regression. It constructs multiple decision trees using bootstrap aggregating, combining classification models built from randomly generated training datasets and random selections of features. The OOB error is a measure of prediction error based on random subsampling of the training dataset. To assess feature importance, each feature is permuted and the OOB error is computed again; the difference in OOB errors before and after the permutation becomes the importance of that feature. Only features with feature importance are considered. In total, 1770 features are selected from the 703,404 features (Secs. 3.1.1 and 3.1.2).
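As an illustrative sketch (not the authors' original implementation), the following Python code outlines the feature pipeline described above. The exact (P, R) neighborhood topologies, histogram binning, significance level, and importance threshold are not specified here, so hypothetical values are used; scikit-image's rotation-invariant LBP stands in for Eqs. (1) and (2), the stacked-AE compression of the color histograms is omitted, and scikit-learn's impurity-based RF importances approximate the paper's OOB permutation scheme.

```python
# A minimal sketch of the radish-field feature pipeline under stated assumptions.
import numpy as np
from scipy.stats import ranksums
from skimage.color import rgb2gray, rgb2hsv, rgb2lab
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

LBP_TOPOLOGIES = [(8, 1), (16, 2), (24, 3)]  # hypothetical (P, R) choices

def texture_features(rgb):
    """Rotation-invariant LBP [Eqs. (1) and (2)] histograms on the gray-scale image."""
    gray = rgb2gray(rgb)
    hists = []
    for P, R in LBP_TOPOLOGIES:
        codes = local_binary_pattern(gray, P, R, method="ror")
        # fixed-length 256-bin histogram per topology (a simplification)
        hist, _ = np.histogram(codes, bins=256, range=(0, 2 ** P), density=True)
        hists.append(hist)
    return np.concatenate(hists)

def color_features(rgb):
    """256-bin histograms on the hue, a*, and b* channels (768 features)."""
    hue, lab = rgb2hsv(rgb)[..., 0], rgb2lab(rgb)
    channels = [(hue, (0.0, 1.0)), (lab[..., 1], (-128, 127)), (lab[..., 2], (-128, 127))]
    return np.concatenate([np.histogram(c, bins=256, range=r, density=True)[0]
                           for c, r in channels])

def two_stage_selection(X, y, alpha=0.05, thr=0.0):
    """Stage 1: Wilcoxon rank-sum filter; stage 2: RF-based importance ranking."""
    classes = np.unique(y)
    pairs = [(a, b) for i, a in enumerate(classes) for b in classes[i + 1:]]
    # keep features that significantly separate at least one pair of classes
    keep = np.array([j for j in range(X.shape[1])
                     if min(ranksums(X[y == a, j], X[y == b, j]).pvalue
                            for a, b in pairs) < alpha])
    rf = RandomForestClassifier(n_estimators=50, oob_score=True).fit(X[:, keep], y)
    return keep[rf.feature_importances_ > thr]  # indices of selected features
```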
3.1.4. Softmax classifier

A softmax classifier is a generalization of the logistic function that can be used for multiclass classification. In artificial neural network-based classifiers, it is mainly adopted as the final classification layer. Given a feature vector $x \in \mathbb{R}^{n}$, the softmax classifier outputs the probability of each class label as follows:

$$P(y = j \mid x) = \frac{\exp(w_j^{T} x + b_j)}{\sum_{k=1}^{K} \exp(w_k^{T} x + b_k)}, \qquad (3)$$

where $j$ is a class label, $w_j$ is a weight vector, and $b_j$ is a bias ($j = 1, \ldots, K$). $n$ and $K$ denote the number of features and classes (radish, bare ground, and mulching film), respectively. Computing the softmax classifier amounts to determining the weights $w_j$ and biases $b_j$; these are chosen to minimize the mean squared error with 200 iterations. Equation (3) is called the softmax function, which outputs a $K$-dimensional vector of real values between 0 and 1, representing a categorical probability distribution. The class label with the highest probability is assigned to $x$.

3.2. Radish Field Segmentation

The whole radish field is segmented into radish, bare ground, and mulching film using the radish field classifier (Sec. 3.1). The radish field classifier is built on ROIs, i.e., texture- and color-based features are extracted from ROIs and class labels are assigned to them. To apply the radish field classifier to a whole radish field image, we first identify a number of distinct regions and then conduct radish field classification. $k$-means clustering is adopted to divide a whole radish field image into a number of disjoint regions. Converting the color space of a radish field image (RGB) into the HSV and CIE L*a*b* color spaces, $k$-means clustering is performed on the hue channel (HSV) and the a* and b* channels (L*a*b*) (k = , 5, 10, 15, and 20). For each of the resultant clusters, the texture- and color-based features (Secs. 3.1.1 and 3.1.2) are extracted, and the radish field classifier (Sec. 3.1.4) assigns a class label (radish, bare ground, or mulching film) to each of the disjoint regions. Figure 5 shows the procedure of radish field segmentation.

3.3. Fusarium Wilt of Radish Detection

We employ a CNN model to detect Fusarium wilt of radish. Radishes are identified via radish field segmentation (Sec. 3.2). By sliding a rectangular window of a fixed size () over the identified radishes, stepping by 50 pixels, the CNN model determines the disease status of each window.

3.3.1. Convolutional neural network

A VGG-A network27 is adopted to distinguish Fusarium wilt of radish from healthy radish. VGG networks have been successfully applied to image recognition.19 Our network consists of eight convolutional layers and two fully connected layers (Table 1). The original VGG-A network takes images of size 224 × 224 as input; in this study, images of size are fed to the network as input.

Table 1. CNN architecture.
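Since the layer configuration of Table 1 is not reproduced here, the following PyTorch sketch gives one plausible VGG-A-style network with eight convolutional and two fully connected layers. It is an illustration rather than the authors' Caffe implementation: the 128 × 128 input resolution and the fully connected width are assumptions, not the paper's exact settings.

```python
# A hedged sketch of a VGG-A-style network for two-class Fusarium wilt
# classification; input size and fully connected width are assumptions.
import torch
import torch.nn as nn

def vgg_a_like(num_classes=2, in_size=128):
    cfg = [64, "M", 128, "M", 256, 256, "M", 512, 512, "M", 512, 512, "M"]
    layers, in_ch = [], 3
    for v in cfg:  # eight 3x3 convolutions interleaved with five max-poolings
        if v == "M":
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:
            layers += [nn.Conv2d(in_ch, v, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            in_ch = v
    spatial = in_size // 32  # each of the five poolings halves the resolution
    classifier = nn.Sequential(  # two fully connected layers
        nn.Flatten(),
        nn.Linear(512 * spatial * spatial, 1024), nn.ReLU(inplace=True),
        nn.Dropout(),
        nn.Linear(1024, num_classes),  # softmax [Eq. (3)] applied at inference
    )
    return nn.Sequential(*layers, classifier)

model = vgg_a_like()
probs = torch.softmax(model(torch.randn(1, 3, 128, 128)), dim=1)  # class probabilities
```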
3.3.2. CNN training

Our CNN model is trained using dataset2 (healthy radish ROIs: 1158 and Fusarium wilt of radish ROIs: 904). Each ROI is drawn on a whole radish or on the infected area of a radish. For each of the ROIs, an image patch is generated by drawing the smallest rectangular window encompassing the ROI. All the image patches are resized to a fixed size of (RGB), which is about the average size of the Fusarium wilt of radish ROIs. 20% of the training dataset is randomly selected and held out as the validation dataset, which is used for tuning the learning rate. In training, we set the batch size to 90 and the momentum to 0.9. The learning rate is initially set to 0.01. As the error rate on the validation set reaches a plateau, the learning rate is decreased by a factor of 10. This is performed three times, i.e., the learning rate gradually reduces to 0.001. The training runs for 100 epochs, taking . The detailed training steps are available in Ref. 27. For training our CNN model, the NVIDIA DIGITS 5 toolbox with the Caffe framework was used. The experiments were performed on a Linux machine with Ubuntu 14.04, an Intel Core i7-5930K processor, three NVIDIA Titan X GPUs (12 GB of memory and 3072 CUDA cores each), and 64 GB of DDR4 RAM.

3.3.3. Comparison to standard machine learning

We compared the classification performance of our CNN model to that of RF, a standard machine learning algorithm. An RF with 50 trees is trained and tested for detecting Fusarium wilt of radish. Intensity- and texture-based features are extracted. The intensity-based features are the mean and standard deviation of each channel in the HSV (hue, saturation, and value) color space, generating six features. The texture-based features are extracted using the local directional derivative pattern28 with three neighboring topologies , generating 54 features. In total, 60 features are obtained. By adopting the two-stage feature selection method (Sec. 3.1.3), 57 features are selected.

3.4. Evaluation Methods

We assess the performance of the proposed methods (radish field classification, radish field segmentation, and Fusarium wilt of radish classification) via k-fold cross-validation (k = 3). k-fold cross-validation divides the entire dataset into k roughly equal-sized disjoint subsets. Two subsets are used to train the proposed methods, and the remaining subset is used to evaluate their performance. This is repeated k times with differing choices of the remaining subset. For radish field classification, the confusion matrix is computed to assess the ability of our model to distinguish the differing areas of radish fields (radish, bare ground, and mulching film). The confusion matrix CM can be computed as

$$\mathrm{CM}(i, j) = \left|\{\, r \in \mathcal{R} : g(r) = i \ \text{and} \ p(r) = j \,\}\right|, \qquad (4)$$

where $\mathcal{R}$ is the set of ROIs and $g(r)$ and $p(r)$ denote the ground truth class label and the predicted class label of an ROI $r$, respectively. The pixel-level segmentation accuracy29 (PSA) is adopted to evaluate the radish field segmentation performance. PSA is calculated as

$$\mathrm{PSA} = \frac{\text{number of correctly classified pixels}}{\text{total number of pixels}}. \qquad (5)$$

PSA is measured for differing choices of $k$ in $k$-means clustering (k = , 5, 10, 15, and 20) to examine the effect of the size of the clusters on segmentation performance. To examine the performance of Fusarium wilt of radish classification, the confusion matrix is computed. Also, the true-positive rate (TPR; the rate of Fusarium wilt of radish ROIs that are correctly classified as Fusarium wilt of radish), the true-negative rate (TNR; the rate of healthy radish ROIs that are correctly classified as healthy radish), and the accuracy (the rate of Fusarium wilt of radish and healthy radish ROIs that are correctly classified) are measured.
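As a concrete illustration of this protocol, the sketch below runs the threefold split and derives the TPR, TNR, and accuracy from the accumulated confusion matrix; PSA reduces to the fraction of correctly labeled pixels. The names X, y (1 = Fusarium wilt, 0 = healthy), and model_factory are placeholders for the pipelines sketched earlier, not quantities defined in the paper.

```python
# A minimal sketch of the k-fold evaluation (k = 3) with the confusion
# matrix, TPR, TNR, and accuracy; X, y, and model_factory are placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold

def cross_validate(model_factory, X, y, k=3):
    cm = np.zeros((2, 2), dtype=int)
    for tr, te in StratifiedKFold(n_splits=k, shuffle=True, random_state=0).split(X, y):
        model = model_factory().fit(X[tr], y[tr])
        cm += confusion_matrix(y[te], model.predict(X[te]), labels=[0, 1])
    (tn, fp), (fn, tp) = cm
    return {"TPR": tp / (tp + fn),            # infected correctly detected
            "TNR": tn / (tn + fp),            # healthy correctly passed
            "accuracy": (tp + tn) / cm.sum(),
            "confusion_matrix": cm}

def pixel_segmentation_accuracy(pred_mask, gt_mask):
    """PSA [Eq. (5)]: fraction of pixels whose predicted label matches ground truth."""
    return float(np.mean(pred_mask == gt_mask))
```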
4. Experimental Results and Discussion

4.1. Radish Field Classification Results

Using dataset1 (radish ROIs: 634, bare ground ROIs: 580, and mulching film ROIs: 520), radish field classification was performed. Table 2 describes the classification results. The experimental results suggest that our model can determine the class labels of the radish, bare ground, and mulching film ROIs with high accuracy. It is notable that we obtained the highest classification performance (99.7%) for the radish ROIs. of the bare ground and mulching film ROIs were misclassified.

Table 2. Radish field classification performance.
Note: Data are the number of predicted regions per class, and data in parentheses are the rate of predicted regions per class.

4.2. Radish Field Segmentation Results

The radish field segmentation results (PSA) with differing choices of the number of clusters ($k$) in $k$-means clustering are shown in Table 3. As $k$ increases, the overall PSA increases up to , but when , it gradually decreases. This may be ascribable to the size of the clusters: as $k$ increases, the size of each cluster decreases, and the features computed from the smaller clusters may not be robust enough to provide accurate classification. With , PSA was achieved for radish, bare ground, and mulching film. The experimental results suggest that is the optimal number of clusters for radish field segmentation.

Table 3. Radish field segmentation performance.
Note: Data are the PSA. $k$ is the number of clusters in $k$-means clustering.

Figure 6 shows the segmentation results with . Regions corresponding to radish, bare ground, and mulching film are, in general, correctly classified. However, misclassified regions are also observed (Fig. 7). These include withered radishes that are mainly brown in color. Due to their similarity in color to bare ground, these regions were clustered together with bare ground by $k$-means clustering; that is, the error is caused not by the radish field classifier but by the clustering method, which, as described in Sec. 3.2, is based on three color channels only.

4.3. Fusarium Wilt of Radish Classification Results

In Table 4, we demonstrate the performance of Fusarium wilt of radish classification. The classification accuracy was measured via k-fold cross-validation (k = 3) on dataset2 (healthy radish ROIs: 1158 and Fusarium wilt of radish ROIs: 904). We first performed radish field segmentation and discarded ROIs that contain of radishes. Then, our CNN model distinguished Fusarium wilt of radish from healthy radish, achieving an accuracy of 93.3%; the TPR and TNR were 87.2% and 98.0%, respectively. We note that the image patches may include regions of differing class labels due to the image patch generation process (Sec. 3.3.2); for example, a Fusarium wilt of radish ROI could include part of a healthy radish. This may have an adverse effect on the performance of Fusarium wilt of radish classification. A finer training strategy utilizing the exact regions would aid in improving the overall performance of our method.

Table 4. Fusarium wilt of radish classification performance.
Note: Data are the number of predicted regions per class, and data in parentheses are the rate of predicted regions per class.

Furthermore, the performance of our CNN model was superior to that of the standard machine learning algorithm. Using RF, 82.9% accuracy, 83.1% TPR, and 82.8% TNR were obtained in detecting Fusarium wilt of radish (Table 4). This confirms that the CNN model improves on the standard machine learning scheme. In addition, we repeated the above experiment with varying sizes of image patches fed to our CNN model. Resizing the image patches to and , the performance of our CNN model remained consistent (Table 5). These results suggest that our method is insensitive to the size of the input images.

Table 5. Fusarium wilt of radish classification performance with varying image sizes.
Note: Data are the number of predicted regions per class, and data in parentheses are the rate of predicted regions per class.

Figure 8 shows the detection results for Fusarium wilt of radish. Regions with Fusarium wilt of radish are marked with red circles [Fig. 8(a)], and the region-by-region detection results are provided in Fig. 8(b). The size of the sliding window is , which is about the average size of the Fusarium wilt of radish ROIs. Because the ROIs were drawn on a whole radish or on the infected area of a radish, our method was able to detect the individual infected areas. Overall, the regions of healthy radish and moderate Fusarium wilt of radish were successfully detected by our method. However, regions of severe Fusarium wilt of radish were often missed (Fig. 9). This is mainly due to segmentation failure, i.e., the $k$-means clustering issue mentioned in Sec. 4.2.

5. Conclusions

We have demonstrated an approach that utilizes UAVs and computational techniques to identify Fusarium wilt of radish. Deep learning, in particular, was able to detect Fusarium wilt of radish with high accuracy. The capability to detect Fusarium wilt of radish from UAVs offers great potential for reducing the effort and cost of managing and preventing the disease as well as for improving crop yield. Our method may also be applicable to other crops and plants, since Fusarium wilt is a common vascular disease of plants, including tomato, tobacco, and banana.

This study has several limitations. First, the performance of our methods was evaluated via cross-validation; a validation study on an extended dataset would further ensure the robustness of our methods. Second, only RGB images were considered. For crop monitoring and management, infrared images are often employed; developing a methodology that combines RGB and infrared images may further improve the performance of our methods. Third, severe Fusarium wilt of radish was often missed; advances in segmentation methods would lead to improved detection accuracy. Last, the severity of Fusarium wilt of radish was not considered. Depending on the level of severity, the plan for controlling and preventing the disease may differ. Further study will be conducted to tackle these limitations, to improve the accuracy and robustness of the detection, and to facilitate efficient and effective monitoring and prevention of Fusarium wilt of radish.

Acknowledgments

This work was supported by the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture, Forestry, and Fisheries (IPET) through the Agri-Bio Industry Technology Development Program, funded by the Ministry of Agriculture, Food, and Rural Affairs (MAFRA) (316033-04-2-SB030).

References

1. M. Mace, Fungal Wilt Diseases of Plants, Elsevier (2012).
2. L. Kumar et al., "Review of the use of remote sensing for biomass estimation to support renewable energy generation," J. Appl. Remote Sens. 9(1), 097696 (2015). http://dx.doi.org/10.1117/1.JRS.9.097696
3. S. Candiago et al., "Evaluating multispectral images and vegetation indices for precision farming applications from UAV images," Remote Sens. 7(4), 4026–4047 (2015). http://dx.doi.org/10.3390/rs70404026
4. H. Xiang and L. Tian, "Development of a low-cost agricultural remote sensing system based on an autonomous unmanned aerial vehicle (UAV)," Biosyst. Eng. 108(2), 174–190 (2011). http://dx.doi.org/10.1016/j.biosystemseng.2010.11.010
5. A. Matese et al., "Intercomparison of UAV, aircraft and satellite remote sensing platforms for precision viticulture," Remote Sens. 7(3), 2971–2990 (2015). http://dx.doi.org/10.3390/rs70302971
6. S. R. Herwitz et al., "Imaging from an unmanned aerial vehicle: agricultural surveillance and decision support," Comput. Electron. Agric. 44(1), 49–61 (2004). http://dx.doi.org/10.1016/j.compag.2004.02.006
7. A. Puri, "A survey of unmanned aerial vehicles (UAV) for traffic surveillance," (2005).
8. D. W. Casbeer et al., "Forest fire monitoring with multiple small UAVs," in Proc. of the American Control Conf., 3530–3535 (2005). http://dx.doi.org/10.1109/ACC.2005.1470520
9. S. Waharte and N. Trigoni, "Supporting search and rescue operations with UAVs," in Int. Conf. on Emerging Security Technologies (EST), 142–147 (2010). http://dx.doi.org/10.1109/EST.2010.31
10. D. D. W. Ren, S. Tripathi, and L. K. B. Li, "Low-cost multispectral imaging for remote sensing of lettuce health," J. Appl. Remote Sens. 11(1), 016006 (2017). http://dx.doi.org/10.1117/1.JRS.11.016006
11. S. Nebiker et al., "Light-weight multispectral UAV sensors and their capabilities for predicting grain yield and detecting plant diseases," ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. XLI-B1, 963–970 (2016). http://dx.doi.org/10.5194/isprsarchives-XLI-B1-963-2016
12. T. H. Jaware, R. D. Gadgujar, and P. G. Patil, "Crop disease detection using image segmentation," in National Conf. on Advances in Communication and Computing, World Journal of Science and Technology, 190–194 (2012).
13. J. K. Patil and R. Kumar, "Advances in image processing for detection of plant diseases," J. Adv. Bioinf. Appl. Res. 2(2), 135–141 (2011).
14. T. Rumpf et al., "Early detection and classification of plant diseases with support vector machines based on hyperspectral reflectance," Comput. Electron. Agric. 74(1), 91–99 (2010). http://dx.doi.org/10.1016/j.compag.2010.06.009
15. M. K. Tripathi and D. D. Maktedar, "Recent machine learning based approaches for disease detection and classification of agricultural products," in Int. Conf. on Computing Communication Control and Automation (ICCUBEA) (2016). http://dx.doi.org/10.1109/ICCUBEA.2016.7860043
16. D. Liu, X.-A. Zeng, and D.-W. Sun, "Recent developments and applications of hyperspectral imaging for quality evaluation of agricultural products: a review," Crit. Rev. Food Sci. Nutr. 55(12), 1744–1757 (2015). http://dx.doi.org/10.1080/10408398.2013.777020
17. A. Rocha et al., "Automatic fruit and vegetable classification from images," Comput. Electron. Agric. 70(1), 96–104 (2010). http://dx.doi.org/10.1016/j.compag.2009.09.002
18. A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems (2012).
19. O. Russakovsky et al., "ImageNet large scale visual recognition challenge," Int. J. Comput. Vision 115(3), 211–252 (2015). http://dx.doi.org/10.1007/s11263-015-0816-y
20. Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature 521(7553), 436–444 (2015). http://dx.doi.org/10.1038/nature14539
21. S. P. Mohanty, D. P. Hughes, and M. Salathé, "Using deep learning for image-based plant disease detection," Front. Plant Sci. 7, 1419 (2016). http://dx.doi.org/10.3389/fpls.2016.01419
22. G. Hinton et al., "Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups," IEEE Signal Process. Mag. 29(6), 82–97 (2012). http://dx.doi.org/10.1109/MSP.2012.2205597
23. E. Gawehn, J. A. Hiss, and G. Schneider, "Deep learning in drug discovery," Mol. Inf. 35(1), 3–14 (2016). http://dx.doi.org/10.1002/minf.v35.1
24. T. Ojala, M. Pietikainen, and T. Maenpaa, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Trans. Pattern Anal. Mach. Intell. 24(7), 971–987 (2002). http://dx.doi.org/10.1109/TPAMI.2002.1017623
25. P. Vincent et al., "Extracting and composing robust features with denoising autoencoders," in Proc. of the 25th Int. Conf. on Machine Learning (2008).
26. P. Vincent et al., "Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion," J. Mach. Learn. Res. 11, 3371–3408 (2010).
27. K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," CoRR (2014).
28. Z. Guo et al., "Local directional derivative pattern for rotation invariant texture classification," Neural Comput. Appl. 21(8), 1893–1904 (2012). http://dx.doi.org/10.1007/s00521-011-0586-6
29. G. Csurka et al., "What is a good evaluation measure for semantic segmentation?," in Proc. British Machine Vision Conf. (2013).
Biography

Jin Gwan Ha received his BS degree in computer science in 2015 from Sejong University, Seoul, South Korea. He is currently pursuing his MS degree in computer science and engineering at Sejong University. He joined the Computer Vision Pattern Recognition Laboratory (CVPR Lab) at the end of 2013. His current research interests include computer vision and artificial intelligence.

Hyeonjoon Moon is a professor and chairman in the Department of Computer Science and Engineering at Sejong University. He received his BS degree in electronics and computer science and engineering from Korea University in 1990. He received his MS degree and PhD in electrical and computer engineering from the State University of New York at Buffalo in 1992 and 1999, respectively. His current research interests include image processing, biometrics, artificial intelligence, and machine learning.

Jin Tae Kwak is an assistant professor in the Department of Computer Science and Engineering at Sejong University, Seoul, South Korea. He received his BS degree in electrical engineering from Korea University, Seoul, in 2005, his MS degree in electrical and computer engineering from Purdue University, Indiana, USA, in 2007, and his PhD in computer science from the University of Illinois at Urbana-Champaign, Illinois, USA, in 2012. His current research interests include medical imaging, machine learning, pattern recognition, deep learning, and image processing.

Syed Ibrahim Hassan received his BS degree in computer science in 2015 from Quaid-E-Awam University of Engineering, Science and Technology, Sindh, Pakistan. He is currently pursuing his MS degree in computer science and engineering at Sejong University, Seoul, South Korea. He joined the CVPR Lab in September 2016. His current research interests include computer vision, deep learning, and natural language processing.

Minh Dang received his BS degree in information systems in 2016 from the University of Information Technology, HCMC, Vietnam. He is currently pursuing his MS degree in computer science and engineering at Sejong University, Seoul, South Korea. He joined the CVPR Lab at the beginning of 2017. His current research interests include computer vision, data mining, and natural language processing.

O New Lee is a researcher in the Department of Bioresource Engineering at Sejong University. She received her BS degree in horticultural science from Korea University, Seoul, South Korea, in 1997. She received her MS degree and PhD from the Department of Agricultural and Environmental Biology at the University of Tokyo in 2000 and 2004, respectively. Her current research interests include plant breeding, marker-assisted selection, and genome-wide association studies.

Han Yong Park is an associate professor in the Department of Bioresource Engineering at Sejong University, which he joined in 2011. He received his BS degree in horticultural science from Yeungnam University in 1989. He received his MS degree and PhD from the Department of Horticultural Science at Seoul National University in 1992 and 1995, respectively. He was a senior researcher, as a recognized breeder of Raphanus sativus, at the vegetable breeding laboratories of the renowned seed companies Hungnong, Seminis, and Monsanto in Korea for over 20 years. He has registered and protected 27 radish varieties in Korea and has experience exporting those seeds to Japan and China.