Automatic identification of clue cells in microscopic leucorrhea images provides important information for evaluating gynecological diseases. Traditional manual microscopic examination of Gram-stained vaginal smears, adopted by most hospitals for identifying clue cells, is both complex and time-consuming. To address these problems, this paper proposes a machine learning-based method for automatically identifying clue cells in microscopic leucorrhea images. First, the Otsu threshold method is used during image preprocessing to segment regions of interest (ROI) according to the morphological features of clue cells. Then, Gabor, HOG, and GLCM texture features are extracted to describe the irregular edges and rough surfaces of clue cells. Finally, an SVM classifier with a hybrid kernel function, a linearly weighted combination of RBF and polynomial kernels, is trained to identify clue cells rapidly and conveniently. In experiments, the method using GLCM texture features and the hybrid-kernel SVM achieved 94.64% accuracy and 94.92% recall, outperforming methods using Gabor or HOG texture features and a single-kernel SVM.
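A hybrid SVM kernel of the kind described, a linear weighting of an RBF kernel and a polynomial kernel, can be sketched with scikit-learn's custom-kernel support. The weight `w`, the kernel parameters, and the toy feature vectors below are illustrative assumptions, not the paper's values:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def hybrid_kernel(X, Y, w=0.5, gamma=0.5, degree=2):
    """Linearly weighted combination of RBF and polynomial kernels."""
    return w * rbf_kernel(X, Y, gamma=gamma) + (1 - w) * polynomial_kernel(X, Y, degree=degree)

# Toy training data: two well-separated clusters standing in for
# texture feature vectors of clue cells vs. normal epithelial cells.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0],
              [4.0, 4.0], [4.0, 5.0], [5.0, 4.0], [5.0, 5.0]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

clf = SVC(kernel=hybrid_kernel)  # SVC accepts a callable returning the Gram matrix
clf.fit(X, y)
print(clf.predict([[0.5, 0.5], [4.5, 4.5]]))  # expected: [0 1]
```

In practice `w`, `gamma`, and `degree` would be tuned by cross-validation over the extracted GLCM features.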
Fecal microscopic examination is a routine test that determines whether the digestive system is normal by analyzing formed elements. In the traditional method, a doctor observes sample smears through a microscope eyepiece; efficiency is low, and results depend on the doctor's level of experience. Intelligent identification of formed elements is therefore the main development direction for current fully automated fecal analyzers. Unlike blood or urine, human fecal samples contain many impurities and exhibit severe sample stratification, so image quality assessment methods have difficulty finding the sharpest image, which degrades the effectiveness of intelligent identification algorithms. In this paper, microscopic-image autofocus for human fecal samples is studied and divided into two stages: location and photographing. In the location stage, an SMD algorithm determines the sample photographing interval. In the photographing stage, the microscope platform moves in a zigzag pattern within that interval to acquire, for each field of view, an image sequence at successive focal lengths. To accurately find the sharpest image in each sequence, we compared 31 types of no-reference image quality assessment methods, based on entropy, gradient, color, edge, contrast, similarity, and transform-domain measures, against human visual judgment, and finally chose an improved Local TV algorithm. Experimental results show that the improved Local TV algorithm is insensitive to changes in sample concentration, exhibits good robustness, and achieves an accuracy of 94.26%. These results offer a useful reference for other focusing problems in complex microscopic images.
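The SMD (sum-modulus-difference) focus measure used in the location stage, and the selection of the sharpest frame from a focal stack, can be sketched as follows. This is the generic SMD metric, not the paper's improved Local TV variant, and the synthetic images are illustrative:

```python
import numpy as np

def smd(img):
    """Sum Modulus Difference focus score: total absolute gray-level
    difference between vertically and horizontally adjacent pixels.
    Sharper (higher-gradient) images score higher."""
    img = img.astype(np.float64)
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def pick_sharpest(stack):
    """Return the index of the sharpest frame in a focal-length image sequence."""
    return max(range(len(stack)), key=lambda i: smd(stack[i]))

# Synthetic check: a high-contrast noise image vs. a smoothed copy
# standing in for in-focus vs. out-of-focus frames.
rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
blurred = sharp.copy()
for _ in range(3):  # crude cross-shaped blur via shifted averages
    blurred = (blurred
               + np.roll(blurred, 1, axis=0) + np.roll(blurred, -1, axis=0)
               + np.roll(blurred, 1, axis=1) + np.roll(blurred, -1, axis=1)) / 5.0
print(pick_sharpest([blurred, sharp]))  # expected: 1
```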
With the development of the liquid crystal display (LCD) module industry, LCD modules have become larger and more precise, imposing harsh imaging requirements on automated optical inspection (AOI). Here, we report a high-resolution, clearly focused imaging optomechatronic system for precise AOI of LCD module bonding. It achieves high-resolution imaging using a line scan camera (LSC) triggered by a linear optical encoder, together with self-adaptive focusing across the whole large imaging region using the LSC and a laser displacement sensor, which relaxes the machining, assembly, and motion-control requirements of AOI devices. Results show that this system directly achieves clearly focused imaging for AOI of large LCD module bonding with 0.8 μm image resolution, a 2.65 mm scan imaging width, and, in principle, no limit on imaging width. These capabilities are significant for AOI in the LCD module industry and in other fields that require imaging large regions at high resolution.
Anisotropic conductive film (ACF) bonding is widely used in the liquid crystal display (LCD) industry to implement circuit connections between screens and flexible printed circuits or integrated circuits. The conductive microspheres in ACF are a key factor in LCD quality, because their quantity and shape deformation rate affect the interconnection resistance. Although this issue has been studied extensively in prior work, quick and accurate methods for inspecting ACF bonding quality are still missing from the actual production process. We propose a method to inspect ACF bonding effectively using automated optical inspection, in three steps. First, it acquires images of the detection zones using a differential interference contrast (DIC) imaging system. Second, it identifies the conductive microspheres and their shape deformation rate by quantitative analysis of the DIC image characteristics. Finally, it inspects ACF bonding using a neural network trained with back propagation. The results show a miss rate below 0.1% and a false inspection rate below 0.05%.
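The final classification step can be sketched with a small back-propagation-trained network operating on per-zone features. The circularity-based deformation measure, the two-feature representation (microsphere count and mean deformation rate), and the synthetic data are all illustrative assumptions, not the paper's actual feature set:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def deformation_rate(area, perimeter):
    """Hypothetical deformation measure via circularity: 0 for a perfect
    circle, increasing toward 1 as the contour deforms."""
    return 1.0 - 4.0 * np.pi * area / (perimeter ** 2)

# Synthetic per-zone features: [microsphere count, mean deformation rate].
rng = np.random.default_rng(1)
good = np.column_stack([rng.normal(40, 3, 50), rng.normal(0.30, 0.03, 50)])
bad = np.column_stack([rng.normal(10, 3, 50), rng.normal(0.05, 0.03, 50)])
X = np.vstack([good, bad])
y = np.array([1] * 50 + [0] * 50)  # 1 = acceptable bonding

# A small MLP trained by back propagation, standing in for the paper's classifier.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict([[42, 0.31], [9, 0.04]]))  # expected: [1 0]
```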
Automatic identification of fungi in microscopic fecal images provides important information for evaluating digestive diseases. To date, diagnosis is primarily performed by manual techniques, whose accuracy depends on the operator's expertise and subjective factors. The proposed system automatically identifies fungi in microscopic fecal images that also contain other cells and impurities in complex environments. We segment each image twice to obtain the correct area of interest, and select ten features, including circle number, concavity points, and other basic features, to filter candidate fungi. An artificial neural network (ANN) system then identifies the fungi: the first stage (ANN-1) processes features from five images at different focal lengths, and the second stage (ANN-2) identifies fungi from the ANN-1 output values. Using images at different focal lengths improves the identification result. The system output accurately indicates whether an image contains fungi and, if so, counts the number of each fungus type.
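The two-stage cascade can be sketched as follows: a first network scores each of the five focal-length images from its ten features, and a second network fuses those five scores into a field-level decision. The synthetic data, labeling rules, and network sizes are assumptions for illustration only:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
N_FOCAL, N_FEAT = 5, 10  # five focal planes; ten shape features per image

# Stage 1: per-image features -> fungus present in that image?
# (Synthetic rule standing in for real labeled training data.)
X1 = rng.normal(size=(200, N_FEAT))
y1 = (X1[:, 0] + X1[:, 1] > 0).astype(int)
ann1 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X1, y1)

def focal_scores(field_features):
    """ANN-1 output: one fungus-probability score per focal plane.
    `field_features` has shape (N_FOCAL, N_FEAT)."""
    return ann1.predict_proba(field_features)[:, 1]

# Stage 2: fuse the five ANN-1 scores into a field-of-view decision.
X2 = np.stack([focal_scores(rng.normal(size=(N_FOCAL, N_FEAT))) for _ in range(200)])
y2 = (X2.mean(axis=1) > 0.5).astype(int)  # synthetic field-level label
ann2 = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X2, y2)
```

The design point here is that the fusion stage sees only the five stage-1 outputs, so evidence from out-of-focus planes can still contribute to the final decision.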
In this paper, we describe in detail the hierarchical model and X (HMAX) model of Riesenhuber and Poggio. The HMAX model, which accounts for visual processing and makes plausible predictions founded on prior information, is built by alternating simple-cell and complex-cell layers. We summarize the principal facts about the ventral visual stream and argue that a hierarchy of brain areas mediates object recognition in visual cortex. Then, to obtain object features, we implement Gabor filters and alternately apply template matching and maximum operations to the input image. Finally, using target feature saliency and position information, we introduce a novel algorithm for object recognition in clutter based on the HMAX architecture. The improved model is competitive with current recognition algorithms on standard databases, such as the UIUC car database and the Caltech101 database, which includes a large number of diverse categories. We also show that combining the spatial position information of parts with feature fusion can further improve the recognition rate. The experimental results demonstrate that the proposed approach recognizes objects more precisely and outperforms the standard model.
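The alternation of template matching (Gabor filtering, the S1 layer) and maximum operations (local max pooling, the C1 layer) can be sketched as follows. The filter sizes, orientations, and pooling window are illustrative choices, not the paper's exact HMAX parameters:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=11, wavelength=6.0, theta=0.0, sigma=3.0, gamma=0.5):
    """Gabor filter of the kind used for the S1 (simple cell) layer."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / wavelength)
    g -= g.mean()                 # zero DC response
    return g / np.linalg.norm(g)

def s1_c1(image, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4), pool=8):
    """S1: template matching (filtering) at several orientations.
       C1: max over local neighborhoods for position tolerance."""
    c1 = []
    for theta in thetas:
        s1 = np.abs(convolve2d(image, gabor_kernel(theta=theta), mode='valid'))
        h, w = (s1.shape[0] // pool) * pool, (s1.shape[1] // pool) * pool
        blocks = s1[:h, :w].reshape(h // pool, pool, w // pool, pool)
        c1.append(blocks.max(axis=(1, 3)))  # the maximum (complex cell) operation
    return np.stack(c1)

# A vertical grating (intensity varying along x) should drive the
# theta=0 filter most strongly.
img = np.cos(2 * np.pi * np.arange(64) / 6.0)[None, :].repeat(64, axis=0)
resp = s1_c1(img)
print(resp.reshape(4, -1).mean(axis=1).argmax())  # expected: 0
```

Stacking further template-matching (S2) and max (C2) stages on these C1 maps yields the position- and scale-tolerant features the full HMAX architecture uses for recognition.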