Pedestrians involved in roadway accidents account for nearly 12 percent of all traffic fatalities and 59,000 injuries each year. Most injuries occur when pedestrians attempt to cross roads, and accident rates differ notably between midblock locations and intersections. Collecting data on pedestrian behavior is a time-consuming manual process that is prone to error, which leads to a lack of quality information to guide the proper design of lane markings and traffic signals that enhance pedestrian safety. Researchers at the Georgia Tech Research Institute are developing and testing an automated system that can be rapidly deployed to collect data supporting the analysis of pedestrian behavior at intersections and midblock crossings, with and without traffic signals. The system analyzes the collected video data to automatically identify and count pedestrians and characterize their behavior. It consists of a mobile trailer with four high-definition pan-tilt cameras for data collection; the software is custom designed and uses state-of-the-art commercial pedestrian detection algorithms. We will present the system hardware and software design, challenges, and results from preliminary system testing. Preliminary results indicate the system can provide representative quantitative data on pedestrian motion more efficiently than current techniques.
Efficient deboning is key to optimizing production yield (maximizing the amount of meat removed from a chicken frame while reducing the presence of bones). Many processors evaluate the efficiency of their deboning lines through manual yield measurements, which involve using a special knife to scrape the chicken frame for any remaining meat after it has been deboned. Researchers with the Georgia Tech Research Institute (GTRI) have developed an automated vision system for estimating this yield loss by correlating image characteristics with the amount of meat left on a skeleton. The yield loss estimation is accomplished by the system's image processing algorithms, which correlate image intensity with meat thickness and calculate the total volume of meat remaining. The team has established a correlation between transmitted light intensity and meat thickness with an R² of 0.94. Employing a special illuminated cone and targeted software algorithms, the system can make measurements in under a second and achieves up to a 90 percent correlation with yield measurements performed manually. The same system is also able to determine the probability of bone chips remaining in the output product: it detects the presence or absence of clavicle bones with an accuracy of approximately 95 percent and fan bones with an accuracy of approximately 80 percent. This paper describes in detail the approach and design of the system, presents results from field testing, and highlights the potential benefits that such a system can provide to the poultry processing industry.
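The intensity-to-thickness correlation described above can be illustrated with a minimal sketch: fit a least-squares line mapping transmitted-light intensity to meat thickness, then integrate per-pixel thickness over pixel area to estimate remaining volume. The calibration values and pixel area below are illustrative assumptions, not plant data, and the real system's algorithms are not public.

```python
# Hedged sketch: linear intensity -> thickness map, then volume by summation.
# All numbers here are made-up illustrations, not GTRI calibration data.

def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b; also returns R^2."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

def estimate_volume(intensity_image, a, b, pixel_area_mm2):
    """Sum per-pixel thickness (mm) times pixel area to get volume in mm^3."""
    total = 0.0
    for row in intensity_image:
        for i in row:
            thickness = max(0.0, a * i + b)  # clamp: no negative thickness
            total += thickness * pixel_area_mm2
    return total
```

In practice the calibration pairs would come from measured (intensity, thickness) samples, and the reported R² of 0.94 corresponds to the value this kind of fit returns.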
One technique to better utilize existing roadway infrastructure is the use of high-occupancy vehicle (HOV) and high-occupancy toll (HOT) lanes. Technology to monitor the use of these lanes would assist managers and planners in efficient roadway operation, yet no available occupancy detection systems perform at acceptable levels of accuracy in permanent field installations. The main goal of this research effort is to assess the possibility of determining passenger occupancy with imaging technology. This is especially challenging because of recent changes in the glass types used by car manufacturers to reduce the solar heat load on vehicles. We describe in this research a system that uses multi-plane imaging with appropriate wavelength selection for sensing passengers in the front and rear seats of vehicles traveling in HOV/HOT lanes. The process of determining the required geometric relationships, the choice of illumination wavelengths, and the appropriate sensors are described, taking driver safety considerations into account. The paper also covers the design and implementation of the software for performing window detection and people counting using both image processing and machine learning techniques. The integration of the final system prototype is described, along with the performance of the system operating at a representative location.
Bones continue to be a problem of concern for the poultry industry. Most further-processed products begin with the requirement for raw material with minimal bones, so the current process for generating deboned product requires systems for monitoring and inspecting the output product. Today's detection systems are either people palpating the product or X-ray machines, and the performance of these inspection techniques is below the desired levels of accuracy while remaining costly. We propose a technique for monitoring bones that conducts the inspection within the deboning process itself, leaving enough time to take action and reduce the probability that bones will end up in the final product. This is accomplished by developing active cones with built-in illumination to backlight the cage (skeleton) on the deboning line: if the bones of interest are still on the cage, then they are not in the associated meat. This approach also makes it possible to practice process control on the deboning operation and keep the process under control, as opposed to the current arrangement, where detection is done post-production and does not easily present the opportunity to adjust the process. The proposed approach shows overall accuracies of about 94% for detection of the clavicle.
Most cutting and deboning operations in meat processing require that accurate cuts be made to obtain maximum yield and ensure food safety. This is a significant concern for purveyors of deboned product, and the task is made more difficult by the variability present in most natural products.
The specific application of interest in this paper is the production of deboned poultry breast. This is typically obtained from a cut of the broiler called a 'front half' that includes the breast and the wings. The deboning operation typically consists of a cut that starts at the shoulder joint and then continues along the scapula. Attentive humans with training do a very good job of making this cut. The breast meat is then removed by pulling on the wings. Inaccurate cuts lead to poor yield (amount of boneless meat obtained relative to the weight of the whole carcass) and increase the probability that bone fragments might end up in the product. As equipment designers seek to automate the deboning operation, the cutting task has been a significant obstacle to developing automation that maximizes yield without generating unacceptable levels of bone fragments.
The current solution is to sort the bone-in product into different weight ranges and then adjust the deboning machines to the average of each range. We propose an approach for obtaining key cut points by extrapolation from external reference points based on the anatomy of the bird. We show that this approach can be implemented using a stereo imaging system and that the accuracy in locating the cut points of interest is significantly improved. This should result in more accurate cuts and, concomitantly, improved yield, while reducing the incidence of bones. We also believe the approach could be extended to the processing of other species.
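The two ingredients of the proposed approach, recovering 3D positions of external reference points from a stereo pair and extrapolating a hidden cut point from them, can be sketched as follows. This assumes a rectified stereo geometry and a fixed anatomical extrapolation ratio; the camera parameters and the ratio are illustrative, not values from the study.

```python
# Hedged sketch: rectified-stereo triangulation of reference points, then
# placement of a cut point along/beyond the line between them.
# focal_px, baseline_mm, and the 1.25 ratio are illustrative assumptions.

def triangulate(xl, xr, y, focal_px, baseline_mm):
    """Rectified-stereo triangulation from matched pixel coordinates.

    xl, xr : horizontal pixel coordinates in left and right images
    y      : shared vertical pixel coordinate (rectified pair)
    Returns (X, Y, Z) in mm, with Z = f * B / disparity.
    """
    disparity = xl - xr
    Z = focal_px * baseline_mm / disparity
    X = xl * Z / focal_px
    Y = y * Z / focal_px
    return (X, Y, Z)

def extrapolate_cut_point(p0, p1, ratio=1.25):
    """Extend the segment p0 -> p1 past p1 by (ratio - 1) of its length,
    standing in for an anatomy-based offset from external landmarks."""
    return tuple(a + ratio * (b - a) for a, b in zip(p0, p1))
```

With per-bird reference points measured this way, the cut point adapts to each carcass instead of to a weight-range average.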
Researchers at the Georgia Tech Research Institute designed a vision inspection system for poultry kill line sorting with the potential for process control at various points throughout a processing facility. This system has been operating successfully in a plant for over two and a half years and has been shown to provide multiple benefits. With the introduction of HACCP-Based Inspection Models (HIMP), the opportunity for automated inspection systems to emerge as viable alternatives to human screening is promising. As more plants move to HIMP, these systems have great potential for augmenting a processing facility's visual inspection process. This will help maintain a more consistent and potentially higher throughput while helping the plant remain within the HIMP performance standards.
In recent years, several vision systems have been designed to analyze the exterior of a chicken and are capable of identifying Food Safety 1 (FS1) type defects under HIMP regulatory specifications. This means that a reliable vision system can be used in a processing facility as a carcass sorter to automatically detect and divert product that is not suitable for further processing. This improves the evisceration line efficiency by creating a smaller set of features that human screeners are required to identify. This can reduce the required number of screeners or allow for faster processing line speeds.
In addition to identifying FS1 category defects, the Georgia Tech vision system can also identify multiple "Other Consumer Protection" (OCP) category defects such as skin tears, bruises, broken wings, and cadavers. Monitoring this data in an almost real-time system allows the processing facility to address anomalies as soon as they occur. The Georgia Tech vision system can record minute-by-minute averages of the following defects: Septicemia Toxemia, cadaver, over-scald, bruises, skin tears, and broken wings. In addition to these defects, the system also records the length and width information of the entire chicken and different parts such as the breast, the legs, the wings, and the neck. The system also records average color and miss-hung birds, which can cause problems in further processing. Other relevant production information is also recorded, including truck arrival and offloading times, catching crew and flock serviceman data, the grower, the breed of chicken, and the number of dead-on-arrival (DOA) birds per truck.
Several interesting observations from the Georgia Tech vision system, which has been installed in a poultry processing plant for several years, are presented. Trend analysis has been performed on the performance of the catching crews and flock servicemen, and on the characteristics of the processed chickens as they relate to bird dimensions and equipment settings in the plant. The results have allowed researchers and plant personnel to identify potential areas for improvement in the processing operation, which should result in improved efficiency and yield.
The U.S. demand for deboned chicken has risen greatly in the past 5 years, with the expectation that this demand will only continue at an accelerated level. The standard inspection process for bones in meat is for workers to manually feel for bones. It is clear that this time-consuming manual inspection method is insufficient to meet the increasing demand for deboned meat products. Georgia Tech Electrical Engineering faculty and Research Scientists, in conjunction with a leading x-ray equipment manufacturer, are working on the development of a system that fuses information from visible images and x-ray images to enhance the accuracy of detection. Currently there are some bones that x-ray systems have difficulty detecting; these are usually relatively thin and located near the surface of the meat. A primary example is the fan bone (so called because of its shape). We will describe and present results from work geared toward the development of an integrated system that fuses visible and x-ray information. Significant benefits to the poultry industry are anticipated in terms of reduced processing costs, improved inspection performance, and increased throughput. Additionally, generic aspects of the proposed technologies may be applicable to other food processing industries.
The application of machine vision systems to industrial manufacturing and inspection processes has motivated the development of intelligent yet flexible decision-making processes. When working with highly uniform product, most quality or inspection decisions can be based on straightforward but rigid rules once the relevant features have been extracted from the image. However, when the product is highly nonuniform, other techniques must be applied to allow for product variability while still identifying and classifying defects. This paper investigates methods for accomplishing this based on soft computing. A discussion of the general approach and a specific example of an integrated system for product quality determination are presented. This system combines color image processing and feature extraction with neural network classifiers and fuzzy-logic-based decision outputs to allow maximum flexibility in accommodating product variability while still maintaining quality standards. The techniques for optimizing the classification parameters and determining the fuzzy logic membership functions and user rules are presented.
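The fuzzy-logic output stage mentioned above can be illustrated with a minimal sketch: triangular membership functions turn a crisp defect-severity score into graded quality labels instead of a hard accept/reject cut. The break points, score range, and label names below are invented for illustration and are not the system's actual rule base.

```python
# Hedged illustration of a fuzzy output stage with triangular memberships.
# Break points (0-100 severity) and grade names are made-up examples.

def triangular(x, a, b, c):
    """Triangular membership: rises over [a, b], falls over [b, c],
    zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def quality_memberships(severity):
    """Map a crisp 0-100 severity score to a degree of membership in each
    quality grade; overlapping grades express borderline product."""
    return {
        "accept": triangular(severity, -1, 0, 40),
        "trim":   triangular(severity, 20, 50, 80),
        "reject": triangular(severity, 60, 100, 101),
    }
```

A severity of 30, for example, belongs partly to "accept" and partly to "trim", which is exactly the kind of graded output that accommodates natural product variability better than a rigid rule.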
Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper briefly describes a couple of the consumer digital camera standards and then discusses some of the advantages and pitfalls of integrating both USB and FireWire cameras into computer/machine vision applications.
The design of robust machine vision algorithms is one of the most difficult parts of developing and integrating automated systems. Historically, most of the techniques have been developed using ad hoc methodologies. The problem is more severe in the area of natural/biological products, where it has been difficult to capture and model the natural variability to be expected in the products. This presents difficulty in performing quality and process control in the meat, fruit, and vegetable industries; while some systems have been introduced, they do not adequately address the wide range of needs. This paper proposes an algorithm development technique that utilizes models of the human visual system. It addresses the subset of problems that humans perform well but that have proven difficult to automate with standard machine vision techniques. The basis of the technique evaluation is the Georgia Tech Vision model. The approach demonstrates a high level of accuracy in its ability to solve difficult problems. This paper will present the approach, the results, and possibilities for implementation.
Visual quality control in many food processing operations continues to be a manual, difficult, and tedious task. Computer/machine vision systems offer a solution, but the development of effective algorithms that can accommodate the natural variability of food products has proved problematic. This paper examines and compares three techniques for processing multi-spectral imagery for these applications. One technique is to use artificial neural networks (ANNs). ANNs can be fault tolerant when establishing decision surfaces within the test data and can operate in parallel at high speeds, which makes them well suited to this application. The main drawback of ANNs is their inability to provide a meaningful justification for the decision boundaries they establish when classifying data. Another image processing technique that uses a more deterministic data classification method is vector quantization (VQ). VQ uses a data clustering and splitting algorithm that can be modified to improve speed and accuracy according to the application. In an effort to include all levels of algorithm complexity, a modified thresholding approach is also compared to the more computationally demanding ANN and VQ techniques. The strengths and weaknesses of each of these algorithms are highlighted based on their performance in these domains.
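The simplest and mid-complexity ends of the comparison can be sketched side by side: a fixed threshold against a tiny one-dimensional vector quantizer (a two-codeword k-means) that learns its own decision boundary from the pixel data. This is a minimal illustration of the general techniques, not the paper's specific modified-thresholding or VQ splitting algorithms.

```python
# Hedged sketch: fixed-threshold classification vs. 1-D two-codeword VQ.
# The threshold value and iteration count are illustrative choices.

def threshold_classify(pixels, t=128):
    """Label each pixel 1 (bright class) or 0 (dark class) by a fixed cut."""
    return [1 if p >= t else 0 for p in pixels]

def vq_classify(pixels, iters=20):
    """Two-codeword VQ: alternate nearest-codeword assignment and codeword
    update (Lloyd/k-means iterations), then label by nearest codeword."""
    lo, hi = min(pixels), max(pixels)
    for _ in range(iters):
        groups = ([], [])
        for p in pixels:
            # True (index 1) when p is closer to the high codeword
            groups[abs(p - hi) < abs(p - lo)].append(p)
        if groups[0]:
            lo = sum(groups[0]) / len(groups[0])
        if groups[1]:
            hi = sum(groups[1]) / len(groups[1])
    return [1 if abs(p - hi) < abs(p - lo) else 0 for p in pixels]
```

The trade-off the paper weighs is visible even here: the threshold is fast but brittle when lighting or product shifts, while VQ adapts its boundary to the data at extra computational cost.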
With the current trend of integrating machine vision systems in industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras or fluctuations in lighting become a problem, as each camera usually has a different response. An empirical approach is proposed in which color tile data are acquired using the camera of interest and a mapping to a predetermined reference image is developed using neural networks. A similar analytical approach, based on a rough analysis of the imaging systems, is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera are mapped to correspond to the images of the other prior to performing any processing. Instead of writing separate image processing algorithms for the particular image data being received, the image data are adjusted based on each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted; the image processing algorithms can remain the same because the input data have been adjusted appropriately. The results of utilizing this technique are presented for an inspection application.
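A deliberately simplified stand-in for the learned camera mapping helps make the idea concrete: instead of a neural network, fit an independent gain/offset per color channel from paired tile measurements so the new camera's output is remapped into the reference camera's color space before any downstream processing. This linear, per-channel variant is an assumption for illustration; real sensors exhibit cross-channel and nonlinear effects that motivated the neural approach.

```python
# Hedged sketch: per-channel linear camera-to-camera mapping fit from
# paired color-tile readings. A simplification of the neural mapping.

def fit_channel_map(src_vals, ref_vals):
    """Least-squares gain/offset so gain*src + offset approximates ref,
    from paired readings of the same color tiles on both cameras."""
    n = len(src_vals)
    ms = sum(src_vals) / n
    mr = sum(ref_vals) / n
    var = sum((s - ms) ** 2 for s in src_vals)
    cov = sum((s - ms) * (r - mr) for s, r in zip(src_vals, ref_vals))
    gain = cov / var
    return gain, mr - gain * ms

def remap_pixel(pixel, maps):
    """Apply per-channel (gain, offset) maps to an (R, G, B) pixel so it
    matches the reference camera's response."""
    return tuple(g * v + o for v, (g, o) in zip(pixel, maps))
```

Swapping cameras then only requires refitting the three (gain, offset) pairs from the tile chart; the inspection algorithms downstream stay untouched, which is the design point the abstract argues for.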
Manufacturing processes that utilize natural products as raw materials face additional challenges in the areas of quality control and inspection. This comes about from the natural variability that occurs in the products. Systems to automate this activity have been difficult to design and implement because of algorithm complexity, which drives the computational requirements for real-time execution. This paper describes a technique for recognizing global or systematic defects on poultry carcasses, along with a method for implementing the technique that is capable of executing at a rate of about 180 birds per minute.
We will describe in this paper work geared toward the development of a tool to assist in the visual tasks of grading and inspecting poultry. Extensions to similar activity in other food processing arenas will be discussed. We describe these systems as aids since we believe they will not be able to fully conduct the range of functions required, but could provide capable assistance when functioning in a screening capacity.