KEYWORDS: Sensors, Data modeling, 3D modeling, Space operations, 3D acquisition, Solid modeling, Algorithm development, Satellites, Image processing, Detection and tracking algorithms
Researchers at the Michigan Aerospace Corporation have developed accurate and robust 3-D algorithms for pose determination (position and orientation) of satellites as part of an ongoing effort supporting autonomous rendezvous, docking and space situational awareness activities. 3-D range data from a LAser Detection And Ranging (LADAR) sensor is the expected input; however, the approach is unique in that the algorithms are designed to be sensor-independent. Parameterized inputs allow the algorithms to be readily adapted to any sensor of opportunity. The cornerstone of our approach is the ability to simulate realistic range data that may be tailored to the specifications of any sensor. We were able to modify an open-source raytracing package to produce point-cloud information from which high-fidelity simulated range images are generated. The assumptions made in our experimentation are as follows: 1) we have
access to a CAD model of the target including information about the surface scattering and reflection characteristics of the components; 2) the satellite of interest may appear at any 3-D attitude; 3) the target is not necessarily rigid, but does have a limited number of configurations; and, 4) the target is not obscured in any way and is the only object in the field of view of the sensor. Our pose estimation approach then involves rendering a large number of exemplars (100k to 5M), extracting 2-D (silhouette- and projection-based) and 3-D (surface-based) features, and then training ensembles of decision trees to predict: a) the 4-D regions on a unit hypersphere into which the unit quaternion that represents the
vehicle [QX, QY, QZ, QW] is pointing, and b) the components of that unit quaternion. Results have been quite promising, and the tools and simulation environment developed for this application may also be applied to non-cooperative spacecraft operations, Autonomous Hazard Detection and Avoidance (AHDA) for landing craft, terrain mapping, vehicle guidance, path planning and obstacle avoidance.
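The quaternion-region prediction in step (a) amounts to discretizing the 4-D unit hypersphere into labeled cells. The sketch below is a minimal, hypothetical binning scheme (uniform per-component bins after sign canonicalization), not the authors' actual partition:

```python
import math

def quaternion_region(q, bins_per_axis=2):
    """Map a unit quaternion [qx, qy, qz, qw] to a coarse region index
    on the 4-D unit hypersphere. Since q and -q encode the same rotation,
    the sign is canonicalized (qw >= 0) before binning."""
    n = math.sqrt(sum(c * c for c in q))
    q = [c / n for c in q]                    # normalize to unit length
    if q[3] < 0:
        q = [-c for c in q]                   # canonicalize the sign
    region = 0
    for c in q:
        # Bin each component of [-1, 1] into bins_per_axis cells.
        cell = min(int((c + 1.0) / 2.0 * bins_per_axis), bins_per_axis - 1)
        region = region * bins_per_axis + cell
    return region

print(quaternion_region([0.0, 0.0, 0.0, 1.0]))  # identity rotation -> 15
```

A classifier (e.g. an ensemble of decision trees, as in the paper) can then be trained to predict this region label from the extracted 2-D and 3-D features, with the regression of quaternion components in step (b) refining the estimate within the region.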
This paper discusses a new automated image analysis technique for inspecting and monitoring changes in plastic bumper surfaces during the paint-baking process. This new technique produces excellent performance, and is appropriate for on-line production monitoring as well as laboratory analysis. The objective of this work was to develop an accurate method for determining the paint bake time and temperature at which parts had been treated. This task was accomplished using mathematical morphology to extract differentiating features from samples collected at three magnifications and sending these feature-vectors to a back-propagation neural network for classification.
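Morphological features of the kind described can be illustrated with a simple granulometry: the fraction of foreground surviving a morphological opening at increasing structuring-element sizes. The pure-Python sketch below is illustrative only; the actual features, magnifications, and classifier are those described above.

```python
def erode(img, k=1):
    """Binary erosion with a (2k+1)x(2k+1) square structuring element."""
    h, w = len(img), len(img[0])
    return [[int(all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                     for dy in range(-k, k + 1) for dx in range(-k, k + 1)))
             for x in range(w)] for y in range(h)]

def dilate(img, k=1):
    """Binary dilation with a (2k+1)x(2k+1) square structuring element."""
    h, w = len(img), len(img[0])
    return [[int(any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                     for dy in range(-k, k + 1) for dx in range(-k, k + 1)))
             for x in range(w)] for y in range(h)]

def granulometry_features(img, sizes=(1, 2)):
    """Fraction of foreground surviving an opening (erosion then dilation)
    at each scale -- a simple texture signature in the spirit of
    morphological feature extraction."""
    area = sum(map(sum, img)) or 1
    return [sum(map(sum, dilate(erode(img, k), k))) / area for k in sizes]

# Toy 6x6 image with a 4x4 block of foreground pixels.
img = [[1 if 1 <= y <= 4 and 1 <= x <= 4 else 0 for x in range(6)]
       for y in range(6)]
print(granulometry_features(img))  # -> [1.0, 0.0]
```

Feature vectors of this kind, computed at several magnifications, would then be fed to the back-propagation network for classification.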
A system of programs is described for acquisition, mosaicking, cueing and interactive review of large-scale transmission electron micrograph composite images. This work was carried out as part of a final-phase clinical analysis study of a drug for the treatment of diabetic peripheral neuropathy. More than 500 nerve biopsy samples were prepared, digitally imaged, processed, and reviewed. For a given sample, typically 1000 or more 1.5-megabyte frames were acquired, for a total of between 1 and 2 gigabytes of data per sample. These frames were then automatically registered and mosaicked together into a single virtual image composite, which was subsequently used to perform automatic cueing of axons and axon clusters, as well as review and marking by qualified neuroanatomists. Statistics derived from the review process were used to evaluate the efficacy of the drug in promoting regeneration of myelinated nerve fibers. This effort demonstrates a new, entirely digital capability for doing large-scale electron micrograph studies, in which all of the relevant specimen data can be included at high magnification, as opposed to simply taking a random sample of discrete locations. It opens up the possibility of a new era in electron microscopy--one which broadens the scope of questions that this imaging modality can be used to answer.
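The automatic frame registration at the heart of the mosaicking step can be sketched as a translation search over overlapping frames. The toy function below scores candidate integer shifts by mean squared difference; it is a minimal stand-in for the actual registration algorithm, which is not detailed in the abstract.

```python
def register_offset(a, b, max_shift=2):
    """Estimate the integer (dy, dx) translation aligning frame `b` to
    frame `a` by exhaustive search over small shifts, scoring each shift
    by the mean squared difference over the overlapping region."""
    h, w = len(a), len(a[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err, n = 0.0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        err += (a[y][x] - b[yy][xx]) ** 2
                        n += 1
            if n and err / n < best_err:
                best, best_err = (dy, dx), err / n
    return best

a = [[4 * y + x for x in range(4)] for y in range(4)]
b = [[100, 100, 100, 100]] + a[:3]    # `b` is `a` shifted down one row
print(register_offset(a, b))          # -> (1, 0)
```

In practice, registration over thousands of 1.5-megabyte frames would use correlation in the frequency domain or coarse-to-fine search rather than this brute-force scan.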
Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
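The matching stage described above, in which a query vector is ranked against a lexicon database holding several vectors per word, can be sketched as follows. The distance metric and data layout here are hypothetical; the paper's match score is not specified in the abstract.

```python
def match_word(query_vec, database, top_n=3):
    """Rank lexicon words by how closely their stored feature vectors
    match the query vector. Each word may appear with several vectors
    (one per font or noise model); a word scores as its best vector.
    Ranking here uses smallest Euclidean distance (a hypothetical
    stand-in for the system's match score)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    scores = {}
    for word, vec in database:            # (word, feature_vector) pairs
        d = dist(query_vec, vec)
        if word not in scores or d < scores[word]:
            scores[word] = d
    return sorted(scores, key=scores.get)[:top_n]

lexicon_db = [("kitab", [1.0, 0.0, 2.0]),
              ("kitab", [1.0, 1.0, 2.0]),   # second font/noise variant
              ("qalam", [5.0, 5.0, 5.0])]
print(match_word([1.1, 0.0, 2.0], lexicon_db))  # -> ['kitab', 'qalam']
```

Returning the top-ranked words as hypotheses, rather than a single answer, is what allows downstream language models or pruning techniques to resolve ambiguity.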
Various approaches have been proposed over the years for using contextual and linguistic information to improve the recognition rates of existing OCR systems. However, there is an intermediate level of information that is currently underutilized for this task: confidence measures derived from the recognition system. This paper details the implementation of a high-accuracy machine-print character recognition system based on backpropagation neural networks. The system makes use of neural-net confidences at every stage to make decisions and improve overall performance, coupling identification of field type with field-level disambiguation rules and a robust spell-correction algorithm to significantly improve raw recognition outputs. These processing techniques have led to substantial improvements in recognition rates in large-scale tests on images of postal addresses.
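A minimal sketch of the spell-correction stage is shown below: candidate lexicon words within a small edit distance of the raw recognizer output are returned for downstream disambiguation. This is a generic Levenshtein-based illustration, not the deployed algorithm, which also exploits per-character confidences.

```python
def correct(word, lexicon, max_dist=1):
    """Return lexicon entries within `max_dist` edits of the raw OCR
    output, nearest first -- a simple stand-in for spell correction."""
    def edit_distance(s, t):
        # Standard dynamic-programming Levenshtein distance.
        prev = list(range(len(t) + 1))
        for i, cs in enumerate(s, 1):
            cur = [i]
            for j, ct in enumerate(t, 1):
                cur.append(min(prev[j] + 1,           # deletion
                               cur[j - 1] + 1,        # insertion
                               prev[j - 1] + (cs != ct)))  # substitution
            prev = cur
        return prev[-1]
    hits = [(edit_distance(word, w), w) for w in lexicon]
    return [w for d, w in sorted(hits) if d <= max_dist]

print(correct("strret", ["street", "stream", "strut"]))  # -> ['street']
```

In a confidence-aware version, substitutions between characters the recognizer frequently confuses would be penalized less than arbitrary edits.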
This paper discusses the use of neural networks to locate regions of interest for fingerprint classification using feature-encoded fingerprint images. The target areas are those useful for the classification of fingerprints: whorls, loops, arches, and deltas. Our approach is to limit the amount of data which a classification algorithm must consider by determining, with high accuracy, those areas most likely to contain features effective for classification. Five feature sets were tested, and successful preliminary results are summarized: (1) grayscale data, (2) binary ridges, (3) binary projection, and (4, 5) 4- and 8-way directional convolutions. Four-way directional convolution produced accurate results with a minimal number of false alarms. All work was conducted using fingerprint data from NIST Special Database 4. The approach discussed here is also applicable to other general computer vision problems. In addition to fingerprint classification, an example of face recognition is also provided to illustrate the generality of the algorithmic approach.
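The 4-way directional convolution feature can be sketched as follows: each pixel is scored against small kernels oriented at 0, 45, 90, and 135 degrees, and the strongest response gives the local ridge direction. The kernels below are a minimal hypothetical choice, not the paper's actual masks.

```python
# 3-pixel line kernels for the four directions, as (dy, dx) offsets.
KERNELS = {
    0:   [(0, -1), (0, 0), (0, 1)],      # horizontal
    90:  [(-1, 0), (0, 0), (1, 0)],      # vertical
    45:  [(1, -1), (0, 0), (-1, 1)],     # rising diagonal
    135: [(-1, -1), (0, 0), (1, 1)],     # falling diagonal
}

def dominant_direction(img, y, x):
    """Return the direction whose kernel response is strongest at (y, x)
    in a binary ridge image."""
    h, w = len(img), len(img[0])
    best, best_score = None, -1
    for angle, offsets in KERNELS.items():
        s = sum(img[y + dy][x + dx] for dy, dx in offsets
                if 0 <= y + dy < h and 0 <= x + dx < w)
        if s > best_score:
            best, best_score = angle, s
    return best

ridge = [[0, 0, 0],
         [1, 1, 1],
         [0, 0, 0]]
print(dominant_direction(ridge, 1, 1))  # -> 0 (horizontal ridge)
```

A map of such per-pixel directions is exactly the kind of compact encoding a neural network can scan to flag candidate whorl, loop, arch, and delta regions.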
This paper describes an investigation into the use of genetic algorithm techniques for selecting optimal feature sets in order to discriminate large sets of Arabic characters. Human experts defined a set of over 900 features from many different classes which could be used to help discriminate different characters from the Arabic character set. Each of the features was assigned a cost, based on the average amount of CPU time necessary to compute it for a typical character. The goal of the optimization was to find the subset of features which produced the best trade-off between recognition accuracy and computational cost. Using all of the features, or particular subsets, we obtained high recognition rates on machine-printed Arabic characters. Application of the genetic algorithm to selected subsets of characters and features demonstrates the ability of the method to significantly reduce the computational cost of the classification system while maintaining or increasing the recognition rate obtained with the complete set of features.
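The accuracy-versus-cost optimization can be sketched as a genetic algorithm over feature-subset bitstrings, with fitness rewarding accuracy and penalizing summed CPU cost. Everything below (selection scheme, operators, weight `lam`, and the toy accuracy function) is a hypothetical illustration, not the paper's configuration.

```python
import random

def ga_select(n_features, costs, accuracy, lam=0.01,
              pop_size=20, generations=30, seed=0):
    """Minimal genetic algorithm for feature-subset selection.
    `accuracy(mask)` scores a subset; fitness trades that score against
    the summed cost of the selected features."""
    rng = random.Random(seed)
    def fitness(mask):
        return accuracy(mask) - lam * sum(c for m, c in zip(mask, costs) if m)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_features)] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy demo: three cheap informative features, three costly useless ones.
costs = [1, 1, 1, 10, 10, 10]
best = ga_select(6, costs, accuracy=lambda m: m[0] + m[1] + m[2])
print(best)
```

With over 900 real features, the same loop applies unchanged; only the accuracy function (a trained classifier evaluated on held-out characters) and the cost vector differ.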
The increased importance and sophistication of modern computer modeling capabilities has led to a need for high-quality databases to represent complex scenes. Such databases must contain complete and accurate three-dimensional descriptions of selected areas, along with material classifications and other relevant information. Typically, no single sensor input can provide such complete information; data from multiple sources must be integrated to construct the database. This paper surveys key issues in scene representation. In particular, it outlines methods to construct a database from different image and cartographic sources, describes required database architecture and content, and presents an object-oriented representation approach which we call object hierarchy.
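An object-hierarchy representation of the kind outlined can be sketched as a tree of scene objects, each carrying geometry-independent attributes such as material class and data source, with children refining their parent. The field names below are illustrative, not the paper's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SceneObject:
    """One node in an object hierarchy: a named scene entity with
    attributes, whose children describe it at finer granularity."""
    name: str
    material: Optional[str] = None
    source: Optional[str] = None          # e.g. "imagery", "cartographic"
    children: List["SceneObject"] = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

    def find(self, name):
        """Depth-first search for a named object in the hierarchy."""
        if self.name == name:
            return self
        for c in self.children:
            hit = c.find(name)
            if hit:
                return hit
        return None

scene = SceneObject("urban_block")
bldg = scene.add(SceneObject("building_12", source="imagery"))
bldg.add(SceneObject("roof", material="asphalt"))
print(scene.find("roof").material)  # -> asphalt
```

Because each node records its data source, information from different imaging and cartographic inputs can be integrated into one hierarchy while remaining traceable.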
This paper is an investigation into the use of genetic algorithm techniques for optimal feature-set selection in order to discriminate large sets of characters. Human experts defined a set of over 900 features from many different classes which could be used to help discriminate different characters from a chosen character set. Each of the features was assigned a cost, based on the average amount of CPU time necessary to compute it for a typical character. The goal of the task was to find the subset of features which produced the best trade-off between recognition accuracy and computational cost. The authors were able to show that by using all of the features, or even major classes of them, high rates of discrimination accuracy for a printed character set (above 98% correct, first choice) could be obtained. Application of the genetic algorithm to selected subsets of characters and features demonstrated the ability of the method to significantly reduce the computational cost of the classification system while maintaining or increasing the accuracy obtained with the complete set of features.