Most systems for cartographic feature extraction developed within the computer vision and image understanding community make little use of detailed camera information during object detection and delineation. For the most part, the scale, size, and orientation of specific features are expressed in terms of image pixel size. Given the use of nadir and near-nadir mapping photography, this has not severely impacted the development of techniques at a variety of institutions for building detection, road network extraction, and other specific man-made objects. It is not unfair to say that the inherent difficulties of achieving robust automated object detection and delineation have overshadowed any errors due to a lack of rigor in modeling the image acquisition. In this paper we develop several of these issues and discuss how the use of photogrammetric cues will play a major role in future systems for automated cartographic feature extraction.
A rigorous formulation in terms of feature descriptors alone is given for two- and three-dimensional transformations, photogrammetric conditions, and linear feature geometric constraints. Experimental results, considering both control and pass features, are presented for single-photo resection (recovering both interior and exterior orientation elements) and two-photo triangulation (estimating pass lines for object completion), using simulated data and some real image data. Geometric constraints are used to provide redundancy in place of straight lines in stereo pairs. Extensive investigation is continuing.
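For reference, both single-photo resection and multi-photo triangulation rest on the collinearity condition for point features; the paper's reformulation in terms of linear feature descriptors is not reproduced here, but the familiar point form it generalizes is

```latex
x = x_0 - f\,\frac{r_{11}(X - X_C) + r_{12}(Y - Y_C) + r_{13}(Z - Z_C)}
                  {r_{31}(X - X_C) + r_{32}(Y - Y_C) + r_{33}(Z - Z_C)},
\qquad
y = y_0 - f\,\frac{r_{21}(X - X_C) + r_{22}(Y - Y_C) + r_{23}(Z - Z_C)}
                  {r_{31}(X - X_C) + r_{32}(Y - Y_C) + r_{33}(Z - Z_C)},
```

where (x_0, y_0, f) are the interior orientation elements, (X_C, Y_C, Z_C) is the projection center, r_ij are the elements of the rotation matrix carrying the exterior orientation, and (X, Y, Z) is the object point.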
Automated extraction of elevation data from stereo images requires automated image registration followed by photogrammetric mapping into a Digital Elevation Model (DEM). The Digital Production System (DPS) Data Extraction Segment (DE/S) of the Defense Mapping Agency (DMA) currently uses an image pyramid registration technique known as Hierarchical Relaxation Correlation (HRC) to perform Automated Terrain Extraction (ATE). Under an internal research and development project, GDE Systems has developed the Global Least Squares Matching (GLSM) technique, a nonlinear estimation method requiring a simultaneous array algebra solution of a dense DEM as part of the matching process. This paper focuses on traditional low-density DEM production, where the coarse-to-fine process of HRC and GLSM is stopped at a lower image resolution once the required DEM quality is reached. Tests were made comparing the HRC and GLSM results at various image resolutions against carefully edited and averaged check points from four cartographers using 1:40,000 and 1:80,000 softcopy stereo models. The results show that both HRC and GLSM far exceed the traditional mapping standard, allowing economic use of lower resolution source images. GLSM allowed up to five times lower image resolution than HRC, producing acceptable contour plots with no manual editing from 1:40,000 - 800,000 softcopy stereo models versus the traditional DEM collection from a 1:40,000 analytical stereo model.
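The coarse-to-fine idea behind both HRC and GLSM can be sketched as follows. This is a minimal illustration only, not DMA's HRC nor GDE's GLSM; the function names, window sizes, and the simple normalized cross-correlation matcher are assumptions for the sketch:

```python
import numpy as np

def build_pyramid(img, levels):
    """Image pyramid by 2x2 block averaging, returned coarsest level first."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        a = a[:h, :w]
        pyr.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]))
    return pyr[::-1]

def ncc(a, b):
    """Normalized cross-correlation of two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / d if d > 0 else -1.0

def refine_disparity(left, right, row, col, disp0, search=3, win=7):
    """1-D correlation search along the row around a predicted disparity."""
    h = win // 2
    ref = left[row - h:row + h + 1, col - h:col + h + 1]
    best_score, best_d = -2.0, disp0
    for d in range(disp0 - search, disp0 + search + 1):
        c = col - d
        if c - h < 0 or c + h + 1 > right.shape[1]:
            continue
        score = ncc(ref, right[row - h:row + h + 1, c - h:c + h + 1])
        if score > best_score:
            best_score, best_d = score, d
    return best_d

def coarse_to_fine_disparity(left, right, levels=4, required_accuracy_px=1.0):
    """Stop refining once the current pixel size already meets the accuracy target."""
    lp, rp = build_pyramid(left, levels), build_pyramid(right, levels)
    disp = np.zeros(lp[0].shape)
    for lvl, (L, R) in enumerate(zip(lp, rp)):
        pixel_size = 2 ** (levels - 1 - lvl)        # in full-resolution pixels
        if lvl > 0:                                 # propagate the coarser result down
            up = 2.0 * np.kron(disp, np.ones((2, 2)))
            disp = np.zeros(L.shape)
            r, c = min(L.shape[0], up.shape[0]), min(L.shape[1], up.shape[1])
            disp[:r, :c] = up[:r, :c]
        h = 3
        for row in range(h, L.shape[0] - h):
            for col in range(h, L.shape[1] - h):
                disp[row, col] = refine_disparity(L, R, row, col, int(round(disp[row, col])))
        if pixel_size <= required_accuracy_px:      # good enough: stop early
            return disp * pixel_size                # disparity in full-resolution pixels
    return disp
```

The early-exit test mirrors the paper's strategy of stopping the pyramid at a lower resolution once the required DEM quality is reached; the resulting disparities would then be converted to heights through the photogrammetric model.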
A general model for digital photogrammetry has been developed, integrating area-based multi-image matching, point determination, object surface reconstruction, and orthoimage generation. Using this model, the unknown quantities are estimated directly from the pixel intensity values and from control information in a nonlinear least squares adjustment. The unknown quantities are the geometric and radiometric parameters describing the object surface (e.g., the heights of a digital terrain model and the intensity values of all points on the surface) and the orientation parameters of the images. Any desired number of images, scanned in various spectral bands, can be processed simultaneously. The convergence radius or pull-in range, known to be rather poor (a few pixels only) in least squares matching, is considerably extended, and the computation time is considerably reduced by using a hierarchical procedure with image pyramids. Tests of this approach on real aerial imagery were carried out; they constitute the first controlled tests of the approach and prove its applicability for practical needs.
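One common way to write the core observation equation of such an integrated adjustment (a sketch of the general idea, not necessarily the authors' exact notation) is

```latex
g_k\big(x_k(X, Y),\, y_k(X, Y)\big) \;=\; r(X, Y) + e_k(X, Y), \qquad k = 1, \dots, K,
```

where g_k are the observed intensities of image k, the image coordinates (x_k, y_k) follow from the collinearity equations with the orientation parameters of image k and the terrain height Z(X, Y), r(X, Y) is the unknown surface intensity, and e_k are the residuals. The heights, surface intensities, and orientation parameters are estimated by minimizing the sum of squared residuals over all images and grid points.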
In this paper, we explore issues in object space matching using photogrammetric principles for computer vision. We provide a general philosophical basis for our research and demonstrate its utility with the example of multiple image point matching. In the proposed model, we reconstruct the shape and reflectance of a surface patch from the gray levels of the corresponding image patches. The geometry of the surface patch is approximated by a digital elevation model (DEM), and a reflectance value is introduced for every surface grid element. Apart from the geometric and radiometric parameters of the surface patch, the exterior orientation parameters of the image patches are introduced as unknowns. Since the image patches are rather small, a parallel projection is used instead of the central projection. The description of the mathematical model is followed by implementation details and results obtained from a prototype version of an automatic aerotriangulation system.
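Under the parallel (affine) projection approximation mentioned above, the geometric part of the model for a small patch can be written in the standard eight-parameter form

```latex
x' = a_1 X + a_2 Y + a_3 Z + a_4, \qquad y' = a_5 X + a_6 Y + a_7 Z + a_8,
```

which replaces the full central-projection collinearity equations for each image patch. This is a common approximation for small patches and may differ in detail from the authors' parameterization.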
This paper describes Reggie, an autonomous image registration software package that can process several diverse types of image data, including Landsat Thematic Mapper (TM), SPOT high resolution visible (HRV), and ADRG digital map imagery. The software package is in the final stages of development and will soon be commercially available. The Reggie algorithm employs sensor geometry models to fully exploit all a priori knowledge of the image collection geometries and the potential image distortions. Sensor models for the Landsat TM and SPOT HRV sensors and for digital map data are described. Initial results demonstrating registration accuracy and execution time are presented for the registration of Landsat TM and SPOT HRV imagery and ADRG digital map data.
General Jean-Victor Poncelet published his treatise on projective geometry in 1822. This was the start of an enormous development in geometry in the 19th century, during which geometry in the plane and in three-dimensional space was studied in particular detail. The development culminated in the publishing of the Encyclopedia of Mathematics, which appeared in irregular installments from 1900 to 1934. Photogrammetry, the use of photographic images for surveying, mapping, and reconnaissance, began in the second half of the 19th century. By the 1890s substantial theoretical contributions were being made by Sebastian Finsterwalder, who reported on his foundational work in a keynote address to the German Mathematical Society in 1897 and also contributed an article on photogrammetry to the Encyclopedia of Mathematics. Among other things, Finsterwalder observed that Rudolf Sturm's analysis of the "homography problem" (1869) can be used to solve the problem of 3D reconstruction from point matches in two images. Subsequently, important theoretical advances were made by mathematicians at the Technical University of Vienna. An excellent reference for geometry and its relationship to photogrammetry is a book by Emil Müller on constructive geometry, which appeared in 1923. Müller's assistant and successor Erwin Kruppa established the "structure-from-motion" theorem in 1913. This theorem was rediscovered by Shimon Ullman in 1977.
The relationship between photogrammetry and computer vision is examined. This paper reviews the central issues for both computer vision and photogrammetry, identifying their shared goals as well as their distinct approaches. Interaction in the past has been limited both by differences in terminology and by differences in the basic philosophy concerning the manipulation of projection equations. The application goals and mathematical techniques of the two fields overlap considerably, so improved dialogue is essential.
Invariant relationships have been derived from the mathematical models of image formation for several types of sensors: from the collinearity equations of pinhole camera systems and, separately, from the condition equations of strip-mapped SAR. In the present paper, we extend these results by combining the collinearity and condition equations of photographic and SAR systems. The resulting invariants enable us to transfer points and three-dimensional models from multiple photographic images to SAR images and vice versa. The geometric integrity of the different imaging systems is preserved by the technique. The method will facilitate synergistic, model-based interpretation of different sensor types.
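For reference, the strip-map SAR condition equations referred to above are commonly written as range and Doppler conditions (the paper's exact notation may differ):

```latex
\lVert \mathbf{P} - \mathbf{S}(t) \rVert = R, \qquad
f_D = \frac{2}{\lambda R}\,\big(\mathbf{P} - \mathbf{S}(t)\big)\cdot\mathbf{V}(t),
```

where P is the ground point, S(t) and V(t) are the sensor position and velocity at azimuth time t, R is the slant range, lambda is the wavelength, and f_D is the processing Doppler frequency (zero for zero-Doppler processed imagery). Combined with the collinearity equations of the photographic images, these conditions supply the constraints from which the joint invariants are derived.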
The determination of the absolute orientation of image pairs is a central task in aerial photogrammetry. The usual way to determine the absolute orientation is to use control points, but acquiring control points (i.e., signalizing them and preserving them over a longer period) is a very time- and cost-intensive task. By combining new sensors into an integrated system, the effort for acquiring control points to determine the absolute orientation can be minimized or even avoided. Besides the airborne camera, GPS, two laser profilers, and an inertial system are used as additional sensors. The laser profilers look sideways, so that during the strip-wise flight over the photogrammetric block, laser profiles are recorded at the upper and lower borders of each strip. GPS and INS data are recorded throughout the flight. This configuration provides the following additional information for each image pair: (1) GPS-determined coordinates of the projection centers of each image, (2) INS-determined attitude angles for the absolute orientation of the laser profiles, and (3) GPS- and INS-supported (i.e., absolutely oriented) laser profiles at the upper and lower borders of each image pair. Using an iterative method that exploits the additional laser profile and GPS information, the absolute orientation of an image pair can be performed without additional control points.
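At the core of absolute orientation lies a seven-parameter similarity transformation between model and ground coordinates. The sketch below shows the classical closed-form solution from corresponding 3D points; it is a hypothetical helper for illustration, not the paper's iterative GPS/INS/laser-profile adjustment, in which profile and projection-center observations replace ground control:

```python
import numpy as np

def absolute_orientation(model_pts, ground_pts):
    """Closed-form 3D similarity (scale, rotation, translation) with
    ground ~ s * R @ model + t, estimated from corresponding points
    (Horn/Umeyama-style solution via SVD)."""
    X = np.asarray(model_pts, float)    # N x 3 model (relatively oriented) coordinates
    Y = np.asarray(ground_pts, float)   # N x 3 ground coordinates
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    U, D, Vt = np.linalg.svd(Yc.T @ Xc)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])   # guard against reflection
    R = U @ S @ Vt
    s = (D * np.diag(S)).sum() / (Xc ** 2).sum()
    t = my - s * R @ mx
    return s, R, t
```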
In this paper, a general model for panoramic cameras is described, including options for different methods of forward motion compensation. A Levenberg-Marquardt based parameter estimation program is used to estimate camera parameters from ground control points or image correspondences. The model has 16 parameters, including parameters describing the orientation and location of the camera, the velocity of its motion, and forward motion compensation, as well as internal parameters such as scale, principal point offsets, and digitizing parameters (used if the panoramic image is digitized from film). An important feature of the parameter solution method is that no initialization of the camera parameters is necessary, except knowledge of the sweep direction, which is usually obvious since the image is far wider in the sweep direction than in the cross-sweep direction. The parameter solving program automatically finds an accurate initial parameter estimate and refines it by iteration to the best solution.
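As a rough illustration of the estimation machinery only, a toy cylindrical panoramic model with five assumed parameters can be fitted to ground control with a Levenberg-Marquardt solver. This is not the paper's 16-parameter model (no motion or forward-motion-compensation terms) and it uses ordinary starting values rather than the paper's automatic initialization:

```python
import numpy as np
from scipy.optimize import least_squares

def project_panoramic(params, ground_pts):
    """Toy cylindrical panoramic projection with five assumed parameters
    (camera position Xc, Yc, Zc, heading kappa, scale f)."""
    Xc, Yc, Zc, kappa, f = params
    d = ground_pts - np.array([Xc, Yc, Zc])
    az = np.arctan2(d[:, 1], d[:, 0]) - kappa      # sweep angle relative to heading
    horiz = np.hypot(d[:, 0], d[:, 1])
    return np.column_stack([f * az, f * d[:, 2] / horiz])

def residuals(params, ground_pts, image_pts):
    return (project_panoramic(params, ground_pts) - image_pts).ravel()

# Hypothetical control data: four ground points and their panoramic image positions.
ground_pts = np.array([[100.0, 200.0, 10.0], [150.0, 50.0, 20.0],
                       [-80.0, 120.0, 5.0], [60.0, -90.0, 15.0]])
true_params = np.array([0.0, 0.0, 100.0, 0.2, 50.0])
image_pts = project_panoramic(true_params, ground_pts)

x0 = np.array([5.0, -5.0, 90.0, 0.0, 40.0])        # rough starting values
fit = least_squares(residuals, x0, args=(ground_pts, image_pts), method='lm')
print(fit.x)   # should recover true_params for this noise-free toy problem
```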
With this paper we continue work in the area of automatic orientation of images. The term relative orientation is usually understood as the task of estimating five rotational and translational parameters from a given set of homologous image points. With the attribute automatic we indicate that the parameter estimation is not the main task; the main task is that the stereo measurement process (i.e., establishing point correspondences) must be carried out automatically. The presented procedure for automatic relative orientation of two images consists of modules for the computation of image pyramids, feature extraction, correspondence, and the determination of orientation parameters. The procedure works in a hierarchical fashion, in which no more than some general a priori knowledge is used, e.g., that the images were taken on a standard photo flight. In this paper we describe the parallelization of the complete procedure and its implementation on a SIMD computer system. A comparison with the sequential algorithms is given, focusing mainly on computational performance. Results concerning quality are derived from experiments with two pairs of aerial images. Finally, we show that with some modifications of the orientation procedure we arrive at a shape-from-stereo approach.
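A compact modern stand-in for the pipeline (feature extraction, correspondence, five-parameter relative orientation) is sketched below. It uses ORB features and RANSAC on the essential matrix rather than the paper's hierarchical, pyramid-based procedure, and the function name is hypothetical:

```python
import numpy as np
import cv2

def relative_orientation(img1, img2, K):
    """Estimate the five relative orientation parameters (rotation plus the
    direction of the base vector) from automatically matched points."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t   # rotation matrix and unit base direction: five independent parameters
```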
This paper presents a matching algorithm for automatic DTM generation from SPOT images that provides dense, accurate, and reliable results and attacks the problem of radiometric differences between the images. The proposed algorithm is based on a modified version of Multiphoto Geometrically Constrained Matching (MPGC). It is the first algorithm that explicitly uses the SPOT geometry in matching, thus restricting the search space to one dimension and simultaneously providing pixel and object coordinates. This leads to increased reliability and to fewer, more easily detected blunders. The sensor modelling is based on Kratky's polynomial mapping functions, which transform between the image spaces of the stereopair. With their help, epipolar lines that are practically straight can be determined, and the search is constrained along these lines. The polynomial functions can also provide approximate values, which are further refined by the use of an image pyramid.
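The effect of the geometric constraint can be sketched as a one-dimensional search along the quasi-epipolar curve parameterized by terrain height, which yields image and object coordinates simultaneously. Kratky's polynomial mapping functions are not reproduced here; `quasi_epipolar` below is an assumed stand-in for them, and simple normalized cross-correlation replaces the least-squares MPGC matcher:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / d if d > 0 else -1.0

def constrained_match(left, right, x, y, quasi_epipolar, z0, dz, dz_step, win=7):
    """Search for the conjugate of left-image point (x, y) along the quasi-epipolar
    curve. quasi_epipolar(x, y, z) returns the right-image position of (x, y) for
    terrain height z, so the search runs over height rather than over 2-D space."""
    h = win // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1]
    best = (-2.0, None, None)
    for z in np.arange(z0 - dz, z0 + dz + dz_step, dz_step):
        xr, yr = quasi_epipolar(x, y, z)
        xi, yi = int(round(xr)), int(round(yr))
        if yi - h < 0 or xi - h < 0 or yi + h + 1 > right.shape[0] or xi + h + 1 > right.shape[1]:
            continue
        score = ncc(ref, right[yi - h:yi + h + 1, xi - h:xi + h + 1])
        if score > best[0]:
            best = (score, (xr, yr), z)
    return best   # (correlation, right-image point, terrain height)
```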
With the increasing availability of SAR satellites (ERS1, ALMAZ, J-ERS, RADARSAT) along with optical remote sensing platforms (LANDSAT, SPOT), it is critical to be able to register, compare, and analyze data from different sources. In this paper we show how DEMs can be used to help in these processes, in particular for SAR imagery. A DEM, orbitographic data, and a description of the sensor are required to generate terrain-geocoded registered products. In the case of SAR imagery, these ancillary data can also be used to generate by-products such as layover, shadow, compression, and dilatation thematic maps. These maps help in further analysis and resampling of the SAR data. Applications to improved SAR terrain geocoding as well as special radar products, SAR image interpretation, and SAR interferometry are described.
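As an example of such a DEM-derived by-product, a radar shadow mask can be computed per ground-range profile with a simple visibility scan. The sketch below uses a strongly simplified flat-Earth, constant-look-angle geometry; operational geocoding would use the full orbit and sensor model, and the function name is hypothetical:

```python
import numpy as np

def sar_shadow_mask(dem_profile, ground_spacing, look_angle_deg, sensor_height):
    """Shadow mask for one DEM profile oriented along SAR ground range (near range
    first): a cell is shadowed when a nearer, relatively higher cell blocks its
    line of sight to the sensor."""
    n = len(dem_profile)
    shadow = np.zeros(n, dtype=bool)
    # Place the sensor on the near-range side so the first cell is seen at the
    # nominal look angle (measured from nadir).
    sensor_x = -sensor_height * np.tan(np.radians(look_angle_deg))
    max_angle = -np.inf
    for i in range(n):
        dx = i * ground_spacing - sensor_x
        angle = np.arctan2(dem_profile[i] - sensor_height, dx)
        if angle <= max_angle:
            shadow[i] = True       # hidden behind a nearer cell
        else:
            max_angle = angle
    return shadow
```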
When digital stereo correlation methods are used to generate elevation data from stereo images of the terrain, the likelihood that the correlation process will be successful throughout the whole of the stereo model can be enhanced significantly by using hierarchical (multi-scale, pyramidal, coarse-to-fine) techniques. One main advantage of these techniques is that the correlation data obtained at a lower resolution in the hierarchical procedure is used to guide and control the correlation process at the next higher resolution; consequently, the user can be reasonably assured that the correlation process will not get 'lost'. Unfortunately, constraints imposed by results obtained at the lower resolution can also lead to the generation of erroneous elevation data in some circumstances. For example, if the terrain is spotted with individual trees, buildings, or other structures, the correlation process, when performed on the low resolution images, will include approximations to the heights of these features in the resulting elevation data. These elevations are erroneous compared to the 'bald earth', and their influence will be carried from one step to the next in the hierarchical procedure. The result will be a 'noisy' digital elevation model (DEM) containing approximations to the elevations of trees and structures on the terrain and not just the terrain itself.
This report presents a description of part of an end-to-end system being developed for the automatic recognition of general land use/cover categories from digitized aerial photography. Standard USGS categories are used here and include urban, fields, trees, and water. The system consists of modules for segmentation, feature extraction, and classification. This report extends the results of our efforts on the feature extraction and classification portions of the system, which were partially described earlier. Since the data source is panchromatic, the features used are measures of texture; these include the spatial gray-level co-occurrence matrix, Laws measures, Fourier-domain rings and wedges, and a simultaneous autoregressive model. The classifiers employed include the Bayes quadratic, k-nearest-neighbor, Parzen, and multilayer perceptron neural network classifiers. Through leave-one-scene-out sampling, each classifier type is trained and tested using feature data generated by each feature extraction technique. A new, fast method of training the multilayer perceptron is described. It is expected that many of the techniques developed will be applicable to other areas of image recognition where texture is an important discriminant.
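A minimal sketch of one feature/classifier pairing follows (a single-offset co-occurrence matrix with two derived features and a k-nearest-neighbor classifier). The paper's full feature sets, classifiers, and leave-one-scene-out protocol are much richer, and the function names here are assumptions:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(tile, levels=16):
    """Contrast and energy from a single horizontal-offset gray-level
    co-occurrence matrix, computed on an 8-bit panchromatic tile."""
    q = np.clip((tile.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    return np.array([contrast, energy])

def classify_tiles(train_tiles, train_labels, test_tiles, k=3):
    """k-nearest-neighbor classification of image tiles by texture features."""
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit([glcm_features(t) for t in train_tiles], train_labels)
    return clf.predict([glcm_features(t) for t in test_tiles])
```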
Application-specific metrics for measuring the effect of lossy compression on imagery are defined. Experimental results reveal that even small information loss, as measured by mean-square error, may result in large errors in classification and stereo extraction.
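The contrast the abstract draws between pixel-level and application-specific measures can be illustrated in a few lines (hypothetical helper names; the paper's actual metrics are not reproduced here):

```python
import numpy as np

def mse(a, b):
    """Pixel-level loss: mean-square error between original and decompressed imagery."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB, derived from the mean-square error."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def classification_disagreement(labels_original, labels_compressed):
    """Application-specific view: fraction of pixels whose class label changes
    when the classifier is run on the decompressed image instead of the original."""
    return float(np.mean(labels_original != labels_compressed))
```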
Digital Photogrammetric Workstations (DPWS) have become a major focus of research within the photogrammetric community in the last year. This paper presents the state of the art of DPWS. A DPWS is the main component of a Digital Photogrammetric System (DPS), defined as the hardware and software for deriving input data for Geographic Information Systems (GIS) and Computer Aided Design (CAD) systems, as well as other photogrammetric products, from digital imagery using manual and automatic techniques. Besides the DPWS itself, a DPS also includes A/D and D/A converters for the imagery (digital cameras, film scanners, and output devices for producing film and paper hardcopies). First, design issues of a DPWS are addressed. Then the question of automation versus interaction is discussed, and it is pointed out where automation is possible in the chain of processing digital imagery. Subsequently, a classification of the different kinds of DPWS according to their products is given, and first experiences and results obtained by civilian mapping organizations using DPWS in digital photogrammetry are described. Finally, requirements for broader use in practice and trends for further development in digital photogrammetry and in DPS are pointed out.
A digital image restitution and mensuration software package was developed and installed within a geographic information system. Mapping is performed by monoscopic digitization on the image display screen. Three of the four image restitution schemes employ a rigorous mathematical model for generating the object space coordinates, while the fourth provides a close approximation. The primary applications of this mapping tool are map revision and natural resource inventory mapping. It is suitable for users with no photogrammetric experience.
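A typical rigorous scheme for monoscopic restitution is monoplotting, i.e. intersecting the image ray of a digitized point with a terrain surface. The sketch below illustrates that idea under assumed interfaces (`dem_height` callback, interior/exterior orientation tuples) and is not necessarily one of the package's four schemes:

```python
import numpy as np

def monoplot(image_xy, interior, exterior, dem_height, z0=0.0, tol=0.01, max_iter=50):
    """Monoscopic restitution by ray/terrain intersection (monoplotting).
    interior = (x0, y0, f); exterior = (R, C) with R the object-to-image rotation
    matrix and C the projection centre; dem_height(X, Y) is an assumed callback
    returning the terrain height at planimetric position (X, Y)."""
    x0, y0, f = interior
    R, C = exterior
    # Ray direction of the digitized image point in object space.
    d = R.T @ np.array([image_xy[0] - x0, image_xy[1] - y0, -f])
    Z = z0
    for _ in range(max_iter):
        s = (Z - C[2]) / d[2]                    # scale factor reaching height Z
        X, Y = C[0] + s * d[0], C[1] + s * d[1]
        Z_new = dem_height(X, Y)
        if abs(Z_new - Z) < tol:                 # converged onto the terrain surface
            return np.array([X, Y, Z_new])
        Z = Z_new
    return np.array([X, Y, Z])
```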
Precise ground coordinates in the restitution of raster data are a key point for their integration into geographic information systems. This paper presents methods and results for the restitution of planimetric and altimetric features from digital stereo data (SPOT-PLA and airborne SAR) in stereoscopic mode. The method uses a photogrammetric approach: the stereo restitution is done with a digital video plotter using low-cost hardware (a PC), and comparisons of the results with the digital topographic map are done in the ARC/INFO environment. Results from a SPOT stereo pair over the Rocky Mountains (Canada) show a planimetric accuracy of 12 m with 90% confidence for well-identifiable features, and an altimetric accuracy for a DEM of 30 m with 90% confidence. First results from a stereo SAR pair show residuals of 5 m in planimetry and 20 m in altimetry when the stereo model is formed. The restitution has not yet been completely finished and evaluated.
Image analysis is a labor-intensive activity that grows more demanding as the volume of imagery and collateral data being collected increases. Image analysts (IAs) and photo interpreters need to extract accurate yet timely information from the data. Strategically automating portions of the processing will help analysts achieve both objectives, first by eliminating some of the tedium of the activity and then by accelerating the process. Model-supported exploitation (MSE) has been identified as the technology that will provide this automation. This paper discusses in detail the various MSE design constraints, first as they pertain to the RADIUS problem domain and then in the context of their impact on the design of an MSE workstation.
Today's imagery analysts (IAs) are confronted with increasing workloads, more diverse sensor inputs, and rapidly changing world situations in their preparation of reliable intelligence data. Computing Devices International has developed a low-cost prototype exploitation workstation that provides an end-to-end capability for demonstrating the potential and utility of 3D Model Supported Exploitation (MSE) in support of IAs. The Geo-Located Multi-source Exploitation (GLMX) workstation provides an environment for the rapid construction of 3D site models, the organization and recall of intelligence information associated with scene objects, the registration of new images with existing site models, and MSE software tools to aid IAs in orientation, object detection and counting, negation, change detection, multi-sensor image fusion, and other tasks in support of intelligence collection and dissemination. Ultimately, the goal of MSE is to provide greater levels of automation based upon the existence of a 3D site model.
Digital image exploitation often requires the extraction of objects for inclusion in a product. Mapping, site modeling, and perspective scene generation all use the results of object extraction. Although one would reasonably expect a considerable amount of self-consistency in the extracted objects, the results of manual or automatic delineation may not conform to the expectations of product consumers or maintain appropriate links between individually extracted lower-level object components. Applying object geometry through the photogrammetric modeling of an image can greatly enhance the exploitation analyst's capacity to produce results that meet consumers' expectations of high fidelity and that yield consistent object descriptions for computer manipulation. This improves both the productivity of delineation beyond manual techniques and the reliability of delineation beyond currently available, fully automated techniques such as Automatic Target Recognition (ATR).
Knowledge about the imaging geometry and acquisition parameters provides useful geometric constraints for the analysis and extraction of man-made features in aerial imagery, particularly in oblique views. In this paper, we discuss the application of vanishing points for the identification of vertical and horizontal lines, and the use of multiple views for verification of these lines. The vertical and horizontal attributions are used to constrain the set of possible building hypotheses. Preliminary results exploiting these attributions are described.
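When the imaging geometry is known, the vanishing point of any world direction follows directly from the camera parameters, and line segments can be tested against it. A minimal sketch follows, assuming an oblique view (so the vertical vanishing point is finite) and hypothetical helper names:

```python
import numpy as np

def vanishing_point(K, R, direction):
    """Homogeneous image of the point at infinity in a given world direction
    (e.g. [0, 0, 1] for vertical); K is the calibration matrix, R the world-to-
    camera rotation."""
    return K @ R @ np.asarray(direction, float)

def segment_supports_vp(p1, p2, vp, angle_tol_deg=2.0):
    """Check whether the image segment p1-p2 points toward the vanishing point vp,
    i.e. whether it may be the image of a line with that world orientation."""
    vp2 = vp[:2] / vp[2]
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    seg = p2 - p1
    to_vp = vp2 - 0.5 * (p1 + p2)
    cosang = abs(seg @ to_vp) / (np.linalg.norm(seg) * np.linalg.norm(to_vp))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) <= angle_tol_deg

# e.g. vertical line candidates: vp_vertical = vanishing_point(K, R, [0.0, 0.0, 1.0])
```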