14 December 1999 Multisensor data fusion for automated scene interpretation
An approach to the combined extraction of linear and two-dimensional objects from multisensor data, based on feature- and object-level fusion of the results, is proposed. The data sources are DAIS hyperspectral data, AES-1 SAR data, and high-resolution panchromatic digital orthoimages. Rural test areas consisting of a road network, agricultural fields, and small villages were investigated. The scene interpretation is based on a conceptual model consisting of a semantic net for each of the sensors and a semantic net of the real-world objects. The sensor nets and the object net are combined into one network by means of a geometry and material level of network nodes. Road networks are extracted from the panchromatic orthoimage and from selected hyperspectral bands. Based on the knowledge that roads form networks, the extraction results are combined. Two-dimensional, i.e., areal, objects are extracted from the hyperspectral data after a principal component transformation. The SAR data is segmented using image intensity and interferometric elevation. The classifications of the hyperspectral and SAR data are combined with the extracted road network using rule- and segment-based methods. In the outlook, comments are given on the trade-off between the improvement of the results achieved with the new method and the increased costs of data acquisition.
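The principal component transformation mentioned above reduces the many correlated hyperspectral bands to a few uncorrelated components before areal-object classification. The paper does not give an implementation; the following is a minimal sketch of that standard step, assuming a synthetic data cube in place of real DAIS imagery:

```python
import numpy as np

def principal_components(cube, n_components=3):
    """Project a hyperspectral cube (rows x cols x bands) onto its
    strongest principal components. Illustrative only; real DAIS data
    would require radiometric preprocessing not shown here."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)
    pixels -= pixels.mean(axis=0)              # center each band
    cov = np.cov(pixels, rowvar=False)         # band-to-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]          # strongest variance first
    basis = eigvecs[:, order[:n_components]]
    return (pixels @ basis).reshape(rows, cols, n_components)

# Synthetic stand-in for a 32-band hyperspectral scene
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 32))
pcs = principal_components(cube, n_components=3)
print(pcs.shape)  # (64, 64, 3)
```

The first few components capture most of the scene variance, so the subsequent segmentation and classification operate on three bands instead of dozens.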
© (1999) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Olaf Hellwich and Christian Wiedemann "Multisensor data fusion for automated scene interpretation", Proc. SPIE 3871, Image and Signal Processing for Remote Sensing V, (14 December 1999);
