Damage assessment, change detection, and geographical database updating are traditionally performed by experts looking for objects in images, a task that is costly, time-consuming, and error-prone. Fully automatic solutions for building verification are particularly welcome but suffer from illumination and perspective changes. Semi-automatic procedures, on the other hand, aim to speed up image analysis while limiting human intervention to doubtful cases. We present a semi-automatic approach that assesses the presence of buildings in airborne images from geometrical and photometric cues. Each polygon of the vector database representing a building is assigned a score combining both kinds of cues. The geometrical cues relate to the proximity, parallelism, and coverage of linear edge segments detected in the image, while the photometric cue measures shadow evidence based on intensity levels in the vicinity of the polygon. The human operator interacts with this automatic scoring by setting a threshold that highlights buildings poorly supported by the image features. After inspecting the image, the operator may decide to mark the polygon as changed or to update the database, depending on the application.
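The scoring and thresholding workflow described above can be sketched as follows. This is a minimal illustration only: the cue values, the weights, and the averaging rule are assumptions for the sake of the example, not the paper's actual formulation.

```python
# Illustrative sketch: combine per-polygon geometrical and photometric
# cue scores into a single building score, then flag polygons falling
# below an operator-chosen threshold for manual inspection.
# All cue values are assumed normalized to [0, 1]; weights are made up.

def building_score(edge_proximity, edge_parallelism, edge_coverage,
                   shadow_evidence, w_geom=0.7, w_photo=0.3):
    """Weighted combination of a geometrical term (average of the three
    edge-segment cues) and the photometric shadow-evidence term."""
    geometric = (edge_proximity + edge_parallelism + edge_coverage) / 3.0
    return w_geom * geometric + w_photo * shadow_evidence

def flag_for_inspection(polygons, threshold=0.5):
    """Return ids of polygons whose score falls below the threshold,
    i.e. buildings poorly supported by the image features."""
    return [pid for pid, cues in polygons.items()
            if building_score(**cues) < threshold]

# Hypothetical cue measurements for two database polygons.
polygons = {
    "bldg_01": dict(edge_proximity=0.9, edge_parallelism=0.8,
                    edge_coverage=0.85, shadow_evidence=0.7),
    "bldg_02": dict(edge_proximity=0.2, edge_parallelism=0.3,
                    edge_coverage=0.1, shadow_evidence=0.1),
}
print(flag_for_inspection(polygons))  # ['bldg_02']
```

The operator's threshold trades automation against workload: raising it sends more polygons to manual inspection, lowering it trusts the automatic score more often.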
This paper presents the participation of the SIC in the EuroSDR (European Spatial Data Research, formerly known as OEEPE) contest on road extraction. After presenting the framework of the contest, our approach for road extraction is described. It consists of a line detector based on edge detection, with a straightness constraint derived from geometric moments to filter out non-straight segments. The remaining segments are then filtered according to the NDVI (Normalized Difference Vegetation Index), since roads are made of materials different from vegetation. Figures for the completeness, correctness, and localization precision of the extracted road segments are discussed for the EuroSDR data and compared to the results of the other participants in the contest.
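The two filtering steps can be sketched as follows. Both criteria are hypothetical stand-ins, assuming a straightness measure built from second-order central (geometric) moments of a pixel chain and the standard NDVI formula; the paper's exact constraint may differ.

```python
import math

def straightness(points):
    """Straightness of a pixel chain from its second-order central
    moments: 1.0 for a perfect line (the moment matrix is rank-1),
    lower for curved chains. Illustrative criterion, not the paper's.
    """
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mu20 = sum((x - cx) ** 2 for x, _ in points) / n
    mu02 = sum((y - cy) ** 2 for _, y in points) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in points) / n
    # Eigenvalues of the 2x2 central-moment (covariance) matrix.
    tr, det = mu20 + mu02, mu20 * mu02 - mu11 ** 2
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    lam_max, lam_min = tr / 2.0 + disc, tr / 2.0 - disc
    return 1.0 - lam_min / lam_max if lam_max > 0 else 0.0

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: high over vegetation,
    low over road materials such as asphalt."""
    return (nir - red) / (nir + red)

line = [(i, 2 * i) for i in range(20)]                # straight chain
arc = [(i, (i - 10) ** 2 / 5.0) for i in range(20)]   # parabolic chain
print(straightness(line), straightness(arc))
```

A segment would then be kept as a road candidate only if its straightness exceeds a tight threshold and its mean NDVI stays below a vegetation threshold.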
This paper presents a discussion and results in the field of automatic face identification. Possible implementations are presented, and the retained choices are motivated. The objective is to identify a person whose image is available from a grey-level camera. The approach extracts characteristics that are then matched against the characteristics stored in a database. One section is devoted to the importance of a proper acquisition method, based on profile images. Several more technical sections deal with profile extraction, the computation of the curvature, and the way characteristics are derived from it. This is naturally followed by practical results. Finally, some perspectives are listed on integrating the present work into a practical application where several hundred people must be identified.
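The curvature computation step can be sketched with a standard finite-difference scheme over the extracted profile polyline; characteristics such as curvature extrema (nose tip, chin) would then be derived from these values. This is an assumed discretization, not necessarily the one used in the paper.

```python
import math

def discrete_curvature(profile):
    """Signed curvature at interior points of a profile polyline via
    central differences: k = (x'y'' - y'x'') / (x'^2 + y'^2)^1.5.
    Illustrative sketch; the paper's exact scheme may differ."""
    ks = []
    for i in range(1, len(profile) - 1):
        (x0, y0), (x1, y1), (x2, y2) = profile[i - 1], profile[i], profile[i + 1]
        dx, dy = (x2 - x0) / 2.0, (y2 - y0) / 2.0    # first derivatives
        ddx, ddy = x2 - 2 * x1 + x0, y2 - 2 * y1 + y0  # second derivatives
        denom = (dx * dx + dy * dy) ** 1.5
        ks.append((dx * ddy - dy * ddx) / denom if denom else 0.0)
    return ks

# Sanity check: a circle of radius 5 has constant curvature 1/5 = 0.2.
circle = [(5 * math.cos(t / 10.0), 5 * math.sin(t / 10.0)) for t in range(30)]
print(discrete_curvature(circle)[0])  # close to 0.2
```

Identification would then reduce to comparing the curvature signature of a query profile against the signatures stored for the enrolled population.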