Many low- or middle-level 3D reconstruction algorithms involve a robust estimation and selection step, in which the parameters of the best model are estimated and the inliers fitting this model are selected. The RANSAC algorithm is the most widely used robust algorithm for this step, but it is computationally demanding. In this paper we propose a new version of RANSAC, called distributed RANSAC (D-RANSAC), to save computation time and improve accuracy. We compare our results with those of classical RANSAC and another state-of-the-art variant. Experiments show that D-RANSAC is superior to RANSAC in computational complexity and accuracy, and comparable with other proposed improved versions.
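As a concrete point of reference, below is a minimal sketch of the classical RANSAC loop that D-RANSAC modifies, here fitting a 2D line; the function name, inlier tolerance, and iteration count are illustrative assumptions, not the paper's settings:

```python
# Hypothetical sketch of the classical RANSAC hypothesise-score-select loop.
import numpy as np

def ransac_line(points, n_iters=500, inlier_tol=1.0, rng=None):
    """Fit a line n.x + c = 0 to points (N, 2); return the model with the
    largest consensus set together with its inlier mask."""
    rng = np.random.default_rng(rng)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        d = p2 - p1
        norm = np.hypot(*d)
        if norm == 0:
            continue
        n = np.array([-d[1], d[0]]) / norm   # unit normal of candidate line
        c = -n @ p1
        dist = np.abs(points @ n + c)        # point-to-line distances
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, c)
    return best_model, best_inliers
```

The distributed variant changes how this loop is organised; the hypothesise-score-select structure shown here is the part all variants share.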
A central problem in the development of a mass-screening tool for atherosclerotic plaque is automatic calcification detection. The mass-screening aspect implies that the detection process should be fast and reliable. In this paper we present a first step in this direction by introducing a semi-automatic calcification classification tool based on non-linear stretching, an image enhancement method that focuses on local image statistics. The calcified areas are approximated by a coarse brush, which in our case is mimicked by taking the ground truth provided by radiologists and dilating it with circular structuring elements of varying sizes. For each structuring element, we then search for the threshold that yields optimal results on the enhanced image. The results of this preliminary study, which contains 19 images of varying calcification degree, fully annotated by medical experts, show a significant increase in accuracy when the methodology is validated on a region of interest containing the areas of a simulated coarse brush.
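As an illustration of the coarse-brush simulation described above, here is a minimal sketch: the ground-truth mask is dilated with a circular structuring element, and the threshold giving the best accuracy inside the brush region is searched for. Function names and the accuracy score are our assumptions; the paper's exact optimality criterion may differ.

```python
# Hypothetical sketch: dilate ground truth into a simulated coarse brush,
# then search for the threshold that performs best inside the brush.
import numpy as np
from scipy import ndimage

def disk(radius):
    """Circular structuring element of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x**2 + y**2 <= radius**2

def best_threshold(enhanced, ground_truth, radius, n_steps=100):
    brush = ndimage.binary_dilation(ground_truth, structure=disk(radius))
    region = enhanced[brush]              # evaluate only inside the brush
    truth = ground_truth[brush]
    best_acc, best_t = -1.0, None
    for t in np.linspace(region.min(), region.max(), n_steps):
        acc = np.mean((region > t) == truth)   # accuracy within the brush
        if acc > best_acc:
            best_acc, best_t = acc, t
    return best_t, best_acc
```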
A model-based texture recognition system that classifies image textures seen from different distances and under different illumination directions is presented. The system works on the basis of a surface model obtained by means of four-source color photometric stereo (CPS), used to generate 2-D image textures as they would have appeared if imaged under different imaging geometries. The proposed recognition system combines co-occurrence matrices for feature extraction with a nearest neighbor classifier. The use of co-occurrence matrices instead of filtering methods for feature extraction allows us to utilize only those pixels for which valid information has been extracted by CPS. The validity of the method is demonstrated by classifying texture images captured under imaging geometries different from those of the reference images in the database. Moreover, the recognition process allows one to estimate the approximate direction of the illumination used to capture the test image.
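A minimal sketch of the feature/classifier combination named above, using scikit-image's grey-level co-occurrence routines; the Haralick properties, distances, and angles chosen here are illustrative, not necessarily the paper's configuration:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

PROPS = ("contrast", "correlation", "energy", "homogeneity")

def cooccurrence_features(image_u8, distances=(1, 2), angles=(0, np.pi / 2)):
    """Co-occurrence (GLCM) feature vector for an 8-bit grey image."""
    glcm = graycomatrix(image_u8, distances, angles,
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel() for p in PROPS])

def nearest_neighbor(test_feat, ref_feats, ref_labels):
    """Classify by the closest reference feature vector."""
    d = np.linalg.norm(ref_feats - test_feat, axis=1)
    return ref_labels[np.argmin(d)]
```

In the system described above, the reference features would come from CPS-generated textures under many imaging geometries, restricted to pixels with valid CPS information; building that validity mask is omitted here.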
In this paper, a new method for 2D inhomogeneous and non-parametric image registration with sub-pixel accuracy is proposed. The method is based on the invocation of deformation operators which imitate the deformations expected to be observed when a landslide occurs. The similarity between two images is measured by a similarity function which takes into consideration grey-level correlation and geometric deformation. The geometric deformation term ensures that the minimum necessary deformation compatible with the two images is employed. An extra term, ensuring maximum overlap between the two images, is also incorporated, in order to avoid the pitfall of finding the maximum correlation coefficient with minimum overlap. Sub-pixel accuracy is achieved by manipulating lists of pixels (real-valued positions and corresponding grey values) rather than the integer grid positions conventionally used to represent images. Landsat 5 TM images of southern Italy are used for the experiments.
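To make the structure of such an objective concrete, here is a hedged sketch of a similarity function with the three terms described above: grey-level correlation, a deformation penalty, and an overlap reward. The weights and the specific penalty form are our assumptions, not the paper's:

```python
import numpy as np

def similarity(g_ref, g_def, displacement, overlap_frac,
               w_deform=0.1, w_overlap=0.5):
    """g_ref, g_def: grey values at corresponding sample positions;
    displacement: (N, 2) deformation applied to each sample;
    overlap_frac: fraction of the images overlapping after deformation.
    Weights are illustrative assumptions."""
    corr = np.corrcoef(g_ref, g_def)[0, 1]            # grey-level agreement
    deform = np.mean(np.linalg.norm(displacement, axis=1))
    return corr - w_deform * deform + w_overlap * overlap_frac
```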
In this paper, we present a review of some commonly used methods for signal interpolation and/or estimation from a set of randomly chosen samples. Most of these methods were originally devised for 1D signals. We first extend them to 2D and then perform a comparative study. Our experimental results show good interpolation/reconstruction performance for some methods at sampling ratios as small as 5% of the original number of pixels.
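A minimal sketch of the 2D reconstruction task, using scipy's griddata as one representative interpolator (the paper compares several methods; this is not any particular one of them):

```python
import numpy as np
from scipy.interpolate import griddata

def reconstruct(image, sampling_ratio=0.05, method="cubic", rng=None):
    """Keep a random fraction of pixels and interpolate the rest."""
    rng = np.random.default_rng(rng)
    h, w = image.shape
    idx = rng.choice(h * w, int(sampling_ratio * h * w), replace=False)
    ys, xs = np.unravel_index(idx, (h, w))
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    recon = griddata((ys, xs), image[ys, xs], (grid_y, grid_x), method=method)
    # Pixels outside the convex hull of the samples come back as NaN;
    # fill them with nearest-neighbour values as a simple fallback.
    nearest = griddata((ys, xs), image[ys, xs], (grid_y, grid_x),
                       method="nearest")
    return np.where(np.isnan(recon), nearest, recon)
```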
The aim of this work is to use information from various sources, including remote sensing images from which land-use change may be identified, in order to produce landslide hazard maps. We designed a fuzzy neural network which allows us to incorporate all the levels of uncertainty in the information used, in order to draw conclusions about the severity of the landslide hazard. Such a system operates at the regional level rather than the local micro-level, where local ground measurements may be performed and detailed geotechnical mathematical models may be applied to calculate soil stresses. It is not possible to apply such accurate, detailed models for large-scale hazard assessment. The proposed system is expected to be less accurate but more widely applicable than what is currently used in geotechnics.
We propose a three-stage algorithm to build all 3D horizons simultaneously in volume seismic data. To improve reliability, the algorithm takes into consideration the relative positions of all horizons, and uses globally self-consistent connectivity criteria which respect the temporal order of horizon creation. The first stage consists of the preliminary estimation of the local direction of each horizon at each point of the 3D space. The second stage smooths the signal along the detected layer structure to reduce the level of noise. The main processing takes place in the last stage of the algorithm, where the processed 3D seismic data are used for the simultaneous building of all 3D horizons. The output of the processing is a set of 3D horizons represented by a series of triangulated surfaces.
In this paper, two different approaches for horizon picking are examined. The first is a simple line detection algorithm applied to the full-resolution image. The second is a multiscale line detection algorithm, based on the wavelet transform of the image. Both full-resolution and multiresolution line detection algorithms are applied to 2D seismic images and compared in terms of their performance. Results show that the full-resolution line detection approach outperforms the multiresolution approach.
This paper gives an overview of image registration algorithms and presents a new algorithm, which can be used to register images of the same or different modalities. In particular, a correlation-based scheme is used, but instead of grey values it correlates numbers formed from different combinations of the local Walsh coefficients extracted from the images.
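A minimal sketch of extracting local Walsh coefficients: each k x k window is projected onto the 2D Walsh-Hadamard basis, and registration then correlates combinations of these coefficients instead of raw grey values. How the coefficients are combined into the correlated numbers is the paper's contribution and is not reproduced here; the ordering below is Hadamard (natural) rather than sequency order.

```python
import numpy as np

def hadamard(n):
    """Hadamard matrix of size n (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
    return H

def local_walsh(image, k=4):
    """(H-k+1, W-k+1, k*k) array: one Walsh coefficient vector per window."""
    H = hadamard(k)
    h, w = image.shape
    out = np.empty((h - k + 1, w - k + 1, k * k))
    for y in range(h - k + 1):
        for x in range(w - k + 1):
            # 2D Walsh transform of the window.
            out[y, x] = (H @ image[y:y + k, x:x + k] @ H.T).ravel()
    return out
```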
A CCD imaging system has been developed for detecting and imaging the beta/X-ray emissions from radiolabelled samples, principally for use in autoradiography. By using a novel frame-by-frame acquisition method, quantitative images of 14C-deoxyglucose distribution in a mouse brain have been produced with a spatial resolution of ~35 micrometers under cooled conditions. The energy resolution of the cooled device has been measured using X-rays from 241Am and found to be 0.5 keV at 17.5 keV. We describe the problems associated with using a CCD at room temperature for radiation detection and imaging. To address these, we have developed a simple histogram-shift method for dark-current fixed-pattern noise. We have also developed an image restoration method based on simulated annealing, using CCD-specific models for the noise and the data. Applying the two techniques to images of 20 micrometer 18F-labeled fibers obtained at room temperature yields FWHM measurements of ~85 micrometers and ~39 micrometers, respectively.
Directional effects of illumination make the classification of rough 3D surfaces difficult, as their appearance may change dramatically, especially when a surface has variable albedo. One way of circumventing the problem is to separate the local shape and albedo information prior to classification. We do this by means of Colour Photometric Stereo, which produces five descriptors for each surface patch: two gradient components and three colour components. This information is illumination-invariant and can be used as input to a suitable classification scheme. We proceed to classify a collection of surfaces by matching their colour histograms and multidimensional co-occurrence matrices of shape descriptors.
The wide usage of small-satellite imagery, and especially its commercialization, makes application-based on-board compression not only meaningful but also necessary, in order to resolve the bottleneck between the huge volume of data generated on board and the very limited downlink bandwidth. In this paper, we propose a method which encodes different regions with different algorithms. We consider three shape-adaptive image compression algorithms as candidates: the first is a JPEG-based algorithm; the second is based on the Object-based Wavelet Transform (OWT) method proposed by Katata; the third adopts Hilbert scanning of the regions of interest followed by a one-dimensional (1-D) wavelet transform. The three algorithms are also applied to the full image so that we can compare their performance on the whole rectangular image. We use eight Landsat TM multi-spectral images as our test set. The results show that these compression algorithms perform significantly differently for different regions. For relatively smooth regions, e.g. regions that consist of a single type of vegetation, or water areas, the 1-D wavelet method is the best; for highly textured regions, e.g. urban or mountainous areas, the modified OWT method wins over the others; for the whole image, OWT working in whole-image mode, which is just an ordinary 2-D wavelet compression, is more suitable. Based on this, we propose a new application-based compression architecture which encodes different regions with different algorithms.
A comparative study of three terrain interpolation methods, namely Delaunay triangulation interpolation, Kriging interpolation, and fractal interpolation, is presented in this paper. The study uses simulated and real terrains. The simulated terrains are generated by the fractal method. Each terrain is subsampled and then interpolated by each of the three methods. The difference between the original terrain and the interpolated one yields the error surface. A comparison of the error distributions for the three methods of interpolation is presented.
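For the simulated terrains, a standard way to generate a fractal surface is spectral synthesis (power-law filtering of white noise); the abstract does not specify the generator, so the sketch below merely stands in for "the fractal method":

```python
import numpy as np

def fractal_terrain(n=257, beta=2.5, rng=None):
    """n x n fBm-like surface whose amplitude spectrum falls as 1/f^(beta/2)."""
    rng = np.random.default_rng(rng)
    f = np.fft.fftfreq(n)
    fy, fx = np.meshgrid(f, f, indexing="ij")
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                    # avoid division by zero at DC
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    terrain = np.real(np.fft.ifft2(noise * radius ** (-beta / 2)))
    return (terrain - terrain.min()) / np.ptp(terrain)   # normalise to [0, 1]
```

The error surface is then simply the difference between such an original terrain and its subsampled-and-interpolated version.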
Error models for the slope and aspect of a terrain are presented in this paper. Such data are often extracted from a GIS, which may contain information from digital maps and remote sensing images. Although these data come from sources of diverse resolution, they are usually all re-sampled to refer to the same resolution. In this paper we examine the error associated with such data because of subsampling. The error distributions are modelled empirically.
In this paper, vegetation indices are defined to characterize the vegetation content of sets of pixels. Vegetation indices provide information about the presence or absence of vegetation on the ground, but not about the class of vegetation present; nevertheless, they are widely used for the construction of vegetation maps. Several vegetation indices are introduced for sets of pixels, and their relationship with the fractional vegetation cover is examined with the help of simulated and real satellite data.
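As the standard example of such an index, here is NDVI computed for a set of pixels from mean band reflectances; the paper's own per-set indices may be defined differently:

```python
import numpy as np

def set_ndvi(red, nir):
    """NDVI for a set of pixels, from mean red and near-infrared reflectance."""
    r, n = np.mean(red), np.mean(nir)
    return (n - r) / (n + r + 1e-12)      # epsilon guards division by zero
```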
The bidirectional reflectance model can be used to perform non-linear spectral unmixing of intimate mixtures. This paper investigates the properties of this model in terms of its stability to small errors in the measured variables.
KEYWORDS: Neural networks, Information fusion, Data fusion, Fuzzy systems, Fuzzy logic, Data modeling, Image processing, Classification systems, Geographic information systems, Rule based systems
Traditional techniques for fusing information in Remote Sensing and related disciplines rely on the application of expert rules. These rules are often applied to data held in the layers of a GIS, which are spatially superimposed to yield conclusions based on the fulfillment of certain conditions. Modern techniques for information fusion try to take into consideration the uncertainty of each source of information. They are divided into distributed and centralized systems, according to whether conclusions reached by different classifiers relying on different sources of information are combined, or all data from all available sources are used together by a single inference mechanism. In terms of the central inference mechanism used, these techniques fall into six categories: rule-based systems, fuzzy systems, Dempster-Shafer systems, Pearl's inference networks, other probabilistic approaches, and neural networks. All these approaches are discussed and compared.
We present an algorithm for the detection and tracking of buried linear features under a variety of surface coverages. In high-resolution (1 m) aerial photographs, the buried structures manifest themselves as bands a few pixels wide, marked by contrast and texture changes of the over-ground growth. Statistical non-linear filters are used to enhance these features, and their responses are further enhanced by lateral continuity, taking into consideration prior knowledge about the shape of the feature.
There are two major approaches to spectral unmixing: linear and non-linear. They are appropriate for different types of mixture, namely checkerboard mixtures and intimate mixtures, respectively. The two approaches are briefly reviewed. Then, in a carefully controlled laboratory experiment, the limitations and applicability of two of the methods (a linear and a non-linear one) are compared in the context of unmixing an intimate mixture.
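For comparison, a minimal sketch of the linear approach: fractional abundances of known endmembers recovered by non-negative least squares, with the sum-to-one constraint imposed by renormalisation as a simple approximation. The non-linear (intimate-mixture) approach requires a bidirectional reflectance model and is not shown.

```python
import numpy as np
from scipy.optimize import nnls

def linear_unmix(spectrum, endmembers):
    """spectrum: (B,) mixed pixel; endmembers: (B, M) pure spectra.
    Returns M abundance fractions (non-negative, renormalised to sum to 1)."""
    abundances, _ = nnls(endmembers, spectrum)
    total = abundances.sum()
    return abundances / total if total > 0 else abundances
```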
The topic of this contribution is the matching of property borders from cadastral maps with field borders identified in satellite imagery (Landsat TM). For reasons of efficiency, the search space for the matching is spatially limited; sufficient contextual information in a small spatial environment is therefore of paramount importance. The short edges of long, narrow field lots cannot be extracted in the same pass as the well-defined long edges, because they are perceived edges rather than real step edges, so a two-step algorithm is needed for their extraction. Perceptual and conventional edges can be combined to supply the necessary local contextual information for a robust and efficient matching to take place. The matching algorithm is a refined version of probabilistic relaxation.
Probabilistic relaxation has been used previously as the basis for the development of an algorithm to label features extracted from an image with corresponding features from a model. The algorithm can be executed in a deterministic manner, making it particularly appropriate for real-time methods. In this paper, we show how the method may be adapted to image sequences, taken from a moving camera, in order to provide navigation information. We show how knowledge of the camera motion can be incorporated into the labelling algorithm in order to provide better real-time performance and improved robustness.
The method of singular value decomposition is applied to the separation of the heart and brain signals, which are assumed to be linearly superimposed in a magnetoencephalographic recording. The signals were obtained by a SQUID device operating in separate-epoch mode. Each signal is recorded by 37 channels, and at the middle of its duration an auditory stimulus was heard by the subject; at the same time, a 38th channel recorded the ECG signal. By averaging all the epochs of the same channel aligned according to the auditory stimulus, and under the assumption that the brain and heart signals are linearly superimposed, we eliminate any signal synchronous with the heart and retain any brain signal synchronous with the auditory stimulus. By aligning all the signals of each channel according to the heart, as defined by the QRS complex in the ECG, and averaging again, we eliminate any signal synchronous with the auditory stimulus and thus obtain a signal which consists of components aligned with the heart. We use these two 37-channel signals to define a subspace of the 37-dimensional space spanned by the signals recorded by the 37 channels, in which the heart component is minimal and the brain component is maximal. The vector basis obtained this way defines the weights by which the single-epoch signals recorded by the 37 channels can be linearly blended to form the underlying true brain signal.
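A hedged sketch of the subspace idea: given the stimulus-locked average B (brain-dominated) and the QRS-locked average H (heart-dominated), both of shape (37, T), channel weights maximising brain power relative to heart power can be found via a generalised eigenproblem. The paper works through singular value decomposition; the Rayleigh-quotient route below is a standard equivalent formulation, shown for concreteness.

```python
import numpy as np
from scipy.linalg import eigh

def brain_weights(B, H, eps=1e-9):
    """Channel weights maximising (brain power) / (heart power)."""
    Cb = B @ B.T / B.shape[1]           # covariance of brain-locked average
    Ch = H @ H.T / H.shape[1]           # covariance of heart-locked average
    Ch += eps * np.eye(Ch.shape[0])     # regularise for invertibility
    vals, vecs = eigh(Cb, Ch)           # solves Cb w = lambda Ch w
    w = vecs[:, -1]                     # eigenvector with largest ratio
    return w / np.linalg.norm(w)

# The blended brain estimate for a (37, T) epoch is then simply w @ epoch.
```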
In this paper we investigate the use of wavelet transforms for the texture segmentation of remotely sensed images. The method adopted is multiresolution with maximum overlap. Various wavelet filters are considered (two different types of Daubechies filters, Battle-Lemarié filters, and the Haar filter). To investigate the usefulness of these filters and the relevance of the various resolution levels, we introduce a novel probe: for the feature derived from a certain filter combination, we calculate the 2-point correlation function in the feature domain. This function allows us to judge whether a particular feature segregates the data into clusters or not. We also show that it gives an indication of the number of clusters present in the feature space. Finally, we identify the useful features and perform image segmentation using all of them with the help of a C-means clustering technique. We conclude that the most useful results are obtained with the Daubechies coiflet filter.
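A minimal sketch of such a pipeline, assuming PyWavelets' stationary (maximum-overlap) 2D transform and hard k-means from scikit-learn as a stand-in for the C-means clustering named above; the window size, wavelet, and level are illustrative, and the image sides must be divisible by 2**level for swt2:

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans

def wavelet_segment(image, wavelet="db4", level=3, win=9, n_clusters=4):
    """Per-pixel local-energy features from SWT subbands, then clustering."""
    coeffs = pywt.swt2(image, wavelet, level=level)
    features = []
    for _, (cH, cV, cD) in coeffs:
        for band in (cH, cV, cD):
            features.append(uniform_filter(np.abs(band), win))  # local energy
    X = np.stack(features, axis=-1).reshape(-1, len(features))
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    return labels.reshape(image.shape)
```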
We present a method designed to solve the problem of automatic color grading for the industrial inspection of textured ceramic tiles. We discuss the problems we were confronted with, such as the temporal and spatial variation of the illumination, and the ways we dealt with them. Then we present results of correctly grading a series of textured ceramic tiles, the differences of which were at the threshold of human perception.
In this paper we present a novel method for mixed-pixel classification, where the classification of groups of pixels is achieved by taking into consideration the higher-order moments of the distributions of the pure and the mixed classes. The method is demonstrated using simulated data and is also applied to real Landsat TM data for which ground data are available.
In previous work we presented an algorithm for matching features extracted from an image with those extracted from a model, using a probabilistic relaxation method. Because the algorithm compares each possible match with all other possible matches, the main obstacle to its use on large data sets is that both the computation time and the memory usage are proportional to the square of the number of possible matches. This paper describes some improvements to the algorithm to alleviate these problems. The key sections of the algorithm are the generation, storage, and use of the compatibility coefficients. We describe three different schemes that reduce the number of these coefficients. The execution time is improved in each case, even when the number of iterations required for convergence is greater than in the unmodified algorithm. We show that the new methods also perform well, generating good matches in all cases.
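For orientation, here is the generic probabilistic relaxation update, which makes the cost of the compatibility coefficients visible: the full coefficient array R is the object, quadratic in the number of possible matches, that the three schemes shrink. This is the textbook form of the update, not the paper's reduced variants.

```python
import numpy as np

def relaxation_step(P, R):
    """P: (N, L) match probabilities, one row per image feature;
    R: (N, L, N, L) compatibility coefficients r(i, lam, j, mu)."""
    Q = np.einsum("iljm,jm->il", R, P)      # support from all other matches
    P_new = P * Q                           # re-weight by support
    return P_new / P_new.sum(axis=1, keepdims=True)
```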
Probabilistic relaxation has been used previously as the basis for the development of an algorithm to match features extracted from an image with corresponding features from a model. The technique has proved very successful, especially in applications that require real-time performance. On the other hand, its use has been limited to small problems, because the complexity of the algorithm varies with the fourth power of the problem size. In this paper, we show how the computational complexity can be much reduced. The matching is performed in two stages. In the first stage, only small subsets of the most salient features are used to provide an initial match. The results are used to calculate projective parameters that relate the image to the model. In the second stage, these parameters are used to simplify the matching of the entire feature sets, in a second pass of the matching algorithm.
Machine vision and automatic surface inspection has been an active field of research during the last few years. However, very little research has been devoted to defect detection in textured images, especially for the case of random textures. In this paper, we propose a novel algorithm that uses color and texture information to solve the problem. A new color clustering scheme based on human color perception is developed. No a priori knowledge regarding the actual number of colors in the color image is required. With this algorithm, very promising results are obtained for defect detection in random textured images, and in particular granite images.
We have developed a method of matching and recognizing aerial road network images based on road network models. The input is a list of line segments of an image obtained from a preprocessing stage, which is usually fragmentary and contains extraneous noisy segments. The output is the correspondences between the image line segments and model line segments. We use attributed relational graphs (ARG) to describe images and models. An ARG consists of a set of nodes, each node representing a line segment, and attributed relations between nodes. The task of matching is to find the best correspondences between the image ARG and the model ARG. The correspondences are
found using a relaxation labeling algorithm, which optimizes a criterion of similarity. The algorithm is capable of subgraph matching of an image road structure to a map road model covering an area 10 times larger than the area imaged by the sensor, provided that the image distortion due to perspective imaging geometry has been corrected during preprocessing stages. We present matching experiments and demonstrate the stability of the matching method to extraneous line segments, missing line segments, and errors in scaling.
We present here the theory of developing robust test statistics for edge-shape matching in one-dimensional signals. We show that an unbiased test can be developed under the assumption of uncorrelated noise, and that this test can be made optimal and robust to perturbations of the assumed noise distribution under the extra assumption of symmetric noise. This approach to edge detection is believed to overcome the shortcomings of the uncertainty principle in image processing and is appropriate when edges of a certain type have to be identified with great accuracy in their location.
Machine vision and automatic inspection has been an active field of research during the past few years. In this paper, we review the texture defect detection methods used at present. We classify them in two major categories, global and local, and we discuss briefly the major approaches that have been proposed.