The goal of this research is to develop a machine-learning classification system for object detection based on three-dimensional (3D) Light Detection and Ranging (LiDAR) sensing. The proposed real-time system operates a LiDAR sensor on an industrial vehicle as part of upgrading the vehicle with autonomous capabilities. We have developed 3D features that allow a linear Support Vector Machine (SVM), a kernel (non-linear) SVM, and Multiple Kernel Learning (MKL) to determine whether objects in the LiDAR's field of view are beacons (objects designed to delineate a no-entry zone) or other objects (e.g., people, buildings, and equipment). Results from multiple data collections are analyzed and presented, and the effectiveness of the features and the pros and cons of each approach are examined.
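The classification step described above can be illustrated with a minimal sketch. The feature definitions here (object height, radius, and point density) and all numeric values are hypothetical placeholders, not the features developed in this work; the sketch only shows how a linear SVM would separate beacon from non-beacon feature vectors, using scikit-learn:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical 3D features per segmented LiDAR object:
# (height in m, radius in m, point density in points/m^3).
beacons = rng.normal([1.2, 0.3, 50.0], [0.1, 0.05, 5.0], size=(100, 3))
others = rng.normal([2.0, 1.5, 20.0], [0.5, 0.5, 8.0], size=(100, 3))

X = np.vstack([beacons, others])
y = np.hstack([np.ones(100), np.zeros(100)])  # 1 = beacon, 0 = other

# Linear SVM; kernel="rbf" would give the kernel (non-linear) variant.
clf = SVC(kernel="linear")
clf.fit(X, y)

# Classify a new segmented object near the beacon prototype.
pred = clf.predict([[1.2, 0.3, 50.0]])
```

Swapping the `kernel` argument is the simplest way to compare the linear and non-linear variants on the same features; MKL would instead learn a weighted combination of several kernels.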
In sensing applications where multiple sensors observe the same scene, fusing sensor outputs can provide improved results. However, if some of the sensors provide lower-quality outputs, e.g. when one or more sensors have a poor signal-to-noise ratio (SNR) and therefore produce very noisy data, the fused results can be degraded. In this work, a multi-sensor conflict measure is proposed which represents each sensor output as interval-valued information and examines the overlaps among all possible n-tuple sensor combinations. The conflict is based on the sizes of the intervals and how many sensors' output values lie in these intervals. In this work, conflict is defined in terms of how little the outputs from multiple sensors overlap: high degrees of overlap mean low sensor conflict, while low degrees of overlap mean high conflict. This work is a preliminary step towards a robust conflict and sensor fusion framework. In addition, a sensor fusion algorithm is proposed based on a weighted sum of sensor outputs, where the weight for each sensor diminishes as its conflict measure increases. The proposed methods can be utilized to (1) assess a measure of multi-sensor conflict, and (2) improve sensor output fusion by reducing the weighting of sensors with high conflict. Using this measure, a simulated example is given to explain the mechanics of calculating the conflict measure, and stereo camera 3D outputs are analyzed and fused. In the stereo camera case, the sensor output is corrupted by additive impulse noise, DC offset, and Gaussian noise. Impulse noise is common in sensors due to intermittent interference, a DC offset represents a sensor bias or registration error, and Gaussian noise represents a sensor output with low SNR. The results show that sensor output fusion based on the conflict measure achieves improved accuracy over a simple averaging fusion strategy.
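The mechanics described above can be sketched in a few lines. This is a simplified illustration under assumptions not fixed by the abstract: each sensor output is turned into an interval of a fixed half-width, conflict is computed from pairwise overlaps only (rather than all n-tuple combinations), and the fusion weight is simply one minus the conflict:

```python
import numpy as np

def interval_overlap(a, b):
    """Length of the overlap between intervals a = (lo, hi) and b = (lo, hi)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def pairwise_conflict(intervals):
    """Per-sensor conflict in [0, 1]: 1 - mean normalized overlap with the others.

    High overlap with other sensors -> low conflict; no overlap -> conflict 1.
    """
    n = len(intervals)
    conflicts = np.zeros(n)
    for i in range(n):
        terms = []
        for j in range(n):
            if i == j:
                continue
            # Normalize by the smaller interval width so full containment -> overlap 1.
            denom = min(intervals[i][1] - intervals[i][0],
                        intervals[j][1] - intervals[j][0])
            terms.append(1.0 - interval_overlap(intervals[i], intervals[j]) / denom)
        conflicts[i] = np.mean(terms)
    return conflicts

def fuse(values, conflicts):
    """Weighted sum of sensor outputs; weights diminish as conflict grows."""
    w = 1.0 - conflicts
    w = w / w.sum()
    return float(np.dot(w, values))

# Three sensors agree near 10; one outlier at 20 (e.g. impulse-corrupted).
vals = np.array([9.8, 10.1, 10.3, 20.0])
half_width = 1.5  # assumed interval half-width per sensor
intervals = [(v - half_width, v + half_width) for v in vals]

conflicts = pairwise_conflict(intervals)
fused = fuse(vals, conflicts)
naive = float(vals.mean())
```

In this toy example the outlier's interval overlaps none of the others, so its conflict is maximal and its fusion weight drops to zero; the conflict-weighted estimate stays near 10 while a simple average is pulled toward the outlier.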