Range sensing is essential in several applications, from machine vision to aids for the visually impaired and collision-avoidance systems. Any range sensor can be used to obtain spatial information about its surroundings, and a three-dimensional image may be obtained by scanning the line of sight of the range sensor [1]. Many range sensors at different stages of development are described in the literature, based on ultrasound, microwaves, or optics [2]. Optical range sensors offer better directionality, owing to the short wavelength of light, and are therefore better suited to three-dimensional image formation. Optical range sensors can be active or passive. Active devices emit a light signal and collect the wave backscattered from the object of interest, whereas passive ones use the optical illumination available from the surroundings. Active optical range sensors are based either on time-of-flight measurements [3] or on triangulation schemes [4]. Some active optical range sensors are well developed but too costly for many applications. In addition, active range sensors depend on the backscattering coefficient of the object, which can be very low for many surfaces, particularly at large oblique angles of incidence. In this respect, a passive optical range sensor could be better for some applications. Passive optical range sensing is possible with stereo vision [5] or focus sensing [6]. Stereo vision requires complicated image processing and is limited to applications in which a priori information about the scene is available. Passive focus sensing is well developed for use in digital cameras; however, it is generally too slow for real-time applications. There is another option for passive optical ranging that may be called optical differentiation [7]: taking two or more different photometric measurements of the light radiated by the object and deriving the range from these measurements.
In principle, optical differentiation offers the possibility of developing fast and inexpensive range sensors. To our knowledge, little work has been devoted to this option. In this letter, we propose a simple technique of this type. The basic principle on which the proposed technique rests is the following. In most cases, the surface of an object is rough, and the light it reflects or radiates is scattered in all directions and has a very short coherence length. The surface can be subdivided into small elements with dimensions on the order of the coherence length. Sommerfeld's radiation principle ensures that the energy flux radiated from each surface element, seen from a distance, decays as the inverse square of that distance. For most types of surface, the angular distribution of the light reflected by any surface element is smoothly spread over a wide angular range. Moreover, the light radiated by any surface element of the object is incoherent with respect to that radiated by any other surface element. If the entrance aperture of an optical device is small enough that, to a good approximation, the energy flux coming from any surface element of the object of interest is constant across the device's aperture, then the optical device sees the object as an effective collection of point sources, each with an independent brightness and color. Now, an optical range sensor will have a small but finite angle of vision, and when directed at an arbitrary object it will collect light from a portion of all the equivalent point sources. If the device has the same response function for the light coming from any of the equivalent point sources within its cone of vision, the signal is equivalent to that of a single point source with an average brightness and color. In this case, the problem reduces to determining the distance to a point source. The only limitation is that the surface elements seen by the sensor should be rough and reflect light diffusely.
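The reduction of an extended rough object to a single effective point source rests on the incoherent addition of power. A minimal numerical sketch in Python (all numbers arbitrary, and the constant per-source response is a hypothetical stand-in for the device's response function) shows that the summed signal from many independent surface elements equals that of one equivalent source carrying their combined brightness:

```python
import random

random.seed(1)

def detector_signal(powers, response):
    """Total signal from mutually incoherent point sources.

    Incoherent light adds in power, so the contributions of the
    individual surface elements simply sum at the detector.
    """
    return sum(response * p for p in powers)

# N surface elements with independent random brightnesses (arbitrary units).
n = 1000
powers = [random.uniform(0.0, 2.0) for _ in range(n)]
response = 0.3  # identical response for every source inside the cone of vision

extended = detector_signal(powers, response)
# Equivalent single point source with N times the average brightness.
equivalent = detector_signal([n * (sum(powers) / n)], response)
print(extended, equivalent)  # equal up to floating-point rounding
```

The equivalence holds only because the response is the same for every element inside the cone of vision; a response varying across the field would break the reduction to a single average source.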
Specular reflections from a surface element could cause errors because the effective point-source condition may not be satisfied. The passive ranging device we propose uses an iris of radius $a$, a lens of radius larger than $a$, and a photodetector of radius $\rho$, as shown in Fig. 1. The iris is placed in front of the lens at a distance $d$ and centered on its optical axis. The photodetector is placed behind the lens at its focal distance $f$, also centered on the optical axis. The half angle of the cone of vision, $\theta$, is given by $\tan\theta = \rho/f$. We assume that $f$ is much smaller than the distance to the object, $z$, so that the light from an equivalent point source on the surface of the object focuses to a point on the detector. Let us consider first an isolated point source along the optical axis of the system, as shown in Fig. 1. For simplicity, let us assume the point source radiates a total optical power $P$ isotropically in all directions. The radius of the accepted cone of light at the lens plane is $r$, which depends on the distance to be measured, $z$, and is given by $r = az/(z-d)$. It is not difficult to see that the fraction of the total optical power received by the detector is given by

$$F = \frac{r^2}{4z^2} = \frac{a^2}{4(z-d)^2}.\qquad(1)$$

The photocurrent signal at the detector, $i$, is proportional to $PF$. To obtain a signal independent of $P$ but dependent on the distance to the object, $z$, we must obtain a second photometric signal with the iris at another position, $d_2$. The output signal, $S$, is then conveniently defined as the ratio of the difference and the sum of the signals at the detector,

$$S = \frac{i_1 - i_2}{i_1 + i_2},\qquad(2)$$

where $i_1$ and $i_2$ denote the photocurrent signals at the detector for the iris at distances $d_1$ and $d_2$, respectively. The only requirement to obtain $i_1$ and $i_2$ is that the iris border should not "cut" the cone of vision of the detector at either of the two positions, $d_1$ and $d_2$. Using Eq. (1) in Eq. (2) and assuming $d_1, d_2 \ll z$ yields, to first order in the ratios $d_1/z$ and $d_2/z$,

$$S \approx \frac{d_1 - d_2}{z}.\qquad(3)$$

For point sources off the optical axis by a small angle $\phi$, the received power is given by Eq. (1) plus a correction term of second order in $\phi$.
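The relations above can be checked numerically. The following Python sketch (the iris radius and positions are hypothetical illustrative values, not those of the prototype) evaluates the exact ratio signal built from Eq. (1) and compares it with the first-order result of Eq. (3):

```python
def received_fraction(z, d, a):
    """Eq. (1): fraction of the source power reaching the detector
    for an iris of radius a at a distance d in front of the lens."""
    return a**2 / (4.0 * (z - d)**2)

def ratio_signal(z, d1, d2, a=0.005):
    """Eq. (2): normalized difference of the two photometric signals.
    The source power and detector responsivity cancel in the ratio."""
    i1 = received_fraction(z, d1, a)
    i2 = received_fraction(z, d2, a)
    return (i1 - i2) / (i1 + i2)

d1, d2 = 0.07, 0.02  # hypothetical iris positions (m)
for z in (0.5, 2.0, 10.0):
    exact = ratio_signal(z, d1, d2)
    approx = (d1 - d2) / z  # Eq. (3), first order in d/z
    print(f"z = {z:5.1f} m   S_exact = {exact:.5f}   S_approx = {approx:.5f}")
```

The agreement improves as $d_1$ and $d_2$ become small compared with $z$, consistent with the first-order expansion.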
For an extended object covering totally or partially the cone of vision of the sensor, all the equivalent point sources contributing to the signal make an angle with the optical axis of at most $\theta$. If the cone of vision is sufficiently narrow, the correction term is negligible. Therefore, the power received by the photodetector is given by Eq. (1), but with $P$ replaced by an average taken over all the equivalent point sources lying within the cone of vision. To develop a fast range sensor based on the described principle, one should measure $i_1$ and $i_2$ simultaneously. This can be accomplished with a double system: in one system the iris is at a distance $d_1$ from its corresponding lens, and in the second at a distance $d_2$. Both systems must be aligned so that they have exactly the same cone of vision. This can be done with a beamsplitter, which divides the light coming from the object of interest in two, sending one half into each system. Processing the signals $i_1$ and $i_2$ can be done in real time with analog electronics based on operational amplifiers. To test the proposed principle for range sensing, we assembled an experimental prototype using a thin lens and, as the photodetector, a silicon photodiode with off-the-shelf electronics, fixed close to the focal plane of the lens and centered on the optical axis of the system. We used an iris with a circular aperture of 1 cm diameter mounted on a rail carrier on an optical rail aligned along the optical axis of the lens, so that the distance between the lens and the iris was adjustable. Because the detector was not sensitive enough for daylight operation, we used a bright light-emitting diode (LED) as the test object for range measurement. The signal $S$ was calculated from two photometric measurements, one with the iris at $d_1$ and the other at $d_2$. Figure 2 shows the signal for several distances to the LED.
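In such a double system, the range is recovered by inverting the measured ratio signal. A sketch of that processing step, again with hypothetical iris positions and an arbitrary overall brightness factor that, as required by the method, cancels in the ratio:

```python
def photocurrent(z, d, a=0.005, k=1.0):
    """Photocurrent proportional to Eq. (1); k lumps together the source
    power and the responsivity and drops out of the ratio signal."""
    return k * a**2 / (4.0 * (z - d)**2)

def estimate_range(i1, i2, d1, d2):
    """Invert the first-order relation S ~ (d1 - d2)/z, Eq. (3)."""
    s = (i1 - i2) / (i1 + i2)
    return (d1 - d2) / s

d1, d2 = 0.07, 0.02              # hypothetical iris positions (m)
z_true = 3.0                     # true distance to the source (m)
i1 = photocurrent(z_true, d1, k=42.0)  # the brightness factor k cancels
i2 = photocurrent(z_true, d2, k=42.0)
z_est = estimate_range(i1, i2, d1, d2)
print(z_est)  # close to 3.0, with a small first-order bias of order d
```

The small residual bias, of the order of the iris distances themselves, could be removed by inverting the exact expression derived from Eq. (1) instead of the first-order one.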
The experimental data for $S$ had a constant offset of 0.007, which was added to the plotted values in the figure. This offset was probably due to a misalignment of the iris. For comparison, we show the theoretical curve of $S$ versus $z$ given by Eq. (3). There is good agreement between the experimental data and the theoretical curve. The reproducibility of the experimental data for $S$ was limited by the mechanical system used to displace the iris; in general, it was less than about 5%. Subsequent experiments with different light sources showed similar results. The signal $S$ was also measured with the LED at a fixed distance but at different positions around the optical axis, across the cone of vision; it was found to be constant within the reproducibility limits of our measurements. The full angle of the cone of vision was also determined experimentally. The signal-to-noise ratio in our experiments with the LED was measured as well; if electronic noise were the only source of error, the estimated resolution of the experimental prototype would be less than 1% at the largest distances tested, and better for shorter distances. In practice, however, the resolution was poorer, limited by the mechanical setup. Although the experimental prototype can certainly be improved considerably, we believe the present results demonstrate the feasibility of using the proposed principle to develop inexpensive passive optical range sensors. For indoor applications with a regular lamp or daylight, it will be necessary to improve the sensitivity of the detector; an avalanche photodiode would most probably be adequate. Let us now consider that the resolution is limited by random noise currents, $\delta i_1$ and $\delta i_2$, added to the photometric signals. Using Eqs. (1)–(3), after some algebra one finds, to first order in $\delta i_1/\bar{i}$ and $\delta i_2/\bar{i}$,

$$\frac{\delta z}{z} = \frac{z}{d_1 - d_2}\,\frac{\delta i_2 - \delta i_1}{2\bar{i}},\qquad(4)$$

where $\bar{i}$ is the average value of the photocurrents $i_1$ and $i_2$. Let us define the resolution as the distance uncertainty, $\Delta z$, normalized by the distance.
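Before quantifying the limits, the noise propagation from the photocurrents to the inferred distance can be checked with a quick Monte Carlo sketch (Python; the noise level, mean photocurrent, and geometry are arbitrary illustrative values, and independent Gaussian noise in the two detectors is assumed):

```python
import math
import random

random.seed(0)

d1, d2 = 0.07, 0.02   # iris positions (m), hypothetical
z = 5.0               # distance to the source (m)
i_bar = 1.0           # mean photocurrent (arbitrary units)
sigma = 1e-3          # rms noise current in each detector

s0 = (d1 - d2) / z    # noise-free ratio signal, first order
errors = []
for _ in range(200_000):
    di1 = random.gauss(0.0, sigma)
    di2 = random.gauss(0.0, sigma)
    i1 = i_bar * (1.0 + s0) + di1  # i_{1,2} = i_bar (1 +/- S) to first order
    i2 = i_bar * (1.0 - s0) + di2
    s = (i1 - i2) / (i1 + i2)
    errors.append((d1 - d2) / s - z)  # equivalent distance error

rms_mc = math.sqrt(sum(e * e for e in errors) / len(errors))
# First-order propagation: rms(dz) = (z^2/(d1-d2)) * sigma/(sqrt(2)*i_bar)
rms_analytic = z * z / (d1 - d2) * sigma / (math.sqrt(2) * i_bar)
print(rms_mc, rms_analytic)  # agree within Monte Carlo statistics
```

The simulated rms distance error matches the first-order estimate, confirming that for a fixed iris displacement the absolute distance uncertainty grows as the square of the distance.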
The distance uncertainty can be taken as three times the rms value of the equivalent distance noise, that is, $\Delta z = 3\langle\delta z^2\rangle^{1/2}$. The resolution is then given by three times the rms value of Eq. (4). This equation shows that the resolution in distance worsens proportionally to the distance to be measured and is proportional to the relative noise in the detectors. The theoretical limit to the resolution is dictated by the shot noise in the photodetectors. It is well known that its rms value is given by $(2eB\mathcal{R}\bar{P})^{1/2}$, where $e$ is the electron charge, $B$ is the bandwidth of the system, and $\mathcal{R}$ is the responsivity of the photodetectors. In Fig. 3, we plot the resolution versus the average optical power received by the sensor, $\bar{P}$, for representative values of the iris displacement $d_1 - d_2$, the bandwidth $B$, the responsivity $\mathcal{R}$, and the distance $z$. The graph shows that, at low received power, the resolution at a 10 m distance is about 5.5%; a considerably larger received power would be required to reduce the uncertainty to about 0.5% at the same distance. The assumed bandwidth of the measurement means that the time needed to take a measurement is on the order of $1/B$. From Eq. (4), we can see that the resolution improves for shorter distances; since it scales linearly with distance, the same received power would give, for example, a resolution of 0.55% at one tenth of the distance. From our results, it appears possible to develop range sensors based on the proposed technique for distances from a few centimeters up to the order of 10 m. Further analysis and experimentation are needed to fully evaluate the potential of this technique. As mentioned earlier, a practical device will require a double system and electronic processing of the output signal.

References

1. E. García and H. Lamela, "Low-cost three-dimensional vision system based on a low power semiconductor laser range finder and a single scanning mirror," Opt. Eng. 40, 61–66 (2001). https://doi.org/10.1117/1.1331267
2. B. Andó, "Electronic sensory systems for the visually impaired," IEEE Instrum. Meas. Mag. 6(2), 62–67 (2003).
3. D. Castagnet, H. Tap-Béteille, and M. Lescure, "Avalanche-photodiode-based heterodyne optical head of a phase-shift laser range finder," Opt. Eng. 45, 043003 (2006). https://doi.org/10.1117/1.2190229
4. M. C. Amann, T. Bosch, M. Lescure, R. Myllylä, and M. Rioux, "Laser ranging: a critical review of usual techniques for distance measurement," Opt. Eng. 40, 10–19 (2001). https://doi.org/10.1117/1.1330700
5. B. Wang, R. Chung, and C.-L. Shen, "Genetic-algorithm based stereo vision with no block partitioning of input images," Opt. Eng. 43, 2788–2795 (2004). https://doi.org/10.1117/1.1795818
6. K. Engelhardt and K. Knop, "Passive focus sensor," Appl. Opt. 34, 2339–2344 (1995).
7. H. Farid and E. P. Simoncelli, "Range estimation by optical differentiation," J. Opt. Soc. Am. A 15, 1777–1786 (1998).