1. Introduction

In most computer-based imaging applications, such as automobile navigation systems, traffic safety monitoring systems, and remote accident surveillance systems, vision sensors must have a large field of view and high resolution. Besides, the captured images should be transmitted to remote locations for processing and for generating the final images. Some applications require high-speed (even real-time) video-rate processing and sensors that are easy to manufacture. Omnidirectional vision sensors, consisting of rotationally symmetric curved mirrors and digital imaging systems, are proving to be promising candidates to satisfy these requirements, thanks to recent advances in the development of image sensors and computers. Fig. 1 is an example of an omnidirectional vision sensor (omnidirectional camera). This sensor converts a three-dimensional viewing sphere in object space into a two-dimensional circle on the image plane. The sensor can view a field of 360 deg horizontally and 135 deg vertically, covering almost 80% of the entire viewing sphere. It is important to understand the image-forming characteristics, that is, how objects subtending the same solid angle in object space appear in the image plane. For that, it is essential that all objects located at various positions be well focused at the image plane. Also, for computing pure perspective images, the imaging system must have a single viewpoint. Omnidirectional vision sensors have been studied mainly for their application in visual navigation systems for mobile robots. Earlier workers proposed and tested many types of rotationally symmetric curved mirrors, including conical mirrors,1 spherical mirrors,2,3 hyperboloidal mirrors,4 and paraboloidal mirrors,5,6 besides systems using two mirrors.7 There are also several reports surveying the field or describing the theory of omnidirectional vision sensors.8,9,10 None of the available mirror forms satisfies all the required characteristics.
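The coverage figure quoted above can be checked with a spherical-cap calculation. The Python sketch below is this article's own illustration, assuming the 135-deg vertical field is measured from the mirror axis; it gives about 85%, and the quoted "almost 80%" presumably also subtracts the central blind spot where the camera images itself.

```python
import math

def cap_fraction(theta_max_deg):
    """Fraction of the full viewing sphere covered by a field spanning
    360 deg horizontally and 0..theta_max deg from the axis vertically:
    the cap solid angle 2*pi*(1 - cos(theta_max)) divided by 4*pi."""
    th = math.radians(theta_max_deg)
    return (1.0 - math.cos(th)) / 2.0

# cap_fraction(135.0) is about 0.854 of the viewing sphere.
```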
Therefore, it is important to select the most suitable one among them, depending on the application. The characteristics of curved mirrors were theoretically analyzed, and their merits and demerits were evaluated from a practical point of view. The evaluation focused particularly on image-forming characteristics, the position of virtual images, and the size of blur-spot images. In addition, a practical design method that satisfies some of the required characteristics is proposed. Based on the evaluation, the authors selected a spherical mirror for practical applications and developed several prototype models. As reports dealing with the virtual images and blur-spot sizes of curved mirrors are few, this paper should be of great use in the development of omnidirectional vision sensors.

2. Evaluation Method of Omnidirectional Vision Sensors

2.1. Optical System and Ray-Tracing Calculation

A model of the optical system generally used in an omnidirectional vision sensor is shown in Fig. 2. The three-dimensional surface shape of a curved mirror is in general a function of two coordinates, but one needs to consider only a cross section, a function of the radial coordinate alone, because the mirror is rotationally symmetric around its principal axis. A light ray incident on the curved mirror from an object is reflected at a point on the mirror and is focused on an image sensor through the principal point of the lens system. This results in the formation of a circular image on the image sensor. From the standpoint of the lens system, one is essentially taking a photograph of a virtual image formed near the mirror. Let the slope of the mirror surface at the reflection point, the angle of incidence of light from the object, the viewing angle of the lens system, and the angle of incidence at the mirror, which is equal to the angle of reflection from the mirror, be defined as in Fig. 2.
Then the geometric relationships shown in Fig. 2 are satisfied, and the crossing point (viewpoint) of the incident light with the optical axis follows from them. Under some special conditions, for a hyperboloidal mirror and a paraboloidal mirror, the viewpoint is constant over the entire range of incident angles; that is, a single viewpoint is realized.11 Usually, however, the viewpoints are distributed over some range. Care must be taken if the object is located near the mirror. The reason for this, and the effect of the viewpoint distribution, will be discussed in Section 4.2. Recently, a unique method was proposed to realize a single viewpoint using a cone-shaped mirror.12 But this method can be applied only to limited vertical viewing angles and low-resolution imaging systems, so it cannot be used for the applications in the current study.

2.2. Image-Forming Characteristics

In an omnidirectional vision sensor system, a three-dimensional viewing sphere in object space appears as a two-dimensional circle in the image plane. In this image, the vertical direction in object space shows up in the radial direction, and the horizontal direction in the circumferential direction. The radial height of the image on the sensor corresponds to the incident angle of light from the object. In the lens system of a conventional camera, the radial image height is proportional to the tangent of the viewing angle. The rate of change of the image height with respect to the incident angle can be calculated by differentiation, and a graph of this relationship shows the basic image characteristics of an omnidirectional vision sensor. Fig. 3 shows the projection of solid-angle objects located at every 15 deg of incident angle onto the image plane. In the ideal case, an object with a given solid angle in object space would appear with the same shape in the image plane regardless of its location in object space. However, this is impossible in practice, because the images are distorted by the projection. For simplicity of calculation, small solid-angle objects are considered here.
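The ray-tracing relations of Section 2.1 can be sketched numerically. The following Python sketch is this article's own construction (the function names and the toy convex spherical-mirror geometry are assumptions, not the paper's): it traces a camera ray from the lens principal point, reflects it off a mirror profile z = F(r), and reports the incident-light angle and the crossing point of the incident ray with the axis.

```python
import math

def trace_camera_ray(theta, F, dF, r_max):
    """Trace a ray leaving the lens principal point (the origin; the mirror
    axis is the z axis) at angle theta from the axis, find its intersection
    with the mirror profile z = F(r), reflect it, and return (phi, z_view):
    the incident-light angle measured from the downward axis and the
    crossing point (viewpoint) of the incident ray with the z axis."""
    t = math.tan(theta)
    # Solve F(r) * tan(theta) - r = 0 by bisection on (0, r_max).
    lo, hi = 1e-9, r_max
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if F(mid) * t - mid > 0.0:
            lo = mid
        else:
            hi = mid
    r = 0.5 * (lo + hi)
    z = F(r)
    d = (math.sin(theta), math.cos(theta))        # camera-ray direction (r, z)
    slope = dF(r)                                 # dz/dr at the reflection point
    nn = math.hypot(slope, 1.0)
    n = (slope / nn, -1.0 / nn)                   # unit normal, facing the lens
    dot = d[0] * n[0] + d[1] * n[1]
    u = (d[0] - 2*dot*n[0], d[1] - 2*dot*n[1])    # reversed incident-light direction
    phi = math.atan2(u[0], -u[1])                 # angle from the downward axis
    z_view = z - r / u[0] * u[1]                  # incident ray meets the z axis here
    return phi, z_view

# Toy convex spherical mirror: radius 1, sphere center on the axis at height 2.
F = lambda r: 2.0 - math.sqrt(1.0 - r * r)
dF = lambda r: r / math.sqrt(1.0 - r * r)
```

Running this for increasing theta shows z_view drifting along the axis, i.e., the distributed viewpoints of a spherical mirror, in contrast to the single-viewpoint hyperboloidal arrangement.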
A small solid-angle object appears as an image with a certain radial height, circumferential width, area, and width-to-height (aspect) ratio, each given by simple differential relations. In the figures showing the image-forming characteristics, the images are enlarged to show the virtual solid-angle images.

2.3. Virtual Image Points

Virtual image points formed by the curved mirror are distributed in a region close to the mirror. This distribution must fall within the depth of field of the imaging system; otherwise, the image points will be blurred. The distance between the virtual image points and the principal point of the lens is relatively short, so a short-working-distance imaging system is necessary. It is not easy to ensure that all virtual image points are within the depth of field. When a homocentric bundle of rays is incident on a curved mirror, the curved surface focuses the rays to form a virtual image. The virtual image point can be calculated from the curvature of the mirror and the angle of the incident light at the reflecting point. In the curved mirror of an omnidirectional camera, the curvatures in the radial and circumferential directions are different, and so are the angles of the incident light. Therefore, it is necessary to calculate the image points for both directions. The image focused by the radial curvature of the mirror is called the tangential or meridional image, and the one focused by the circumferential curvature the sagittal image. When the positions of these two images do not agree, astigmatism occurs, and when the image plane is not flat, field curvature occurs. One can calculate the virtual image points using the following Eqs.
(14) and (18) for the tangential and sagittal image points, given in Brueggemann (1968).13 For a derivation of these equations, one must go back to Monk (1937).14 The radius of curvature in the tangential direction follows from the mirror profile; for a spherical mirror it is constant. Given the distance between the object and the reflection point, the tangential image equation yields the distance from the reflection point to the tangential virtual image point, and hence its coordinates. The sagittal image plane is perpendicular to the tangential plane and includes the normal to the curved mirror. The radius of curvature in the sagittal direction likewise follows from the profile and is also constant for a spherical mirror; the sagittal image equation then gives the distance from the reflection point to the sagittal virtual image point, and hence its coordinates. Calculation methods for the depth of field and the blur-spot size are given in the Appendix.

3. Design and Evaluation Results

3.1. Design Procedure

Two kinds of lens systems are usually used in omnidirectional vision sensors. If one uses an orthographic lens system, such as a telecentric camera lens, the viewing angle of the lens is effectively 0 deg, and it is rather easy to design curved mirrors by mathematical analysis. However, in this case the mirror size is limited by the lens size. Perspective lenses are widely used in most digital imaging equipment, such as digital cameras. Omnidirectional vision sensors using perspective lenses are desirable, because such lenses are widely available and do not limit the mirror size. In this case, however, it is not easy to design mirror-shape functions that satisfy the desired characteristics. Here, the characteristics of mirrors with the most common shape-defining functions, namely spherical and hyperboloidal mirrors, are simulated. Also, a practical method is proposed to define mirror shapes that satisfy the requirements; the equidistant mirror will be shown as a practical example.
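The tangential and sagittal relations cited from Brueggemann are the classical Coddington equations for reflection at a curved surface; a minimal Python sketch under that assumption (sign convention: negative radius for a convex mirror, negative result for a virtual image):

```python
import math

def coddington_mirror(s, i_deg, R):
    """Image distances for reflection at a curved mirror (classical
    Coddington equations). s: object distance along the chief ray;
    i_deg: angle of incidence; R: radius of curvature (negative for a
    convex mirror). Returns (s_t, s_s); negative values are virtual."""
    i = math.radians(i_deg)
    s_t = 1.0 / (2.0 / (R * math.cos(i)) - 1.0 / s)  # tangential: 1/s + 1/s_t = 2/(R cos i)
    s_s = 1.0 / (2.0 * math.cos(i) / R - 1.0 / s)    # sagittal:   1/s + 1/s_s = 2 cos i / R
    return s_t, s_s
```

For a distant object reflected at normal incidence off a convex mirror of radius 100, both image points sit at the familiar virtual focus half the radius behind the surface; at oblique incidence the two separate, which is the astigmatism discussed above.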
The mirror-shape functions used in the calculations, as well as the functions derived from them, are shown in Table 1. The parameters were obtained by trial and error so that the design conditions are met at the maximum incident angle. The values obtained for each mirror are also shown in Table 1.

Table 1. Functions defining mirror shapes.
3.2. Calculation of Ray Traces and Image-Forming Characteristics

The calculated characteristics of the three mirrors are shown in Figs. 4 to 8. Fig. 4 shows ray traces, virtual image points, and depth of field. For the tangential and sagittal images, the distance of the object from the mirror surface was assumed to be infinity and 5 times the mirror radius, respectively. The depth of field will be explained later. Fig. 5 shows the basic image characteristics of an omnidirectional vision sensor, namely the radial image height and its rate of change, each plotted against the incident angle. The equidistant (E-DI) mirror shows a linear relationship between image height and incident angle, and its rate of change is constant over the entire range. For the spherical (SPH) mirror, the rate of change decreases with increasing incident angle, and for the single-viewpoint (SVP) mirror it increases. One can see that the image characteristics of the SPH and SVP mirrors differ substantially. Fig. 6 shows the distribution of viewpoints. The SVP mirror is a hyperboloidal mirror arranged in a special optical configuration, and for this mirror the viewpoint is constant over the entire range of incident angles. The viewpoints of the other mirrors are distributed over some range, and for the SPH mirror the distribution is the largest. Therefore, using the data for the SPH mirror, one can estimate the maximum influence of the viewpoint distribution. Fig. 7 shows the image area and the width-to-height ratio of the image. For the SVP mirror, the aspect ratio is constant, that is, the image of a small square object always remains square, but the change in area over the entire range of incident angles is very large. For the SPH mirror, the area decreases slightly and the aspect ratio increases as the incident angle increases. For the E-DI mirror, both increase with the incident angle. In Fig. 8, the solid line shows the image-forming characteristics, in which each rectangle represents the virtual image of a small square object. From this figure, which is a geometrical expression of Fig. 7, one can visually understand the image-forming characteristics of each mirror.
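The rate-of-change curves of Fig. 5 can be reproduced for any candidate mapping with a one-line numerical derivative; a sketch (the function names are this article's, and the sample mapping is an idealized stand-in for the equidistant mirror):

```python
import math

def rate_of_change(r_of_phi, phi_deg, h=1e-4):
    """dr/dphi by central differences at an incident angle given in degrees.
    Constant for an equidistant mapping, decreasing for a spherical mirror,
    increasing for a hyperboloidal (SVP) mirror."""
    p = math.radians(phi_deg)
    return (r_of_phi(p + h) - r_of_phi(p - h)) / (2.0 * h)

# Idealized equidistant mapping: image height proportional to incident angle.
equidistant = lambda phi: 2.0 * phi
```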
3.3. Estimation of Depth of Field and Blur-Spot Size

The depth of field shown in Fig. 4 was calculated using Eq. (33) given in the Appendix and the parameters of a prototype omnidirectional vision sensor (image-sensor size, pixel count, and lens aperture as specified for the prototype). The maximum radius of the mirror is 43.9 mm at the edge of the field, and the focal length of the lens is chosen accordingly; the corresponding in-focus distance works out to 6412 mm. The distances between the centers of the virtual images and the principal point of the lens are somewhat different for each mirror, so the depth-of-field bands of the spherical, equidistant, and hyperboloidal mirrors differ slightly; these bands are shown in Fig. 4. The spread of the tangential image points for the spherical mirror is very small, and all the image points remain within the depth of field. For the equidistant and hyperboloidal mirrors, the spread is large. As regards the spread of the sagittal image points, it is large for all three mirrors, and the image points do not stay within the depth of field. To evaluate the effect of this, further detailed discussion is necessary. One must calculate the blur-spot size expressed both in pixel units in the image plane and in solid view angle in the object sphere, because each pixel subtends a different solid angle. The blur-spot sizes, calculated using Eqs. (29) and (34) to (37) in the Appendix, are shown in various forms in Figs. 9 to 11. To evaluate the blur-spot size, it is important to select the distance at which the lens is exactly in focus. The best focus distance for each mirror was found by trial and error to minimize the overall blur size in solid view angle. Fig. 9(a) shows the calculated tangential and sagittal blur-spot sizes in pixel units. In fact, the sign of the blur cannot be recognized in the image, but the signed results are kept for convenience.
Here, it was assumed that the diameter of the image circle was 1000 pixels. A blur spot smaller than 1 pixel cannot be recognized. In Fig. 10, the blur-spot size for each mirror is shown geometrically together with the image-forming characteristics; blur sizes are enlarged for visibility. The image-forming characteristics are shown for square objects, but the resulting images are altogether different, so the effect of the blur spots also differs. To evaluate this effect, one must calculate the blur-spot size expressed in solid view angle. Fig. 9(b) shows the calculated blur-spot size as an angle in object space; in Fig. 10 the objects appear square in object space, with the blur-spot sizes again enlarged. Fig. 9(c) shows the blur-spot size in solid angle (square degrees). The system of this study covers incident angles up to 135 deg; dividing the covered portion of the viewing sphere by the total number of pixels in the projected image circle gives the average solid angle per pixel, which is shown schematically in Fig. 11. From Figs. 4 to 11, one can understand the characteristics of each mirror. A detailed discussion of each mirror follows.

3.4. Spherical Mirror

Spherical mirrors are rather easy to manufacture. An extremely precise spherical mirror can be made using grinding and polishing techniques similar to those employed in making lenses. Examples of omnidirectional vision sensors using spherical mirrors will be shown later in Section 5. In Fig. 5, the rate of change of image height for the spherical mirror decreases as the angle of incidence increases, which explains why this mirror can be used at large incident angles. The image-forming characteristics for a virtual object are shown in Fig. 10(a). At angles of depression, square objects appear almost as such, with approximately equal area, whereas at angles of elevation they appear radially compressed and with decreased area. To cover a large field of view, image information must be uniformly distributed.
Then, at any part of the image plane, one can reconstruct the desired image by computer processing. A spherical mirror with an orthographic imaging system shows ideal equiareal image characteristics.15 With the spherical mirror in a perspective imaging system, the image area at angles of elevation is much smaller than at angles of depression. This is due to the lens system. Fig. 12 shows the calculated change in image area as the lens focal length is varied. If one uses a long-focal-length lens, the viewing angle becomes small, and almost equiareal image characteristics can be obtained. With the present system, the relative image area of the spherical mirror decreases to 70% at the edge of the field, so image quality in this region decreases even when it is exactly in focus. Fig. 4(a) shows the virtual image points and depth of field for the spherical mirror. For the tangential and sagittal images, the distance of the object from the mirror surface was assumed to be infinity and 5 times the mirror radius, respectively. The spread of the tangential image points for the spherical mirror is very small, and all the image points lie within the depth of field. On the other hand, the sagittal image points in the peripheral area do not lie within the depth of field. To estimate the influence of this situation, one needs to go into the details. As Figs. 9 to 11 show, by selecting the best focus distance one can keep the blur size below about 1 pixel in the tangential direction, which makes the blur-spot area very small.

3.5. Equidistant Mirror

In the circular image plane, the radial image height is proportional to the incident light angle. This is the same as the constant-angular-resolution mirror.15 In the case of an orthographic system, the equidistant condition reduces, via Eqs. (4) and (7), to a simple differential equation, and the mirror shape is obtained directly. In the case of a perspective system, the equidistant condition leads to a differential equation for the mirror shape that must be solved.
However, it is difficult to obtain the solution of this equation by mathematical analysis. A relatively simple way to realize a curved mirror with the desired characteristics is to use a polynomial expression; an almost equidistant mirror can be realized with a three-term polynomial. In the peripheral area in Fig. 10(b), the images are enlarged in the circumferential direction, so the image area becomes larger, but the circular image of this mirror looks natural. With this mirror, the distributions of the tangential and sagittal image points are relatively large, so the blur size over the entire range of incident angles could not be tolerated. To reduce this effect, one must use a larger lens F-number. If this problem is solved, this mirror may be used in systems with a large incident-light angle.

3.6. Single-Viewpoint Mirror (Hyperboloidal Mirror)

In Eq. (6), if the viewpoint is constant, a single viewpoint is realized. In the case of an orthographic system, a paraboloidal mirror satisfies this condition.5,11 This mirror also satisfies the constant-aspect-ratio condition; that is, the aspect ratio in Eq. (12) is 1. In the case of a perspective system with a hyperboloidal mirror, if the principal point of the lens is set at one of the focal points, incident light from various angles is projected to the other focal point. This single viewpoint is the main advantage of a hyperboloidal mirror. In the hyperboloidal mirror case, the rate of change of image height becomes larger as the incident angle increases, which restricts the use of the hyperboloidal mirror to small incident angles. With this mirror, the image of a small square object remains almost square as the incident angle increases, but its area changes rapidly, as shown schematically in Fig. 10(c). In the peripheral part of the image plane the image area is very large, but it becomes very small in the central part. Therefore, at large incident angles, images of large rectangular objects appear as trapezoids.
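The focal-point property just described can be verified numerically. The sketch below is this article's own construction, using the standard hyperboloid form z²/a² − r²/b² = 1 with foci at z = ±c, c = √(a² + b²): it reflects a ray aimed at the upper focus and checks that the reflected ray passes through the lower focus, where the lens principal point would sit.

```python
import math

def reflect_toward_focus(r, a, b):
    """For the hyperboloid z^2/a^2 - r^2/b^2 = 1 (upper branch), reflect a
    ray aimed at the upper focus (0, +c) off the surface at radial height r
    and return the cross product of the reflected direction with the vector
    to the lower focus (0, -c); zero means the ray passes through (0, -c)."""
    c = math.hypot(a, b)
    z = a * math.sqrt(1.0 + (r / b) ** 2)
    # Incoming direction: from the surface point toward the upper focus.
    d = (0.0 - r, c - z)
    dn = math.hypot(*d)
    d = (d[0] / dn, d[1] / dn)
    # Unit surface normal from the gradient of z^2/a^2 - r^2/b^2 - 1.
    g = (-2.0 * r / b**2, 2.0 * z / a**2)
    gn = math.hypot(*g)
    n = (g[0] / gn, g[1] / gn)
    dot = d[0] * n[0] + d[1] * n[1]
    u = (d[0] - 2*dot*n[0], d[1] - 2*dot*n[1])   # reflected direction
    to_f1 = (0.0 - r, -c - z)                    # toward the lower focus
    return u[0] * to_f1[1] - u[1] * to_f1[0]     # 2-D cross product
```

The cross product vanishes for every r, which is the geometric reason a hyperboloidal mirror with the lens at one focus has a single viewpoint at the other.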
With the hyperboloidal mirror, the tangential and sagittal image points appear at the same positions, so no astigmatism results. But their distribution is very large, and it is impossible to focus the entire image within the depth of field. In Figs. 9 to 11, the focus distance was set on the peripheral area to minimize the blur-spot size there; in the central area, however, the blur size was very large. Therefore, a hyperboloidal mirror is not suitable for use over a large field of view.

4. Image-Processing Software

4.1. Principles of Image Processing

The circular image produced by a spherical mirror is distorted, and it is therefore difficult for a human observer to view. To correct the image, the authors developed new image-processing software that can transform a circular image into any desired shape, such as a wide rectangular panorama image, a conventional rectangular image, or a compensated circular image. Fig. 13 illustrates the principle of this image processing. The processing assumes a virtual screen onto which images are projected with a virtual projector. The virtual projector is equivalent to the imaging system, including the mirror described above; its light rays are formed by reverse ray tracing of the rays focused on the image sensor. The virtual screen is equivalent to the processed image itself. One can freely change its size and its spatial relationship to the projector. Various image effects, such as distant view, wide-angle view, and super-wide-angle view, are possible. In addition, one can produce panning and tilting effects by moving the screen left or right, or up and down. The correction applied is determined by the shape of the virtual screen. Two examples of virtual screens are described below. One is the "cylindrical-surface virtual screen." When this cylindrical screen is placed so as to surround the light source, it can display an image with a horizontal viewing angle of up to 360 deg.
Because the actual display is on a flat plane, the cylindrical surface must be unrolled into a flat surface. The other example is the "spherical-surface virtual screen." As this screen covers the entire field of view, the horizontal viewing angle is 360 deg and the vertical viewing angle 0 to 135 deg. A flat image is obtained by transforming the spherical surface; the radial height of the displayed image is proportional to the incident light angle. All image-processing operations were executed in software on a personal computer. With a commercially available computer using a high-speed processor (3.3 GHz) and equipped with a USB 2.0 interface, one could acquire images at 15 fps and execute the processing in real time.

4.2. Influence of the Viewpoint Distribution of Incident Light

In the virtual-projector system, it was assumed that the light was projected from a single viewpoint. With curved mirrors, except for the single-viewpoint mirror, the viewpoint of the incident light varies with the incident light angle. When an object is located sufficiently far away, the viewpoint distribution causes no error in the image plane. On the other hand, when the object is close by, problems such as parallax or angle error occur, as discussed below. The influence on the image was examined for an object located at a distance of a given multiple of the mirror radius: if a single viewpoint is assumed while the actual viewpoint lies elsewhere, there is an angle error, which follows from the geometry of Fig. 14, and the number of pixels equivalent to this angle error can be calculated from the image characteristics. The calculated error for the spherical mirror, which showed the largest viewpoint distribution, is shown in Fig. 15. By selecting a proper assumed viewpoint position, one can minimize the error. For the prototype mirror, if the object is located at a distance of about 180 cm, the error remains small; at a distance of about 30 cm, it becomes appreciable.
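As a rough stand-in for the paper's exact construction (whose equation is not reproduced here), a small-angle estimate in Python: if the assumed single viewpoint is displaced by delta_z along the axis, an object at distance D and incident angle phi appears shifted by roughly delta_z·sin(phi)/D radians, so the error falls off inversely with object distance.

```python
import math

def angle_error_deg(delta_z, D, phi_deg):
    """Small-angle estimate (this article's approximation, not the paper's
    equation) of the direction error caused by assuming a single viewpoint
    displaced by delta_z along the mirror axis, for an object at distance D
    seen at incident angle phi_deg."""
    return math.degrees(delta_z * math.sin(math.radians(phi_deg)) / D)
```

Doubling the object distance halves the error, which matches the qualitative behavior described above for far and near objects.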
5. Prototype Development

5.1. Development of the Spherical Mirror

Experimental models of hyperboloidal and spherical mirrors were constructed and tested. The hyperboloidal mirror was precisely machined from an aluminum block. Its maximum viewing angle was 105 deg at a diameter of 60 mm. Focusing over the entire area of a 1-Mpixel image sensor was found to be very difficult. The spherical mirror was made as described below; its maximum viewing angle was 135 deg at a diameter of 88 mm. With a 1-Mpixel image sensor, good focusing performance was confirmed, as predicted theoretically. The main advantages of the spherical mirror are its good focusing performance and its ease of manufacture. Therefore, despite its disadvantages, the spherical mirror was adopted for practical use in the omnidirectional vision sensor of this study, which was required to image a large field of view at high resolution. A plastic hemispherical mirror was developed by vacuum forming: a plastic substrate was heated and softened, clamped between a base plate and another plate having a circular hole, and formed into a hemisphere by vacuum suction. The surface of the plastic hemisphere was then metal-plated using a method similar to that used for automobile parts. The mirror thus produced had high reflectance, sufficient strength, and low weight. The diameter of the hemisphere was 100 mm.

5.2. Spherical-Mirror Omnidirectional Vision Sensor

Three prototype models were developed. The first model, shown in Fig. 16(a), had a structure in which the spherical mirror was supported by a transparent plastic rod. The mirror was not covered by any material, so that good images could be obtained. The rod was cut to a trapezoidal cross section so that its side does not show up in the images. Photographs taken with this model and processed with the authors' software are shown in Fig. 17. The second model, shown in Fig. 16(b), was developed for outdoor use.
The mirror was supported by a waterproof, dustproof, transparent cover and placed in a spherical globe to prevent internal reflections around the cover. The third model, shown in Fig. 16(c), was developed for use with a digital still camera or video camera for outdoor photography. This model was compact and portable. In some applications, this camera can be used upside down to obtain superior image characteristics at angles of elevation.

6. Conclusions

To assess the performance of omnidirectional vision sensors, the authors evaluated the image-forming characteristics and the virtual image points, besides estimating the blur-spot size. A practical design method was also proposed to satisfy the required characteristics. The evaluation method presented in this paper is also useful for designing many other omnidirectional mirrors, such as equiareal, paraboloidal, or ellipsoidal mirrors. A conventional perspective lens system was adopted for the camera. Many drawbacks of this system can be overcome by using a specially designed lens system, for example, a telecentric camera lens for a spherical mirror. Therefore, it is important to consider the overall system design, including the lens and the mirror, when developing better omnidirectional cameras.

Appendix: Calculation of the Blur-Spot Size and the Depth of Field

The size of the blur spot of a virtual image in the object plane is calculated simply from the triangles shown in Fig. 18, with the help of the lens equation. The lens has a focal length, an aperture diameter, and an f-stop number, the last being the focal length divided by the aperture diameter. The image sensor plane is located at some distance from the lens, and the object at some distance from the lens on its other side. If the object image is precisely in focus, these two distances and the focal length satisfy the thin-lens equation. If the object is moved to a different distance from the lens, its image forms at a correspondingly different distance.
This gives the relation between the shifted object and image distances. At the image sensor plane, the object's image is then out of focus and blurred. The size of the blur is calculated from similar triangles; from Eqs. (24) to (27), the blur size is expressed by Eq. (29), from which one can calculate its distribution. If the blurred image is smaller than the so-called "permissible circle of confusion," humans cannot recognize the blur, and hence it can be tolerated; here the permissible circle of confusion must be taken as the size of 1 pixel of the image sensor. When the blur just reaches this limit on either side of the focus, the corresponding near and far object distances follow. The tolerable region along the optical axis between them is the depth of field, whose expression involves the so-called "hyperfocal distance." In omnidirectional mirrors, the virtual image points of the tangential and sagittal images differ. In both cases the blurred images are oval, but their sizes usually differ, so the selection of the in-focus distance is important to minimize the distribution of blur. Using Eq. (29), one can calculate the size of the blurred image in pixel units in the circular image plane. But each pixel subtends a different solid angle; blur size expressed in solid view angle can be calculated using Eqs. (36) and (37). Solid angles are usually expressed in steradians (sr), but in this paper they are expressed in square degrees.

Acknowledgments

The authors express their gratitude to Takashi Hirose and Koji Mori, a former president and manager of Rosel Electronics Corporation, respectively, for their support of this research. They also thank K. P. Thompson, ORA vice president of optical engineering services, who suggested the book Conic Mirrors by Brueggemann for their reference.

References

1. Y. Yagi and S. Kawato,
"Panorama scene analysis with conic projection," in Proc. IEEE Int. Workshop on Intelligent Robots and Systems (IROS '90), pp. 181–187 (1990).
2. J. Hong, X. Tan, B. Pinette, R. Weiss, and E. M. Riseman, "Image-based homing," in Proc. 1991 IEEE Int. Conf. on Robotics and Automation, pp. 620–625 (1991).
3. A. Ohte, O. Tsuzuki, and K. Mori, "A practical spherical mirror omnidirectional camera," in Proc. ROSE 2005 IEEE Int. Workshop on Robotic Sensors and Environments, pp. 8–13 (2005). http://www1.parkcity.ne.jp/ohtephoto/QA/05ROSE2005FinalManuscript2.pdf
4. K. Yamazawa, Y. Yagi, and M. Yachida, "Omnidirectional imaging with hyperboloidal projection," in Proc. IEEE/RSJ Conf. on Intelligent Robots and Systems, pp. 1029–1034 (1993).
5. S. K. Nayar, "Catadioptric omnidirectional camera," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 482–488 (1997).
6. H. Ishiguro, "Development of low-cost omnidirectional vision sensors and their applications," in Proc. Int. Conf. on Information Systems, Analysis and Synthesis, pp. 433–439 (1998).
7. S. K. Nayar and V. Peri, "Folded catadioptric cameras," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 217–223 (1999).
8. A. M. Bruckstein and T. J. Richardson, "Omniview cameras with curved surface mirrors," Bell Labs Tech. Memo, pp. 1–6 (1996).
9. S. Baker and S. K. Nayar, "A theory of catadioptric image formation," in Proc. IEEE 6th Int. Conf. on Computer Vision, pp. 35–42 (1998).
10. K. Daniilidis and C. Geyer, "Omnidirectional vision: theory and algorithms," in Proc. IEEE 15th Int. Conf. on Pattern Recognition, pp. 89–96 (2000).
11. S. Derrien and K. Konolige, "Approximating a single viewpoint in panoramic imaging devices," in Proc. 2000 IEEE Int. Conf. on Robotics and Automation, pp. 3931–3938 (2000).
12. S.-S. Lin and R. Bajcsy, "Single-viewpoint, catadioptric cone mirror omnidirectional imaging theory and analysis," J. Opt. Soc. Am. A 23, 2997–3015 (2006). http://dx.doi.org/10.1364/JOSAA.23.002997
13. H. P. Brueggemann, Conic Mirrors, The Focal Press, London and New York (1968).
14. G. S. Monk, Light: Principles and Experiments, McGraw-Hill, New York (1937).
15. R. A. Hicks and R. K. Perline, "Equi-areal catadioptric sensors," in Proc. Third Workshop on Omnidirectional Vision, pp. 13–18 (2002).
Biography

Akira Ohte received his BS and Dr. Eng. degrees in applied physics from Tokyo University in 1961 and 1980, respectively. In 1961, he joined Yokogawa Electric Works Ltd., Tokyo, Japan, where, as a research engineer, he developed transducers, sensors, analog control systems, and precision NQR thermometers. He also promoted R&D of medical instruments and coherent optical measuring instruments. In 1990, he was appointed division manager of the Electronic Devices Division, and in 1996, vice president and director of corporate R&D for the Yokogawa Electric Corporation. In 1998, he was appointed vice president of Yokogawa Research Institute Corporation. Since retiring from corporate management, he has worked as a research consultant. He is a Life Fellow of the IEEE, a Fellow of the SICE Japan, and a member of SPIE.

Osamu Tsuzuki received his BE degree in precision mechanics from Chuo University in 1972 and the Diplom-Ingenieur degree in construction and manufacturing from Technische Universität Berlin, Germany, in 1978. His research interests include numerical analysis, computational mechanics, and artificial intelligence. He has served as director of the Development Department at Rosel Electronics Corporation, Tokyo, Japan, since 1988, where he works on applications of soft-computing techniques.