16 August 2017
To make a further explanation on some questions about optical imaging using a light field camera
Yun Fu, Xiping Xu, Peng Yin
Proceedings Volume 10452, 14th Conference on Education and Training in Optics and Photonics: ETOP 2017; 104523D (2017) https://doi.org/10.1117/12.2270969
Event: 14th Conference on Education and Training in Optics and Photonics, ETOP 2017, 2017, Hangzhou, China
Abstract
In traditional optics, focusing must be done before the exposure. A light field camera, however, makes it possible to take a photograph first and focus afterwards. The principle of a light field camera involves several fundamental topics in optical imaging, including pinhole imaging, depth of focus, digital refocusing, and synthetic aperture imaging. Learning the theory and experiments of a light field camera therefore makes these topics easier for students to understand, and the acquisition and processing of its images also draw on further optical knowledge. In this paper we discuss the similarities and differences in optical properties among the pinhole, the convex lens, and the light field camera. Our intention is to make these optical theories easier for students to understand in our teaching work.

1. INTRODUCTION

As a carrier, a light ray carries a great deal of information from an object. The light field [1] describes the amount of light traveling in every direction through every point in space, and shows the radiant properties of a point in a given direction. The light field is a radiance function that maps the position and traveling direction of a light ray to its radiant intensity. The two-plane light field theory presented by Levoy [2] in 1996 is a common way to describe the light field: a ray is parameterized by the pair of points at which it intersects two parallel planes. A light field camera [3] can capture the radiant intensity of a light ray in space in a given direction, and can reconstruct visual information for any direction after data processing. Therefore, a light field camera makes it possible to refocus pictures after they are taken. Below we give a further explanation of some optical knowledge related to a light field camera.

2. OPTICAL IMAGING PRINCIPLE

2.1 Pinhole imaging

When a board with a pinhole is placed between an object and a receiving screen, an inverted image of the object appears on the screen; this is pinhole imaging. Because light travels in straight lines in a homogeneous medium, the pinhole [4] acts as a low-pass filter for spatial frequencies in the image plane of the beam, as shown in Figure 1(a). The resolution depends on the size of the pinhole: the smaller the pinhole, the sharper but also the darker the image. We simulated the imaging process using TracePro. Suppose the pinhole diameter is 1.5 mm, the object distance is 140 mm, and the source emits a luminous flux of 1 lm. When the imaging plane is 140 mm away from the pinhole, the irradiance in the imaging plane, including its maximum, minimum, and average values, is shown in Figure 1(b). The average irradiance in the imaging plane is 0.0004 W/m².
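The clarity-brightness trade-off can be illustrated with a simple geometric estimate: the blur spot cast by a point source shrinks with the pinhole diameter, while the collected light grows with its area. The following Python sketch is only an illustration using the paper's 140 mm distances and assumed pinhole diameters; it is not the TracePro simulation.

```python
import math

# Geometric pinhole-camera trade-off: a smaller pinhole gives a sharper but
# dimmer image.  Illustrative sketch only; distances follow the paper's
# example, pinhole diameters are assumed for illustration.
u = 140.0   # object distance (mm)
v = 140.0   # image (screen) distance (mm)

for d in (0.1, 0.5, 1.5, 3.0):                  # pinhole diameters (mm)
    blur = d * (u + v) / u                      # geometric blur-spot diameter of a point source
    area = math.pi * (d / 2.0) ** 2             # light-collecting area, proportional to image brightness
    print(f"pinhole {d:4.1f} mm -> blur spot {blur:5.2f} mm, collecting area {area:7.4f} mm^2")
```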

Figure 1. Pinhole imaging

Figure 2. Convex lens imaging

2.2 Convex lens imaging

To resolve the trade-off between clarity and brightness, the pinhole is replaced with a convex lens so that a clear and bright image is obtained, as shown in Figure 2(a). In a convex lens imaging system, a beam of light emitted or reflected by a point in object space is incident on the lens, refracted, and converged to a point in image space. From the viewpoint of collecting light, a convex lens [5] is similar to a pinhole, but the irradiance obtained in the imaging plane of the two systems is totally different. We simulated the lens imaging process with the same software, replacing the pinhole with a convex lens in the optical path while keeping the other parameters essentially the same, such as the object distance, the image distance, and the luminous flux of the source. The lens diameter is 10 mm and its focal length is 70 mm. The resulting irradiance in the imaging plane is shown in Figure 2(b). Compared with the pinhole, the radiant intensity transmitted by the lens is much stronger: the average irradiance in the imaging plane is 0.004 W/m², ten times that obtained through the pinhole.
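As a quick check, the thin lens equation confirms that a 70 mm lens with a 140 mm object distance images onto a plane 140 mm behind the lens, matching the pinhole geometry, and the aperture areas give a rough sense of how much more light the lens collects. This is a minimal sketch using the paper's numbers; the tenfold irradiance ratio quoted above comes from the TracePro simulation, not from this back-of-the-envelope calculation.

```python
# Thin-lens check of the simulated geometry and a rough comparison of the
# light-collecting areas of the 10 mm lens and the 1.5 mm pinhole.
f, u = 70.0, 140.0                 # focal length and object distance (mm)
v = 1.0 / (1.0 / f - 1.0 / u)      # thin-lens equation: 1/f = 1/u + 1/v
print(f"image distance v = {v:.1f} mm")     # 140.0 mm, same as the pinhole setup

lens_d, pinhole_d = 10.0, 1.5      # aperture diameters (mm)
ratio = (lens_d / pinhole_d) ** 2  # collected flux scales with aperture area
print(f"lens collects roughly {ratio:.0f}x more light than the pinhole")
```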

2.3 Light field imaging

A traditional camera projects an image of the external scene onto a flat surface, but loses the direction information of the light rays. A light field camera, by contrast, records both the position and the direction of each ray in space. In the two-plane light field theory [6], the pupil plane of the light field camera is defined as the U plane and the detector's target plane as the S plane. LF(u, v, s, t) denotes a light ray that intersects the U plane at position (u, v) and the S plane at (s, t) inside the camera, as shown in Figure 3.
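In software, this two-plane parameterization is naturally stored as a 4D array indexed by (u, v, s, t). The Python sketch below is illustrative only; the array sizes are assumptions, not values from the paper.

```python
import numpy as np

# Two-plane parameterization L(u, v, s, t): (u, v) indexes the sample on the
# main-lens (U) plane and (s, t) the sample on the sensor / micro-lens (S)
# plane.  Sizes are illustrative assumptions.
n_u, n_v = 4, 4        # angular samples on the U plane
n_s, n_t = 64, 64      # spatial samples on the S plane

L = np.zeros((n_u, n_v, n_s, n_t))   # radiance of every captured ray

# The radiance carried by the single ray through (u, v) on the U plane and
# (s, t) on the S plane is one entry of the array:
u, v, s, t = 1, 2, 10, 20
print("L(u, v, s, t) =", L[u, v, s, t])
```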

Figure 3. Four parameters method

When the center of the main lens is placed at the positions (0,0), (1,0), (0,1), and (1,1) in the U plane in turn, the whole light field between the U plane and the S plane can be captured. This is similar to a camera array with many sampling points in the U plane. In fact, because light travels in straight lines in geometrical optics, the images captured from different lens positions in the U plane are geometrically similar, slightly shifted views of the same scene, as shown in Figure 4.
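Equivalently, fixing (u, v) in the stored light field and keeping all (s, t) gives one sub-aperture image, i.e., the picture that one camera of the equivalent camera array would record. A minimal sketch on synthetic data (shapes and values are assumptions):

```python
import numpy as np

# Each fixed (u, v) slice of the 4D light field is a sub-aperture image.
# Varying (u, v) mimics the camera-array sampling described in the text.
rng = np.random.default_rng(0)
L = rng.random((4, 4, 64, 64))          # L[u, v, s, t], illustrative light field

sub_view_00 = L[0, 0]                   # view from U-plane sample (0, 0)
sub_view_11 = L[1, 1]                   # view from U-plane sample (1, 1)
print(sub_view_00.shape, sub_view_11.shape)   # both (64, 64)
```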

Figure 4. Light field imaging

3. DIGITAL AUTOFOCUS PRINCIPLE IN A LIGHT FIELD CAMERA

The more rays converge to a point in the imaging plane, the larger the irradiance at that point and the clearer the image. Focusing [7] means adjusting the detector position until it captures the clearest image, as shown in Figure 5.

Figure 5. Defocused image

In geometrical optics, parallel light rays passing through a perfect optical system converge to a point in the focal plane and to a defocused spot in any defocused plane. When a point in object space is brought to focus at a point in image space, points in its foreground and background also form images on the same focal plane, but as blur spots; such a blur spot is acceptable when its diameter is equal to or less than the allowable blur-spot diameter. The maximum axial range over which foreground and background points still produce acceptable blur spots is called the depth of field in object space and the depth of focus in image space. Physically, focusing means moving the detector back and forth until the defocused spot is smallest. In a traditional camera, focusing is done manually or automatically before the exposure. In a light field imaging system, however, because the four-parameter light field is captured, the focal plane and the ray directions can be transformed arbitrarily by data processing. This is the so-called digital autofocus [8].
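The allowable blur-spot criterion can be made concrete numerically: for a thin lens, a point away from the focused distance forms a spot on the sensor whose diameter follows from similar triangles across the converging cone, and the depth of field is the range of object distances for which that spot stays below the allowable diameter. The parameters in the sketch below (aperture, allowable blur spot) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Numeric illustration of depth of field via the allowable blur-spot
# (circle-of-confusion) criterion.  All parameters are illustrative.
f, D, c = 70.0, 10.0, 0.05        # focal length, aperture diameter, allowable blur spot (mm)
u0 = 140.0                        # focused object distance (mm)

def image_distance(u):
    """Thin-lens equation: 1/f = 1/u + 1/v."""
    return 1.0 / (1.0 / f - 1.0 / u)

v0 = image_distance(u0)           # sensor position for the focused plane

def blur_spot(u):
    """Diameter of the defocused spot on the sensor for a point at distance u."""
    v = image_distance(u)
    return D * abs(v - v0) / v    # similar triangles across the converging cone

# Scan object distances and keep those whose blur spot is acceptably small.
u = np.linspace(110.0, 200.0, 2000)
sharp = u[np.array([blur_spot(x) for x in u]) <= c]
print(f"depth of field: {sharp.min():.1f} mm to {sharp.max():.1f} mm")
```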

In Figure 6, L(u, s) represents the light field captured by the imaging system, where U is the plane containing the center of the main lens, S is the plane of the micro-lens array, and l is the distance between the two planes. S' represents a new focal plane, and the distance between S' and the center of the main lens is l'. Let us suppose l' = αl. The image irradiance value in the S' plane can then be calculated with formula (1).

$$E_{l'}(s') = \frac{1}{l'^{2}} \int L_{l'}(u, s')\, du \qquad (1)$$

Figure 6. Resampling of the 4D light field using digital autofocus

Figure 6 shows that a ray intersecting the U plane and the S plane also intersects the S' plane. Writing L_{l'}(u, s') for the same light field re-parameterized by the U and S' planes, the radiance along the ray is unchanged, so formula (2) holds.

$$L_{l'}(u, s') = L(u, s) \qquad (2)$$

By similar triangles, formula (3) is obtained,

$$\frac{s' - u}{s - u} = \frac{l'}{l} = \alpha \qquad (3)$$

and rearranging formula (3) gives formula (4).

$$s = u\left(1 - \frac{1}{\alpha}\right) + \frac{s'}{\alpha} \qquad (4)$$

Finally, substituting (2) and (4) into (1) yields the simplified formula (5).

$$E_{\alpha l}(s') = \frac{1}{\alpha^{2} l^{2}} \int L\!\left(u,\; u\left(1 - \frac{1}{\alpha}\right) + \frac{s'}{\alpha}\right) du \qquad (5)$$

Formula (5) shows that digital autofocus projects the light field onto a new focal plane: the light field is translated in the positional coordinate and then integrated over the directional coordinate.
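Formula (5) can be evaluated numerically by sampling each directional slice of the light field at the shifted spatial coordinates of formula (4) and summing over the directional coordinates. The Python sketch below does this for a 4D light field on synthetic data; expressing the U-plane sample positions in sensor-pixel units and dropping the constant 1/l² factor are simplifications of this sketch, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def refocus(L, alpha, u_coords, v_coords):
    """Digital refocusing following formula (5):
    E(s',t') ~ (1/alpha^2) * sum_{u,v} L(u, v, u(1-1/alpha)+s'/alpha, v(1-1/alpha)+t'/alpha).
    L                  : 4D array indexed [u, v, s, t]
    u_coords, v_coords : U-plane sample positions in S-plane pixel units (an
                         assumption of this sketch); the 1/l^2 constant is dropped."""
    n_u, n_v, n_s, n_t = L.shape
    s_out, t_out = np.meshgrid(np.arange(n_s, dtype=float),
                               np.arange(n_t, dtype=float), indexing="ij")
    E = np.zeros((n_s, n_t))
    for iu, u in enumerate(u_coords):
        for iv, v in enumerate(v_coords):
            # Where formula (4) says the stored light field must be sampled;
            # out-of-range samples are clamped to the edge.
            s_in = u * (1.0 - 1.0 / alpha) + s_out / alpha
            t_in = v * (1.0 - 1.0 / alpha) + t_out / alpha
            E += map_coordinates(L[iu, iv], [s_in, t_in], order=1, mode="nearest")
    return E / (alpha ** 2 * n_u * n_v)

# Tiny usage example on a synthetic light field (4x4 angular, 64x64 spatial).
rng = np.random.default_rng(0)
L = rng.random((4, 4, 64, 64))
u_coords = v_coords = np.linspace(-1.5, 1.5, 4)     # illustrative U-plane positions
near = refocus(L, alpha=0.8, u_coords=u_coords, v_coords=v_coords)
far  = refocus(L, alpha=1.2, u_coords=u_coords, v_coords=v_coords)
print(near.shape, far.shape)
```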

4. APPLICATION OF AUTO-FOCUS

Based on the distribution of the light field inside the light field camera, a new virtual focal plane can be constructed in front of or behind the original one. After calculating the intersection positions and radiance contributions of all rays on the new focal plane, an image [9] on that plane can be reconstructed; an example is shown in Figure 7. In this way, images focused on different planes can be obtained.

Figure 7. The focusing on different object planes
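Sweeping α over a range of values turns a single exposure into the multi-focus image sequence described above. The short snippet below reuses the illustrative refocus() sketch from Section 3 on the same synthetic light field, so it is an assumption-laden example rather than the authors' pipeline.

```python
import numpy as np

# Build a focal stack (one image per virtual focal plane) by varying alpha,
# using the refocus() sketch defined in Section 3 (assumed to be in scope).
rng = np.random.default_rng(0)
L = rng.random((4, 4, 64, 64))
u_coords = v_coords = np.linspace(-1.5, 1.5, 4)

alphas = np.linspace(0.7, 1.3, 7)                       # virtual focal planes
focal_stack = np.stack([refocus(L, a, u_coords, v_coords) for a in alphas])
print(focal_stack.shape)                                # (7, 64, 64)
```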

5. CONCLUSION

Taking a light field camera as an example, students can understand pinhole imaging, convex lens imaging, and light field imaging more clearly, and can also become familiar with the less intuitive parts of light field theory. Using the digital refocusing mode of a light field camera, an image on an arbitrary imaging plane can be reconstructed from a single exposure, so the camera can focus quickly even with a large aperture. Furthermore, from the sequence of images generated for different object planes, a light field camera can compose a full depth-of-field image and estimate depth information for objects in 3D space.

REFERENCES

[1]

[2] Levoy, M. and Hanrahan, P., "Light Field Rendering," Proc. ACM SIGGRAPH, 31–42 (1996).

[3]

[4] Priambodo, P. S., Darusalam, U. and Rahardjo, E. T., "Free-Space Optical Propagation Noise Suppression by Fourier Optics Filter Pinhole," International Journal of Optics & Applications, 5(2), 27–32 (2015).

[5] Ziegler, M. and Priemer, B., "From the Pinhole Camera to the Shape of a Lens: The Camera-Obscura Reloaded," Physics Education, 50(6), 706–712 (2015). https://doi.org/10.1088/0031-9120/50/6/706

[6] Levoy, M., "Light Field Rendering," Conference on Computer Graphics and Interactive Techniques, ACM, 31–42 (1996).

[7] "Light Field Camera Lytro, the Principle and Algorithms," https://wenku.baidu.com/view/e213a7a9ba1aa8114531d969.html

[8] Zhou, Z., "Research on Light Field Imaging Technology," Ph.D. dissertation, University of Science and Technology of China (2012).

[9] Ng, R., "Digital Light Field Photography," Ph.D. dissertation, Stanford University (2006).
KEYWORDS: Cameras, Engineering education, Image processing, Optical imaging, Photography, Synthetic aperture imaging
