Application issues in the use of depth from (de)focus analysis methods
19 May 2011
Recovering 3D object information by analyzing image focus (or defocus) has been shown to be a promising tool in situations where only a single viewing point is available. Precise modeling and manipulation of imaging system parameters, e.g. depth of field, modulation transfer function and sensor characteristics, as well as lighting conditions and object surface characteristics, are critical to the effectiveness of such methods. Performance becomes sub-optimal when one or more of these parameters is dictated by other factors. In this paper, we discuss the implicit requirements imposed by the most common depth from focus/defocus (DFF/DFD) analysis methods and offer related application considerations. We also describe how a priori information about the objects of interest can be used to improve performance in realistic applications of this technology.
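As a minimal sketch of the depth-from-focus principle discussed above (not the authors' method): for each pixel, choose the focal-stack slice that maximizes a sharpness measure. The 3x3 box-blur defocus model, the 4-neighbour Laplacian focus measure, and the synthetic two-level step scene below are all illustrative assumptions.

```python
import numpy as np

def box_blur(img, reps):
    """Repeatedly apply a 3x3 box filter (edge-replicated) as a crude defocus model."""
    out = img.astype(float)
    for _ in range(reps):
        p = np.pad(out, 1, mode="edge")
        out = sum(p[i:i + out.shape[0], j:j + out.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return out

def depth_from_focus(stack):
    """Per interior pixel, return the index of the slice with the
    strongest absolute 4-neighbour Laplacian (a common sharpness proxy)."""
    measures = []
    for img in stack:
        lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
               + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
        measures.append(np.abs(lap))
    return np.argmax(np.stack(measures), axis=0)

# Synthetic scene: random texture, left half at depth 0, right half at depth 2.
rng = np.random.default_rng(0)
texture = rng.random((40, 40))
true_depth = np.zeros((40, 40), dtype=int)
true_depth[:, 20:] = 2

# Focal stack: slice z is sharp where true_depth == z, blurred in
# proportion to |z - depth| elsewhere.
stack = []
for z in range(3):
    left = box_blur(texture, abs(z - 0))
    right = box_blur(texture, abs(z - 2))
    stack.append(np.where(true_depth == 0, left, right))

recovered = depth_from_focus(stack)
```

Most interior pixels recover their true depth index; errors concentrate near the depth discontinuity, which hints at the boundary and texture-dependence issues the paper discusses.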
M. Daneshpanah, G. Abramovich, K. Harding, A. Vemury, "Application issues in the use of depth from (de)focus analysis methods," Proc. SPIE 8043, Three-Dimensional Imaging, Visualization, and Display 2011, 80430G (19 May 2011); https://doi.org/10.1117/12.886271