A crucial task in facial expression recognition is the classification of facial features in captured images. This classification task is challenging because facial features change dynamically across different facial expressions. Additionally, the captured face images are often degraded by additive noise, nonuniform illumination, geometrical modifications, and partial occlusions, increasing uncertainty in classification. Several successful methods for facial landmark classification based on machine learning have been proposed. This work presents a comparative study of existing classification methods for facial landmarks in image sequences degraded by noise, nonuniform illumination, and partial occlusions. The performance of the classification methods considered in the study is quantified in terms of accuracy using face images from well-known datasets. The study aims to provide useful insights into the efficacy of existing facial landmark classification methods under challenging conditions.
Binocular vision is an effective technique for depth estimation in outdoor applications. However, the performance of this technique can be affected by low-contrast images captured in the presence of scattering particles. This paper presents a binocular vision-based method for image dehazing and depth estimation in a scattering medium. First, the disparity map of the scene is computed from captured binocular images. Next, the atmospheric parameters of the medium and the scene depth are determined with a proposed estimation method. Finally, undegraded images of the scene are obtained using an atmospheric optics restoration method. The theoretical foundations of the proposed approach are reviewed, and an experimental validation is presented utilizing a laboratory platform.
Disparity refinement is a post-processing step in stereo vision that retrieves unknown disparity values caused by pixel occlusions or estimation errors. This step is crucial for improving depth estimation accuracy and reducing artifacts. In this work, we propose an iterative method based on genetic optimization to perform disparity refinement for stereo vision. The estimation of unknown disparity values is formulated as an optimization problem, where a fitness function is optimized by minimizing a trade-off between disparity variations and point correspondence errors. The proposed method achieves accurate refined disparity maps for stereo depth estimation. Computer simulation results are presented and discussed in terms of objective performance measures. Additionally, the results are compared with those obtained using a well-known existing method.
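The trade-off described in this abstract can be illustrated with a minimal sketch of a fitness function that a genetic optimizer would minimize. The one-dimensional formulation, the weight `lam`, and the out-of-bounds penalty are assumptions made for this sketch, not the paper's exact formulation.

```python
def refinement_fitness(disp, left, right, lam=0.5):
    """Trade-off between point-correspondence error and disparity variation.

    disp  : candidate disparity per pixel (list of ints)
    left  : left-image row intensities (list)
    right : right-image row intensities (list)
    lam   : hypothetical weight of the smoothness term
    Lower is better; a genetic optimizer would minimize this value.
    """
    # Data term: correspondence error between each left pixel and its match.
    data = 0.0
    for x, d in enumerate(disp):
        xr = x - d
        if 0 <= xr < len(right):
            data += abs(left[x] - right[xr])
        else:
            data += 255.0  # penalize correspondences that fall outside the image
    # Smoothness term: penalize abrupt disparity variations between neighbors.
    smooth = sum(abs(disp[i] - disp[i - 1]) for i in range(1, len(disp)))
    return data + lam * smooth
```

A candidate disparity map that explains the stereo pair with small, smooth variations scores lower than one with large correspondence errors.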
KEYWORDS: Cameras, Thermal imaging cameras, 3D modeling, Profilometers, 3D projection, Projection systems, Visible radiation, Thermography, Point clouds, 3D image processing
Conventional fringe projection profilometers utilize cameras and projectors in the visible spectrum. Nevertheless, some applications require profilometers with a complementary thermal camera for the infrared spectrum. Since the point cloud is computed from pixel correspondences between the visible camera-projector pair, the texture in the visible spectrum is obtained by direct association of color from each image pixel to its corresponding point in the cloud. Unfortunately, obtaining the texture from the thermal camera is not straightforward because of the absence of pixel-to-point correspondences. In this paper, a simple interpolation-based method for determining the texture of the reconstructed objects is proposed. The theoretical principles are reviewed, and an experimental verification is conducted using a visible-thermal fringe projection profilometer. This work provides a helpful framework for three-dimensional data fusion in advanced multi-modal profilometers.
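The interpolation step can be illustrated with plain bilinear sampling: once a reconstructed point is projected into the thermal image at fractional pixel coordinates, its texture value is interpolated from the four surrounding pixels. This is a generic sketch under that assumption; the paper's exact interpolation scheme may differ.

```python
def bilinear_sample(img, x, y):
    """Sample an image (list of rows) at fractional (x, y) by bilinear interpolation."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    # Interpolate along x on the two neighboring rows, then along y.
    top = (1 - fx) * img[y0][x0] + fx * img[y0][x1]
    bot = (1 - fx) * img[y1][x0] + fx * img[y1][x1]
    return (1 - fy) * top + fy * bot
```

In a visible-thermal profilometer, each cloud point would be projected through the thermal camera's model to obtain (x, y) before sampling.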
Current calibration methods for multimodal systems combining structured light and thermography use physical calibration targets. However, manufacturing defects in these targets are common, so these methods are prone to undesired errors. We propose a calibration method for a multimodal system (a visible camera, a projector, and a thermal imaging camera) that does not require the construction of a physical calibration target. For this purpose, with the help of an auxiliary camera, we use a digital screen to obtain the intrinsic parameters of the camera, and a mirror to obtain the intrinsic and extrinsic parameters of the projector and the thermal imaging camera. The experimental results demonstrate that it is possible to avoid the challenging task of fabricating physical targets without compromising the accuracy of the system calibration compared to conventional methods.
Correlation filters have been widely used in several pattern recognition applications. These filters can reliably detect and accurately locate a target with good tolerance to geometrical modifications and the presence of additive and nonoverlapping noise in the scene. This work presents an exhaustive performance evaluation of several advanced correlation filters for the task of printed character recognition. Several printed character strings in the English alphabet containing geometrical modifications and nonuniform illumination conditions are recognized using different advanced correlation filters. The performance of each tested filter is characterized in terms of efficiency of character recognition and accuracy of character location estimation.
Computation of three-dimensional information of an object from captured images is an important task in computer vision. The use of binocular vision for this task has been widely explored for years. However, the accuracy of three-dimensional reconstruction using binocular vision is conditioned by operating within a specific field of view. This work presents a three-dimensional reconstruction method based on multi-ocular vision. This method achieves higher accuracy in comparison with the conventional binocular approach. The performance of the proposed method is evaluated for three-dimensional reconstruction using images from an existing stereo dataset and a real laboratory experiment using four cameras.
KEYWORDS: Education and training, Signal to noise ratio, Facial recognition systems, Databases, Binary data, Matrices, Light sources and illumination, Statistical modeling, Statistical analysis, Nose
The detection and localization of landmarks in human face images is an essential task in many computer vision applications. This task is challenging because face images can contain geometric modifications due to gesticulations and pose changes, and degradation caused by noise or nonuniform illumination. This work presents an exhaustive evaluation of several state-of-the-art facial landmark detection methods. The performance of each tested method is characterized in terms of reliability of landmark detection and accuracy of landmark localization. Computational results obtained in facial landmark recognition using images from well-known datasets are presented, discussed and compared in terms of objective measures.
Fourier Transform Profilometry (FTP) is a powerful 3D reconstruction method based on structured-light projection suitable for dynamic shape measurements. A main feature of FTP is that it works using a single fringe pattern. However, the quality of the 3D reconstruction largely depends on the accuracy of first-order spectrum filtering. This work compares some representative spectrum filtering methods in different simulated situations, highlighting their advantages and drawbacks. This study provides a reference for the practical implementation of an FTP system.
KEYWORDS: Projection systems, Cameras, Calibration, Fringe analysis, Image processing, Camera calibration, 3D projection, 3D modeling, 3D image processing
Metric three-dimensional reconstruction by fringe projection profilometry requires calibrating the employed camera and projector. However, the calibration process is more difficult for projectors than for cameras. This work presents a reconstruction method where the projector parameters are not required explicitly. For this, we assume the projector follows the pinhole model and single-axis fringe projection is employed. The theoretical principles are explained, and the proposed method is validated experimentally by a metric three-dimensional reconstruction. The results provide a theoretical framework for further generalization, including implicit camera calibration and lens distortion, while keeping the metric reconstruction capability.
KEYWORDS: Pose estimation, Cameras, Matrices, 3D metrology, Singular value decomposition, 3D image processing, Video, Navigation systems, Error analysis, Global Positioning System
Location and pose estimation are essential tasks for robot navigation. Conventional global positioning systems can perform poorly due to environmental or indoor interference. Alternatively, vision-based location and pose estimation systems may be more suitable for indoor and outdoor applications. However, vision-based systems still need to improve their robustness and operational performance in uncontrolled environments. In this work, a visual pose estimation method for robot navigation in an uncontrolled environment is proposed. The theoretical principles of pose estimation are reviewed, and the usefulness of the proposed approach in a navigation sequence is shown. The results obtained show that the proposed method is feasible for robot navigation applications.
KEYWORDS: Cameras, Virtual reality, RGB color model, Human-machine interfaces, Coded apertures, 3D modeling, Visual process modeling, Video processing, Video, MATLAB
Modern advances in optical metrology and computer vision have provided an unprecedented ability to generate a wide variety of 3D digital content. The mouse, trackpad, and touch screens are typical 2D interactive interfaces of digital content. However, such interfaces are restrictive to manipulate 3D content such as models, object scans, and environments. In this work, a 3D pointer based on stereo vision to interact virtually with digital 3D objects is proposed. The theoretical principles and the experimental calibration procedure are provided. The proposed 3D pointer is evaluated experimentally by simple interaction routines with objects reconstructed by fringe projection profilometry.
Camera pose estimation is an essential task in many computer vision applications. A widely used approach for this task is given by the specification of several corresponding points in a pair of captured input and reference images. The effectiveness of these methods depends on the accuracy of the specified points and is very sensitive to outliers. This work presents an iterative method for camera pose estimation based on local image correlation. The pose of the camera is estimated by finding a homography matrix that produces the best match between local fragments of the reference image constructed around the specified points and their corresponding projective transformed fragments of the input image when using the estimated homography. The performance of the proposed method is tested by processing synthetic and experimental images.
KEYWORDS: Projection systems, Navigation systems, Image segmentation, Mobile robots, Digital imaging, Dynamical systems, Cameras, 3D image processing, 3D displays
Experimental platforms are necessary to evaluate the performance of algorithms in different navigation scenarios. Physical platforms require materials and time to create a single experimental scenario. This approach becomes impractical for exhaustive evaluation in different scenarios because of the prohibitive increase of resources, time, and space. This paper proposes a multi-projector system to mitigate the time and cost by projecting dynamically designed scenes for vehicle navigation experiments. Theoretical principles of perspective projection and mosaicing are reviewed. The dynamic platform is presented for different vehicle navigation cases. The results show that the proposed approach is feasible for vehicle navigation evaluation.
Three-dimensional object reconstruction is an essential task in many computer vision applications. In essence, it consists of first estimating the disparity of all corresponding points of an observed scene from a pair of stereo images and then determining the depth map of the scene by triangulation from the estimated disparity. Conventionally, the baseline is fixed in general-purpose stereo cameras. This can limit the resolution and robustness of the three-dimensional reconstruction. In this work, a multi-baseline stereo vision approach for three-dimensional object reconstruction is presented. The mathematical principles of multi-baseline stereo vision are provided. Additionally, experimental results of three-dimensional object reconstruction are presented and discussed in terms of objective measures.
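The triangulation step mentioned above follows the standard rectified-stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity. A minimal sketch:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Standard rectified-stereo triangulation: Z = f * B / d.

    f_px         : focal length in pixels
    baseline_m   : distance between the two camera centers, in meters
    disparity_px : horizontal pixel shift of the corresponding point
    Returns the depth Z in meters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px
```

The relation also shows why the baseline matters: for a fixed depth, a longer baseline yields a larger disparity and hence a finer depth resolution, which is the motivation for the multi-baseline approach.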
Vision systems have become a promising feedback sensor for robot navigation due to their ability to extract meaningful scene information. In this work, a multicamera system is proposed to estimate the position and orientation of an omnidirectional robot. For this, three calibrated devices (two smartphones and a webcam) are employed. Also, two badges of different colors are placed on the omnidirectional robot to detect its position and orientation. The obtained pose information is used as feedback for the robot trajectory controller. The results show that the proposed system is a useful alternative for the visual localization of ground mobile robots.
Different camera models are usually employed to address specific imaging processes, such as the telecentric, pinhole, and radial distortion models. Recently, the distorted pinhole camera model was developed, and several imaging processes were analyzed as particular cases. This paper shows that tangential distortion is also a particular case of the distorted pinhole camera model. The mathematical principles of this generalized approach are presented, and its usefulness is illustrated by distorting test images. The results show that the distorted pinhole camera model provides an advanced mathematical framework for computer vision and optical metrology applications.
Nowadays, computer vision is an essential part of modern autonomous mobile robots. Fisheye cameras are employed to capture large scenes with a single camera, but their strong radial distortion limits the accuracy of measurements. In this research, a vision system with multiple low-distortion cameras to capture large flat scenes from different viewpoints is proposed. This system applies a homography-based image mosaicing method and linear image interpolation. The obtained results show that the proposed system is useful for visual navigation of ground mobile robots.
Perspective distortion is a typical transformation reproduced by the pinhole model. However, camera lenses introduce radial distortion that reduces the accuracy of image processing tasks, such as lane detection for visual navigation. This paper proposes an image warping method based on the distorted pinhole camera model for lane detection applications. The theoretical principles of the imaging process are analyzed. The usefulness of this method is illustrated by estimating the pose of a ground vehicle using lane lines. The results show that the proposed approach is feasible for visual feedback in robot navigation applications.
An algorithm for the recognition and tracking of several objects in color image sequences is presented. First, each three-channel color input image is encoded into a single-channel complex-valued image. Next, a set of prespecified targets is recognized and located in the scene by composite matched filtering with complex constraints. Afterwards, the targets are tracked by adapting the matched filtering to each input image and by processing small image fragments extracted at the predicted coordinates of the targets in the scene. Results obtained with the proposed algorithm in a test image sequence are presented and analyzed in terms of efficiency of target recognition and accuracy of target tracking.
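One common way to fold three color channels into a single complex value assigns each channel a fixed phasor. The 0/120/240-degree phase assignment below is an assumption for illustration, not necessarily the encoding used in the paper:

```python
import cmath
import math

def encode_rgb(r, g, b):
    """Encode an RGB triple into one complex value using three fixed phasors.

    Each channel is weighted by a unit phasor at 0, 120, and 240 degrees,
    so distinct colors map to distinct magnitudes and phases.
    """
    return (r * cmath.exp(0j)
            + g * cmath.exp(2j * math.pi / 3)
            + b * cmath.exp(4j * math.pi / 3))
```

With this choice, a gray pixel (equal channels) maps to zero and a saturated channel maps to its corresponding phasor, so the complex plane acts as a compact chromaticity code for the subsequent matched filtering.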
Pose estimation is an essential task in many mobile robot navigation systems. Visual guidance provides a feasible means for pose estimation using the observed scene information as reference. This work presents an approach to estimate the pose of a mobile robot based on projective transformations. First, the Hough transform is used for lane detection. Next, a projective transformation is computed using the detected lines as reference. Finally, the robot's pose is estimated from the resulting projective transformation. The theoretical principles and computational implementation are analyzed. Experimental results of a visual navigation experiment are presented to validate the usefulness of the proposed approach.
Estimation of projective transformations is an essential process in modern vision-based applications. Usually, the provided experimental point correspondences required to estimate the projective transformations are corrupted with random noise. Thus, for an accurate estimation of the actual projective transformation, a robust optimization criterion must be employed. In this work, we analyze a two-step estimation approach for robust projective transformation estimation. First, the algebraic distance is employed to obtain an initial guess. Then, the geometric distance is used to refine this initial guess. Three geometric-based refining methods are evaluated, namely, the one-image error, the symmetric-transfer error, and reprojection. The obtained results confirm a high accuracy and robustness of the analyzed approach.
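The symmetric-transfer criterion mentioned above sums squared errors in both images: the forward mapping error d(x', Hx)² plus the backward error d(x, H⁻¹x')². A minimal sketch, assuming inhomogeneous 2-D points and a known inverse homography:

```python
def apply_h(H, p):
    """Apply a 3x3 homography (nested lists) to an inhomogeneous 2-D point."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def symmetric_transfer_error(H, Hinv, pts, pts_prime):
    """Sum over correspondences of d(x', Hx)^2 + d(x, H^-1 x')^2."""
    err = 0.0
    for p, q in zip(pts, pts_prime):
        qh = apply_h(H, p)      # forward transfer into the second image
        ph = apply_h(Hinv, q)   # backward transfer into the first image
        err += (q[0] - qh[0]) ** 2 + (q[1] - qh[1]) ** 2
        err += (p[0] - ph[0]) ** 2 + (p[1] - ph[1]) ** 2
    return err
```

In the two-step approach, the algebraic (DLT) solution would seed a nonlinear minimizer of this quantity over the entries of H.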
A parallel implementation of a proposed stereo vision algorithm for three-dimensional scene reconstruction is presented. The algorithm firstly estimates the disparity map of a scene from a pair of rectified stereo images using an adaptive template matched filter. Next, the estimated disparity is utilized to retrieve the three-dimensional information of the scene, by considering the stereo camera's intrinsic parameters. The proposed algorithm is implemented on an embedded graphics processing unit by exploiting massive parallelism for high-rate image processing. The performance of the proposed algorithm is evaluated in real-life scenes captured on a laboratory experimental platform in terms of accuracy and processing speed.
A reliable method for the detection and tracking of facial landmarks in image sequences is presented. Given a set of prespecified facial landmarks in a reference face image, a bank of composite matched filters is constructed for reliable detection and accurate location of the landmarks in an input image sequence. The filter bank is dynamically adapted to each captured frame by learning from current and past landmark detections and considering geometrical modifications of the landmarks. The detected landmarks are accurately tracked using a kinematic motion model that predicts their coordinates in future frames. The performance of facial landmark detection and tracking obtained with the proposed method is tested by processing real-life image sequences. The obtained results are analyzed and discussed in terms of objective measures.
A real-time system for restoration of images degraded by haze is presented. First, a transmission function estimator is automatically constructed using genetic programming. Next, the resultant estimator is employed to compute the transmission function of the scene by processing an input hazy image. Finally, the estimated transmission function and the hazy image are used in a restoration model based on atmospheric optics to obtain a haze-free image. The proposed method is implemented in a laboratory prototype for high-rate image processing. The performance of the proposed approach is evaluated in terms of objective metrics using synthetic and real-world images.
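The restoration step can be sketched per pixel with the standard atmospheric scattering model I = J·t + A·(1 − t), solved for the haze-free radiance J. The lower bound `t_min` on the transmission is a common numerical safeguard, assumed here:

```python
def dehaze_pixel(i, t, a, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    i : observed hazy intensity
    t : estimated transmission at this pixel (0..1)
    a : estimated airlight (atmospheric light) intensity
    Returns the restored haze-free intensity J.
    """
    # Clamp the transmission to avoid amplifying noise where t is tiny.
    return (i - a) / max(t, t_min) + a
```

In the proposed system, the transmission t would come from the estimator constructed by genetic programming, applied to the whole image.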
The capture of panoramic images requires the use of complex and specialized cameras. However, high-quality panoramic images can be constructed digitally by stitching several images captured with conventional low-cost cameras. In this work, an image stitching method based on projective transformations is proposed. The theoretical principles and computational implementation are presented. Experimental panoramic images are composed to validate the usefulness of our method.
Length measurements provide important information about the three-dimensional world. This is especially useful for decision making in robot vision, path planning in autonomous navigation, and people identification in security applications. In this work, we present a length measurement method based on perspective transformations using an uncalibrated camera. The theoretical principles are analyzed and the computational implementation is discussed. The usefulness of our proposal is verified experimentally by measuring relative lengths from experimental monocular images.
Uncalibrated camera-projector fringe projection systems are unable to provide metric three-dimensional measurements. The main difficulty for camera-projector calibration is that independent calibration of the devices is cumbersome and susceptible to alignment errors. In this paper, an efficient and accurate method for calibration of a camera-projector pair is proposed. The operating principle and computational implementation are analyzed. The metric measurement of a three-dimensional object is carried out to demonstrate the efficiency and accuracy of the proposed method.
Detection and description of local features in images is an essential task in robot vision. This task makes it possible to identify and uniquely specify stable, invariant regions in an observed scene. Many successful detectors and descriptors have been proposed. However, the proper combination of a detector and a descriptor is not trivial because there is a trade-off among different performance criteria. This work presents a comparative study of successful image feature detection and description methods in the context of the simultaneous localization and mapping problem. The considered methods are exhaustively evaluated in terms of accuracy, robustness, and processing time.
Template matching is an effective method for object recognition because it provides high accuracy in location estimation of targets and robustness to the presence of scene noise. These features are useful for vision-based robot navigation assistance, where reliable detection and location of scene objects is essential. In this work, the use of advanced template matched filters for robot navigation assistance is presented. Several filters are constructed by the optimization of objective performance criteria. These filters are exhaustively evaluated in synthetic and experimental scenes in terms of target detection efficiency, target location accuracy, and processing time.
Stereo matching is challenging due to the presence of perspective distortions and noise. Commonly, stereo matching algorithms utilize local matching techniques to determine the correspondence of two pixels of the same point in a scene. This work presents a stereo matching algorithm based on locally-adaptive windows and correlation filtering. The proposed algorithm estimates the disparity of each pixel by matching an adaptive sliding-window obtained from the left image in the right image of the stereo pair. Computer simulation results obtained with the proposed algorithm are analyzed and discussed by processing pairs of stereo images.
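A fixed-window version of this matching step can be sketched in one dimension: the disparity of a pixel is the shift that minimizes a sum-of-absolute-differences cost between a window in the left row and candidate windows in the right row. The adaptive window shaping and correlation filtering of the proposed algorithm are omitted from this sketch.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def disparity_1d(left, right, x, win=1, max_d=3):
    """Disparity of left-row pixel x via (2*win+1)-sample window matching."""
    patch = left[x - win:x + win + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(max_d + 1):
        xr = x - d  # candidate matching position in the right row
        if xr - win < 0:
            break   # window would fall outside the right image
        cost = sad(patch, right[xr - win:xr + win + 1])
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Repeating this search for every pixel yields the dense disparity map from which depth is triangulated.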
Rectangular shapes in observed scenes are transformed to quadrilaterals in resulting images due to perspective distortion. If the aspect ratio of the original rectangle is known, the associated homography can be computed directly and used for perspective correction. However, in camera document scanning applications, the aspect ratio of the imaged rectangle is unknown. In this work, a homography estimation method appropriate for document scanning applications is given. This method does not require a priori knowledge of the aspect ratio of the imaged rectangle nor a calibrated camera. The computational implementation is evaluated experimentally.
Path planning for autonomous vehicles is a challenging computer vision problem. In this work, we propose an algorithm to dynamically generate a smooth path for trajectory guidance of an autonomous vehicle. For this, we use B-spline curves and the perspective-distorted images obtained from an onboard camera. The theoretical principles of the algorithm are presented in detail. Preliminary results obtained with an experimental prototype are shown.
A self-tuning filter for noise reduction in sinusoidal signals is proposed. Unlike conventional sine-fitting methods, no a priori knowledge of the encoded phase distribution is assumed. For this, an analytical model with three parameters (the average, the amplitude of the sinusoid, and the standard deviation of the noise) is used. The estimated standard deviation is used for adaptively tuning the noise filter. The obtained results show the feasibility of the proposal for fringe pattern normalization.
Image restoration consists of retrieving an original image by processing captured images of a scene degraded by noise, blurring, or optical scattering. Commonly, restoration algorithms utilize a single monocular image of the observed scene and assume a known degradation model. In this approach, valuable information about the three-dimensional scene is discarded. This work presents a locally-adaptive algorithm for image restoration employing stereo vision. The proposed algorithm utilizes information about the three-dimensional scene as well as local image statistics to improve the quality of a single restored image by processing pairs of stereo images. Computer simulation results obtained with the proposed algorithm are analyzed and discussed in terms of objective metrics by processing stereo images degraded by optical scattering.
Image restoration is a classic problem in image processing. Image degradations can occur due to several reasons, for instance, imperfections of imaging systems, quantization errors, atmospheric turbulence, relative motion between camera or objects, among others. Motion blur is a typical degradation in dynamic imaging systems. In this work, we present a method to estimate the parameters of linear motion blur degradation from a captured blurred image. The proposed method is based on analyzing the frequency spectrum of a captured image in order to firstly estimate the degradation parameters, and then, to restore the image with a linear filter. The performance of the proposed method is evaluated by processing synthetic and real-life images. The obtained results are characterized in terms of accuracy of image restoration given by an objective criterion.
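The frequency-spectrum analysis can be illustrated with the key property it relies on: the DFT magnitude of a length-L uniform motion kernel has periodic zeros at multiples of N/L, so the blur length can be read from the spacing of the spectral nulls. A sketch of the analysis step only (the restoration filter is omitted), using a direct DFT for clarity:

```python
import cmath
import math

def motion_kernel(length, n):
    """n-sample 1-D uniform (horizontal) motion-blur kernel of the given support."""
    k = [0.0] * n
    for i in range(length):
        k[i] = 1.0 / length
    return k

def dft_mag(signal):
    """Magnitude of the discrete Fourier transform, computed directly (O(n^2))."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * f * t / n)
                    for t in range(n)))
            for f in range(n)]
```

For a kernel of length 4 in a 16-point spectrum, the nulls fall at frequencies 4, 8, and 12; locating such nulls in the spectrum of a blurred image is the basis for estimating the blur parameters.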
An algorithm to generate ronchigrams of parabolic concave mirrors is proposed. Unlike the conventional direct ray-tracing method, which produces scattered pixels, the proposed algorithm returns regularly sampled images. Thus, the proposed algorithm is fully compatible with further fringe processing tasks such as phase demodulation and wavefront analysis. The theoretical principles of our proposal are explained in detail, and the programming code is provided. Several computer experiments highlight the performance and advantages of our proposal.
Phase demodulation is an essential image processing stage required by digital fringe projection profilometers. Currently, several approaches for phase demodulation have been proposed. In this work, a set of phase demodulation methods useful for digital fringe projection profilometry is presented. This survey covers fringe pattern normalization, extraction of wrapped phase, and phase unwrapping. Experimental results obtained with a laboratory fringe projection system are presented.
Structured light projection is one of the most useful methods for accurate three-dimensional scanning. Video projectors are typically used as the illumination source. However, because video projectors are not designed for structured light systems, some considerations such as gamma calibration must be taken into account. In this work, we present a simple method for gamma calibration of video projectors. First, the experimental fringe patterns are normalized. Then, the samples of the fringe patterns are sorted in ascending order. The sample sorting leads to a simple three-parameter sine curve that is fitted using the Gauss-Newton algorithm. The novelty of this method is that the sorting process removes the effect of the unknown phase. Thus, the resulting gamma calibration algorithm is significantly simplified. The feasibility of the proposed method is illustrated in a three-dimensional scanning experiment.
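The sorting trick can be sketched with a simplified linear fit: for a uniformly distributed phase, the quantile function of a + b·sin(phase) is a − b·cos(πp), so the sorted samples follow a curve that no longer depends on the unknown phase and is linear in (a, b). The paper fits a three-parameter model with Gauss-Newton; this sketch fits the two linear parameters by ordinary least squares for brevity.

```python
import math

def fit_sorted_sine(samples):
    """Fit a - b*cos(pi*p) to the sorted samples, p being the plotting position.

    Sorting removes the unknown phase: the fit is ordinary least squares
    of the sorted values against the fixed basis -cos(pi*(k+0.5)/n).
    """
    s = sorted(samples)
    n = len(s)
    basis = [-math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    mb = sum(basis) / n
    ms = sum(s) / n
    b = (sum((bi - mb) * (si - ms) for bi, si in zip(basis, s))
         / sum((bi - mb) ** 2 for bi in basis))
    a = ms - b * mb
    return a, b
```

The recovered offset and amplitude characterize the projector response, which is the quantity needed for gamma calibration.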
Inhomogeneous, or gradient-index, media exhibit a refractive index that varies with position. These media are of great interest because they occur both in synthetic devices and in natural optical systems such as the human lens. In this work, we present the development of a computational tool for ray tracing in refractive optical systems. In particular, the human eye is used as the optical system under study. An inhomogeneous medium with characteristics similar to those of the human lens is introduced and modeled by the so-called slices method. The usefulness of our proposal is illustrated by several graphical results.
An operator-based approach for the study of homogeneous coordinates and projective geometry is proposed. First, some basic geometrical concepts and properties of the operators are investigated in the one- and two-dimensional cases. Then, the pinhole camera model is derived, and a simple method for homography estimation and camera calibration is explained. The usefulness of the analyzed theoretical framework is exemplified by addressing the perspective correction problem for a camera document scanning application. Several experimental results are provided for illustrative purposes. The proposed approach is expected to provide practical insights for inexperienced students on camera calibration, computer vision, and optical metrology, among other topics.
The design and implementation of an electronic system for real-time capture and processing of speckle interference patterns is presented. Because of the random and unstable nature of speckle patterns, a system that can acquire and visualize interference speckle patterns in the shortest possible time is very useful. The proposed system captures the first speckle pattern as a steady reference image and then captures subsequent patterns from the same source. The images are converted into value arrays and subtracted to obtain interference speckle patterns in real time; these patterns are automatically archived for later analysis. The system consists of a CCD camera, a computer interface that performs the capture, a transparent object, and a 4f interferometric system whose source is a laser passed through a diffuser glass to produce the speckle effect. Experimental results and an analytic explanation are presented.
A simple method to estimate the amplitude and standard deviation of sinusoidal signals corrupted with additive Gaussian noise is proposed. For this, a two-parameter model is developed by sorting the samples of the signal. This reduced parametric model allows robust parameter estimation, even if the phase function of the sinusoid is nonlinear, discontinuous, and unknown. The functionality and performance of the proposed method are analyzed by several computer simulations; the GNU Octave program used is provided. The proposed method can be useful for unbiased envelope estimation in fringe pattern normalization, among other potential applications.
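For intuition, the same two parameters can also be recovered from second and fourth moments. This is a method-of-moments alternative sketched for illustration, not the sorting-based estimator proposed in the paper: for x = b·sin(U) + n with U uniform and n ~ N(0, σ²), E[x²] = b²/2 + σ² and E[x⁴] − 3·E[x²]² = −(3/8)·b⁴.

```python
def amp_std_from_moments(x):
    """Estimate (amplitude, noise std) of a zero-mean noisy sinusoid via moments.

    Uses E[x^2] = b^2/2 + sigma^2 and E[x^4] - 3*E[x^2]^2 = -(3/8)*b^4,
    valid for uniformly distributed phase and independent Gaussian noise.
    """
    n = len(x)
    m2 = sum(v * v for v in x) / n
    m4 = sum(v ** 4 for v in x) / n
    # Clamp against sampling noise so the roots stay real and nonnegative.
    b4 = max((8.0 / 3.0) * (3.0 * m2 * m2 - m4), 0.0)
    b = b4 ** 0.25
    sigma2 = max(m2 - b * b / 2.0, 0.0)
    return b, sigma2 ** 0.5
```

Unlike the sorting-based estimator, this moment approach degrades quickly for short records, which motivates the more robust reduced parametric model of the paper.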
Tunable lenses have become very popular elements due to their capacity to change their focal length by modifying their shape alone. This characteristic is very useful in different applications in the field of optics. The development of a tunable lens consists of several phases: first, finding a suitable material; second, carrying out an optimal analysis and design; and third, devising the mechanism for changing the lens shape and characterizing it. In this work, we present the characterization of a tunable lens formed by spherically profiled elastic membranes with a liquid medium between them. The proposed liquid-filled tunable lens is designed so that spherical aberration is minimized at different focal settings. The development of an optomechanical system to change the lens shape is also presented.
Ametropias of the human eye are refractive defects hampering correct imaging on the retina. The most common ways to correct them are spectacles, contact lenses, and modern methods such as laser surgery. In any case, however, it is very important to identify the degree of ametropia in order to design the optimum corrective action. In the case of laser surgery, it is necessary to define a new shape of the cornea to obtain the desired refractive correction. Therefore, a computational tool to calculate the focal length of the optical system of the eye as a function of variations in its geometrical parameters is required. Additionally, a clear and understandable visualization of the evaluation process is desirable. In this work, a model of the human eye based on geometrical optics principles is presented. Simulations of light rays coming from a point source located six meters from the cornea are shown. We perform ray tracing in three dimensions in order to visualize the focusing regions and estimate the power of the optical system. The common parameters of ametropias can be easily modified and analyzed in the simulation through an intuitive graphical user interface.
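By way of illustration, a paraxial (ABCD-matrix) sketch of a much simpler reduced-eye model — a single refracting surface with assumed textbook values (R ≈ 5.7 mm, n ≈ 1.336), not the full multi-surface eye model of this work — yields a refractive power near 59 D:

```python
import numpy as np

def refraction(R, n1, n2):
    # Paraxial refraction matrix at a spherical surface of radius R
    return np.array([[1.0, 0.0], [-(n2 - n1) / R, 1.0]])

# Reduced-eye model (assumed textbook values, not this work's full model)
R, n_eye = 5.7e-3, 1.336      # corneal radius [m], effective index
M = refraction(R, 1.0, n_eye)
power = -M[1, 0]              # refractive power in dioptres (~59 D)
back_focal = n_eye / power    # image-side focal distance [m] (~22.7 mm)
```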
The perspective and lens distortions induced by the imaging system of a camera device are corrected by using
an elementary geometrical approach. We propose a simple method based on the use of a crossed grating in the
reference plane and a phase demodulation process. Preliminary results showing the performance of the proposed
method are discussed.
A simple method to evaluate the focal length of concave mirrors is proposed. The inverse ray-tracing approach of
the Ronchi test is used in the measurement stage. The theoretical principles are given and a numerical method
for ronchigram processing is proposed. The results verify the feasibility of the proposal.
A reliable method for three-dimensional digitization of human faces based on the fringe projection technique
is presented. The proposed method employs robust fringe-analysis algorithms for reliable phase computation.
The quality of the resultant 3D face model is characterized in terms of accuracy of surface computation using
objective metrics. We present experimental results obtained with real and synthetic laboratory objects. The
potential of this method to be used in the field of face recognition is discussed.
Numerical results are presented to show the characterization of an electromechanical actuator capable of achieving equally spaced phase shifts and fractional-wavelength linear displacements, aided by an interface and a computational system. Measurements were performed by extracting the phase from consecutive interference patterns obtained in a Michelson setup. This paper is based on the use of inexpensive resources under adverse stability conditions to achieve results similar to those obtained with high-grade systems.
An alternative method for two-step phase-shifting interferometry using a speckle interferometer is proposed. It is shown that the introduced phase step can remain unknown thanks to appropriate fringe-pattern processing. The acquired fringe patterns are processed with well-established phase-shifting algorithms in order to compare those results with our proposal. The numerical phase difference between two states of a phase object is compared with the theoretical one using electronic speckle pattern interferometry (ESPI). Simulated and experimental results are provided.
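The abstract does not spell out its processing, so as a point of comparison, one common two-step demodulation that tolerates an unknown phase step is the Gram–Schmidt orthonormalization approach, sketched here for background-free fringes (a minimal sketch, not this work's method):

```python
import numpy as np

def two_step_gs(I1, I2):
    """Two-step demodulation with unknown step (Gram-Schmidt sketch).

    Assumes background-subtracted fringes I1 = b*cos(phi) and
    I2 = b*cos(phi + delta) with 0 < delta < pi.
    """
    u1 = I1 - I1.mean()
    u2 = I2 - I2.mean()
    # Orthogonalize: removes the cos(phi) component, leaving ~ -b*sin(phi)
    u2 = u2 - (np.vdot(u2, u1) / np.vdot(u1, u1)) * u1
    # Normalize both quadratures to equal energy before the arctangent
    return np.arctan2(-u2 / np.linalg.norm(u2), u1 / np.linalg.norm(u1))
```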
A simple and efficient phase-unwrapping algorithm based on a rounding procedure and a global least-squares minimization is proposed. Instead of processing the gradient of the wrapped phase, this algorithm operates on the gradient of the phase jumps through a robust, noniterative scheme. Thus, the residue-spreading and over-smoothing effects are reduced. The algorithm’s performance is compared with four well-known phase-unwrapping methods: minimum cost network flow (MCNF), fast Fourier transform (FFT), quality-guided, and branch-cut. A computer simulation and experimental results show that the proposed algorithm reaches a higher accuracy level than the MCNF method with a low computing time similar to that of the FFT phase-unwrapping method. Moreover, since the proposed algorithm is simple, fast, and free of user intervention, it could be used in automatic real-time interferometric and fringe-projection metrology applications.
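For context, the rounding idea in one dimension is the classical Itoh recursion: a wrapped difference larger than π marks a 2π fringe-order change, which rounding recovers. The sketch below is this one-dimensional baseline only, not the proposed two-dimensional algorithm with its global least-squares step:

```python
import numpy as np

def unwrap_rounding_1d(psi):
    """Itoh-style 1-D unwrapping via rounding (minimal sketch).

    Each wrapped difference d is corrected by the integer fringe-order
    jump round(d / 2*pi); cumulative sums propagate the corrections.
    """
    d = np.diff(psi)
    orders = -np.cumsum(np.round(d / (2.0 * np.pi)))  # integer corrections
    return psi + 2.0 * np.pi * np.concatenate(([0.0], orders))
```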
A fast and robust generalized phase-shifting interferometry algorithm suitable for automatic real-time applications is presented. The proposal is based on parameter estimation by the least-squares method. The algorithm can retrieve the wrapped phase from two or more phase-shifted interferograms with unknown phase steps between 0 and π rad. Moreover, through the multiple- or single-parameter estimation approach of this algorithm, interferograms with spatially and temporally variable visibility can be processed, overcoming the restrictions and drawbacks of usual variable-spatial-visibility approaches. The algorithm's feasibility is illustrated by both computer simulation and an optical experiment.
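As background for the least-squares estimation, the known-step baseline solves a three-parameter linear system per pixel under the standard interferogram model. The sketch below covers only this known-step case, not the unknown-step estimation of the proposal:

```python
import numpy as np

def lsq_phase(frames, deltas):
    """Least-squares phase retrieval for KNOWN phase steps.

    Model: I_k = a + b*cos(phi + delta_k)
         = a + (b*cos(phi))*cos(delta_k) + (-b*sin(phi))*sin(delta_k),
    linear in the three unknowns a, b*cos(phi), and -b*sin(phi).
    """
    frames = np.asarray(frames, dtype=float)   # shape (K, M, N)
    d = np.asarray(deltas, dtype=float)
    A = np.stack([np.ones_like(d), np.cos(d), np.sin(d)], axis=1)  # (K, 3)
    K = frames.shape[0]
    coef, *_ = np.linalg.lstsq(A, frames.reshape(K, -1), rcond=None)
    bc, bs = coef[1], -coef[2]                 # b*cos(phi), b*sin(phi)
    return np.arctan2(bs, bc).reshape(frames.shape[1:])
```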
A method to create homogeneous polarized light based on non-quadrature amplitude modulation is proposed. The method consists of adding two fields whose phase difference differs from mπ and varying only their amplitudes, so that the resulting field is modulated in both phase and amplitude. This principle is used to modulate the vertical component in both phase and amplitude, while the horizontal component is varied in amplitude only, keeping its phase constant; thus any amplitude relation and phase difference between components can be created, and therefore any polarization state can be obtained. A theoretical model is presented and supported with numerical simulations of several polarization examples.
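In Jones-vector form, the component scheme reads as follows (a schematic numerical sketch of the end result, not the modulator implementation itself):

```python
import numpy as np

def jones_state(a_x, a_y, delta):
    """Horizontal component varied in amplitude only; vertical component
    carrying both amplitude and phase -> arbitrary relative amplitude and
    phase, hence an arbitrary (normalized) polarization state."""
    v = np.array([a_x, a_y * np.exp(1j * delta)], dtype=complex)
    return v / np.linalg.norm(v)

# Equal amplitudes with a pi/2 relative phase give circular polarization
circular = jones_state(1.0, 1.0, np.pi / 2)
```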
The fringe-pattern normalization method by parameter estimation is used to relax the critical filter requirements of the Fourier-transform method for phase demodulation. Through the normalization procedure, the zero-order spectrum is suppressed, allowing straightforward filtering in the Fourier-transform method; the filtering is thus carried out with a simple half-plane filter. The benefits of this Fourier normalized-fringe analysis scheme are tested by both computer simulation and optical experiments.
The simple filtering procedure, high spatial resolution, and low computation time benefits of Fourier normalized-fringe analysis are verified. For this, both the fringe-pattern normalization method by parameter estimation using the least squares method and the standard Fourier transform method are implemented. This proposal, or any Fourier normalized-fringe analysis scheme, has the advantage that the filter’s properties are not very critical because the zero-order spectrum is suppressed by the normalization stage. Then, the simple half-plane filter is applied in the filtering procedure which, in addition, increases the spatial resolution. Both a computer simulation and the experimental results show the functionality and feasibility of the suggested scheme.
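The half-plane filtering step can be sketched as follows for a one-dimensional normalized fringe with a linear carrier (where the half-plane reduces to a half-axis); this is a generic illustration of the Fourier-transform method, not the paper's full normalization pipeline:

```python
import numpy as np

def ftm_half_plane(fringe):
    """Fourier-transform method on a normalized fringe cos(phase):
    zero out the negative-frequency half to keep the analytic signal,
    whose angle is the wrapped phase (carrier included)."""
    F = np.fft.fft(fringe)
    n = fringe.size
    H = np.zeros(n)
    H[1:n // 2] = 2.0                    # keep positive frequencies only
    return np.angle(np.fft.ifft(F * H))  # wrapped phase incl. carrier
```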
In this work we propose the simulation of a fringe pattern in analogy with an oscillating pendulum with adaptive parameters. This technique will be used in the future to recover the phase of interferograms by means of a dynamical model. We reproduce a reference fringe pattern by solving, through computer simulation, the dynamical model of a pendulum with an adaptive length parameter. We obtained good preliminary results with this dynamical system. The differential equation (DE) was simulated with the Simulink toolbox of MATLAB. We show our results.
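In the spirit of the abstract — though in Python rather than Simulink, and with a hypothetical linear length-drift law standing in for the adaptive parameter, since the actual adaptation rule is not given — a pendulum-driven fringe row might be simulated as:

```python
import numpy as np

def pendulum_fringe(n=256, g=9.81, L0=1.0, theta0=0.3, dt=0.01):
    """RK4 integration of theta'' = -(g/L(t)) * sin(theta), mapping the
    trajectory onto a fringe intensity row (illustrative sketch)."""
    def f(t, y):
        L = L0 * (1.0 + 0.1 * t)  # assumed adaptive-length law
        return np.array([y[1], -(g / L) * np.sin(y[0])])
    y = np.array([theta0, 0.0])
    theta = np.empty(n)
    for i in range(n):
        theta[i] = y[0]
        t = i * dt
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    # One fringe row: intensity follows the pendulum trajectory
    return 0.5 + 0.5 * np.cos(2 * np.pi * theta / theta0)
```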