The majority of industrial production processes can be divided into a series of object manipulation and handling tasks that can be adapted for robots. Through significant advances in compliant grasping, sensing and actuation technologies, robots are now capable of carrying out human-like flexible and dexterous object manipulation tasks. During operation, robots are required to position objects within the tolerances specified for every operation in an industrial process. The ability of a robot to meet these tolerances is the critical factor that determines where the robot can be integrated and how proficiently it can carry out high-precision tasks. Therefore, improving the positioning accuracy of robots can open new avenues for their integration into production industries. Given that tolerances in manufacturing processes are of the order of tens of micrometres or less, robots should guarantee high positioning accuracy when manipulating objects. The direct method of ensuring high accuracy is to introduce one or more additional measurement systems that can improve upon the inherent joint-angle-based determination of robot position. In this paper, we present a High-Accuracy Robotic Pose Measurement (HARPM) system based on coordinate measurements from a multi-camera vision system. We also discuss the integration of measurements obtained by absolute distance interferometry and how the interferometric measurements can complement the vision system measurements. The performance of the HARPM system is evaluated using a laser interferometer to investigate robot positions along a trajectory. The results show that the HARPM system can improve the positioning accuracy of robots from hundreds of micrometres to a few tens of micrometres.
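As background to how a multi-camera vision system can determine coordinates, the sketch below shows minimal linear (DLT) triangulation of one 3D point from several calibrated views. The camera matrices, pixel coordinates and numbers are purely illustrative; this is a generic textbook technique, not the HARPM implementation.

```python
import numpy as np

def triangulate(projections, pixels):
    """Linear (DLT) triangulation of one 3D point from >= 2 calibrated views."""
    rows = []
    for P, (u, v) in zip(projections, pixels):
        # Each view contributes two linear constraints on the homogeneous point
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.vstack(rows))
    X = vt[-1]            # null-space vector = homogeneous solution
    return X[:3] / X[3]
```

With exact (noise-free) pixel observations from two views, the SVD null-space solution recovers the point exactly; with noise, it returns the algebraic least-squares estimate.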
Coherence scanning interferometry (CSI) is a widely used optical method for surface topography measurement of industrial and biomedical surfaces. The operation of CSI can be modelled using approximate physics-based approaches with minimal computational effort. A critical aspect of CSI modelling is defining the transfer function for the imaging properties of the instrument, in order to predict the interference fringes from which topography information is extracted. Approximate methods, for example the elementary Fourier optics, universal Fourier optics and foil models, use scalar diffraction theory and the imaging properties of the optical system to model CSI surface topography measurement. In this paper, the topographies of different surfaces, including various sinusoids, two posts and a step height, as calculated using the three example methods, are compared. The presented results illustrate the agreement between the three models.
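To illustrate the kind of signal these models predict, the sketch below generates an idealised coherence correlogram for a single pixel (a Gaussian-windowed fringe) and recovers the surface height from the fringe-envelope peak. The wavelength, coherence length and Gaussian envelope are illustrative assumptions, not outputs of the transfer-function models discussed above.

```python
import numpy as np

lam = 0.6    # illustrative wavelength, micrometres
lc = 1.2     # illustrative coherence length, micrometres
z = np.arange(-5.0, 5.0, 0.02)   # axial scan positions, micrometres

def correlogram(h):
    # Idealised CSI signal at one pixel for a surface at height h:
    # unit background plus a Gaussian-windowed fringe of period lam/2
    dz = z - h
    return 1.0 + np.exp(-(dz / lc) ** 2) * np.cos(4.0 * np.pi * dz / lam)

def height_from_envelope(I):
    # Recover h as the peak of the fringe envelope, computed from the
    # analytic signal (an FFT-based Hilbert transform)
    s = I - I.mean()
    S = np.fft.fft(s)
    n = len(S)
    H = np.zeros(n)
    H[0] = 1.0
    H[1:n // 2] = 2.0
    H[n // 2] = 1.0
    env = np.abs(np.fft.ifft(S * H))
    return z[np.argmax(env)]
```

The recovered height is quantised to the scan step here; real instruments interpolate the envelope peak or combine it with fringe-phase information for finer resolution.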
Approximate and rigorous methods are widely used to model light scattering from a surface. The boundary element method (BEM) is a rigorous model that accounts for polarization and multiple scattering effects. BEM is suitable for modelling the light scattered from surfaces with complex geometries containing overhangs and re-entrant features. The Beckmann–Kirchhoff (BK) scattering model, which is an approximate model, can be used to predict the scattering behavior of slowly varying surfaces. Although the approximate BK model cannot be applied to complex surface geometries that give rise to multiple scattering effects, it has been used to model the scattered field due to its fast and simple implementation. While many approximate models are restricted to surface features with relatively small height variations (typically less than half the wavelength of the incident light), the BK model can predict light scattering from surfaces with large height variations, as long as the surfaces are locally flat with small curvatures. Thus far, attempts have been made to determine the validity conditions for the BK model. The primary validity condition is that the radius of curvature of any surface irregularity should be significantly greater than the wavelength of the light. However, to obtain the most accurate results from the BK model, quantifying the validity conditions is critical. This work aims to quantify the validity conditions of the BK model according to different surface specifications, e.g., slope angles (SA) and curvatures. For this purpose, the scattered fields from various sinusoidal profiles, and from combinations of sinusoidal profiles, are simulated using the BEM and BK models and their differences are compared. The results show that the BK model fails when there are high SA (⪆ 38 deg) and small radii of curvature (⪅ 10 λ) within a sinusoidal profile. Moreover, it is shown that for a combination of sinusoidal profiles the BK model remains valid for profiles with a high maximum slope angle (⪆ 38 deg) if the average of the positive SA is low (⪅ 5 deg).
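The two quantities the validity study hinges on have closed forms for a single sinusoid y = A sin(2πx/Λ): the maximum slope angle, arctan(2πA/Λ), occurs at the zero crossings, and the minimum radius of curvature, Λ²/(4π²A), occurs at the crests (where y' = 0 and R = 1/|y''|). A minimal sketch, with the thresholds taken from the figures quoted above:

```python
import numpy as np

def sinusoid_validity(amplitude, period, wavelength):
    # For y = A*sin(2*pi*x / L): max slope angle at the zero crossings,
    # minimum radius of curvature at the crests (where y' = 0, R = 1/|y''|)
    max_slope_deg = np.degrees(np.arctan(2.0 * np.pi * amplitude / period))
    min_radius = period ** 2 / (4.0 * np.pi ** 2 * amplitude)
    # Heuristic thresholds from the abstract: SA below ~38 deg and R above ~10 lambda
    bk_ok = (max_slope_deg < 38.0) and (min_radius > 10.0 * wavelength)
    return max_slope_deg, min_radius, bk_ok
```

For example, a shallow sinusoid (A = 0.05 µm, Λ = 10 µm, λ = 0.633 µm) comfortably satisfies both conditions, while A = 2 µm at the same period exceeds the slope-angle threshold.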
Additively manufactured parts have complex geometries featuring high slope angles and occlusions that can be difficult or even impossible to measure; in this scenario, photogrammetry presents itself as an attractive, low-cost candidate technology for acquiring digital form data. In this paper, we propose a pipeline to optimise, automate and accelerate the photogrammetric measurement process. The first step is to detect the optimum camera positions which maximise surface coverage and measurement quality while minimising the total number of images required. This is achieved through a global optimisation approach using a genetic algorithm. In parallel to the view optimisation, a convolutional neural network (CNN) is trained on rendered images of the CAD data of the part to predict the pose of the object relative to the camera from a single image. Once trained, the CNN can be used to find the initial alignment between object and camera, allowing full automation of the optimised measurement procedure. These techniques are verified on a sample part, showing good coverage of the object and accurate pose estimation. The procedure presented in this work simplifies the measurement process and represents a step towards a fully automated measurement and inspection pipeline.
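A toy version of the genetic-algorithm view-planning step is sketched below. A random visibility matrix stands in for real camera-to-surface visibility, and the fitness weights are illustrative; the real pipeline would derive visibility from ray casting against the CAD model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy visibility matrix: vis[i, j] is True if candidate view i sees patch j
n_views, n_patches = 12, 40
vis = rng.random((n_views, n_patches)) < 0.3

def fitness(mask):
    # Reward covered patches; penalise the number of images (weights illustrative)
    covered = int(np.any(vis[mask], axis=0).sum()) if mask.any() else 0
    return covered - 0.5 * mask.sum()

def evolve(pop_size=30, gens=60, p_mut=0.05):
    # Binary chromosomes: one bit per candidate view
    pop = rng.random((pop_size, n_views)) < 0.5
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[:pop_size // 2]]      # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_views)        # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_views) < p_mut  # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents] + children)
    return max(pop, key=fitness)
```

Keeping the top half of each generation as elites guarantees the best fitness never decreases, which matters for a global search over a combinatorial view space.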
Measurement of objects with complex geometry and many self-occlusions is increasingly important in many fields, including additive manufacturing. In a conventional fringe projection system, the camera and the projector cannot move independently with respect to each other, which limits the ability of the system to overcome object self-occlusions. We demonstrate a fringe projection setup in which the camera can move independently of the projector, thus minimizing the effects of self-occlusion. The angular motion of the camera is tracked, and the system recalibrated, using an on-board inertial angular sensor, which additionally enables automated point cloud registration.
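For context, phase extraction in a standard four-step phase-shifting fringe projection scheme (phase shifts of 0, π/2, π, 3π/2) reduces to a closed form. This is the generic textbook formula, not something specific to the moving-camera setup described above:

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    # For I_k = A + B*cos(phi + k*pi/2), k = 0..3, the four-step
    # phase-shifting formula recovers the wrapped phase phi
    return np.arctan2(I4 - I2, I1 - I3)
```

The background A and modulation B cancel in the differences, so the wrapped phase is recovered per pixel regardless of local reflectance; unwrapping and triangulation then convert phase to height.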
Photogrammetry-based systems are able to produce 3D reconstructions of an object given a set of images taken from different orientations. In this paper, we implement a light-field camera within a photogrammetry system in order to capture additional depth information, as well as the photogrammetric point cloud. Compared to a traditional camera that captures only the intensity of the incident light, a light-field camera also provides angular information for each pixel. In principle, this additional information allows 2D images to be reconstructed at a given focal plane, and hence a depth map to be computed. Through the fusion of light-field and photogrammetric data, we show that it is possible to reduce the measurement uncertainty for a millimetre-scale 3D object, compared to that from the individual systems. By imaging a series of test artefacts from various positions, individual point clouds were produced from depth-map information and from triangulation of corresponding features between images. Using both measurements, data fusion methods were implemented in order to provide a single point cloud with reduced measurement uncertainty.
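One simple fusion rule consistent with this idea is inverse-variance weighting of corresponding coordinates from the two sensors: the fused estimate always has lower variance than either input. The numbers below are illustrative, not measured uncertainties, and the actual fusion method used may differ.

```python
import numpy as np

def fuse(x1, var1, x2, var2):
    # Inverse-variance weighting of two independent estimates of the same
    # coordinate; the fused variance is always below the smaller input variance
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * x1 + w2 * x2) / (w1 + w2), 1.0 / (w1 + w2)
```

For instance, combining estimates with variances 0.04 and 0.01 yields a fused variance of 0.008, below both inputs, with the fused value pulled towards the more certain sensor.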
In this paper we show that, by using a photogrammetry system with and without laser speckle, a large range of additive manufacturing (AM) parts with different geometries, materials and post-processing textures can be measured to high accuracy. AM test artefacts have been produced in three materials: polymer powder bed fusion (nylon-12), metal powder bed fusion (Ti-6Al-4V) and polymer material extrusion (ABS plastic). Each test artefact was then measured with the photogrammetry system in both normal and laser speckle projection modes, and the resulting point clouds compared with the artefact CAD model. The results show that laser speckle projection can result in a reduction of the point cloud standard deviation from the CAD data of up to 101 μm. A complex relationship between surface texture, artefact geometry and the laser speckle projection is also observed and discussed.
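The comparison statistic reported above reduces, for each point, to a signed distance to the reference geometry. A minimal stand-in using a plane instead of a full CAD model (real point-to-CAD comparison requires a mesh distance query):

```python
import numpy as np

def deviation_stats(points, plane_point, plane_normal):
    # Signed point-to-plane distances, standing in for point-to-CAD deviations
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n
    return d.mean(), d.std()
```

The mean captures any systematic offset between measurement and reference, while the standard deviation is the figure the speckle-on/speckle-off comparison above is based on.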
In non-rigid fringe projection 3D measurement systems, where either the camera or projector setup can change significantly between measurements, or where the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate. In fringe projection systems, it is common to use methods originally developed for photogrammetry to calibrate the camera(s) in the system in terms of extrinsic and intrinsic parameters. To calibrate the projector(s), an extra correspondence between a pre-calibrated camera and an image created by the projector is established. These recalibration steps are usually time consuming: they involve the measurement of calibrated patterns on planes before the actual object can be measured again after a camera or projector has been moved, and hence they do not facilitate fast 3D measurement of objects when frequent changes to the experimental setup are necessary. By combining a priori information via inverse rendering, on-board sensors and deep learning, and by leveraging a graphics processing unit (GPU), we assess a fine camera pose estimation method based on optimising the rendering of a model of the scene and object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.
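The render-and-compare idea can be reduced to a one-dimensional toy: a synthetic "renderer" produces a signal parameterised by pose, and the pose estimate is the parameter whose rendering best matches the observation. Everything here (the Gaussian blob, the grid search) is an illustrative stand-in for GPU-accelerated rendering and gradient-based optimisation.

```python
import numpy as np

def render(shift, width=64):
    # Toy "renderer": a 1-D Gaussian blob whose position stands in for pose
    x = np.arange(width)
    return np.exp(-0.5 * ((x - shift) / 3.0) ** 2)

def estimate_shift(observed, candidates):
    # Render-and-compare: choose the pose whose rendering minimises the
    # sum-of-squared photometric error against the observed view
    errors = [np.sum((render(c) - observed) ** 2) for c in candidates]
    return float(candidates[int(np.argmin(errors))])
```

In the full pipeline, good a priori pose information shrinks the search region, which is exactly why the on-board sensors and the CNN initialisation improve the success rate of the fine optimisation.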