To achieve high-resolution intensity images, Fourier ptychography (FP) has been used in microscopy for decades and is also applicable to macroscopic imaging. Aberration strongly affects the quality of reconstruction during aperture scanning and phase recovery in the FP process. In macroscopic imaging, defocus across a 3D scene means that direct photography yields only blurry images. To overcome this limitation, we design an FP iteration method that incorporates depth-dependent defocus to improve the recovery of high-resolution images. The initial scene comprises two layers of data: the intensity information and the depth map. The spectral distribution and optical transfer function are obtained via Fourier transform. An iterative reconstruction then follows: a low-resolution image is taken at each aperture position, combined with depth-induced defocus aberrations, and substituted into the optical transfer function to simulate the intensity measured at the camera sensor. Each scan corresponds to a band-limited region of Fourier space, and multiple iterations of phase recovery reconstruct a high-resolution image. Compared with the traditional method, the defocus-eliminating FP algorithm for macroscopic imaging enhances the resolution and contrast of the reconstruction, enabling a wider range of measurable scenarios.
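To make the aperture-scanning step concrete, the sketch below (not the authors' code; the pupil is reduced to a binary support and the usual division by the pupil phase is omitted) shows the core FP update: simulate the low-resolution measurement through the pupil, keep the recovered phase, enforce the measured amplitude, and write the corrected band back into the spectrum.

```python
import numpy as np

def fp_update(spectrum, pupil, measured_amp):
    """One aperture-scanning FP update (simplified: binary pupil support,
    no division by the pupil phase).

    spectrum     -- current estimate of the high-resolution Fourier spectrum
    pupil        -- complex pupil function; a depth-dependent defocus
                    aberration enters as an extra phase factor in this term
    measured_amp -- amplitude of the low-resolution camera image
    """
    # Simulate the camera measurement by band-limiting with the pupil.
    low_res = np.fft.ifft2(spectrum * pupil)
    # Keep the recovered phase, enforce the measured amplitude.
    corrected = measured_amp * np.exp(1j * np.angle(low_res))
    # Write the corrected band back where the pupil is open.
    new_band = np.fft.fft2(corrected)
    support = np.abs(pupil) > 0
    return np.where(support, new_band, spectrum)
```

In the full algorithm this update is repeated over all aperture positions until the high-resolution spectrum converges.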
KEYWORDS: Solar cells, Design and modelling, Solar concentrators, Photovoltaics, Refractive index, Receivers, Light sources and illumination, Lens design, Geometrical optics, Concentrated solar cells
Fresnel lenses have become the most widely used concentrators in concentration photovoltaics (CPV) owing to their thinness and high concentration ratio. The typical Fresnel lens design principle requires a focal length much greater than the lens diameter, posing a potential spatial constraint on the CPV configuration. In this paper, a CPV concentrator with a concentration ratio of 245 at 3.5 mm from the center of the lens is designed, featuring a double-sided circular Fresnel lens (20 × 20). For a traditional PMMA-based Fresnel lens, a long focal length is required to avoid concentration efficiency loss at the groove edges due to total internal reflection. To overcome this limitation, a theoretical model of a double-sided Fresnel lens is proposed. In this model, the groove parameters of the rear surface are carefully tuned to pair with the light refracted by the front surface in order to achieve the desired ray deviation. Uniform irradiance and an angular tolerance of 6 degrees for inclined illumination are also taken into account to ensure the overall performance of the concentrator. The design shortens the axial working distance between the concentrator and the photovoltaic cell, creating a more compact CPV module and satisfying spatially confined CPV applications.
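The "desired ray deviation" of a single groove follows from Snell's law at the exit facet. As an illustrative sketch only (assuming a single refraction at the rear facet and PMMA with n ≈ 1.49; the facet-angle formula follows from n·sin a = sin(a + d)):

```python
import math

def groove_angle(deviation_deg, n=1.49):
    """Rear-facet angle (deg) that bends an axial ray inside the lens by
    `deviation_deg` on exit into air. From Snell's law
    n*sin(a) = sin(a + d)  =>  tan(a) = sin(d) / (n - cos(d))."""
    d = math.radians(deviation_deg)
    return math.degrees(math.atan2(math.sin(d), n - math.cos(d)))

def is_totally_reflected(facet_angle_deg, n=1.49):
    """True if the axial ray exceeds the critical angle at the facet,
    i.e. the groove loses its light to total internal reflection."""
    return n * math.sin(math.radians(facet_angle_deg)) >= 1.0
```

The model's point is visible here: large deviations demand steep facets, and a steep enough facet (beyond the critical angle, about 42° for PMMA) is totally reflected, which is what the double-sided design avoids by splitting the deviation over two surfaces.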
With advances in scientific foundations and technological implementations, modern industrial design demands richer object detail, such as 3D information. Most optical 3D metrology techniques focus on the spatial distribution of the light field while neglecting its temporal characteristics, so additional information is lost or errors are introduced. Here, we propose a new approach based on temporally modulated radiation that actively introduces a time perturbation and mines the information hidden in the temporal intensity fluctuations of the light field. A temporal cross-correlation function is used to calculate and characterize the relevant physical quantities of the temporal light field, modeling the mapping between the phase gradient of the modulated wave and the gradient of the object surface; the feasibility of the model is verified by numerical simulation based on ray tracing. The proposed approach opens a new perspective for optical metrology and provides a bridge for the integration of various technologies.
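A minimal numerical sketch of the temporal cross-correlation idea (illustrative only; the modulation waveform and the delay below are invented, not taken from the paper):

```python
import numpy as np

def temporal_cross_correlation(i1, i2):
    """Normalized temporal cross-correlation g(tau) of two intensity
    records; the lag of the peak gives their relative delay (the sign
    depends on the correlation convention)."""
    a = i1 - i1.mean()
    b = i2 - i2.mean()
    g = np.correlate(a, b, mode="full") / (len(a) * a.std() * b.std())
    lags = np.arange(-len(a) + 1, len(a))
    return lags, g

# Illustrative records: a sinusoidal modulation, the second record
# delayed by 5 samples relative to the first.
t = np.arange(200)
i1 = 1.0 + 0.5 * np.sin(2 * np.pi * t / 40)
i2 = 1.0 + 0.5 * np.sin(2 * np.pi * (t - 5) / 40)
lags, g = temporal_cross_correlation(i1, i2)
peak_lag = lags[np.argmax(g)]
```

In the proposed method, such a delay (a phase shift of the modulated wave) is what gets mapped to the local surface gradient.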
Various applications of microlens arrays in optical measurement, optical sensing, digital displays, optical communications and other fields have attracted widespread attention; however, it remains difficult to directly machine convex microlens arrays from brittle materials. Polyhedral microlens arrays are comparatively easy to generate, yet little research has addressed the design of polyhedral lenses. In this paper, a meridional-plane integral method is proposed for the design of polyhedral microlens arrays. For a square-aperture microlens array that fits the detector pixel shape, the method first determines the limits imposed by the light boundary conditions on the sag height and then analyzes each meridional plane to form the final optimized design using multi-surface integration. The results show a good focusing effect over multiple fields of view: in the central 0-degree field of view the spot is focused to within 17%, and the same degree of focus is obtained in the edge 5-degree field of view. Meanwhile, the fill factor is theoretically 100%. The research supports new microlens array designs that account for manufacturing capability.
Digital fringe projection (DFP) measurement technology has been widely used in research and industry for its fast measurement speed and easy implementation. However, when measuring objects with large surface undulations, the depth-of-field limitations of the camera and projector cause part of the measured object to be defocused, which reduces measurement accuracy. This paper presents a novel method to discriminate the portion of an image that is defocused due to depth-of-field limitations and to eliminate the resulting error in the phase map. The proposed approach first discriminates the defocused pixels based on modulation information and then recovers the defocused regions by an information fusion method. The information fusion corrects the erroneous phase by fusing the valid phase of the neighboring focused region with the phase of the defocused region, obtained through kernel density estimation, by means of the Kalman filter algorithm. Simulation and experimental results show that the method effectively eliminates the errors caused by the depth-of-field limitation in the phase map and can feasibly extend the depth of field of a digital fringe projection system without projecting additional images.
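Per pixel, the fusion step described above reduces to a scalar Kalman update. The sketch below is the generic textbook form of that update, not the paper's exact implementation:

```python
def kalman_fuse(phase_prior, var_prior, phase_meas, var_meas):
    """Scalar Kalman update fusing the phase predicted from the
    neighboring focused region (prior) with the phase recovered in the
    defocused region, e.g. by kernel density estimation (measurement).
    Returns the fused phase and its variance."""
    k = var_prior / (var_prior + var_meas)          # Kalman gain
    fused = phase_prior + k * (phase_meas - phase_prior)
    return fused, (1.0 - k) * var_prior
```

With equal variances the result is the simple average; a confident prior (small variance) dominates, which is the desired behavior near the focused/defocused boundary.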
KEYWORDS: Polarization, Data fusion, Image resolution, 3D modeling, Stereoscopy, Resolution enhancement technologies, Spatial resolution, Microscopy, Point clouds, Imaging systems
Light field microscopy (LFM) is an emerging three-dimensional (3D) imaging technology that simultaneously captures the spatial and angular information of the incident light by placing a microlens array (MLA) in front of an imaging sensor, enabling computational reconstruction of the full 3D volume of a specimen from a single camera frame. Unlike other 3D imaging techniques that acquire spatial information sequentially or through scanning, the four-dimensional (4D) imaging scheme of LFM effectively decouples volume acquisition time from spatial scale and is easy to miniaturize, making LFM a highly scalable tool for diverse applications. However, its broader application has been slowed by the low resolution arising from the limited angular and spatial information in a single snapshot, the inhomogeneous resolution of reconstructed depth images, and the lack of lateral shift invariance, which greatly degrades the spatial resolution and causes grid-like artifacts and high computational complexity. The introduction of Fourier light field microscopy (FLFM) provides a promising path to improve on current LFM techniques and achieve high-quality imaging and rapid light field reconstruction. However, the inherent trade-off between angular and spatial resolution is still not fundamentally resolved without introducing additional information. Polarization, another dimension of light field information, has been shown to integrate well with other 3D imaging techniques to obtain finer 3D reconstructions. Unfortunately, this aspect has seldom received attention and is ignored in LFM. This paper presents a resolution enhancement scheme for an FLFM system that fuses polarization normals with light field point cloud data to improve imaging resolution and 3D reconstruction accuracy.
Different from conventional FLFM, this approach actively introduces additional surface polarization information of the sample into the reconstruction of the 3D volume. A universal polarization-integrated FLFM configuration is designed and built, allowing polarization and light field data to be acquired simultaneously through the same optical path. A mathematical model is derived to describe the mapping and fusion of the polarization normals and light field point cloud data. Simulation studies show that the resolution and accuracy of 3D reconstruction with the proposed FLFM imaging system are significantly improved after incorporating the polarization information, confirming the validity of the proposed methods. Finally, the implications of this approach for FLFM are discussed, providing guidance for future experiments and applications. The resolution enhancement approach based on polarization and data fusion offers a feasible solution to the trade-off between lateral and vertical resolution and further improves the resolution of the FLFM imaging system.
Deep learning methods have been widely used for stereo matching in recent years, which is the key step in machine vision measurement. State-of-the-art methods are three-dimensional (3D) end-to-end networks that form a cost volume by concatenating extracted features and process it with 3D modules. Despite their strong accuracy, 3D networks mostly have high computational cost, heavy memory usage and long run-times. This paper proposes the Local Cost Volume Refinement Network (LCRN), a two-dimensional (2D) end-to-end network composed of feature extraction, disparity initialization, disparity refinement and disparity merging modules. LCRN initializes disparity maps using a correlation layer and residual blocks, and refines them using local cost volumes, residual blocks and disparity regression. Local cost volumes are constructed by warping the right features over a small range of disparity shifts. To verify the effectiveness of LCRN, the network was pre-trained on the SceneFlow dataset and fine-tuned on the ROBI dataset, then evaluated on the ROBI test set for robotic bin-picking. Experimental results show that LCRN maintains competitive accuracy while running fast and requiring less memory.
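The local cost volume construction can be sketched as follows (a simplified NumPy version with nearest-neighbor warping and random features; the actual network uses learned features and a differentiable warp):

```python
import numpy as np

def local_cost_volume(left_feat, right_feat, init_disp, radius=2):
    """Correlation cost volume over a small disparity window around an
    initial disparity map. Features are (H, W, C); init_disp is (H, W).
    For each candidate residual shift d in [-radius, radius], the right
    features are warped toward the left view and correlated with the
    left features."""
    H, W, C = left_feat.shape
    xs = np.arange(W)
    volume = np.zeros((2 * radius + 1, H, W))
    for i, d in enumerate(range(-radius, radius + 1)):
        disp = init_disp + d                 # candidate disparity per pixel
        for y in range(H):
            # nearest-neighbor warp of the right features to the left view
            src = np.clip(xs - disp[y], 0, W - 1).astype(int)
            warped = right_feat[y, src, :]
            volume[i, y] = (left_feat[y] * warped).sum(axis=1) / C
    return volume
```

Because the window covers only 2·radius + 1 shifts around the initial estimate, the volume stays 2D-module-sized, which is the source of the speed and memory advantage over full 3D cost volumes.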
In a lidar system, the laser beam has a divergence angle due to its Gaussian profile, which limits the detection distance. Improving the lidar detection range mainly relies on the receiver and transmitter. A Vertical Cavity Surface Emitting Laser (VCSEL) can form a two-dimensional (2D) array light source, realizing direct detection without scanning a single laser; the lidar structure can therefore be simplified and its performance improved. This paper aims to enhance the detection range of lidar by reducing the divergence angle of the laser light source through the design of an array collimator lens. The collimating lens unit is composed of a freeform surface lens and a standard lens, arranged in the same layout as the light source. The aperture of each lens must be smaller than the spacing between adjacent light source units, which makes direct calculation difficult. The design is realized through non-sequential simulation in ZEMAX, and the final design is obtained by combining calculation with software optimization. The result shows that the initial divergence angle of 24.4° is theoretically compressed to about 0.7° after the lens array, while the overall collimation keeps the energy loss within 20%. Simulation results indicate that the design can achieve a detection range of a few kilometers. This work provides new solutions for light collimation and helps advance lidar detection.
With the development of three-dimensional (3D) shape measurement technology, fringe projection has become an effective and reliable measurement method owing to its robustness against environmental disturbance and ease of operation. The traditional fringe projection setup with one projector and one camera needs multiple projected fringe patterns to obtain absolute phases, whereas online measurement demands high speed. In this work, a second camera is introduced to establish a binocular structured light system and extend the field of view. In addition, to improve measurement speed, a triangular wave is superposed on the sinusoidal fringe to assist phase unwrapping with only three projected images. In the phase unwrapping process, geometric constraints are used to find candidate corresponding points in the images captured by the left and right cameras; the unique corresponding points are then selected using the embedded triangular waves and the period order is determined. Several experiments show that the proposed method takes only one third of the time of the conventional measurement and can thus realize fast 3D measurement by structured light.
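The superposed-fringe idea can be illustrated numerically: because the triangular wave is added identically to all three phase-shifted patterns, it cancels in three-step phase retrieval and can be recovered separately from the pattern mean. All parameter choices below are illustrative, not the paper's:

```python
import numpy as np

x = np.linspace(0, 4 * np.pi, 512)             # two fringe periods
tri = 0.2 * np.abs((x / np.pi) % 2 - 1)        # embedded triangular wave
shifts = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)  # three-step phase shifts
I = [0.5 + 0.3 * np.cos(x + s) + tri for s in shifts]

# Three-step phase retrieval: the additive triangular term is identical
# in all three patterns, so it cancels in both difference terms.
phi = np.arctan2(np.sqrt(3) * (I[0] - I[2]), 2 * I[1] - I[0] - I[2])

# The triangular carrier is recovered from the pattern mean, since the
# three shifted cosines sum to zero.
tri_recovered = np.mean(I, axis=0) - 0.5
```

The recovered triangular ramp is what disambiguates the period order when matching candidate points between the two cameras.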
The traditional method of line laser stripe extraction binarizes the camera image with a threshold, but the apparent width of the laser line differs markedly across regions of different color because the laser is absorbed to different degrees by differently colored parts. In regions whose color is similar to the laser, the traditional method may fail to identify the laser line at all, leaving serious gaps in the point cloud obtained by line structured light scanning. An adaptive threshold method for laser stripe extraction is proposed in this paper. A modified HSV space is first proposed, and the characteristics of laser stripes on cardboards of different colors are analyzed in this space. A new high-contrast image is then synthesized from the quantized stripe characteristics. Finally, neighborhood filtering is applied and the Steger method is employed to extract the center of the laser stripe. Experimental results show that the proposed method segments the laser stripe from the background better than the traditional algorithm, laying a good foundation for laser stripe center calculation and hence improving the measuring capability of laser structured light.
Freeform surfaces have been widely applied to optical components that require submicrometer form accuracy and nanometric surface finish. Currently, achieving such ultra-precision surface quality relies largely on the skills and experience of the operator through a trial-and-error approach, which is time-consuming and expensive. Ultra-precision raster milling is an enabling technology for generating freeform surfaces of optical quality without subsequent processing. This paper presents the fundamentals of ultra-precision raster milling, including the cutting mechanism, cutting strategies, cutting path planning and tool path generation; a slide motion error model is then established to predict form error generation. A series of experimental studies has been undertaken, and the predictions agree well with the measured values.
With the development of intelligent manufacturing, the role of industrial robots is becoming ever more important. However, their relatively low absolute positioning accuracy limits their application in high-precision manufacturing. The low positioning accuracy mainly stems from the serial configuration and insufficient stiffness, which lead to large motion errors. This paper proposes an error compensation method based on a BP neural network combined with an industrial robot stiffness model. Firstly, the relationship between the joint angles, spatial stiffness and error of the industrial robot is established through the stiffness model. Then, the neural network training set is constructed from experimental data and simulation data generated by the stiffness model. Finally, based on the trained BP neural network, the spatial positioning error of the 6-DOF industrial robot is measured and compensated. Experimental results show that the compensation method improves position accuracy by 95%, reducing the spatial position error to less than 0.005 mm. This validates that the working performance and accuracy of the industrial robot can be improved, supporting its further application in precision machining and measurement.
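As a hedged illustration of the compensation pipeline, the sketch below trains a one-hidden-layer BP (backpropagation) network in plain NumPy on synthetic joint-angle/error data. The data-generating functions are invented purely to exercise the network; the real training set comes from the stiffness model and measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the training set: 6 joint angles mapped to a
# 3-component position error (generating functions are made up).
X = rng.uniform(-np.pi, np.pi, (500, 6))
Y = np.stack([np.sin(X[:, 0]) * np.cos(X[:, 1]),
              0.5 * np.sin(X[:, 2] + X[:, 3]),
              0.3 * np.cos(X[:, 4]) * np.sin(X[:, 5])], axis=1)

# One-hidden-layer BP network: tanh hidden units, linear output.
W1 = 0.5 * rng.standard_normal((6, 32)); b1 = np.zeros(32)
W2 = 0.5 * rng.standard_normal((32, 3)); b2 = np.zeros(3)
lr = 0.02

mse0 = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())

for _ in range(2000):
    H = np.tanh(X @ W1 + b1)              # forward pass
    E = H @ W2 + b2 - Y                   # prediction error
    # Backpropagate the mean-squared-error gradient.
    gW2 = H.T @ E / len(X); gb2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())
# Compensation would then subtract the predicted error from the
# commanded pose before sending it to the robot controller.
```

In practice a deep-learning framework would replace this hand-rolled loop, but the structure (predict the pose-dependent error, then subtract it) is the same.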
Ultra-thin gaps between two metallic workpieces in assembled parts need to be measured and characterized to determine the assembly accuracy and ensure the final performance of the parts. Methods such as fringe projection or optical profilers have difficulty resolving such micro features of several tens of micrometers. In this paper, a microscopic imaging system combined with image processing algorithms is designed and developed to measure the thickness of such air gaps. Simulation studies are undertaken to validate the feasibility of the proposed method. A measuring system is then designed and developed, and a series of measurement experiments is carried out to verify its accuracy and precision. Finally, a cylindrical assembled part with thin gaps is measured and characterized. The proposed method is useful for determining the shape of the air gap and the local contact situation of assembled parts.
Additive Manufacturing (AM) technology is considered one of the most promising manufacturing technologies in the aerospace and defense industries. However, AM parts are known to have relatively high residual stresses and a variety of defects, such as porosity, balling and cracking. It is therefore critically important to monitor product quality during the AM process. In this paper, we propose a novel enhanced fusion algorithm based on the Finite Discrete Shearlet Transform (FDST) and a multi-scale sequential toggle operator (MSSTO) for visible and infrared image fusion in AM systems. The original images are decomposed into low-frequency and high-frequency sub-band images by the FDST. The effective bright and dark image information is then extracted from the low-frequency coefficients of the source images by the MSSTO and injected into the fused low-frequency coefficient to obtain the final low-frequency synthetic coefficient. The high-frequency sub-band coefficients are fused using local spatial frequency weighting and region energy. The fused image is obtained by the inverse FDST of the fused high- and low-frequency coefficients. Experiments show that the proposed algorithm captures more texture information while retaining the significant features of the images, achieving good detection and identification of defect properties.
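A much-simplified stand-in for the fusion rule (the FDST is replaced here by an FFT low/high-frequency split, and the MSSTO bright/dark injection by a max-magnitude rule, so this only mirrors the two-band structure of the method, not its transforms):

```python
import numpy as np

def fuse_frequency_split(img_a, img_b, cutoff=0.1):
    """Two-band fusion sketch: average the low-frequency band, pick the
    larger-magnitude coefficient in the high-frequency band. Both rules
    are simplified stand-ins for the paper's MSSTO injection and
    spatial-frequency/region-energy weighting."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    fy = np.fft.fftfreq(img_a.shape[0])[:, None]
    fx = np.fft.fftfreq(img_a.shape[1])[None, :]
    low = (fx ** 2 + fy ** 2) < cutoff ** 2
    # Magnitude-based selection is symmetric in +/-k for real inputs,
    # so the fused spectrum stays Hermitian and the result real.
    fused = np.where(low, 0.5 * (Fa + Fb),
                     np.where(np.abs(Fa) >= np.abs(Fb), Fa, Fb))
    return np.real(np.fft.ifft2(fused))
```

The same skeleton (decompose, fuse per band with different rules, invert) carries over when the FFT split is replaced by a shearlet decomposition.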
In this paper, by constructing a 3-DOF (degree-of-freedom) laser interferometer on each displacement axis of motion, the main errors during coordinate measuring machine (CMM) motion are dynamically monitored. The 3-DOF laser interferometer is composed of three-beam dual-frequency laser interferometers, each obtaining its reflection signal through a corner cube reflector. From the position parameters of the three corner cube reflectors and the three sets of feedback signals, the 3-DOF interferometer calculates three parameters: the axial displacement and the lateral pitch and yaw angles. The device is installed on each motion axis of the CMM to monitor the 9 major errors of the whole motion system. Laser signals track the position information by establishing the rigid body transformation relationship, so the 9 errors can be updated in real time through the monitoring system, replacing calibration results in the calculation of the probe position. The position error of the CMM probe caused by dynamic error is simulated, and the Monte Carlo method is employed to obtain the error situation and monitor the residual error distribution throughout the measurement process. Errors can be corrected immediately without recalibrating the CMM error parameters, especially when the dynamic error changes significantly. In addition, environmental errors are introduced into the model to analyze their impact on position monitoring accuracy. The proposed method can reduce the position monitoring error to within 0.1 μm. Using this method, the error sources of a dynamic error monitoring system can be evaluated in a simulated environment, serving as a basis for the design of such systems.
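The Monte Carlo propagation of the monitored errors to the probe tip can be sketched as follows (error magnitudes and Abbe offsets are assumed for illustration, and a first-order rigid-body model replaces the full homogeneous-transformation chain):

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo propagation of the 9 monitored axis errors to the probe:
# per axis, an axial displacement error plus pitch and yaw angles, the
# angular terms coupling through assumed Abbe offsets (lever arms).
N = 100_000
arm = 0.1                                # assumed lever arm per axis, m
disp = rng.normal(0.0, 20e-9, (N, 3))    # axial displacement errors, m
pitch = rng.normal(0.0, 1e-7, (N, 3))    # pitch errors, rad
yaw = rng.normal(0.0, 1e-7, (N, 3))      # yaw errors, rad

# First-order model: displacement plus angle times lever arm, per axis.
probe_err = disp + arm * (pitch + yaw)
radial = np.linalg.norm(probe_err, axis=1)
p95 = float(np.quantile(radial, 0.95))
```

With the magnitudes assumed here, the 95th-percentile probe error lands below 0.1 μm, the same order as the residual the paper reports; the real analysis would draw the inputs from the monitored error distributions instead.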
Structured light fields can be spoiled by noisy ambient light, and large changes in surface reflectivity cause defects in the reconstruction that may ruin the measurement results. Thus, a structured light measuring system exploiting blue structured light is proposed in this paper to reduce such disturbance. A set of geometric feature parameters is proposed for characterizing the assembly errors of assembled parts, and the corresponding computation algorithms are presented based on the measured scattered point data. The proposed method effectively reduces the influence of reflective deficiency. Experimental studies were undertaken by measuring an aluminum alloy assembly, and the results were compared with those from a Hexagon robotic coordinate measuring machine. The results show that the proposed measurement method and developed system provide an efficient, high-precision, non-contact way to analyze the feature parameters of assembled parts with highly reflective surfaces.
To measure the figure change of weakly rigid parts with a 4D interferometer, a grooved-ring vacuum chuck is designed to hold such parts easily. Using the vacuum chuck to hold the part keeps the force distribution even and the part stable. Based on the principles of vacuum chuck technology, the chuck is modeled in the 3D solid modeling software Inventor, and the finite element software ANSYS is used for fluid dynamics and structural static analysis. The parameters affecting chuck performance are analyzed and their optimum values obtained, providing an important means of designing vacuum chucks that hold weakly rigid parts for further precision measurement or processing.
Additive Manufacturing (AM) technology is considered one of the most promising manufacturing technologies in the aerospace and defense industries. However, a lack of quality assurance for AM parts is a key technological barrier that prevents manufacturers from adopting AM, especially for high-value-added applications. It is therefore critically important to monitor product quality during the AM process. In this paper, current process monitoring, especially defect measurement for metal AM in Powder Bed Fusion (PBF), is first reviewed. An optical in-situ inspection method based on multi-spectrum imaging is then proposed. The optical measuring system, comprising infrared and white light imaging, is designed and optimized. An imaging data fusion algorithm is proposed to obtain enhanced measuring results from the infrared and white light imaging systems. Simulation studies were undertaken to verify the validity of the proposed monitoring system. The work is helpful for optimizing process parameters so as to control quality during the AM process.
Precision measurement of three-dimensional (3D) microstructures has drawn great interest from researchers and industry. Currently, a number of high-precision measurement methods such as contourgraphs, interferometers and optical computed tomography are used in industrial applications. Nonetheless, the traditional approaches still suffer from loss of information, low efficiency and possible surface damage. This paper presents a new way to measure micro/nano structures based on a light field microscope. The light field information of the microstructures is acquired using a microlens array inserted between the camera sensor and the objective lens. A series of regular sub-images is recorded by the photosensor and used to reconstruct a 3D image with the developed algorithms. The non-contact acquisition requires only one exposure, which is much more efficient than traditional methods. A microlens array with aspheric surfaces is also designed and used in the developed system to eliminate aberrations and compensate the loss of spatial resolution. A series of simulation and experimental studies on microstructures validates the feasibility of the developed system.
Optical microscopy is an important means and tool for research in microscopic life science. As an important technology for studying life processes with optical methods, microfluidics is widely used in biology and medicine. Inspired by bionics, biomimetic microfluidic control has become a promising branch of the microfluidic field. Within microfluidic channels, it imitates biological functions such as water droplet collection and cilia-driven fluid motion, providing an alternative approach to the design and development of new microfluidic devices. This paper presents a study of functional microstructures with directional transport for bionic microfluidics. Biological microstructured surfaces with directional transport were first studied and designed, imitating rose petals, the outer skin of the Texas horned lizard and the peristome surface of Nepenthes alata. A model of the bionic microstructure with directional transport was then established to reveal the characteristic mechanisms of typical directional transport microstructures. Microstructures were designed according to criteria of distance and speed of directional water droplet transport. Simulation studies show that the designed functional microstructured surfaces achieve the expected directional transport. Such microstructures can be applied to the design and processing of microfluidic chips, so the research helps promote the application and development of bionic microfluidics in optical microscopy.
Measurement and compensation of error components are critically important for improving the precision of a measuring system; however, high-precision error measurement and separation for rotary axes is difficult because of the moving parts. Interferometry is widely used to measure axis errors but can only do so at particular locations of a rotary axis, which is not enough for compensation. This paper therefore proposes a 3-point method based on confocal sensors and an optical flat to measure and separate perpendicularity errors with submicron precision, allowing the errors at any position of an axis to be measured. Simulation studies based on multi-body kinematics and homogeneous transformations indicate that the six degree-of-freedom errors only influence the sensor readings but have no impact on the separation results for the perpendicularity errors. Experimental studies were also undertaken: an optical flat with a peak-to-valley (PV) value of 27 nm was used as the reference plane of the rotating axis, and three confocal sensors with an accuracy of 85 nm were used as the measuring sensors. Experimental results show that the proposed method can achieve an accuracy of 5 μrad in measuring the perpendicularity errors of rotary axes.
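At each angular position, the 3-point separation reduces to fitting a plane through the three confocal readings; a minimal sketch (the sensor layout and plane coefficients in any example are illustrative):

```python
import numpy as np

def tilt_from_three_points(readings, positions):
    """Plane through the three confocal sensor readings.

    positions -- (3, 2) array of the sensors' (x, y) locations
    readings  -- length-3 array of measured heights z on the optical flat
    Returns the tilt components (dz/dx, dz/dy), i.e. the out-of-
    perpendicularity of the rotary axis at this angular position.
    """
    A = np.column_stack([positions[:, 0], positions[:, 1], np.ones(3)])
    a, b, c = np.linalg.solve(A, readings)   # fit z = a*x + b*y + c
    return a, b
```

Repeating this at many rotation angles yields the perpendicularity error as a function of axis position, which interferometry alone cannot provide.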
The motion mode of the polishing pad is one of the important factors affecting polishing results in Computer Controlled Optical Surfacing (CCOS). This paper presents a systematic study of polishing pad motion modes in CCOS. A series of theoretical and experimental studies investigates the influence of two typical motion modes, planetary motion and orbital motion, on the polished surface in terms of material removal rate (MRR), mid-spatial-frequency errors and surface roughness. Firstly, the theoretical removal functions of the two motion modes were established, and experiments were carried out with given polishing parameters; the experimental results were compared with simulations from the established polishing model. Then, the effects of the two motion modes on mid-spatial-frequency errors were simulated by numerical superposition, and the results were verified against actual polishing results. Finally, the surface roughness generated by the two motion modes was examined and compared. The work shows that, compared with the orbital motion mode, planetary motion gives a higher material removal rate, lower mid-spatial-frequency errors and lower surface roughness, which is helpful for optimizing the polishing strategy in CCOS.
In recent years, increasing attention has been paid to bionic structures and functional materials. Theoretical research on and fabrication methods for super-hydrophobic surfaces are well established. However, existing methods depend heavily on equipment precision and complex chemical substances, and it is hard to ensure the consistency of the material surface. Constructing surface microstructures by mechanical processing to realize super-hydrophobicity at scale is therefore of great significance for popularizing super-hydrophobic surfaces. To propose innovative microstructures and provide a theoretical basis for subsequent mechanical processing, based on the classical theory of super-hydrophobicity, a super-hydrophobic film was prepared by the sol-gel method. To explore the effects of different material ratios on hydrophobicity, a micro/nano-structured super-hydrophobic coating was obtained by coating a film modified with hexamethyldisilazane (HMDS) on top of a film modified with polyethylene glycol (PEG). The microstructure of the bilayer film was analyzed and simplified into two microstructure models. For the two models, based on the Wenzel and Cassie equations, a roughness factor is adopted to establish the quantitative relationship between the contact angle and the microstructure parameters; the parameters are analyzed using MATLAB, and the optimized microstructure parameters are thus obtained.
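The Wenzel and Cassie relationships used for the two models can be written down directly; the sketch below evaluates the apparent contact angles, with the roughness factor r and solid fraction f as the design variables (the numeric inputs in any example are illustrative):

```python
import math

def wenzel_angle(theta_deg, r):
    """Apparent contact angle on a fully wetted rough surface (Wenzel):
    cos(theta*) = r * cos(theta), with roughness factor r >= 1."""
    c = max(-1.0, min(1.0, r * math.cos(math.radians(theta_deg))))
    return math.degrees(math.acos(c))

def cassie_angle(theta_deg, f):
    """Apparent contact angle with air trapped under the drop
    (Cassie-Baxter): cos(theta*) = f * (cos(theta) + 1) - 1, where f is
    the solid area fraction in contact with the liquid."""
    c = max(-1.0, min(1.0, f * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0))
    return math.degrees(math.acos(c))
```

Both equations show the design lever: for an intrinsically hydrophobic film (theta > 90°), increasing roughness r or decreasing the solid fraction f pushes the apparent angle toward the super-hydrophobic regime.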
The 8th Asia Pacific Conference on Optics Manufacture & 3rd International Forum of Young Scientists on Advanced Optical Manufacturing
4 August 2023 | Shenzhen, China
Optical design and manufacturing
25 July 2023 | Beijing, China
Advanced Optical Manufacturing Technologies and Applications 2022; and 2nd International Forum of Young Scientists on Advanced Optical Manufacturing (AOMTA and YSAOM 2022)
29 July 2022 | Changchun, China
Optics Ultra Precision Manufacturing and Testing
26 June 2022 | Beijing, China
Seventh Asia Pacific Conference on Optics Manufacture and 2021 International Forum of Young Scientists on Advanced Optical Manufacturing (APCOM and YSAOM 2021)
28 October 2021 | Hong Kong, Hong Kong
Conference on Optics Ultra Precision Manufacturing and Testing