One of the challenges in high-precision manufacturing is constant inspection as well as efficient communication of the inspection results to the workers. In this context, we present a multi-modal 3D imaging system designed for computer-assisted assembly manufacturing using augmented reality. The three-dimensional measurement subsystem is a structured-light system based on a digital micro-mirror device (DMD). The augmented reality imagery is displayed on the components being manufactured using another DMD-based color projector that uses wavelengths that do not interfere with the 3D measurements. A thermal camera is also part of the system and is calibrated with respect to the measurement and projection subsystems. In typical usage, the system can display localized shape deviation with respect to nominal values, the surface temperature across the component, or any information obtained or derived from the subsystems. Moreover, it can be used to display assembly instructions and to validate the compliance of the final manufactured component.
This paper presents two structured-light 3D imaging systems that use a quasi-analogue projection subsystem based on a combination of a digital micromirror device (DMD), optics, digital processing and a calibration procedure. The first system is a high-resolution prototype that acquires 12M 3D points per frame; the second is a high-speed prototype that generates 150 3D frames per second, with 2M 3D points per frame. The projection subsystem can produce high-frame-rate, high-contrast and high-resolution patterns using off-the-shelf components. The structured-light patterns used are the same combination of binary and sine-wave fringes as those usually encountered in commercial systems. The proposed systems generate the sine-wave patterns using a single binary image, thereby exploiting the high frame rate of the DMD.
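The phase-recovery step underlying such sine-fringe systems can be sketched with the standard N-step phase-shifting formula; the function below is an illustrative sketch of that generic computation, not the prototypes' binary-pattern encoding, and the function name is hypothetical.

```python
import numpy as np

def phase_from_shifts(images):
    """Recover the wrapped phase from N phase-shifted sinusoidal fringe
    images, where the n-th image was captured under a pattern shifted by
    2*pi*n/N (classic N-step phase shifting).
    """
    imgs = np.asarray(images, dtype=float)
    n = imgs.shape[0]
    shifts = 2 * np.pi * np.arange(n) / n
    # Weighted sums over the shift index for each pixel.
    num = np.tensordot(np.sin(shifts), imgs, axes=1)
    den = np.tensordot(np.cos(shifts), imgs, axes=1)
    # For I_n = A + B*cos(phi + shift_n), phi = atan2(-num, den).
    return np.arctan2(-num, den)  # wrapped phase in (-pi, pi]
```

The wrapped phase would then be unwrapped (for example, with the binary patterns mentioned above) before triangulation against the camera-projector geometry.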
This paper presents an industrial augmented reality system that simultaneously measures a component, identifies possible defects and displays the inspection result directly on the component. The processing is done in real time using a single DMD-based projector for both the inspection and the augmented reality. The use of a single DMD eliminates the issue of registration between the component being inspected and an auxiliary augmented reality projector. The use of a single projection system also eliminates possible occlusion due to parallax between two projection systems. The system uses an algorithm that computes, at video frame rate, the temporal sequences of micromirror positions that, when imaged by a high-speed camera, contain a set of structured-light patterns. The temporal sequences are designed such that a human observer sees the desired augmented information. The proposed prototype can acquire 12 range images per second. The range uncertainty at 1-σ is 14 μm and each range image contains approximately one million 3D points.
Range sensors have drawn much interest in research related to human activity, since they provide explicit 3D shape information that is invariant to clothing, skin color and illumination changes. However, triangulation-based systems like structured-light sensors generate occlusions in the image when parts of the scene cannot be seen by both the projector and the camera. Those occlusions, as well as missing data points and measurement noise, depend on the structured-light system design. These artifacts add a level of difficulty to the task of human body segmentation that is typically not addressed in the literature. In this work, we design a segmentation model that is able to reason about 3D spatial information, to identify the different body parts in motion, and that is robust to artifacts inherent to the structured-light system, such as triangulation occlusions, noise and missing data. First, we build the first realistic sensor-specific training set by closely simulating the actual acquisition scenario with the same intrinsic parameters as our sensor and the artifacts it generates. Second, we adapt a state-of-the-art fully convolutional network to range images of the human body in order for it to transfer its learning toward 3D spatial information instead of light intensities. Third, we quantitatively demonstrate the importance of simulating sensor-specific artifacts in the training set to improve the robustness of the segmentation of actual range images. Finally, we show the capability of the model to accurately segment human body parts on real range image sequences acquired by our structured-light sensor, with high inter-frame consistency and in real time.
During the autumn of 2004, a team of 3D imaging scientists from the National Research Council of Canada (NRC) was invited to Paris to undertake the 3D scanning of Leonardo's most famous painting. The objective of this project was to scan the Mona Lisa, obverse and reverse, in order to provide high-resolution 3D image data of the complete painting to help in the study of the structure and technique used by Leonardo. Unlike any other painting scanned to date, the Mona Lisa presented a unique research and development challenge for 3D imaging. This paper describes this challenge and presents results of the modeling and analysis of the 3D and color data.
We propose a dual-resolution foveated stereoscopic display built from commodity projectors and computers. The technique is aimed at improving the visibility of fine details of 3D models in computer-generated imagery: it projects a high-resolution stereoscopic inset (or fovea, by analogy with biological vision) that is registered in image space with the overall stereoscopic display. A specific issue that must be addressed is the perceptual conflict between the apparent depth of the natural boundary of the projected inset (visible due to changes in color, brightness, and resolution) and that of the underlying scene being displayed. We solve this problem by assigning points to be displayed in either the low-resolution display or the inset in a perceptually consistent manner. The computations are performed as a post-processing step, are independent of the complexity of the model, and are guaranteed to yield a correct stereoscopic view. The system can accommodate approximately aligned projectors through image warping applied as part of the rendering pipeline. The method for boundary adjustment is discussed along with implementation details and applications of the technique for the visualization of highly detailed 3D models of environments and sites.
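The image warping used to accommodate approximately aligned projectors amounts to mapping pixel coordinates through a planar homography. A minimal sketch of that per-point mapping follows; the matrix used in the example is illustrative, not calibration data from the system.

```python
import numpy as np

def warp_points(H, pts):
    """Map 2D pixel coordinates through a 3x3 homography H.

    This mirrors the per-vertex correction one could apply in a
    rendering pipeline so that an approximately aligned projector's
    image registers with the main display.
    """
    pts = np.asarray(pts, dtype=float)
    # Lift to homogeneous coordinates, apply H, then dehomogenize.
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In practice the homography would be estimated once per projector from a few corresponding points and applied every frame.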
The National Research Council of Canada (NRC) has developed a suite of 3D imaging technologies that has been applied to a wide range of museum and heritage recording applications. The suite includes high-resolution laser scanner systems as well as software for the preparation of accurate 3D models and for the display, analysis and comparison of 3D data. This paper offers an overview of the technology and its museum and heritage applications, with particular reference to the 3D examination of paintings and the recording of archaeological sites.
This paper presents a summary of the 3D modeling work that was accomplished in preparing multimedia products for cultural heritage interpretation and entertainment. The three cases presented are the Byzantine Crypt of Santa Cristina, Apulia; Temple C of Selinunte, Sicily; and a bronze sculpture from the 6th century BC found in Ugento, Apulia. The core of the approach is based upon high-resolution photo-realistic texture mapping onto 3D models generated from range images. It is shown that three-dimensional modeling from range imaging is an effective way to present the spatial information for environments and artifacts. Spatial sampling and range measurement uncertainty considerations are addressed by giving the results of a number of tests on different range cameras. The integration of both photogrammetric and CAD modeling complements this approach. Results have been prepared for these projects as a CD-ROM, a DVD, a virtual 3D theatre, holograms, video animations and web pages.
This paper presents the work that was accomplished in preparing a multimedia CD-ROM about the history of a Byzantine Crypt. An effective approach based upon high-resolution photo-realistic texture mapping onto 3D models generated from range images is used to present the spatial information about the Crypt. Usually, this information is presented in flat 2D images that do not convey the three-dimensionality of an environment. In recent years, high-resolution recording of heritage sites has stimulated considerable research in fields like photogrammetry, computer vision and computer graphics. The methodology we present should appeal to people interested in 3D for heritage. It is applied to the virtualization of a Byzantine Crypt, where geometrically correct texture mapping is essential to render the environment realistically, to produce virtual visits and to apply virtual restoration techniques. A CD-ROM and a video animation have been created to show the results.
This talk summarizes the conclusions of a few laser scanning experiments on remote sites and the potential of the technology for imaging applications. Parameters to be considered for these types of activities relate to the design of a large-volume-of-view laser scanner, such as the depth of field, ambient light interference (especially outdoors) and the scanning strategies. The first case reviewed is an inspection application performed in a coal-burning power station located in Alberta, Canada. The second case is the digitizing of the ODS (Orbiter Docking System) at the Kennedy Space Center in Florida, and the third case is the digitizing of a large sculpture located outside the Canadian Museum of Civilisation in Ottawa-Hull, Canada.
Laser range sensors measure the 3D coordinates of points on the surface of objects. Range images taken from different points of view can provide a more or less complete coverage of an object's surface. The geometric information carried by the set of range images can be integrated into a unified, non-redundant triangular mesh describing the object. This model can then be used as the input to rapid prototyping or machining systems in order to produce a replica. Direct replication proves particularly useful for complex sculptured surfaces. The paper will describe the proposed approach and relevant algorithms, and discuss two test cases.
The determination of relative pose between two range images, also called registration, is a ubiquitous problem in computer vision, for geometric model building as well as dimensional inspection. The method presented in this paper takes advantage of the ability of many active optical range sensors to record intensity or even color in addition to the range information. This information is used to improve the registration procedure by constraining potential matches between pairs of points based on a similarity measure derived from the intensity information. One difficulty in using the intensity information is its dependence on the measuring conditions such as distance and orientation. The intensity or color information must first be converted into a viewpoint-independent feature. This can be achieved by inverting an illumination model, by differential feature measurements or by simple clustering. Following that step, a robust iterative closest point method is used to perform the pose determination. Using the intensity can help to speed up convergence or, in cases of remaining degrees of freedom (e.g. on images of a sphere), to additionally constrain the match. The paper will describe the algorithmic framework and provide examples using range-and-color images.
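The intensity-constrained matching step can be sketched as a nearest-neighbor search gated by an intensity similarity threshold. This is a brute-force illustration of that idea under the assumption that intensities have already been converted to viewpoint-independent values; the function name and the gating threshold are hypothetical, and the paper's robust ICP variant is not reproduced.

```python
import numpy as np

def match_with_intensity(src_pts, src_int, dst_pts, dst_int, max_di=0.1):
    """For each source point, find the nearest destination point among
    those whose intensity differs by at most max_di; return -1 when no
    destination point passes the intensity gate.
    """
    dst_pts = np.asarray(dst_pts, float)
    dst_int = np.asarray(dst_int, float)
    matches = []
    for p, i in zip(np.asarray(src_pts, float), np.asarray(src_int, float)):
        ok = np.abs(dst_int - i) <= max_di           # intensity gate
        if not ok.any():
            matches.append(-1)
            continue
        d = np.linalg.norm(dst_pts[ok] - p, axis=1)  # geometric distance
        matches.append(int(np.flatnonzero(ok)[np.argmin(d)]))
    return np.array(matches)
```

Within an ICP loop, such gated correspondences replace plain closest-point matches before each rigid-transform update, which is how the intensity can break symmetries such as the sphere example above.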
This paper describes the concept of differential inspection using a digital 3D imaging laser camera. Shapes can be compared and analyzed to a resolution smaller than 25 μm. Differential information is displayed as a high-definition image, color-coded for human interpretation. The same concept of differential inspection can be applied to 3D color images in order to monitor shape and color variations. Potential applications are discussed.
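The color-coded display of differential information can be sketched as normalizing the signed measured-minus-nominal deviation for use with a diverging colormap. The function below is an illustrative sketch, not the paper's method; the clipping scale reuses the 25 μm resolution figure mentioned above.

```python
import numpy as np

def deviation_map(measured, nominal, tol=25e-6):
    """Normalize signed shape deviation (in metres) to [-1, 1],
    saturating at +/- tol, ready for any diverging colormap."""
    dev = np.asarray(measured, float) - np.asarray(nominal, float)
    return np.clip(dev / tol, -1.0, 1.0)
```

Rendering the result through, say, a blue-white-red colormap yields the kind of high-definition, human-interpretable deviation image the abstract describes.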
This paper describes recent work on hierarchical segmentation of range images. The algorithm starts with an initial partition of small planar regions obtained using a robust fitting method constrained by the detection of depth and orientation discontinuities. From this initial partition, represented by an adjacency graph structure, we optimally group these regions into progressively larger regions until an approximation limit is reached. The algorithm uses Bayesian decision theory to determine the local optimal grouping and the geometrical complexity of the approximation surface. This algorithm produces a hierarchical structure that can be used to represent objects with a varying level of detail by traversing the generated hierarchy. Experimental results are presented.
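The grouping loop over the adjacency graph can be sketched as greedy agglomerative merging: repeatedly merge the cheapest adjacent pair until the cost exceeds a limit, recording the merge order as the hierarchy. This sketch substitutes a simple squared-difference cost between per-region scalar summaries for the paper's Bayesian criterion and surface models; all names here are illustrative.

```python
def greedy_merge(values, edges, limit):
    """Greedily merge adjacent regions while the cheapest merge cost is
    below `limit`; returns the merge order as (kept, absorbed, cost).

    values: per-region scalar summaries (stand-ins for fitted surface
    parameters); edges: pairs of adjacent region indices.
    """
    values = list(values)
    parent = list(range(len(values)))

    def find(a):  # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    edge_set = {tuple(sorted(e)) for e in edges}
    hierarchy = []
    while True:
        best, cost = None, limit
        for a, b in edge_set:
            ra, rb = find(a), find(b)
            if ra == rb:
                continue
            c = (values[ra] - values[rb]) ** 2  # toy merge cost
            if c < cost:
                best, cost = (ra, rb), c
        if best is None:
            break
        ra, rb = best
        values[ra] = (values[ra] + values[rb]) / 2  # merged summary
        parent[rb] = ra
        hierarchy.append((ra, rb, cost))
    return hierarchy
```

Truncating the recorded merge sequence at different depths yields coarser or finer partitions, which is the variable level of detail the abstract refers to.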