This paper presents an industrial augmented reality system that simultaneously measures a component, identifies possible defects, and displays the inspection result directly on the component. The processing is done in real time using a single DMD-based projector for both the inspection and the augmented reality. The use of a single DMD eliminates the issue of registration between the component being inspected and an auxiliary augmented reality projector. The use of a single projection system also eliminates possible occlusions due to parallax between two projection systems. The system uses an algorithm that computes, at video frame rate, the temporal sequences of micromirror positions that, when imaged by a high-speed camera, contain a set of structured-light patterns. The temporal sequences are designed such that a human observer sees the desired augmented information. The proposed prototype can acquire 12 range images per second. The range uncertainty at 1σ is 14 μm, and each range image contains approximately one million 3D points.
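The abstract does not detail the decoding stage, but structured-light systems of this kind commonly recover depth by N-step phase shifting: the camera captures the scene under several phase-shifted sine fringes and computes a wrapped phase per pixel, which triangulation then converts to range. A minimal sketch of the phase recovery on synthetic data (not the paper's actual implementation):

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase shifting: images[k] is captured under a sine fringe
    shifted by 2*pi*k/N; returns the wrapped phase at every pixel."""
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    num = sum(I * np.sin(s) for I, s in zip(images, shifts))
    den = sum(I * np.cos(s) for I, s in zip(images, shifts))
    return np.arctan2(-num, den)  # wrapped into (-pi, pi]

# Synthetic check: a known phase ramp is recovered up to 2*pi wrapping.
true_phase = np.linspace(0, 4 * np.pi, 256).reshape(1, -1)
captures = [0.5 + 0.4 * np.cos(true_phase + 2 * np.pi * k / 4)
            for k in range(4)]
phi = wrapped_phase(captures)
```

The recovered `phi` equals `true_phase` modulo 2π; a separate unwrapping step (often driven by the binary patterns mentioned in these abstracts) resolves the fringe order.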
This paper presents two structured-light 3D imaging systems that use a quasi-analogue projection subsystem based on a combination of a digital micromirror device (DMD), optics, digital processing, and a calibration procedure. The first system is a high-resolution prototype that acquires 12M 3D points per frame; the second is a high-speed prototype that generates 150 3D frames per second, with 2M 3D points per frame. The projection subsystem can produce high-frame-rate, high-contrast, and high-resolution patterns using off-the-shelf components. The structured-light patterns used are the same combination of binary and sine wave fringes as those usually encountered in commercial systems. The proposed systems generate the sine wave patterns using a single binary image, thereby exploiting the high frame rate of the DMD.
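Generating a grayscale sine fringe from a single binary DMD image relies on the optics blurring a 1-bit pattern whose local density matches the target intensity. The abstract does not say how the binary image is computed; one standard way to produce such a pattern is error-diffusion dithering, sketched here as an illustration rather than the authors' method:

```python
import numpy as np

def sine_fringe(width, height, period):
    """Ideal grayscale sine fringe in [0, 1], the dithering target."""
    x = np.arange(width)
    row = 0.5 + 0.5 * np.sin(2 * np.pi * x / period)
    return np.tile(row, (height, 1))

def dither_binary(gray):
    """Floyd-Steinberg error diffusion: a 1-bit image whose local
    average approximates the grayscale target after optical blur."""
    img = gray.astype(float)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            new = 1.0 if img[y, x] >= 0.5 else 0.0
            out[y, x] = int(new)
            err = img[y, x] - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

pattern = dither_binary(sine_fringe(128, 32, period=16))
```

Because the DMD flips mirrors at kilohertz rates, such a binary pattern can be held or cycled while the camera integrates, which is what lets a single 1-bit image stand in for an 8-bit fringe.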
One of the challenges in high-precision manufacturing is constant inspection, as well as efficient communication of the inspection results to the workers. In this context, we present a multi-modal 3D imaging system designed for computer-assisted assembly manufacturing using augmented reality. The three-dimensional measurement subsystem is a structured-light system based on a digital micromirror device (DMD). The augmented reality imagery is displayed on the components being manufactured using another DMD-based color projector that uses wavelengths that do not interfere with the 3D measurements. A thermal camera is also part of the system and is calibrated with respect to the measurement and projection subsystems. In typical target usage, the system can display localized shape deviations with respect to nominal values, the surface temperature across the component, or any information obtained or derived from the subsystems. Moreover, it can be used to display assembly instructions and validate the compliance of the final manufactured component.
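Displaying information "on the component" requires mapping each measured 3D point into the AR projector's pixel grid through the calibration mentioned above. A minimal pinhole-projection sketch of that mapping, with made-up intrinsic and extrinsic values purely for illustration:

```python
import numpy as np

def project(points_3d, K, R, t):
    """Pinhole projection of Nx3 world points into projector pixels,
    given intrinsics K and extrinsics [R | t] from calibration."""
    cam = points_3d @ R.T + t          # world frame -> projector frame
    uv = cam @ K.T                     # apply intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

# Hypothetical calibration values, for illustration only.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pix = project(np.array([[0.1, 0.05, 1.0]]), K, R, t)
# a point at (0.1, 0.05, 1.0) m maps to pixel (740, 410)
```

Any per-point annotation (shape deviation, temperature) rendered at the resulting pixel coordinates then lands on the corresponding surface location.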
Range sensors have drawn much interest for research related to human activity since they provide explicit 3D information about shape that is invariant to clothing, skin color, and illumination changes. However, triangulation-based systems such as structured-light sensors generate occlusions in the image when parts of the scene cannot be seen by both the projector and the camera. Those occlusions, as well as missing data points and measurement noise, depend on the structured-light system design. These artifacts add a level of difficulty to the task of human body segmentation that is typically not addressed in the literature. In this work, we design a segmentation model that reasons about 3D spatial information, identifies the different body parts in motion, and is robust to artifacts inherent to the structured-light system, such as triangulation occlusions, noise, and missing data. First, we build the first realistic sensor-specific training set by closely simulating the actual acquisition scenario with the same intrinsic parameters as our sensor and the artifacts it generates. Second, we adapt a state-of-the-art fully convolutional network to range images of the human body so that it transfers its learning toward 3D spatial information instead of light intensities. Third, we quantitatively demonstrate the importance of simulating sensor-specific artifacts in the training set to improve the robustness of the segmentation of actual range images. Finally, we show the capability of the model to accurately segment human body parts on real range image sequences acquired by our structured-light sensor, with high inter-frame consistency and in real time.
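The key idea of sensor-specific training data is to inject the same artifact types the real sensor produces (depth noise, contiguous occlusion holes) into clean synthetic range images. The paper simulates the actual sensor geometry; the heavily simplified sketch below only illustrates the artifact-injection step, with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_sensor_artifacts(depth, noise_sigma=0.002, occlusion_frac=0.1):
    """Augment a clean synthetic range image (meters) with
    structured-light-style artifacts: Gaussian depth noise plus
    contiguous missing-data regions standing in for triangulation
    occlusions. Invalid pixels are marked NaN."""
    noisy = depth + rng.normal(0.0, noise_sigma, depth.shape)
    seeds = rng.random(depth.shape) < occlusion_frac
    # Spread the random seeds horizontally so holes are contiguous,
    # mimicking shadows cast along the projector-camera baseline.
    mask = seeds | np.roll(seeds, 1, axis=1) | np.roll(seeds, 2, axis=1)
    noisy[mask] = np.nan
    return noisy

sim = add_sensor_artifacts(np.full((64, 64), 1.5))
```

Training on such degraded images, rather than on clean renders, is what the third contribution above quantifies.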
We present a parallel implementation of statistical shape model registration to 3D ultrasound images of the lumbar vertebrae (L2-L4). The Covariance Matrix Adaptation Evolution Strategy optimization technique, along with the Linear Correlation of Linear Combination similarity metric, has been used to improve the robustness and capture range of the registration approach. Instantiation and ultrasound simulation have been implemented on a graphics processing unit for faster registration. Phantom studies show a mean target registration error of 3.2 mm, while 80% of all cases yield a target registration error below 3.5 mm.
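The Linear Correlation of Linear Combination (LC²) metric scores how well the ultrasound intensities can be explained as a linear combination of simulated channels (typically simulated intensity and its gradient). The full metric is evaluated patchwise; the sketch below is a simplified global version, shown only to convey the idea:

```python
import numpy as np

def lc2(us, channels):
    """Simplified (global, non-patchwise) LC2: least-squares fit of the
    ultrasound intensities `us` as a linear combination of the simulated
    `channels` plus an offset, returning the fraction of ultrasound
    variance explained, in [0, 1]."""
    A = np.column_stack([c.ravel() for c in channels] + [np.ones(us.size)])
    y = us.ravel()
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    var = y.var()
    return 1.0 - resid.var() / var if var > 0 else 0.0

# If the ultrasound really is a linear combination of the channels,
# the score is 1.0 regardless of the (unknown) mixing weights.
rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=(2, 400))
score = lc2(2 * x1 - x2 + 0.5, [x1, x2])
```

This invariance to the mixing weights is what makes LC² robust across tissue-dependent ultrasound appearance, and it is the quantity the CMA-ES optimizer maximizes over the shape and pose parameters.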
Three-dimensional models of the spine are very important in diagnosing, assessing, and studying spinal deformities.
These models are generally computed using multi-planar radiography, since it minimizes the radiation dose
delivered to patients and allows them to assume a natural standing position during image acquisition. However,
conventional reconstruction methods require at a minimum two sufficiently distant radiographs (e.g., posterior-anterior
and lateral radiographs) to compute a satisfactory model. Still, it is possible to expand the applicability
of 3D reconstructions by using a statistical model of the entire spine shape. In this paper, we describe a reconstruction
method that takes advantage of a multi-body statistical model to reconstruct 3D spine models. This
method can be applied to reconstruct a 3D model from any number of radiographs and can also integrate prior
knowledge about spine length or preexisting vertebral models. Radiographs obtained from a group of 37 scoliotic
patients were used to validate the proposed reconstruction method using a single posterior-anterior radiograph.
Moreover, we present simulation results where 3D reconstructions obtained from two radiographs using the proposed
method and using the direct linear transform method are compared. Results indicate that it is possible to reconstruct 3D spine models from a single radiograph, and that their accuracy is improved by the addition of constraints, such as prior knowledge of the spine length or of the vertebral anatomy. Results also indicate that the proposed method can improve the accuracy of 3D spine models computed from two radiographs.
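A statistical model makes reconstruction from sparse observations possible because the model's modes of variation constrain the unobserved coordinates. The toy sketch below fits model coefficients to a few observed coordinates by regularized least squares; it is a 1D stand-in for the idea, not the paper's multi-body spine model, and the regularizer is a crude stand-in for the statistical prior:

```python
import numpy as np

def fit_shape_model(mean, modes, idx, observed, reg=1.0):
    """Fit coefficients b so that the reconstructed shape
    (mean + modes @ b) matches the observed coordinates at indices
    `idx`; Tikhonov regularization keeps b near the model mean when
    observations are few. Returns the full reconstructed shape."""
    A = modes[idx, :]
    y = observed - mean[idx]
    b = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ y)
    return mean + modes @ b

# Tiny synthetic model: 10 coordinates, 2 modes of variation.
rng = np.random.default_rng(2)
mean = rng.normal(size=10)
modes = rng.normal(size=(10, 2))
truth = mean + modes @ np.array([0.8, -0.5])
# Observe only the first 6 coordinates; recover all 10.
recon = fit_shape_model(mean, modes, np.arange(6), truth[:6], reg=1e-6)
```

Adding constraints such as a known spine length corresponds to extra rows in the system above, which is consistent with the reported accuracy gains.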