Image-based 3D face reconstruction has broad applications in face analysis, such as face recognition, facial animation, and face editing. Recently, popular methods based on the 3D Morphable Model (3DMM) have suffered from ill-posed face pose estimation and depth ambiguity. To address this issue, we incorporate two multi-view geometric constraints into the reconstruction process. Note that before applying these constraints, a complete UV texture must be generated by texture fusion. We can then establish dense correspondences between different views using a novel self-supervised pixel consistency constraint. We also apply a facial landmark-based epipolar constraint on the relative pose between views to obtain more accurate results. Extensive experiments demonstrate the superiority of the proposed method over other popular 3DMM-based methods with single-view input in both accuracy and robustness, especially under large poses.
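The landmark-based epipolar constraint described above can be sketched as a Sampson-distance residual over corresponding 2D facial landmarks in two views. The function below is a minimal illustration, assuming a fundamental matrix `F` relating the views is available; the paper's exact formulation may differ.

```python
import numpy as np

def sampson_epipolar_residual(F, pts1, pts2):
    """Mean Sampson epipolar distance for landmark pairs.

    F    : (3, 3) fundamental matrix relating view 1 to view 2.
    pts1 : (N, 2) landmark pixel coordinates in view 1.
    pts2 : (N, 2) corresponding landmark coordinates in view 2.
    """
    n = pts1.shape[0]
    x1 = np.hstack([pts1, np.ones((n, 1))])  # homogeneous coordinates
    x2 = np.hstack([pts2, np.ones((n, 1))])
    Fx1 = x1 @ F.T            # epipolar lines in view 2 (rows: F @ x1_i)
    Ftx2 = x2 @ F             # epipolar lines in view 1 (rows: F^T @ x2_i)
    num = np.sum(x2 * Fx1, axis=1) ** 2               # (x2^T F x1)^2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return float(np.mean(num / den))
```

Minimizing this residual with respect to the estimated camera poses penalizes landmark pairs that stray from each other's epipolar lines.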
Robotic grasping in multi-object stacking scenes is important for autonomous robot manipulation. In this paper, we propose a 6-DoF (Degree of Freedom) grasping method for stacked rectangular objects from single-view point clouds. We use the PointNet++ network together with the DBSCAN clustering algorithm to extract the target object from the whole scene. The 6-DoF pose of the gripper is obtained by our grasp pose estimation algorithm. To train the PointNet++ network, we build a small rectangular-object segmentation dataset containing 800 real-world stacking scenes. The whole grasping system is lightweight, taking about 518 ms for a complete grasp-planning cycle. Extensive experiments show that our method achieves a 92% success rate and a 95.5% completion rate, which satisfies the requirements of industrial applications.
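The DBSCAN clustering step used to separate individual objects from the segmented scene can be illustrated with scikit-learn. This is a minimal sketch assuming a segmented (N, 3) point cloud; the `eps` and `min_samples` values are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_objects(points, eps=0.02, min_samples=30):
    """Group a segmented point cloud (N, 3) into per-object clusters.

    Returns an integer cluster label per point; -1 marks noise points.
    eps is a distance threshold in the point cloud's units (e.g. meters).
    """
    return DBSCAN(eps=eps, min_samples=min_samples).fit(points).labels_
```

Each resulting cluster can then be passed to the grasp pose estimation stage as one candidate object.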
KEYWORDS: 3D modeling, Head, Data modeling, Detection and tracking algorithms, 3D scanning, 3D acquisition, 3D printing, 3D image reconstruction, Computer vision technology
We propose an automatic face-and-head deformation method that combines 3D faces with an arbitrary head model. With the rapid development of computer vision and deep learning, 3D scans of human faces have become easier to obtain. How to complete scanned 3D face data into a full head model, or to give a scanned 3D face different 3D hairstyles, has remained an open question. To this end, we propose a Global Deformation Model (GDM), implemented through multiple iterations, which combines full-head 3D data with complex hairstyles and scanned 3D face data. In this way, the scanned face is automatically completed into a full-head model. Experiments show that, compared with other deformation algorithms and full-head reconstruction methods, our method offers better automation and robustness and produces good deformation results on complex 3D data. We provide an attractive solution for graphic design, virtual reality, 3D printing, and other industries, with wide applicability in consumer scenarios.
Boundary and edge cues are very useful for improving various visual tasks, such as semantic segmentation, object recognition, stereo vision, and object generation. In recent years, the problem of edge detection has been revisited, and deep learning has brought significant progress. Traditional edge detection is a challenging binary classification problem, and multi-category semantic edge detection is more challenging still. We model the edge detection of cultural relics and classify their pixels accordingly. To this end, we propose a novel end-to-end deep semantic edge learning architecture based on ResNet, together with an adaptive class-weighting scheme to supervise the training. The results show that the proposed architecture outperforms existing semantic edge detection methods on our own cultural-relic edge detection dataset.
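One common way to realize an adaptive class weighter for semantic edge detection is a class-balanced binary cross-entropy, where each class's edge and non-edge pixels are reweighted by their relative frequency (as in CASENet-style losses). The sketch below is an assumption about such a scheme, not the paper's exact loss.

```python
import numpy as np

def adaptive_weighted_bce(pred, target, eps=1e-7):
    """Class-balanced binary cross-entropy for multi-label semantic edges.

    pred, target: (C, H, W) arrays; pred holds probabilities in (0, 1),
    target holds per-class binary edge maps. Each class is reweighted by
    its own edge/non-edge pixel ratio so rare edge pixels are not swamped
    by the dominant background.
    """
    num_classes = pred.shape[0]
    loss = 0.0
    for c in range(num_classes):
        p, t = pred[c].ravel(), target[c].ravel()
        beta = 1.0 - t.mean()                 # fraction of non-edge pixels
        w = np.where(t == 1, beta, 1.0 - beta)  # up-weight the rare class
        loss += -np.mean(w * (t * np.log(p + eps)
                              + (1 - t) * np.log(1 - p + eps)))
    return loss / num_classes
```

Because the weights are recomputed from each label map, the balancing adapts per class and per image rather than using a fixed global weight.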
To make robotic automatic welding systems more autonomous and accurate, 3D weld seam extraction has become a research hotspot. In this paper, we propose an extraction method for five types of straight-line seams based on point clouds obtained by three-dimensional reconstruction with binocular structured light. First, the second derivative is used to locate inflection points for a rough extraction of seam characteristics. Second, according to the shape information of the welding workpiece, the center points of the seam are detected and a least-squares algorithm is used to fit the seam model. Finally, the 3D welding spot position and pose are estimated from the established mathematical model. Experiments were conducted under five different situations, and the average extraction accuracy of the method reaches 0.19 mm. The results indicate that the proposed algorithm can efficiently locate all five seam types and generate tracking path plans to guide robot manipulation.
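The least-squares fit of detected seam center points to a straight-line model can be sketched via SVD of the centered points, whose principal axis is the best-fit direction. This is a generic illustration of the fitting step, not the authors' exact implementation.

```python
import numpy as np

def fit_seam_line(points):
    """Least-squares 3D line fit to seam center points (N, 3).

    Returns (centroid, unit_direction): the fitted line passes through the
    centroid along the principal axis of the centered point set.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    direction = vt[0]   # first right-singular vector: best-fit direction
    return centroid, direction
```

Sampling along `centroid + t * direction` then yields the welding path used to guide the robot.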