Human pose estimation is a key step in understanding human behavior in images and videos. Bottom-up human pose estimation methods struggle to predict the correct pose of a person in large scenes because of scale variation. In this paper we propose a two-stage hierarchical pipeline: it first acquires images of the large scene and sends tracking commands to a two-degree-of-freedom shooting platform equipped with an image sensor, which follows a moving target based on its motion detection box; it then locally constrains the captured image stream with a top-down target detection algorithm so that only content related to the moving target is retained. The cropped images are fed into a general-purpose human pose estimation model for pose detection. We deployed the algorithm on a two-degree-of-freedom filming platform equipped with a camera and conducted detection experiments on athletes in running and ski-jumping scenes, using each athlete and the surrounding area as the region of interest (ROI) to generate images or videos annotated with the target's skeleton pose for guiding sports training. This approach mitigates the scale-variation challenge in bottom-up multi-person pose estimation, especially in large scenes, where person keypoints can be located more accurately. Experiments show that the method meets practical speed and accuracy requirements for athlete pose detection in large scenes of everyday sports.
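The abstract describes the two-stage pipeline (detect and track the moving target, steer the platform, crop the ROI, then run a generic pose estimator on the crop) but gives no code; a minimal sketch is shown below. All interface names (`detector.detect`, `platform.track`, `pose_model.predict`) and the padding ratio are hypothetical placeholders, not the authors' actual implementation.

```python
# Hypothetical sketch of the two-stage pipeline described above; the
# detector, platform, and pose-model interfaces are assumptions.
import numpy as np


def track_and_estimate_pose(frame: np.ndarray, detector, platform, pose_model):
    """Detect the moving target, steer the 2-DOF platform toward it,
    crop the ROI, and run top-down pose estimation on the crop."""
    # Stage 1: target detection yields a bounding box (x, y, w, h).
    box = detector.detect(frame)
    if box is None:
        return None  # no moving target in view

    x, y, w, h = box
    # Send pan/tilt commands so the target stays centered on the sensor.
    cx, cy = x + w / 2, y + h / 2
    platform.track(cx - frame.shape[1] / 2, cy - frame.shape[0] / 2)

    # Locally constrain the image stream: keep only the ROI around the
    # target, padded so limbs near the box edge are not clipped.
    pad = int(0.15 * max(w, h))  # assumed padding ratio
    x0, y0 = max(0, x - pad), max(0, y - pad)
    x1 = min(frame.shape[1], x + w + pad)
    y1 = min(frame.shape[0], y + h + pad)
    roi = frame[y0:y1, x0:x1]

    # Stage 2: feed the crop to a general-purpose pose model; keypoints
    # come back in ROI coordinates, so map them to frame coordinates.
    keypoints = pose_model.predict(roi)  # (K, 2) array assumed
    keypoints[:, 0] += x0
    keypoints[:, 1] += y0
    return keypoints
```

Cropping before estimation is what counters scale variation: the pose model always sees the athlete at roughly the same relative size regardless of how small they appear in the full scene.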
Novel view synthesis is a long-standing problem. Despite the rapid development of neural radiance fields (NeRF), NeRF still cannot achieve a good trade-off between precision and efficiency when rendering dynamic human bodies. In this paper, we aim to synthesize a free-viewpoint video of an arbitrary human performer efficiently, requiring only a sparse set of camera views as input and skirting per-case fine-tuning. Recently, several works have addressed this problem by learning person-specific neural radiance fields to capture the appearance of a particular human. In parallel, other works have proposed pixel-aligned features to generalize radiance fields to arbitrary new scenes and objects. Adapting these generalization approaches to humans achieves reasonable rendering results. However, because of the difficulty of modeling the complex appearance of humans and the dynamics of the scene, it is challenging to train a NeRF well in an efficient way. We find that the slow convergence of human body reconstruction models is largely due to the NeRF representation. In this work, we introduce a voxel-grid-based representation for human view synthesis, termed Voxel Grid Performer (VGP). Specifically, a sparse voxel grid is designed to represent the density and color in every spatial voxel, which enables better performance with less computation than conventional NeRF optimization. We perform extensive experiments on both seen and unseen human performers, demonstrating that our approach surpasses NeRF-based methods on a wide variety of metrics. Code and data will be made available at https://github.com/fanzhongyi/vgp.
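The abstract gives no implementation detail, but a minimal PyTorch sketch of a voxel-grid radiance representation of the kind VGP describes, a grid holding per-voxel density and color queried by trilinear interpolation rather than an MLP, might look as follows. For simplicity the sketch uses a dense grid (VGP's grid is sparse), and the resolution, channel layout, and activations are all assumptions.

```python
# Minimal sketch of a voxel-grid radiance representation (assumed form;
# the actual VGP grid layout and interpolation are not specified above).
import torch
import torch.nn.functional as F


class VoxelGrid(torch.nn.Module):
    """Grid storing per-voxel density (1 channel) and color (3 channels),
    queried by trilinear interpolation instead of evaluating an MLP."""

    def __init__(self, resolution: int = 128):
        super().__init__()
        # Shape (1, C, D, H, W), as expected by F.grid_sample in 3D.
        r = resolution
        self.density = torch.nn.Parameter(torch.zeros(1, 1, r, r, r))
        self.color = torch.nn.Parameter(torch.zeros(1, 3, r, r, r))

    def forward(self, pts: torch.Tensor):
        """pts: (N, 3) points in [-1, 1]^3. Returns (sigma, rgb)."""
        # grid_sample takes sample coordinates of shape (1, N, 1, 1, 3).
        grid = pts.view(1, -1, 1, 1, 3)
        sigma = F.grid_sample(self.density, grid, align_corners=True).view(-1)
        rgb = F.grid_sample(self.color, grid, align_corners=True)
        rgb = torch.sigmoid(rgb.view(3, -1).t())  # (N, 3) in [0, 1]
        return F.softplus(sigma), rgb
```

This illustrates why a grid converges faster than a conventional NeRF: a query is a cheap interpolation of directly optimized parameters, and gradients update only the voxels around each sample instead of every weight of a deep network.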