3D-aware generative methods based on neural radiance fields are gaining attention. Nevertheless, they suffer from slow training and inference due to volume rendering and deep neural networks. We propose using a voxel grid as an explicit representation of the radiance field, combined with a shallow network that interprets the spatial features. We employ tensor decomposition to factor the voxel grid into axis-aligned feature vectors, reducing the synthesis space complexity from O(n³) to O(n). Additionally, we leverage the well-established 2D generative adversarial network architecture in our 1D feature-vector generator.
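The core idea of the decomposition can be sketched with a rank-r CP factorization: the dense voxel grid is approximated as a sum of outer products of three axis-aligned vectors, so storage and generation cost scale with n rather than n³. The resolution, rank, and random features below are illustrative, not values from the paper.

```python
import numpy as np

n, r = 32, 4  # grid resolution and decomposition rank (illustrative values)
rng = np.random.default_rng(0)

# Three sets of axis-aligned feature vectors: the rank-r CP factors.
# Storage is O(3 * r * n) instead of O(n^3) for a dense voxel grid.
vx = rng.standard_normal((r, n))
vy = rng.standard_normal((r, n))
vz = rng.standard_normal((r, n))

# Reconstruct the dense grid as a sum of r outer products (for comparison only).
voxel = np.einsum('ri,rj,rk->ijk', vx, vy, vz)

def feature(i, j, k):
    """Query one spatial feature without materializing the full grid."""
    return np.sum(vx[:, i] * vy[:, j] * vz[:, k])

assert np.isclose(feature(1, 2, 3), voxel[1, 2, 3])
```

In a generative setting, only the 1D factor vectors need to be produced by the generator, which is what makes a 2D-GAN-style architecture applicable to a 3D representation.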
The real-time simulation of high-resolution cloth is challenging. Speed, stability, and robustness are crucial for creating realistic physical animations at interactive rates. Hierarchical position-based dynamics (HPBD) is a cloth simulation method that focuses on real-time simulation. This article describes a new method that leverages the mesh hierarchy of HPBD to support level-of-detail (LoD) simulation and rendering. While only low-resolution meshes are rendered, the high-resolution simulation is paused until high-resolution rendering is required. Thus, cloth simulation can be performed efficiently using LoD algorithms. The new method is demonstrated using various experimental animations. Both HPBD and LoD are widely used in real-time applications, and combining the two techniques may improve the performance and quality of real-time applications.
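The LoD gating described above can be sketched as follows: the coarse level of the hierarchy always steps, while the fine level steps only when the camera is close enough that high-resolution rendering is needed. The class name, distance threshold, and step counters are hypothetical, not the article's API.

```python
class LoDClothSim:
    """Minimal sketch of LoD-gated hierarchical cloth simulation.

    The coarse (low-resolution) solve runs every frame; the fine
    (high-resolution) solve is paused unless the cloth is rendered
    at high resolution, here approximated by a camera-distance test.
    """

    def __init__(self, lod_distance):
        self.lod_distance = lod_distance  # switch-over distance (hypothetical)
        self.coarse_steps = 0
        self.fine_steps = 0

    def step(self, camera_distance):
        self.coarse_steps += 1            # low-res PBD solve always runs
        if camera_distance < self.lod_distance:
            self.fine_steps += 1          # high-res solve only when visible
            return 'fine'
        return 'coarse'                   # high-res simulation stays paused

sim = LoDClothSim(lod_distance=10.0)
sim.step(25.0)  # far away: coarse only
sim.step(5.0)   # close up: fine level resumes
```

The design point is that the cost of the expensive fine-level solve is paid only during the frames in which its output is actually visible.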
Object detection in aerial images is the task of predicting target categories while locating the objects. Since different categories of objects may have similar shapes and textures in aerial images, we propose a context-aware layer that provides global and robust features for the classification and regression branches. In addition, we propose CentraBox to reduce unnecessary training samples during the training phase. We also propose instance-level normalization to balance the contributions among instances. Finally, we compare our method with other methods in terms of accuracy, speed, and parameter usage, and we evaluate our method under different hyper-parameter settings.
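One plausible reading of instance-level normalization is weighting each training sample's loss by the inverse of the number of samples assigned to its instance, so that large objects with many assigned samples do not dominate small ones. The function below is a hedged sketch under that assumption; it is not the paper's exact formulation.

```python
import numpy as np

def instance_level_normalization(losses, instance_ids):
    """Balance per-sample losses across instances (assumed formulation).

    Each sample's loss is divided by the number of samples belonging
    to the same instance, then the total is averaged over instances,
    so every instance contributes equally regardless of its size.
    """
    ids, counts = np.unique(instance_ids, return_counts=True)
    count_of = dict(zip(ids, counts))
    weights = np.array([1.0 / count_of[i] for i in instance_ids])
    return np.sum(losses * weights) / len(ids)

# Two samples belong to instance 0, one to instance 1: each instance
# contributes its mean loss, and the result averages over instances.
losses = np.array([1.0, 1.0, 4.0])
balanced = instance_level_normalization(losses, np.array([0, 0, 1]))
```

Without such a weighting, a naive mean over samples would let the large instance's many samples outweigh the small instance's single sample.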
Texture synthesis has been researched for several decades, and a good texture can improve the quality of 3D objects. However, it is difficult to obtain good textures from real scenes or from time-varying weathering processes. This paper introduces a method that uses a label map to control texture synthesis. The labels, extracted from source images, represent the different degrees of weathering in an image, so the method can synthesize a new label map that controls the weathered region while preserving boundary features. Furthermore, we can synthesize a more or less weathered texture from the source texture by image analogy.
We propose a framework, CSSNet, to exchange upper clothing across people with different poses, body shapes, and clothing. Our approach consists of three stages: (1) disentangling features such as clothing, body pose, and semantic segmentation from the source and target persons; (2) synthesizing realistic, high-resolution images of the target dressing style; and (3) transferring complex logos from the source clothing to the target wearer. Our proposed end-to-end neural network architecture can generate images of a specific person wearing the target clothing. In addition, we propose a post-processing method to recover complex logos that are missing or blurred in the network outputs. Our results are more realistic and of higher quality than those of previous methods, and our method preserves clothing shape and texture simultaneously.