Facial attribute editing has recently been widely used in human-computer interaction and social entertainment applications. However, most existing facial attribute editing methods suffer from limitations such as coarse segmentation granularity and an inability to edit regions accurately. To overcome these problems, Semantic Rendering Generative Adversarial Networks, which combine semantic segmentation and color rendering for facial attribute editing, are presented. First, a semantic segmentation network was constructed to generate masks of attribute-related regions, restricting operations to the target area so that attribute-unrelated details remain unmodified. Second, to effectively generate color masks for synthesizing higher-quality images, a color rendering network was derived by combining a Transformer-based UNet encoder with a ColorMapGAN decoder as the network's generator. To verify the effectiveness of the proposed method, the constructed models were trained on the CelebA and CelebAMask-HQ datasets. The experimental results show that, compared with several existing facial attribute editing methods, the proposed method not only finely segments attribute-related and unrelated areas but also generates more realistic face images.
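The key idea of restricting edits to attribute-related regions can be illustrated with a simple mask-guided compositing step: the rendered (edited) colors are applied only where the segmentation mask is active, while attribute-unrelated pixels are copied unchanged from the input. This is a minimal NumPy sketch of that principle; the function name, array shapes, and soft-mask convention are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def masked_blend(original, rendered, mask):
    """Composite an edited image back into the original, touching only
    the attribute-related pixels selected by the segmentation mask.

    original, rendered: H x W x 3 float arrays with values in [0, 1]
    mask: H x W float array in [0, 1], e.g. produced by a segmentation
          network (a soft mask allows smooth boundaries)
    """
    m = mask[..., None]                      # broadcast mask over channels
    return m * rendered + (1.0 - m) * original

# toy example: 2x2 image where only the top-left pixel is attribute-related
original = np.zeros((2, 2, 3))               # black input image
rendered = np.ones((2, 2, 3))                # fully "edited" (white) render
mask = np.array([[1.0, 0.0],
                 [0.0, 0.0]])                # edit only pixel (0, 0)
out = masked_blend(original, rendered, mask)
```

Because unmasked pixels are reproduced exactly, attribute-unrelated identity details are preserved by construction, which is the property the segmentation stage is designed to guarantee.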