For person-oriented applications, the outstanding recognition ability of modern networks has raised serious concerns about privacy disclosure. Existing methods mainly protect privacy by adding occlusion or noise to the original images before uploading, which is either impractical or destroys the utility of the photos. In this paper, we propose a Generative Adversarial Network (GAN) based image privacy protection algorithm, PriGAN, which generates a privacy image for each original image to fool recognition networks. When generating the privacy image, we adopt the technique of adversarial image perturbations (AIP), which can confuse recognition networks with slight perturbations. That is, the privacy image protects private information by confusing the neural networks hosted by service providers. Meanwhile, the privacy image appears unmodified to human observers compared to the original one, so its utility is preserved. The advantage of PriGAN is that it provides a general privacy protection framework for personal applications, by which the private information in images can be protected without destroying image utility. Experiments show that the privacy images are misclassified at a rate approximately 82.9% higher than the original images, which indicates that our approach can prevent privacy leakage with considerable improvement.
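The adversarial-perturbation idea behind AIP can be illustrated with a minimal sketch. The toy below is not the paper's PriGAN architecture; it is a hypothetical signed-gradient (FGSM-style) perturbation against a deliberately simple linear "recognizer", chosen so the gradient is available in closed form. All names and values here are illustrative assumptions.

```python
import numpy as np

def predict(x, w, b):
    """Toy linear recognizer: class 1 if the logit w.x + b is positive."""
    return 1 if np.dot(w, x) + b > 0 else 0

def fgsm_perturb(x, w, b, epsilon):
    """Add a small signed-gradient step that pushes the logit toward
    the opposite sign, flipping the recognizer's decision.

    For a linear logit f(x) = w.x + b, the gradient w.r.t. x is w,
    so sign(grad) = sign(w); each pixel moves by at most epsilon.
    """
    logit = float(np.dot(w, x) + b)
    direction = -np.sign(logit)          # move against the current class
    return x + epsilon * direction * np.sign(w)

# Deterministic toy "image" and recognizer weights (illustrative values).
w = np.ones(64)
b = 0.0
x = np.full(64, 0.1)

x_adv = fgsm_perturb(x, w, b, epsilon=0.2)

print(predict(x, w, b))       # original image: class 1
print(predict(x_adv, w, b))   # perturbed image: class 0 (misclassified)
print(np.max(np.abs(x_adv - x)))  # per-pixel change bounded by epsilon
```

The per-pixel change is bounded by `epsilon`, which is the sense in which such a perturbation can flip a recognizer's output while remaining nearly invisible to a human observer; PriGAN's contribution, per the abstract, is learning such perturbations with a GAN rather than computing them per-image from gradients.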