Deep neural networks (DNNs) have recently been applied to the video compressive sensing (VCS) task. Existing DNN-based VCS methods compress and reconstruct the scene video in either the space or the time dimension alone, ignoring the spatial-temporal correlation of the video. Moreover, they generally use a pixel-wise loss function, which over-smooths the results. In this paper, we propose a perceptual spatial-temporal VCS network. The spatial-temporal VCS network compresses and recovers the video in both the space and time dimensions, preserving the video's spatial-temporal correlation. In addition, we refine the perceptual loss by selecting specific feature-wise loss terms and adding a pixel-wise loss term; the refined perceptual loss guides the spatial-temporal network to retain more texture and structure. Experimental results show that the proposed method achieves better visual quality with less recovery time than the state of the art.
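The abstract does not specify which feature layers or weights the refined perceptual loss uses, so the following is only a minimal PyTorch sketch of the general recipe it describes: selected feature-wise (VGG) loss terms combined with a pixel-wise term. The class name `RefinedPerceptualLoss`, the layer indices, and the weights are illustrative assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class RefinedPerceptualLoss(nn.Module):
    """Hypothetical sketch: weighted sum of feature-wise losses from
    selected VGG-16 layers plus a pixel-wise MSE term. Layer choices
    and weights are assumptions for illustration."""

    def __init__(self, feature_layers=(8, 17), feature_weight=1.0, pixel_weight=1.0):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad = False  # fixed feature extractor
        self.vgg = vgg
        self.feature_layers = set(feature_layers)
        self.feature_weight = feature_weight
        self.pixel_weight = pixel_weight
        self.mse = nn.MSELoss()

    def forward(self, recon, target):
        # recon/target: (B, 3, H, W) frames; a video clip would be
        # flattened to frames before calling this loss.
        # Pixel-wise term keeps intensities globally faithful.
        loss = self.pixel_weight * self.mse(recon, target)
        x, y = recon, target
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.feature_layers:
                # Feature-wise terms encourage texture/structure retention.
                loss = loss + self.feature_weight * self.mse(x, y)
        return loss
```

In this kind of design, the pixel-wise term anchors the reconstruction to the ground truth while the feature-wise terms penalize the over-smoothing that a pixel-wise loss alone tends to produce; the relative weights trade off fidelity against perceptual sharpness.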