Paper
18 November 2019 Intermediate deep-feature compression for multitasking
Abstract
Collaborative intelligence is a new strategy for deploying deep neural network models on AI-based mobile devices: part of the model runs on the mobile device to extract features, while the rest runs in the cloud. In this setting, feature data rather than the raw image is transmitted to the cloud, and the uploaded features must generalize well enough to support multiple tasks. To this end, we design an encoder-decoder network that extracts intermediate deep features from the image and propose a method that enables these features to serve different tasks. Finally, we apply a lossy compression method to the intermediate deep features to improve transmission efficiency. Experimental results show that the features extracted by our network support input reconstruction and object detection simultaneously. Moreover, with the deep-feature compression method proposed in this work, the reconstructed images are of good quality both visually and in quantitative metrics, and object detection accuracy also remains high.
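The pipeline the abstract describes can be illustrated with a minimal sketch. The names and layer choices below are hypothetical, not the authors' code, and simple 8-bit uniform quantization stands in for the paper's lossy feature-compression method; the sketch only shows the split between an on-device feature encoder, compressed feature upload, and a cloud-side network serving two tasks (reconstruction and a detection-style head).

```python
# Hypothetical sketch of the mobile/cloud split described in the abstract.
import torch
import torch.nn as nn

class MobileEncoder(nn.Module):
    """Runs on the device: maps an RGB image to a compact intermediate feature map."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class CloudMultitaskHead(nn.Module):
    """Runs in the cloud: reconstructs the input and produces detection-style logits."""
    def __init__(self, channels=64, num_classes=20):
        super().__init__()
        self.reconstruct = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.detect = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, num_classes),
        )
    def forward(self, feat):
        return self.reconstruct(feat), self.detect(feat)

def compress_features(feat, bits=8):
    """Lossy stand-in for feature compression: uniform quantization to `bits` levels."""
    lo, hi = feat.min(), feat.max()
    levels = 2 ** bits - 1
    q = torch.round((feat - lo) / (hi - lo + 1e-8) * levels)  # this tensor is what gets uploaded
    return q.to(torch.uint8), lo, hi, levels

def decompress_features(q, lo, hi, levels):
    """Cloud-side dequantization of the uploaded features."""
    return q.float() / levels * (hi - lo) + lo

if __name__ == "__main__":
    image = torch.rand(1, 3, 128, 128)
    feat = MobileEncoder()(image)                       # on-device feature extraction
    q, lo, hi, levels = compress_features(feat)         # transmit features, not the raw image
    feat_hat = decompress_features(q, lo, hi, levels)
    recon, logits = CloudMultitaskHead()(feat_hat)      # one feature tensor, two tasks
    print(recon.shape, logits.shape)
```

Under this setup, only the quantized feature tensor crosses the network, and both the reconstruction branch and the detection branch consume the same decompressed features, which is the multitask property the abstract claims for the learned intermediate representation.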
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Weiqian Wang, Ping An, Chao Yang, and Xinpeng Huang "Intermediate deep-feature compression for multitasking", Proc. SPIE 11187, Optoelectronic Imaging and Multimedia Technology VI, 111870Z (18 November 2019); https://doi.org/10.1117/12.2538738
KEYWORDS
Computer programming, Clouds, Feature extraction, Image quality, Instrument modeling, Network architectures, Visual process modeling