Presentation
Model pruning using hypernetwork with partial weight training and semisupervised multitask learning (Conference Presentation)
3 October 2024
Po-Han Chen, Shih-Teng Yang, Albert Lin
Abstract
In model compression, filter pruning stands out as a pivotal technique. Its importance grows as deep learning models adopt ever larger and more complicated architectures, with massive parameter counts and high numbers of floating-point operations (FLOPs). These advanced model structures impose heavy computational demands. In this work, we introduce two novel automatic filter pruning methods to address these challenges: a semi-supervised multi-task learning (SSMTL) hypernetwork and a partial weight training hypernetwork. Both methods effectively train the hypernetwork and improve the precision of reinforcement-learning-based neural architecture search. Compared with other filter pruning methods, our approach achieves higher model accuracy at similar pruning ratios.
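For context on the filter pruning setting the abstract describes, the sketch below shows the classic L1-norm criterion: rank a convolutional layer's filters by magnitude and keep only the top fraction, shrinking both parameters and FLOPs. This is a minimal illustrative baseline, not the authors' hypernetwork-based method; the function name and the NumPy-based weight layout are assumptions for the example.

```python
import numpy as np

def prune_filters_l1(conv_weight, keep_ratio=0.5):
    """Illustrative L1-norm filter pruning (not the paper's method).

    conv_weight: array of shape (out_channels, in_channels, kH, kW).
    Returns the pruned weight tensor and the sorted indices of kept filters.
    """
    # L1 norm of each output filter (sum of absolute weights).
    norms = np.abs(conv_weight).reshape(conv_weight.shape[0], -1).sum(axis=1)
    # Number of filters to keep at the requested pruning ratio.
    n_keep = max(1, int(round(keep_ratio * conv_weight.shape[0])))
    # Keep the highest-norm filters, preserving their original order.
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return conv_weight[keep], keep

# Example: prune a layer with 8 filters of shape 3x3x3 down to 4 filters.
w = np.random.default_rng(0).normal(size=(8, 3, 3, 3))
pruned, kept = prune_filters_l1(w, keep_ratio=0.5)
```

In practice the corresponding output channels of the next layer's input must be removed as well; automatic methods like the two proposed here search over per-layer keep ratios instead of fixing a single global one.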
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Po-Han Chen, Shih-Teng Yang, and Albert Lin "Model pruning using hypernetwork with partial weight training and semisupervised multitask learning (Conference Presentation)", Proc. SPIE 13138, Applications of Machine Learning 2024, 131380R (3 October 2024); https://doi.org/10.1117/12.3027322
KEYWORDS: Education and training, Performance modeling, Tunable filters