Compared with training multiple single-scale models for different scales of image super-resolution (SR), exploiting the internal correlation among scales to aggregate multi-scale learning into a single model can substantially reduce network parameters and improve efficiency. It is therefore important to study how to take advantage of inter-scale correlation, at a fixed model size, to improve multi-scale SR performance. We combine the capture of richer contextual dependencies with convolutional response modulation to improve the discriminative ability of the model, and propose a global context-aware feature modulation network with stronger representational capability for multi-scale SR. Specifically, we first model the global context with a self-attention mechanism, which effectively extracts rich global information from convolutional features. We then modulate the feature responses using global contextual dependencies along the spatial and channel dimensions, respectively. Finally, we construct a global context-aware feature modulation block, which can be stacked to form a deep feature modulation network and applied within a unified multi-scale SR architecture to learn inter-scale correlation. Extensive experiments demonstrate that the proposed global context-aware modulation is effective, improving multi-scale SR performance both quantitatively and qualitatively.
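The core idea of modulating convolutional responses with a globally aggregated context can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the projection weights (`w_key`, `w_gate`), the single-head attention pooling, and the sigmoid channel gating are simplifying assumptions made for illustration; the paper additionally applies modulation along the spatial dimension.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_context_modulation(feat, w_key, w_gate):
    """Sketch of channel-wise global context-aware feature modulation.

    feat:   (C, H, W) convolutional feature map
    w_key:  (C,)      projection producing one attention logit per position
    w_gate: (C, C)    transform mapping the context vector to channel gates
    (All weights are hypothetical stand-ins for learned parameters.)
    """
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)                     # flatten spatial positions
    attn = softmax(w_key @ x)                      # (H*W,) global attention weights
    context = x @ attn                             # (C,) attention-pooled global context
    gate = 1.0 / (1.0 + np.exp(-(w_gate @ context)))  # sigmoid gates in (0, 1)
    return feat * gate[:, None, None]              # rescale each channel's responses

# Toy usage with random features and weights
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
out = global_context_modulation(
    feat, rng.standard_normal(8), rng.standard_normal((8, 8))
)
```

Because the gates lie in (0, 1), the block attenuates or preserves each channel based on the global context rather than on purely local statistics, which is what gives the modulation its context awareness.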