Lightweight Single-Image Super-Resolution Network with Attentive Auxiliary Feature Learning

1 Sun Yat-sen University, China
2 City University of Hong Kong, Hong Kong, China
3 Shanghai Jiao Tong University, China
4 Northwestern University, USA


Although convolutional network-based methods have boosted the performance of single-image super-resolution (SISR), their huge computation costs restrict practical applicability. In this paper, we develop a computation-efficient yet accurate network based on the proposed attentive auxiliary features (A$^2$F) for SISR. First, to exploit features from the bottom layers, the auxiliary features from all previous layers are projected into a common space. Then, to better utilize these projected auxiliary features and filter out redundant information, channel attention is employed to select the most important common features based on the current layer's feature. We incorporate these two modules into a block and implement it in a lightweight network. Experimental results on large-scale datasets demonstrate the effectiveness of the proposed model against state-of-the-art (SOTA) SR methods. Notably, with fewer than 320K parameters, A$^2$F outperforms SOTA methods at all scales, which proves its ability to better utilize the auxiliary features.


  1. We handle the super-resolution task from a new direction: we discuss the benefit brought by auxiliary features from the perspective of how to recover multi-frequency information through different layers. Accordingly, we propose the attentive auxiliary feature block, which utilizes auxiliary features from previous layers to facilitate feature learning in the current layer. Unlike other works, we apply channel attention to the dense auxiliary features rather than to the backbone features or sparse skip connections.

  2. Compared with other lightweight methods, especially those with fewer than 1000K parameters, we outperform all of them in both PSNR and SSIM while using fewer parameters, achieving an excellent trade-off between performance and model size. In general, A$^2$F achieves better efficiency than current state-of-the-art methods.

  3. We conduct a thorough ablation study to show the effectiveness of each component in the proposed attentive auxiliary feature block.
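The attentive auxiliary feature block described above can be illustrated with a short sketch. This is not the authors' released code: the class name, the use of a 1x1 convolution for the common-space projection, the squeeze-and-excitation-style channel attention, and the residual fusion are all assumptions based on the description (auxiliary features from all previous blocks are projected into a common space, then channel attention selects the most important common features given the current layer's feature).

```python
import torch
import torch.nn as nn


class AttentiveAuxiliaryFeatureBlock(nn.Module):
    """Hypothetical sketch of an A^2F block (not the official implementation).

    Auxiliary features from all previous blocks are projected into a common
    space, reweighted by channel attention, and fused with the current
    backbone feature.
    """

    def __init__(self, channels: int, num_prev: int, reduction: int = 4):
        super().__init__()
        # Project the concatenated auxiliary features into a common space.
        self.project = nn.Conv2d(channels * num_prev, channels, kernel_size=1)
        # Channel attention: global pooling -> bottleneck -> sigmoid gate.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Backbone convolutions for the current layer feature.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor, aux_feats: list) -> torch.Tensor:
        # aux_feats: feature maps from all previous blocks, each (N, C, H, W).
        aux = self.project(torch.cat(aux_feats, dim=1))
        aux = aux * self.attention(aux)  # select the important common features
        return self.body(x) + aux


if __name__ == "__main__":
    block = AttentiveAuxiliaryFeatureBlock(channels=16, num_prev=3)
    x = torch.randn(1, 16, 8, 8)
    aux = [torch.randn(1, 16, 8, 8) for _ in range(3)]
    print(block(x, aux).shape)  # spatial size and channels are preserved
```

Stacking several such blocks, where each block's output joins the auxiliary feature list consumed by all later blocks, yields the overall A$^2$F architecture shown in the figure below.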


The architecture of A$^2$F with 4 attentive auxiliary feature blocks; variants with more blocks follow the same design.


1. Visual Comparisons

Qualitative comparison over datasets for scale $\times4$. The red rectangle indicates the area of interest for zooming. Comparisons for the other two datasets can be found in the supplementary material.

2. Running Time

Running time comparison at $\times4$ scale on the Urban100 dataset. All methods are evaluated on the same machine.

3. Comparison with Other Methods

Evaluation on five datasets at scales $\times2$, $\times3$, and $\times4$. Red and blue indicate the best and second-best results in a group, respectively.


    @inproceedings{wang2020lightweight,
        author = {Wang, Xuehui and Wang, Qing and Zhao, Yuzhi and Yan, Junchi and Fan, Lei and Chen, Long},
        title = {Lightweight Single-Image Super-Resolution Network with Attentive Auxiliary Feature Learning},
        booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
        month = {November},
        year = {2020},
    }


    If you have any questions, please contact