Adaptive Patch Exiting for Scalable Single Image Super-Resolution (ECCV2022 Oral)

This repository is an official PyTorch implementation of the paper "Adaptive Patch Exiting for Scalable Single Image Super-Resolution" (ECCV2022 Oral).

Abstract

Since the future of computing is heterogeneous, scalability is a crucial problem for single image super-resolution. Recent works try to train one network that can be deployed on platforms with different capacities. However, they rely on pixel-wise sparse convolution, which is not hardware-friendly and achieves limited practical speedup. As an image can be divided into patches with various restoration difficulties, we present a scalable method based on Adaptive Patch Exiting (APE) to achieve more practical speedup. Specifically, we propose to train a regressor to predict the incremental capacity of each layer for the patch. Once the incremental capacity falls below a threshold, the patch exits at that layer. Our method can easily adjust the trade-off between performance and efficiency by changing the threshold of incremental capacity. Furthermore, we propose a novel strategy to enable the network training of our method. We conduct extensive experiments across various backbones, datasets and scaling factors to demonstrate the advantages of our method.
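The exit rule itself is compact. Below is a minimal, hypothetical PyTorch sketch of the idea; the module names, the regressor design (pooling plus a linear head) and the threshold value are illustrative assumptions, not the actual implementation in this repository:

import torch
import torch.nn as nn

class APEBlockSketch(nn.Module):
    # One residual backbone layer paired with a tiny regressor that
    # predicts the layer's incremental capacity for the current patch.
    # (Hypothetical sketch; names and regressor design are assumptions.)
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Regressor: pooled features -> scalar incremental capacity.
        self.regressor = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, 1),
        )

def ape_forward(blocks, feat, threshold):
    # Run one patch through the layers, exiting as soon as the predicted
    # incremental capacity of the next layer falls below the threshold.
    for block in blocks:
        capacity = block.regressor(feat)   # predicted gain of this layer
        if capacity.item() < threshold:    # single-patch batch, so .item()
            break                          # patch exits here; skip the rest
        feat = block.body(feat) + feat     # residual backbone layer
    return feat

blocks = nn.ModuleList([APEBlockSketch() for _ in range(8)])
patch = torch.randn(1, 64, 48, 48)         # one patch in feature space
sr_feat = ape_forward(blocks, patch, threshold=0.0)

Because the regressor is consulted before the layer body runs, an early exit skips the remaining layers' computation entirely, which is why patch-level exiting yields a more practical speedup than pixel-wise sparse convolution.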

(Figure: overview of the APE pipeline)

Dependencies

Datasets

We used the DIV2K dataset to train our models. You can download it from here (7.1 GB).

We evaluate our models in the HD scenario (DIV2K 0801-0900) and the UHD scenario (DIV8K 1401-1500).
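Evaluation on HD/UHD images is patch-based: the image is cut into patches, each patch is super-resolved (possibly exiting at a different layer), and the outputs are stitched back together. A minimal sketch of non-overlapping patch splitting and merging; the 96-pixel patch size is an arbitrary assumption, not the paper's setting:

import torch

def split_into_patches(img, patch):
    # Split a (C, H, W) image into non-overlapping (N, C, patch, patch)
    # tiles. Assumes H and W are divisible by `patch` to keep this short.
    c, h, w = img.shape
    tiles = img.unfold(1, patch, patch).unfold(2, patch, patch)
    return tiles.reshape(c, -1, patch, patch).permute(1, 0, 2, 3)

def merge_patches(tiles, h, w):
    # Inverse of split_into_patches: (N, C, p, p) tiles back to (C, H, W).
    n, c, p, _ = tiles.shape
    rows, cols = h // p, w // p
    grid = tiles.permute(1, 0, 2, 3).reshape(c, rows, cols, p, p)
    return grid.permute(0, 1, 3, 2, 4).reshape(c, h, w)

img = torch.randn(3, 480, 480)
tiles = split_into_patches(img, 96)        # 25 tiles of 3 x 96 x 96
assert torch.equal(merge_patches(tiles, 480, 480), img)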

Running the code

There are many templates in template.py. You can run one of them with:

python main.py --template xxx

The arguments are explained in options.py. Below are some example commands.

Take EDSR as an example:

Training

Train EDSR:

python main.py --template EDSR

Train EDSR-APE:

python main.py --template EDSR_APE

Testing

Test EDSR:

python main.py --template EDSR_test

Test EDSR-APE:

python main.py --template EDSR_APE_test
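At test time, the performance/efficiency trade-off is controlled by the incremental-capacity threshold; how it is exposed on the command line is defined in options.py. As a purely illustrative sketch, here is how sweeping the threshold shifts the runtime of the toy ape_forward model from the Abstract section above (the threshold values are arbitrary and the model is untrained):

import time
import torch

# Reuses APEBlockSketch and ape_forward from the sketch above.
blocks = torch.nn.ModuleList([APEBlockSketch() for _ in range(16)]).eval()
patch = torch.randn(1, 64, 48, 48)

with torch.no_grad():
    for threshold in (-1.0, 0.0, 1.0):     # arbitrary illustration values
        start = time.perf_counter()
        ape_forward(blocks, patch, threshold)
        elapsed = time.perf_counter() - start
        print(f"threshold={threshold:+.1f}  time={elapsed * 1e3:.2f} ms")

A lower threshold lets patches run through more layers (slower, higher quality); a higher threshold makes them exit earlier (faster, lower quality).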