iPASSR

PyTorch implementation of "Symmetric Parallax Attention for Stereo Image Super-Resolution", CVPRW 2021.<br>

Highlights:

1. We develop a Siamese network equipped with a bi-directional parallax attention module (biPAM) to super-resolve both the left and right images.

<p align="center"> <img src="https://raw.github.com/YingqianWang/iPASSR/master/Figs/Network.png" width="100%"></p>
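To make the core idea concrete, below is a minimal PyTorch sketch of a bi-directional parallax attention step: features from the two views are matched along the epipolar (width) dimension, and the resulting attention maps warp each view's features onto the other. The class name `BiPAM` and the 1x1-convolution projections are illustrative assumptions; the biPAM in this repository contains additional components not shown here.

```python
import torch
import torch.nn as nn


class BiPAM(nn.Module):
    """Minimal bi-directional parallax attention: match left/right features along
    the epipolar (width) dimension and warp each view's features onto the other."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat_left, feat_right):
        b, c, h, w = feat_left.shape
        q = self.query(feat_left).permute(0, 2, 3, 1).reshape(b * h, w, c)   # (B*H, W, C)
        k = self.key(feat_right).permute(0, 2, 1, 3).reshape(b * h, c, w)    # (B*H, C, W)

        # One score matrix per image row; softmax along the two different axes
        # yields the two directional parallax attention maps.
        scores = torch.bmm(q, k)                                  # (B*H, W_left, W_right)
        att_r2l = torch.softmax(scores, dim=-1)                   # M_{right->left}
        att_l2r = torch.softmax(scores.transpose(1, 2), dim=-1)   # M_{left->right}

        v_left = feat_left.permute(0, 2, 3, 1).reshape(b * h, w, c)
        v_right = feat_right.permute(0, 2, 3, 1).reshape(b * h, w, c)

        # Cross-view feature warping via the attention maps.
        right_to_left = torch.bmm(att_r2l, v_right).reshape(b, h, w, c).permute(0, 3, 1, 2)
        left_to_right = torch.bmm(att_l2r, v_left).reshape(b, h, w, c).permute(0, 3, 1, 2)
        return right_to_left, left_to_right, att_r2l, att_l2r
```

In this sketch, both directional attention maps are derived from a single score matrix per image row, which keeps the module symmetric with respect to the left and right views.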

2. We propose an inline occlusion handling scheme to deduce occlusions from parallax attention maps.

<p align="center"><img src="https://raw.github.com/YingqianWang/iPASSR/master/Figs/OcclusionDeduce.png" width="40%"><img src="https://raw.github.com/YingqianWang/iPASSR/master/Figs/OcclusionMask.png" width="55%"></p>
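Since occluded pixels in one view receive almost no attention mass when the other view is warped, an occlusion (validity) mask can be read off the parallax attention map itself. The sketch below illustrates this with a simple column-sum threshold; the function name, the threshold `tau`, and the hard binarization are assumptions for illustration, not the paper's exact inline scheme.

```python
import torch


def valid_mask_from_attention(att_r2l, height, tau=0.1):
    """att_r2l: (B*H, W, W) attention map that warps right-view features onto the left grid.

    A right-view pixel that is occluded in the left view receives almost no attention
    mass, so summing each column and thresholding yields a validity mask for the
    right image (the symmetric direction is handled the same way with att_l2r)."""
    mass = att_r2l.sum(dim=1)        # (B*H, W): total attention each right-view pixel receives
    mask = (mass > tau).float()      # 1 = visible in both views, 0 = likely occluded
    b_h, w = mask.shape
    return mask.reshape(b_h // height, height, w).unsqueeze(1)   # (B, 1, H, W)
```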

3. We design several illuminance-robust losses to enhance stereo consistency.

<p align="center"> <img src="https://raw.github.com/YingqianWang/iPASSR/master/Figs/ResLoss.png" width="100%"></p> <p align="center"> <a href="https://wyqdatabase.s3-us-west-1.amazonaws.com/iPASSR_illuminance_change.mp4"><img src="https://raw.github.com/YingqianWang/iPASSR/master/Figs/Video-illuminance.png" width="100%"></a></p><br>
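One way to make a stereo-consistency term robust to illuminance differences between the two views is to compare high-frequency residuals rather than raw intensities, restricted to non-occluded pixels. The sketch below illustrates that idea; the box-filter residual and the masked L1 form are assumptions for illustration, not the exact loss combination used in the paper.

```python
import torch
import torch.nn.functional as F


def residual(img, kernel_size=5):
    """High-frequency residual: image minus its local (box-filtered) mean."""
    pad = kernel_size // 2
    return img - F.avg_pool2d(img, kernel_size, stride=1, padding=pad)


def residual_consistency_loss(sr_left, right_warped_to_left, valid_mask):
    """L1 distance between residual maps, evaluated only where valid_mask == 1,
    so that occlusions and global illuminance shifts do not dominate the loss."""
    diff = torch.abs(residual(sr_left) - residual(right_warped_to_left))
    return (diff * valid_mask).sum() / (valid_mask.expand_as(diff).sum() + 1e-6)
```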

4. Our iPASSR significantly outperforms PASSRnet with a comparable model size.

<p align="center"> <img src="https://raw.github.com/YingqianWang/iPASSR/master/Figs/Quantitative.png" width="100%"></p> <p align="center"> <img src="https://raw.github.com/YingqianWang/iPASSR/master/Figs/2xSR.png" width="100%"></p> <p align="center"> <img src="https://raw.github.com/YingqianWang/iPASSR/master/Figs/4xSR.png" width="100%"></p> <p align="center"> <img src="https://raw.github.com/YingqianWang/iPASSR/master/Figs/RealSR.png" width="100%"></p> <p align="center"> <a href="https://wyqdatabase.s3-us-west-1.amazonaws.com/iPASSR_visual_comparison.mp4"><img src="https://raw.github.com/YingqianWang/iPASSR/master/Figs/Video-iPASSR.png" width="100%"></a></p><br>

Download the Results:

We share the quantitative and qualitative results achieved by our iPASSR on all the test sets for both 2xSR and 4xSR, so that researchers can compare their algorithms with our method without running inference. Results are available on Google Drive and Baidu Drive (Key: NUDT). <br>

Codes and Models:

Requirements:

Train:

Test:

Some Useful Resources:

Citation:

We hope this work can facilitate future research on stereo image SR. If you find this work helpful, please consider citing:

@InProceedings{iPASSR,
    author    = {Wang, Yingqian and Ying, Xinyi and Wang, Longguang and Yang, Jungang and An, Wei and Guo, Yulan},
    title     = {Symmetric Parallax Attention for Stereo Image Super-Resolution},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {766-775}
}

Contact:

Any questions regarding this work can be addressed to yingqian.wang@outlook.com.