<p align="center">StabStitch++: Unsupervised Online Video Stitching with Spatiotemporal Bidirectional Warps</p>

Introduction

Lang Nie<sup>1</sup>, Chunyu Lin<sup>1</sup>, Kang Liao<sup>2</sup>, Yun Zhang<sup>3</sup>, Shuaicheng Liu<sup>4</sup>, Yao Zhao<sup>1</sup>

<sup>1</sup> Beijing Jiaotong University {nielang, cylin, yzhao}@bjtu.edu.cn

<sup>2</sup> Nanyang Technological University

<sup>3</sup> Communication University of Zhejiang

<sup>4</sup> University of Electronic Science and Technology of China

Features

Compared with the conference version (StabStitch), the main contributions of StabStitch++ are as follows:

  1. We propose a differentiable bidirectional decomposition module that carries out bidirectional warping onto a virtual middle plane, evenly spreading the warping burden across both views. It benefits both image and video stitching, demonstrating universality and scalability (a homography-level sketch of the decomposition follows this list).

  2. A new warp smoothing model is presented to simultaneously encourage content alignment, trajectory smoothness, and online collaboration. Unlike StabStitch, which sacrifices alignment for stabilization, the new model makes no compromise and optimizes both objectives in online mode (a generic form of the smoothing energy is sketched after this list).

(Figure: the difference between StabStitch and StabStitch++.)
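
For intuition on contribution 1, here is a homography-level sketch of the bidirectional decomposition idea. The released model decomposes learned mesh/TPS warps; the matrix square root below is an illustrative analogue, not the repository's code:

```python
# Bidirectional decomposition sketch: split one homography into two
# half-warps that meet on a virtual middle plane (illustrative only).
import numpy as np
from scipy.linalg import sqrtm

def bidirectional_split(H):
    """Split H (view A -> view B) into warp_A (A -> middle plane) and
    warp_B (B -> middle plane), so that inv(warp_B) @ warp_A recovers H.
    Alignment is preserved while both views share the warping burden."""
    H = H / H[2, 2]
    H_half = np.real(sqrtm(H))          # principal square root; real for
    H_half /= H_half[2, 2]              # warps close to the identity
    warp_A = H_half                     # A -> middle plane (H^{1/2})
    warp_B = np.linalg.inv(H) @ H_half  # B -> middle plane (H^{-1/2})
    return warp_A, warp_B / warp_B[2, 2]

# Quick check with a mild synthetic homography.
H = np.array([[ 1.02,  0.01, 15.0],
              [-0.01,  0.98, -8.0],
              [ 1e-5, -2e-5,  1.0]])
wA, wB = bidirectional_split(H)
comp = np.linalg.inv(wB) @ wA
comp /= comp[2, 2]
print(np.allclose(comp, H, atol=1e-6))  # True: composition recovers H
```

Instead of dragging one view entirely onto the other, each frame is warped only "halfway", which is what spreads distortion evenly across both views.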
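Contribution 2 can be read against the classic path-smoothing energy of MeshFlow [3]. A generic causal form (not the paper's exact loss, which adds its own alignment and collaboration terms) is:

$$
\min_{\{P_t\}} \sum_t \Big( \|P_t - C_t\|^2 + \lambda \sum_{r \in \Omega_t} \|P_t - P_r\|^2 \Big),
$$

where $C_t$ is the original (shaky) warp trajectory, $P_t$ its smoothed counterpart, and $\Omega_t$ a temporal window that, in the online setting, contains only past frames.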

Performance Comparison

| Method | Alignment (PSNR/SSIM) $\uparrow$ | Stability $\downarrow$ | Distortion $\downarrow$ | Inference Speed $\uparrow$ |
| --- | --- | --- | --- | --- |
| StabStitch | 29.89/0.890 | 48.74 | 0.674 | 35.5 fps |
| StabStitch++ | 30.88/0.898 | 41.70 | 0.371 | 28.3 fps |

Performance and speed are evaluated on the StabStitch-D dataset with one RTX 4090 GPU.

Video

We have released a video of our results on YouTube.

šŸ“ Changelog

Dataset

For the StabStitch-D dataset, please refer to StabStitch.

The collected traditional datasets are available at Google Drive or Baidu Cloud (extraction code: 1234).

Code

Requirements

We implement this work with Ubuntu, an RTX4090Ti GPU, and CUDA 11. Refer to environment.yml for more details.
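
A quick sanity check that the environment resolved correctly (plain PyTorch calls, assuming environment.yml installs PyTorch with CUDA support):

```python
# Verify the CUDA build of PyTorch is visible before running the code.
import torch

print(torch.__version__)          # PyTorch version from environment.yml
print(torch.version.cuda)         # CUDA toolkit the wheel was built with
print(torch.cuda.is_available())  # should print True on the GPU machine
```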

How to run it
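
The repository's actual training/testing scripts are not reproduced here. As a structural sketch of the online mode, the loop below smooths a synthetic shaky warp trajectory using only past frames; `shaky_homography` and `smooth_online` are illustrative stand-ins, not the released API:

```python
# Structural sketch of online warp smoothing over a causal buffer.
# Everything here is synthetic/illustrative, not the repo's interface.
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

def shaky_homography(t):
    """Stand-in for the per-frame warp a spatial-warp network would
    predict: identity plus low-frequency motion and jitter."""
    H = np.eye(3)
    H[:2, 2] = 10.0 * np.sin(0.1 * t) + rng.normal(0.0, 2.0, size=2)
    return H

def smooth_online(H_t, past, lam=2.0):
    """Pull the current warp toward the mean of buffered past warps:
    a crude causal substitute for the learned warp smoothing model."""
    if not past:
        return H_t
    P = (H_t + lam * np.mean(past, axis=0)) / (1.0 + lam)
    return P / P[2, 2]

past = deque(maxlen=7)  # online mode: only past frames are visible
for t in range(100):
    P_t = smooth_online(shaky_homography(t), list(past))
    past.append(P_t)
    # a real pipeline would warp/composite frame pair t with P_t here
```

Because the buffer holds only already-processed frames, each frame is stitched as soon as it arrives, which is what makes the method online.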

Meta

If you have any questions about this project, please feel free to drop me an email.

NIE Lang -- nielang@bjtu.edu.cn

If you find this work helpful in your research, please cite the conference version:

@inproceedings{nie2025eliminating,
  title={Eliminating Warping Shakes for Unsupervised Online Video Stitching},
  author={Nie, Lang and Lin, Chunyu and Liao, Kang and Zhang, Yun and Liu, Shuaicheng and Ai, Rui and Zhao, Yao},
  booktitle={European Conference on Computer Vision},
  pages={390--407},
  year={2025},
  organization={Springer}
}

References

[1] L. Nie, C. Lin, K. Liao, Y. Zhang, S. Liu, R. Ai, and Y. Zhao. Eliminating Warping Shakes for Unsupervised Online Video Stitching. ECCV, 2024.
[2] L. Nie, C. Lin, K. Liao, S. Liu, and Y. Zhao. Parallax-Tolerant Unsupervised Deep Image Stitching. ICCV, 2023.
[3] S. Liu, P. Tan, L. Yuan, J. Sun, and B. Zeng. MeshFlow: Minimum Latency Online Video Stabilization. ECCV, 2016.