Adaptive Window Pruning for Efficient Local Motion Deblurring (LMD-ViT)

Paper | Project Page

πŸ“’ News

πŸ“· Data

The local blur mask annotations are available at this URL.

πŸ“ Model

The pretrained LMD-ViT model is available at this URL.

πŸš€ Quick Inference

Environment

Before running inference with LMD-ViT, please set up the environment on Linux:

pip install -U pip
pip install -r requirements.txt

Create a folder named "ckpt" and another folder named "val_data":

cd LMD-ViT
mkdir ckpt
mkdir val_data

Put the downloaded model checkpoint in the "ckpt" folder.

Prepare the evaluation data as ".npy" files and put them in the "val_data" folder.
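
If your test images are in a standard format such as PNG, the sketch below converts them into ".npy" files. The input folder name is hypothetical, and the array layout that test.py expects (RGB vs. BGR, dtype, normalization) is an assumption here; check the repository's data-loading code and adjust accordingly.

from pathlib import Path

import cv2
import numpy as np

src_dir = Path("my_blurred_images")  # hypothetical folder of blurred .png images
dst_dir = Path("val_data")
dst_dir.mkdir(exist_ok=True)

for img_path in sorted(src_dir.glob("*.png")):
    # cv2 loads images as BGR uint8; convert to RGB here. Whether test.py
    # expects RGB or BGR, uint8 or normalized float, is an assumption --
    # verify against the repository's data-loading code.
    img = cv2.cvtColor(cv2.imread(str(img_path)), cv2.COLOR_BGR2RGB)
    np.save(dst_dir / (img_path.stem + ".npy"), img)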

Inference

You can evaluate LMD-ViT with:

CUDA_VISIBLE_DEVICES=0 python test.py
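
The CUDA_VISIBLE_DEVICES variable selects which GPU runs the evaluation; change the index to use a different device.

If the script saves its deblurred results as ".npy" arrays (an assumption; check how test.py writes its outputs), a minimal sketch for converting them back to viewable images is:

from pathlib import Path

import cv2
import numpy as np

out_dir = Path("results")  # hypothetical output folder; check test.py
for npy_path in sorted(out_dir.glob("*.npy")):
    arr = np.load(npy_path)
    # If the outputs are floats in [0, 1] (an assumption), rescale to uint8.
    if arr.dtype != np.uint8:
        arr = (np.clip(arr, 0.0, 1.0) * 255).astype(np.uint8)
    # Assuming RGB channel order; cv2.imwrite expects BGR.
    cv2.imwrite(str(npy_path.with_suffix(".png")), cv2.cvtColor(arr, cv2.COLOR_RGB2BGR))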

πŸ“Œ TODO

πŸŽ“Citations

If our code helps your research or work, please consider citing our paper and starring this repo. The BibTeX reference is:

@inproceedings{li2024adaptive,
  title={Adaptive Window Pruning for Efficient Local Motion Deblurring},
  author={Haoying Li and Jixin Zhao and Shangchen Zhou and Huajun Feng and Chongyi Li and Chen Change Loy},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=hI18CDyadM}
}