<div align="center">

# [CVPR 2024] Film Removal

<img src="fig/image.png" alt="Problem of Film Removal" width="300">

This is the official repository for **Learning to Remove Wrinkled Transparent Film with Polarized Prior** (CVPR 2024).

Jiaqi Tang, Ruizheng Wu, Xiaogang Xu, Sixing Hu and Ying-Cong Chen*

*: Corresponding Author

Here is our Project Page!

</div>

## New Problem in Low-level Vision: Film Removal
- **Goal:** Film Removal (FR) aims to remove the interference of wrinkled transparent films and reconstruct the original information under the films.
- **Application:** This technique is used in industrial recognition systems.
## News and Updates

- **May 28, 2024.** We release the pre-trained models of Film Removal for K-fold cross-validation. Check this Google Drive link for DOWNLOAD.
- **May 06, 2024.** We release the code of Film Removal.
- **May 06, 2024.** We release the dataset of Film Removal. Check this Google Drive link for DOWNLOAD.
## Getting Started

<!-- 1. [Installation](#installation) 2. [Dataset](#dataset) 3. [Configuration](#configuration) 4. [Testing](#Testing) 5. [Training](#Training) -->

## Installation
- Python >= 3.8.2
- PyTorch >= 1.8.1
- Install [Polanalyser](https://github.com/elerac/polanalyser) for processing polarization images:

  ```shell
  pip install git+https://github.com/elerac/polanalyser
  ```

- Install the other dependencies:

  ```shell
  pip install -r requirements.txt
  ```
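As a quick sanity check of the polarization pipeline, the Stokes parameters that Polanalyser derives from the four polarized captures (0°, 45°, 90°, 135°) can also be sketched directly in NumPy. This is a minimal illustration of the underlying math, not part of the released code:

```python
import numpy as np

def stokes_from_polarized(i0, i45, i90, i135):
    """Compute linear Stokes parameters S0, S1, S2 from four polarized intensity images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                       # horizontal vs. vertical preference
    s2 = i45 - i135                     # diagonal preference
    return s0, s1, s2

def degree_of_linear_polarization(s0, s1, s2, eps=1e-8):
    """DoLP in [0, 1]; high values indicate strongly polarized light (e.g. film reflections)."""
    return np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)

# Fully polarized horizontal light: i0 = 1, i90 = 0, i45 = i135 = 0.5
i0, i45, i90, i135 = (np.full((2, 2), v) for v in (1.0, 0.5, 0.0, 0.5))
s0, s1, s2 = stokes_from_polarized(i0, i45, i90, i135)
dolp = degree_of_linear_polarization(s0, s1, s2)  # close to 1 everywhere
```

Polanalyser wraps the same computation (plus demosaicing for polarization sensors) behind its own API.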
## Dataset Preparation

- Google Drive link for DOWNLOAD of the dataset.
- Data structure: each `K*` directory contains the data for one fold of the dataset. The `GT` directory contains the ground-truth images, and the `input` directory contains the input images captured at different polarization angles.
- The dataset is organized as follows:

  ```
  ├── K1
  │   ├── GT
  │   │   └── 2DCode
  │   │       └── 1_gt_I.bmp
  │   └── input
  │       └── 2DCode
  │           ├── 1_input_0.bmp
  │           ├── 1_input_45.bmp
  │           ├── 1_input_90.bmp
  │           └── 1_input_135.bmp
  ├── K2
  │   └── ...
  ├── ...
  └── K10
      └── ...
  ```
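Given this layout, the file paths for one sample can be assembled with a small helper. This is a hypothetical sketch based only on the naming pattern shown in the tree above (the scene name `2DCode` and sample index are taken from that example):

```python
import os

POLAR_ANGLES = (0, 45, 90, 135)  # polarization angles used for the input images

def sample_paths(root, fold, scene, index):
    """Build the GT path and the four polarized input paths for one sample."""
    gt = os.path.join(root, fold, "GT", scene, f"{index}_gt_I.bmp")
    inputs = [
        os.path.join(root, fold, "input", scene, f"{index}_input_{a}.bmp")
        for a in POLAR_ANGLES
    ]
    return gt, inputs

gt, inputs = sample_paths("Dataset", "K1", "2DCode", 1)
# gt matches .../K1/GT/2DCode/1_gt_I.bmp; inputs holds the four polarized captures
```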
## Pretrained Model

- Google Drive link for downloading our pretrained models for K-fold cross-validation.
## Configuration

- The `Test_K_ford` option specifies which fold is held out for testing in the 10-fold cross-validation. The `dataroot` option points to the root directory of the dataset (e.g. `Dataset`). Other configuration settings include learning-rate schemes, loss functions, and logger options.

  ```yaml
  datasets:
    train:
      name: Reconstruction
      mode: LQGT_condition
      Test_K_ford: K10 # remove from training
      dataroot: /remote-home/share/jiaqi2/Dataset
      dataroot_ratio: ./
      use_shuffle: true
      n_workers: 0
      batch_size: 1
      GT_size: 0
      use_flip: true
      use_rot: true
      condition: image
    val:
      name: Reconstruction
      mode: LQGT_condition_Val
      Test_K_ford: K10 # for testing
      dataroot: /remote-home/share/jiaqi2/Dataset
      dataroot_ratio: ./
      condition: image
  ```
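Under the 10-fold setup, `Test_K_ford` names the single held-out fold; the remaining nine folds are used for training. That split logic can be sketched as follows (a hypothetical helper for illustration, not taken from the released code):

```python
def split_folds(test_k_ford, n_folds=10):
    """Return (train_folds, test_fold), holding out the fold named by `test_k_ford`."""
    all_folds = [f"K{i}" for i in range(1, n_folds + 1)]  # K1 .. K10
    if test_k_ford not in all_folds:
        raise ValueError(f"unknown fold: {test_k_ford}")
    train = [f for f in all_folds if f != test_k_ford]
    return train, test_k_ford

train, test = split_folds("K10")
# train -> ['K1', 'K2', ..., 'K9'], test -> 'K10'
```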
## Testing

- Modify `dataroot`, `Test_K_ford` and `pretrain_model_G` in the testing configuration, then run:

  ```shell
  python test.py -opt ./codes/options/test/test.yml
  ```

- The test results will be saved to `./results/testset_name`, including the `Restored Image` and the `Prior`.
## Training

- Modify `dataroot` and `Test_K_ford` in the training configuration, then run:

  ```shell
  python train.py -opt ./codes/options/train/train.yml
  ```

- The logs, models and training states will be saved to `./experiments/name`. You can also use `tensorboard` for monitoring by pointing it to `./tb_logger/name`.
- To restart training, add the checkpoint to the `training` configuration:

  ```yaml
  path:
    root: ./
    pretrain_model_G: .../experiments/K1/models/XX.pth
    strict_load: false
    resume_state: .../experiments/K1/training_state/XX.state
  ```
## Performance

Compared with other baselines, our model achieves state-of-the-art performance:

[Table 1] Quantitative evaluation of image reconstruction with 10-fold cross-validation.

| Methods | PSNR | SSIM |
| :-- | :--: | :--: |
| SHIQ | 21.58 | 0.7499 |
| Polar-HR | 22.19 | 0.7176 |
| Uformer | 31.68 | 0.9426 |
| Restormer | 34.32 | 0.9731 |
| **Ours** | **36.48** | **0.9824** |
[Figure 1] Qualitative evaluation of image reconstruction.

[Figure 2-3] Qualitative evaluation in industrial environments (QR reading & text OCR).
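For reference, the PSNR numbers in Table 1 follow the standard definition for 8-bit images; a minimal NumPy version (not the evaluation script from this repository):

```python
import numpy as np

def psnr(gt, pred, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images (higher is better)."""
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

gt = np.zeros((8, 8), dtype=np.uint8)
noisy = gt + 16  # uniform error of 16 gray levels -> MSE = 256, about 24.05 dB
score = psnr(gt, noisy)
```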
## Citations

The following is a BibTeX reference:

```bibtex
@inproceedings{tang2024learning,
  title     = {Learning to Remove Wrinkled Transparent Film with Polarized Prior},
  author    = {Tang, Jiaqi and Wu, Ruizheng and Xu, Xiaogang and Hu, Sixing and Chen, Ying-Cong},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024}
}
```
## Connecting with Us

If you have any questions, please feel free to email us at jtang092@connect.hkust-gz.edu.cn.
## Acknowledgment
This work is supported by the National Natural Science Foundation of China (No. 62206068) and the Natural Science Foundation of Zhejiang Province, China under No. LD24F020002.