# Trans-INR

This repository contains the official implementation for the following paper:

Transformers as Meta-Learners for Implicit Neural Representations <br> Yinbo Chen, Xiaolong Wang <br> ECCV 2022

<img src="https://user-images.githubusercontent.com/10364424/183021009-b0d15bf4-70ec-4402-8f17-0b26ecacc3f9.png" width="400">

Project page: https://yinboc.github.io/trans-inr/.

```bibtex
@inproceedings{chen2022transinr,
  title={Transformers as Meta-Learners for Implicit Neural Representations},
  author={Chen, Yinbo and Wang, Xiaolong},
  booktitle={European Conference on Computer Vision},
  year={2022},
}
```

## Reproducing Experiments

### Environment

### Data

Run `mkdir data` and put the folders of the different datasets in it.
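For example, the layout could look like the sketch below. The dataset folder names are placeholders; use the names expected by the configs in `cfgs/`.

```bash
# Create the data root and move the downloaded dataset folders into it.
# The dataset folder names below are placeholders, not the exact names used by the configs.
mkdir -p data
mv /path/to/downloaded/celeba data/celeba                  # example image dataset
mv /path/to/downloaded/shapenet_cars data/shapenet_cars    # example view-synthesis dataset
```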

### Training

Run `CUDA_VISIBLE_DEVICES=[GPU] python run_trainer.py --cfg [CONFIG]`; the config files are in `cfgs/`.
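A concrete invocation might look like the following; the config filename is a placeholder, so substitute an actual file from `cfgs/`.

```bash
# Train on GPU 0 with a config from cfgs/ (the filename below is a placeholder).
CUDA_VISIBLE_DEVICES=0 python run_trainer.py --cfg cfgs/transinr_img_celeba.yaml
```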

To enable wandb logging, fill in `wandb.yaml` (in the repository root) and add `-w` to the training command.
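For instance, assuming `wandb.yaml` has been filled in, the same placeholder run with wandb logging enabled would be:

```bash
# Same training command as above, with wandb logging turned on via -w.
CUDA_VISIBLE_DEVICES=0 python run_trainer.py --cfg cfgs/transinr_img_celeba.yaml -w
```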

When running multiple multi-GPU training processes at the same time, pass `-p` with a different value (0, 1, 2, ...) to each process so that they use different ports.
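For example, two concurrent multi-GPU runs could be launched as sketched below (config names are placeholders, and multiple GPUs are assumed to be selected via `CUDA_VISIBLE_DEVICES`):

```bash
# Two concurrent multi-GPU runs; -p gives each one a distinct port index.
CUDA_VISIBLE_DEVICES=0,1 python run_trainer.py --cfg cfgs/config_a.yaml -p 0 &
CUDA_VISIBLE_DEVICES=2,3 python run_trainer.py --cfg cfgs/config_b.yaml -p 1 &
```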

### Evaluation

For image reconstruction, test PSNR is automatically evaluated in the training script.

For view synthesis, run on a single GPU with the configs in `cfgs/nvs_eval`. To enable test-time optimization, uncomment (remove the `#` before) `tto_steps` in the configs.
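A sketch of an evaluation run, assuming the same `run_trainer.py` entry point is used with the evaluation configs (the config filename is a placeholder; see `cfgs/nvs_eval/` for the actual files):

```bash
# Evaluate view synthesis on a single GPU.
# To enable test-time optimization, first uncomment the tto_steps line in the config.
CUDA_VISIBLE_DEVICES=0 python run_trainer.py --cfg cfgs/nvs_eval/nvs_eval_example.yaml
```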