TTT-MIM: Test-Time Training with Masked Image Modeling for Denoising Distribution Shifts

This repository contains code to illustrate test-time adaptation to single images and reproduce the results of the paper: TTT-MIM: Test-Time Training with Masked Image Modeling for Denoising Distribution Shifts.

Installation

The code is written in Python and builds heavily on PyTorch. It has been developed and tested with the following packages, which can be installed with:

pip install -r requirements.txt

Usage Modes

Joint Training

We provide the pretrained model here. Alternatively, you can pretrain your own model with distributed data parallel using:

python main_joint_train.py \
--dataset imagenet --noise-mode gaussian --noise-var 0.005 \
--gpu 0 --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
[your imagenet-mini folder with train.csv and val.csv]
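With `--noise-mode gaussian --noise-var 0.005`, training images are corrupted by zero-mean Gaussian noise of variance 0.005. Below is a minimal NumPy sketch of that corruption for intuition; the function name is illustrative and the repository's actual data pipeline may differ.

```python
import numpy as np

def add_gaussian_noise(image, noise_var=0.005, seed=None):
    """Add zero-mean Gaussian noise with the given variance to an image in [0, 1]."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, np.sqrt(noise_var), size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

clean = np.full((32, 32), 0.5)  # toy flat gray image
noisy = add_gaussian_noise(clean, noise_var=0.005, seed=0)
print(noisy.shape)  # (32, 32)
```

The noise standard deviation is `sqrt(0.005) ≈ 0.07`, so on images in `[0, 1]` this is a mild but clearly visible corruption.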

Test-Time Adaptation to Single Images

Test-time adaptation is evaluated on natural and synthetic noise at different noise levels. Test results are obtained by running the method on 10 selected images from each dataset. The test images are located under `test_images/`, and the pretrained model that is adapted at test time is `model/0715_ttt_mim_unet_gn_0.005.pth.tar`. Run `ttt_mim_online.py` to adapt to single images and reproduce the test results. The example below applies our method to fastMRI with simulated noise on a single GPU.

python ttt_mim_online.py \
--dataset fastmri --noise-mode gaussian --noise-var 0.005 \
--pretrained [path of pretrained model] \
--niters 8 --lr 1e-5 --mask-ratio 0.01 --mask-patch-size 1 --denoise-loss pd \
--gpu 0 \
[fastMRI dataset folder]
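The `--mask-ratio` and `--mask-patch-size` flags control the masked image modeling objective: the image is tiled into patches and a random fraction of them is hidden before reconstruction. The helper below is a minimal sketch of such patch masking (a hypothetical function for illustration, not the repository's implementation; it assumes the image dimensions are divisible by the patch size).

```python
import numpy as np

def random_patch_mask(h, w, mask_ratio=0.01, patch_size=1, seed=None):
    """Return a boolean (h, w) mask where True marks masked pixels.

    The image is tiled into patch_size x patch_size patches and a
    mask_ratio fraction of the patches is masked at random.
    Assumes h and w are divisible by patch_size.
    """
    rng = np.random.default_rng(seed)
    gh, gw = h // patch_size, w // patch_size
    n_patches = gh * gw
    n_masked = int(round(mask_ratio * n_patches))
    grid = np.zeros(n_patches, dtype=bool)
    grid[rng.choice(n_patches, size=n_masked, replace=False)] = True
    grid = grid.reshape(gh, gw)
    # Expand each patch decision to its patch_size x patch_size pixel block.
    return grid.repeat(patch_size, axis=0).repeat(patch_size, axis=1)

mask = random_patch_mask(28, 28, mask_ratio=0.3, patch_size=14)
print(mask.shape, mask.mean())  # fraction of masked pixels ≈ mask_ratio
```

A low mask ratio with patch size 1 (as in the fastMRI example above) hides scattered single pixels, while larger ratios and patch sizes hide whole regions.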

Alternatively, you can use the notebook `TTT_MIM.ipynb`, which can be run on Google Colab.

Parameters

Hyperparameters

Options

Reproducing the results in Table 1

The exact hyperparameters to obtain the results in Table 1 are:

| | SIDD | PolyU | FMDD | CT | FastMRI | G0.01 | G0.02 | SP | Poisson |
|---|---|---|---|---|---|---|---|---|---|
| iteration number | 8 | 8 | 8 | 8 | 8 | 8 | 8 | 8 | 8 |
| learning rate | 1e-4 | 5e-5 | 5e-6 | 1e-5 | 1e-4 | 1e-5 | 5e-5 | 5e-5 | 1e-6 |
| mask ratio | 0.3 | 0.4 | 0.5 | 0.4 | 0.01 | 0.4 | 0.4 | 0.4 | 0.1 |
| mask patch size | 14 | 4 | 4 | 4 | 1 | 4 | 4 | 4 | 4 |
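For scripted reproduction, these per-dataset settings can be encoded as a lookup that builds the corresponding `ttt_mim_online.py` flags. This dictionary and helper are a convenience sketch, not part of the repository.

```python
# Per-test-set hyperparameters from the table above (hypothetical helper, not in the repo).
TABLE1 = {
    "SIDD":    dict(niters=8, lr=1e-4, mask_ratio=0.3,  mask_patch_size=14),
    "PolyU":   dict(niters=8, lr=5e-5, mask_ratio=0.4,  mask_patch_size=4),
    "FMDD":    dict(niters=8, lr=5e-6, mask_ratio=0.5,  mask_patch_size=4),
    "CT":      dict(niters=8, lr=1e-5, mask_ratio=0.4,  mask_patch_size=4),
    "FastMRI": dict(niters=8, lr=1e-4, mask_ratio=0.01, mask_patch_size=1),
    "G0.01":   dict(niters=8, lr=1e-5, mask_ratio=0.4,  mask_patch_size=4),
    "G0.02":   dict(niters=8, lr=5e-5, mask_ratio=0.4,  mask_patch_size=4),
    "SP":      dict(niters=8, lr=5e-5, mask_ratio=0.4,  mask_patch_size=4),
    "Poisson": dict(niters=8, lr=1e-6, mask_ratio=0.1,  mask_patch_size=4),
}

def cli_args(name):
    """Build the ttt_mim_online.py flag string for one test set."""
    h = TABLE1[name]
    return (f"--niters {h['niters']} --lr {h['lr']} "
            f"--mask-ratio {h['mask_ratio']} --mask-patch-size {h['mask_patch_size']}")

print(cli_args("FastMRI"))
```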

Test-Time Adaptation to Batches of Images

This section describes adapting the method to a batch of images instead of a single image.

Here is an example to apply our method on SIDD with distributed data parallel.

python ttt_mim.py \
--dataset sidd \
--pretrained [path of pretrained model] \
--nepochs 20 --lr 1e-4 --batch-adapt 20 --mask-ratio 0.3 --mask-patch-size 14 \
--gpu 0 --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
[SIDD dataset folder]
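Conceptually, batch adaptation repeats a simple loop for `--nepochs`: mask part of the noisy input, reconstruct it from the visible context, and take a gradient step on the masked reconstruction error. The toy NumPy sketch below mimics only that loop structure with a single-parameter model and a hand-derived gradient; the real model is a U-Net trained by an optimizer, and every name here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the denoiser: predict each masked pixel as w times the
# mean of the visible pixels. This only illustrates the adaptation loop,
# not the actual network.
w = 0.2
y = rng.normal(1.0, 0.1, size=(20, 8, 8))  # a batch of 20 noisy 8x8 crops

nepochs, lr, mask_ratio = 20, 0.1, 0.3
for _ in range(nepochs):
    mask = rng.random(y.shape) < mask_ratio   # hide ~30% of the pixels
    c = y[~mask].mean()                       # context from visible pixels
    err = w * c - y[mask]                     # error on the masked pixels only
    grad = 2.0 * np.mean(err) * c             # d/dw of the masked MSE
    w -= lr * grad                            # one self-supervised update

print(round(w, 2))  # w moves toward the value that best reconstructs masked pixels
```

No clean targets are used anywhere in the loop: the supervision signal comes entirely from the noisy batch itself, which is what makes the adaptation possible at test time.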

Citation

@InProceedings{TTT_Image_Denoising,
  author = {Mansour, Youssef and Zhong, Xuyang and Caglar, Serdar and Heckel, Reinhard},
  title = {TTT-MIM: Test-Time Training with Masked Image Modeling for Denoising Distribution Shifts},
  year = {2025},
  booktitle = {European Conference on Computer Vision 2024 (ECCV)}
}