ELIC: Efficient Learned Image Compression with Unevenly Grouped Space-Channel Contextual Adaptive Coding

A PyTorch implementation of "ELIC: Efficient Learned Image Compression with Unevenly Grouped Space-Channel Contextual Adaptive Coding".

Note that this is not an official implementation.

More details can be found in the following paper:

@inproceedings{he2022elic,
  title={Elic: Efficient learned image compression with unevenly grouped space-channel contextual adaptive coding},
  author={He, Dailan and Yang, Ziming and Peng, Weikun and Ma, Rui and Qin, Hongwei and Wang, Yan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5718--5727},
  year={2022}
}

Related links

Available Checkpoints

lambda | Link
0.45   | 0.45
0.15   | 0.15
0.032  | 0.032
0.016  | 0.0016
0.008  | 0.008
0.004  | 0.004
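
A downloaded checkpoint can be inspected before use as sketched below. This is a minimal sketch that assumes a CompressAI-style checkpoint layout with a "state_dict" entry; check the actual keys if the file is laid out differently.

    # Minimal sketch; the "state_dict" key is an assumption about the checkpoint layout.
    import torch

    ckpt = torch.load("ELIC_0450_ft_3980_Plateau.pth.tar", map_location="cpu")
    print(list(ckpt.keys()))                     # e.g. epoch, state_dict, optimizer, ...
    state_dict = ckpt.get("state_dict", ckpt)
    print(sum(p.numel() for p in state_dict.values()), "parameters")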

Training dataset

According to the paper, the models are trained on the 8000 largest images selected from the ImageNet dataset, so download ImageNet first.

The preprocessing and selection of the ImageNet dataset are the same as in QVRF: https://github.com/bytedance/QRAF.
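
The exact preprocessing script lives in the QRAF repository; the sketch below is only a rough illustration of the selection step, with hypothetical paths, that copies the 8000 largest-resolution images from an ImageNet folder into a training directory.

    # Rough sketch of the selection step only; the paths and the size criterion
    # (pixel count) are assumptions. Follow the QRAF repository for the actual
    # preprocessing pipeline.
    import shutil
    from pathlib import Path
    from PIL import Image

    src = Path("./ImageNet")       # hypothetical: folder of extracted ImageNet images
    dst = Path("./dataset/train")  # hypothetical: training folder passed to train.py via -d
    dst.mkdir(parents=True, exist_ok=True)

    def pixel_count(path: Path) -> int:
        with Image.open(path) as im:  # only reads the header, so this stays cheap
            w, h = im.size
        return w * h

    largest = sorted(src.rglob("*.JPEG"), key=pixel_count, reverse=True)[:8000]
    for path in largest:
        shutil.copy(path, dst / path.name)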

Environment

This code is based on CompressAI. Install the dependencies:

   pip3 install compressai==1.1.5
   pip3 install thop
   pip3 install ptflops
   pip3 install timm
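
After installing, a quick import check (a minimal sketch; the version attributes used are the packages' standard ones) confirms the environment is usable:

    # Quick environment check: confirms the pinned packages import.
    import compressai
    import thop
    import ptflops
    import timm

    print("compressai:", compressai.__version__)  # expect 1.1.5 per the pin above
    print("timm:", timm.__version__)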

Usage

Train Usage

cd Code
python3 train.py -d ./dataset --N 192 --M 320 -e 4000 -lr 1e-4 -n 8 --lambda 13e-3 --batch-size 16 --test-batch-size 16 --aux-learning-rate 1e-3 --patch-size 256 256 --cuda --save --seed 1926 --clip_max_norm 1.0

In ELIC, each model is fine-tuned for 200 epochs:

python3 train.py -d ./dataset --N 192 --M 320 -e 4000 -lr 1e-4 -n 8 --lambda 13e-3 --batch-size 16 --test-batch-size 16 --aux-learning-rate 1e-3 --patch-size 256 256 --cuda --save --seed 1926 --clip_max_norm 1.0 --pretrained --checkpoint Pretrained4000epoch_checkpoint.pth.tar

Update the entropy model

python3 updata.py checkpoint -n updatacheckpoint-name
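
Updating refreshes the entropy coder's quantized CDF tables so that actual compression and decompression can be run. A minimal sketch of what such an update typically does for a CompressAI-based model is shown below; the checkpoint keys and the way the model is constructed are assumptions, and updata.py in this repository is the authoritative script.

    # Minimal sketch of an entropy-model update for a CompressAI-based model.
    # The checkpoint layout and model construction are assumptions; use updata.py
    # in this repository for the actual procedure.
    import torch

    def update_entropy_model(model, in_path, out_path):
        ckpt = torch.load(in_path, map_location="cpu")
        model.load_state_dict(ckpt.get("state_dict", ckpt))
        model.update(force=True)  # rebuild the CDF tables used by the range coder
        torch.save({"state_dict": model.state_dict()}, out_path)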

Test

python Inference.py --dataset ./dataset/Kodak --output_path ELIC_0450_ft_3980_Plateau -p ./ELIC_0450_ft_3980_Plateau.pth.tar --patch 64
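
The --patch 64 argument suggests that inputs are padded so both sides are multiples of 64 before compression. A hedged sketch of such padding, and of cropping the reconstruction back, is shown below; Inference.py is the authoritative implementation.

    # Sketch: reflect-pad an image tensor to a multiple of 64 before compression
    # and crop the reconstruction back afterwards. This mirrors what --patch
    # presumably controls; see Inference.py for the actual behaviour.
    import torch.nn.functional as F

    def pad_to_multiple(x, p=64):
        h, w = x.shape[-2:]
        new_h = (h + p - 1) // p * p
        new_w = (w + p - 1) // p * p
        padding = (0, new_w - w, 0, new_h - h)  # (left, right, top, bottom)
        return F.pad(x, padding, mode="reflect"), (h, w)

    def crop_back(x_hat, size):
        h, w = size
        return x_hat[..., :h, :w]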

RD Results

We trained the network ourselves and requested the RD points from the authors.