Awesome Compression Benchmark

Repository for the ECCV 2024 paper "BaSIC: BayesNet structure learning for computational Scalable neural Image Compression".

The code is based on FSAR.

Introduction

Despite superior rate-distortion performance over traditional codecs, Neural Image Compression (NIC) is limited by its lack of computational scalability in practical deployment. Prevailing research focuses on accelerating specific NIC modules but offers limited control over overall computational complexity. To this end, this work introduces BaSIC (BayesNet structure learning for computational Scalable neural Image Compression), a comprehensive, computationally scalable framework that affords full control over NIC processes. We learn the Bayesian network (BayesNet) structure of NIC to control both the neural network backbones and the autoregressive units. Learning the BayesNet is achieved by solving two sub-problems: learning a heterogeneous bipartite BayesNet for the inter-node structure to regulate backbone complexity, and a multipartite BayesNet for the intra-node structure to optimize parallel computation in the autoregressive units. Experiments demonstrate that our method not only enables full computational scalability with more accurate complexity control but also maintains competitive compression performance compared to other computationally scalable frameworks under equivalent computational constraints.

<img src="imgs/intro.png" width="800">
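The intra-node multipartite BayesNet above can be pictured as grouping the symbols of an autoregressive unit into dependency levels: symbols in the same level share no dependency edge, so they can be decoded in parallel. A minimal sketch of this grouping idea (the function and variable names here are our own illustration, not part of the BaSIC codebase):

```python
from collections import defaultdict

def topological_levels(edges, nodes):
    """Group DAG nodes into levels. Nodes within a level have no
    dependency edge between them, so one level can be computed in
    parallel; the number of levels is the number of serial steps."""
    indeg = {n: 0 for n in nodes}
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    level = [n for n in nodes if indeg[n] == 0]
    levels = []
    while level:
        levels.append(sorted(level))
        nxt = []
        for u in level:
            for v in adj[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        level = nxt
    return levels

# Example: a 2x2 checkerboard-style dependency, where two "anchor"
# pixels condition the two remaining pixels.
edges = [((0, 0), (0, 1)), ((0, 0), (1, 0)),
         ((1, 1), (0, 1)), ((1, 1), (1, 0))]
nodes = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(topological_levels(edges, nodes))
# → [[(0, 0), (1, 1)], [(0, 1), (1, 0)]]
```

In this toy example the four symbols collapse into two parallel decoding steps instead of four serial ones; learning the intra-node structure amounts to choosing such a partition to trade off parallelism against conditioning context.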

State

Setup

Hardware Requirements

Software Requirements

Recommended environment setup with conda:

conda create -n cbench python=3.7
conda install -c pytorch pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.2
pip install pytorch-lightning==1.5.10

or (recommended for NVIDIA RTX 20+ series GPU)

conda create -n cbench python=3.9
conda install -c pytorch pytorch==1.12.1 torchvision==0.13.1 cudatoolkit=11.3
pip install pytorch-lightning==1.7.7

or (recommended for NVIDIA RTX 40+ series GPU)

conda create -n cbench python=3.10
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install pytorch-lightning==1.9.5 numpy==1.23.5

Finally,

# if gcc version < 7
conda install -c conda-forge gcc gxx
pip install -r requirements.txt
python setup.py build develop

Known Issues

If link errors are encountered while compiling via setup.py, try uncommenting the lines containing "extra_link_args" in setup.py and rerunning the setup script.

(Optional) Adjust the environment setup for your machine

See configs/env.py

Dataset Preparation

We mainly use the following datasets in our experiments:

Code Structure

Experiments

To run any experiment (training, validation, or testing, all handled through pytorch-lightning and our BasicLosslessCompressionBenchmark):

python tools/run_benchmark.py [config_file]

You can use TensorBoard to visualize the training process:

tensorboard --logdir experiments

Experiment List

See configs.

Model Implementation List

See configs/presets.

Pretrained Models

TBA

tools/run_benchmark.py automatically looks for a config.pth in the given directory to build the benchmark. Therefore, to test a pretrained model, simply run:

python tools/run_benchmark.py [model_directory]

Citation

TBA

Contact

TBA