
Composed Image Retrieval for Training-FREE DOMain Conversion (WACV 2025) 🎨


This repository contains the official PyTorch implementation of our WACV 2025 paper: "Composed Image Retrieval for Training-FREE DOMain Conversion". [arXiv]

Overview

We introduce FREEDOM, a training-free composed image retrieval (CIR) method for domain conversion based on vision-language models (VLMs). Given an $\textcolor{orange}{image\ query}$ and a $\it{text\ query}$ that names a domain, images are retrieved that have the class of the $\textcolor{orange}{image\ query}$ and the domain of the $\it{text\ query}$. A range of applications is targeted, where classes can be defined at category level (a, b) or instance level (c), and domains can be defined as styles (a, c) or context (b). In the visualization below, for each image query, retrieved images are shown for different text queries.

<div align="center"> <img width="80%" alt="Domains" src="images/teaser.png"> </div>

Motivation

In this paper, we focus on a specific variant of composed image retrieval, namely domain conversion, where the text query defines the target domain. Unlike conventional cross-domain retrieval, where models are trained to use queries of a source domain and retrieve items from another target domain, we address a more practical, open-domain setting, where the query and database may be from any unseen domain. We target different variants of this task, where the class of the query object is defined at category-level (a, b) or instance-level (c). At the same time, the domain corresponds to descriptions of style (a, c) or context (b). Even though domain conversion is a subset of the tasks handled by existing CIR methods, the variants considered in our work reflect a more comprehensive set of applications than what was encountered in prior art.

Approach

Given a $\textcolor{orange}{query\ image}$ and a $\it{query\ text}$ indicating the target domain, proxy images are first retrieved from the query through an image-to-image search over a visual memory. Then, a set of text labels is associated with each proxy image through an image-to-text search over a textual memory. Each of the most frequent text labels is combined with the $\it{query\ text}$ in the text space, and images are retrieved from the database by text-to-image search. The resulting sets of similarities are linearly combined with the frequencies of occurrence as weights. Below: $k=4$ proxy images, $n=3$ text labels per proxy image, $m=2$ most frequent text labels.

<div align="center"> <img width="80%" alt="Method" src="images/method.png"> </div>
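
For readers who want the mechanics at a glance, the sketch below condenses this pipeline into a few lines. It assumes precomputed, L2-normalized image and text embeddings; the tensor names, the prompt template, and the `encode_text` helper are illustrative placeholders rather than the interfaces of this repository (the official implementation is in `run_retrieval.py`).

```python
# A minimal sketch of the FREEDOM pipeline described above.
# Tensor names, the prompt template, and encode_text() are illustrative assumptions.
import torch
from collections import Counter

def freedom_retrieval(query_img_feat,    # (d,)  L2-normalized query image embedding
                      domain_text,       # e.g. "sketch"
                      memory_img_feats,  # (M, d) visual memory, L2-normalized
                      corpus_text_feats, # (C, d) textual memory (label embeddings)
                      corpus_labels,     # list of C label strings
                      db_img_feats,      # (N, d) database image embeddings
                      encode_text,       # callable: str -> (d,) L2-normalized embedding
                      k=4, n=3, m=2):
    # 1) image-to-image search: k proxy images from the visual memory
    proxy_ids = (memory_img_feats @ query_img_feat).topk(k).indices

    # 2) image-to-text search: n text labels per proxy image
    label_votes = Counter()
    for pid in proxy_ids:
        label_ids = (corpus_text_feats @ memory_img_feats[pid]).topk(n).indices
        label_votes.update(corpus_labels[i] for i in label_ids.tolist())

    # 3) keep the m most frequent labels; their frequencies become the weights
    top_labels = label_votes.most_common(m)
    total = sum(freq for _, freq in top_labels)

    # 4) combine each label with the query text, run text-to-image search,
    #    and linearly combine the similarity sets with the frequency weights
    scores = torch.zeros(db_img_feats.shape[0])
    for label, freq in top_labels:
        text_feat = encode_text(f"a {domain_text} of a {label}")  # assumed template
        scores += (freq / total) * (db_img_feats @ text_feat)

    return scores.argsort(descending=True)  # database ranking for the composed query
```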

Environment

Our experiments were conducted using Python 3.10. To set up a virtual environment with venv, run:

python -m venv ~/freedom
source ~/freedom/bin/activate
pip install -r requirements.txt

To set up a Conda environment, run:

conda create --name freedom python=3.10
conda activate freedom
pip install -r requirements.txt
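
Either way, you can optionally verify that the environment sees PyTorch and a GPU (this only assumes that requirements.txt installs torch):

```python
# Optional sanity check: the experiments below expect a working GPU setup.
import torch
print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```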

Dataset

Downloading the datasets

  1. Download the ImageNet-R dataset and the validation set of ILSVRC2012 and place them in the directory data/imagenet-r.
  2. Download the LTLL dataset and place it in the directory data/ltll.
  3. Download the four domains of Mini-DomainNet (clipart, painting, real, and sketch) and place them in the directory data/minidn.
  4. Download the NICO++ dataset, specifically DG_Benchmark.zip from the Dropbox link, and place it in the directory data/nico. The starting data directory structure should look like this:
freedom/
    ├── data/
    │   ├── imagenet-r/
    │   │   ├── imagenet-r.tar
    │   │   ├── ILSVRC2012_img_val.tar
    │   │   ├── imgnet_real_query.txt
    │   │   ├── imgnet_targets.txt
    │   │   └── label_names.csv
    │   ├── ltll/
    │   │   ├── LTLL.zip
    │   │   └── full_files.csv
    │   ├── minidn/
    │   │   ├── clipart.zip
    │   │   ├── painting.zip
    │   │   ├── real.zip
    │   │   ├── sketch.zip
    │   │   ├── database_files.csv
    │   │   └── query_files.csv
    │   └── nico/
    │       ├── DG_Benchmark.zip
    │       ├── database_files.csv
    │       └── query_files.csv

Setting up the datasets

  1. To set up ImageNet-R, run:
mkdir -p ./data/imagenet_r/imagenet_val && tar -xf ./data/imagenet_r/ILSVRC2012_img_val.tar -C ./data/imagenet_r/imagenet_val
tar -xf ./data/imagenet_r/imagenet-r.tar -C ./data/imagenet_r/
python set_dataset.py --dataset imagenet_r

The set_dataset.py script creates the folder real, which contains the 200 classes of ImageNet-R taken from the ILSVRC2012 validation set, with each image placed in its corresponding class folder. It also creates the full_files.csv file needed for data loading. After that, you no longer need the imagenet_val folder.

  2. To set up LTLL, run:
unzip ./data/ltll/LTLL.zip -d ./data/ltll
python set_dataset.py --dataset ltll

The script set_dataset.py will handle some space (" ") characters in directory and file names.

  3. To set up Mini-DomainNet, run:
unzip ./data/minidn/clipart.zip -d ./data/minidn/
unzip ./data/minidn/painting.zip -d ./data/minidn/
unzip ./data/minidn/real.zip -d ./data/minidn/
unzip ./data/minidn/sketch.zip -d ./data/minidn/
  4. To set up NICO++, run:
unzip ./data/nico/DG_Benchmark.zip -d ./data/nico
unzip ./data/nico/NICO_DG_Benchmark.zip -d ./data/nico
mv ./data/nico/NICO_DG/* ./data/nico/
rmdir ./data/nico/NICO_DG
python set_dataset.py --dataset nico

The script set_dataset.py will handle some space (" ") characters in directory names.

The necessary files in the final data directory structure are the following:

freedom/
    ├── data/
    │   ├── imagenet-r/
    │   │   ├── imagenet-r/
    │   │   ├── real/
    │   │   └── full_files.csv
    │   ├── ltll/
    │   │   ├── New/
    │   │   ├── Old/
    │   │   └── full_files.csv
    │   ├── minidn/
    │   │   ├── clipart/
    │   │   ├── painting/
    │   │   ├── real/
    │   │   ├── sketch/
    │   │   ├── database_files.csv
    │   │   └── query_files.csv
    │   └── nico/
    │       ├── autumn/
    │       ├── dim/
    │       ├── grass/
    │       ├── outdoor/
    │       ├── rock/
    │       ├── water/
    │       ├── database_files.csv
    │       └── query_files.csv
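
To quickly confirm that everything landed where the experiments expect it, a small check along these lines can be run from the repository root (the paths are taken from the tree above; adjust them if your data lives elsewhere):

```python
# Optional: verify the final data layout described above.
from pathlib import Path

expected = {
    "data/imagenet-r": ["imagenet-r", "real", "full_files.csv"],
    "data/ltll": ["New", "Old", "full_files.csv"],
    "data/minidn": ["clipart", "painting", "real", "sketch",
                    "database_files.csv", "query_files.csv"],
    "data/nico": ["autumn", "dim", "grass", "outdoor", "rock", "water",
                  "database_files.csv", "query_files.csv"],
}

for root, entries in expected.items():
    for entry in entries:
        path = Path(root) / entry
        print(("OK      " if path.exists() else "MISSING ") + str(path))
```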

Experiments

Extract features

To run any experiment you first need to extract features with the create_features.py script. Both the corpus features and the features of the target dataset are needed, but for a given backbone they only have to be extracted once. Specify what to extract with --dataset, which accepts corpus, imagenet_r, nico, minidn, and ltll, and choose the --backbone, which can be either clip or siglip. You can specify the GPU ID with --gpu. For example, to run experiments on ImageNet-R with CLIP on GPU 0, extract the following features:

python create_features.py --dataset corpus --backbone clip --gpu 0
python create_features.py --dataset imagenet_r --backbone clip --gpu 0
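
If you plan to evaluate several datasets or both backbones, the same calls can be looped; the snippet below simply shells out to create_features.py with the options listed above:

```python
# Extract corpus and dataset features for one backbone in a single pass.
import subprocess

backbone = "clip"  # or "siglip"
for dataset in ["corpus", "imagenet_r", "nico", "minidn", "ltll"]:
    subprocess.run(
        ["python", "create_features.py",
         "--dataset", dataset, "--backbone", backbone, "--gpu", "0"],
        check=True,
    )
```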

Run experiments

The experiments are run through the run_retrieval.py script. You should specify:

  1. --dataset from imagenet_r, nico, minidn, and ltll.
  2. --backbone from clip and siglip.
  3. --method from freedom, image, text, sum, and product.

For example, you can run:
python run_retrieval.py --dataset imagenet_r --backbone clip --method freedom --gpu 0

Specifically for freedom experiments, you can change the hyperparameters described in the paper by specifying --kappa, --miu, and --ni.
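
For instance (the values below are illustrative placeholders, not the defaults from the paper):

python run_retrieval.py --dataset imagenet_r --backbone clip --method freedom --kappa 15 --miu 5 --ni 3 --gpu 0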

Expected results

All mAPs in this paper are calculated as in Radenović et al.
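
For reference, the snippet below is a standalone re-implementation of that average-precision computation (trapezoidal interpolation over the ranks of the relevant images); it is meant as an illustration of the metric, not as the evaluation code of this repository:

```python
# Average precision as in the revisited Oxford/Paris evaluation protocol:
# 'ranks' holds the zero-based positions of the relevant images in the
# ranked list, 'nres' is the total number of relevant images for the query.
def compute_ap(ranks, nres):
    ap = 0.0
    recall_step = 1.0 / nres
    for j, rank in enumerate(ranks):
        precision_0 = 1.0 if rank == 0 else j / rank
        precision_1 = (j + 1) / (rank + 1)
        ap += (precision_0 + precision_1) * recall_step / 2.0
    return ap

# mAP is the mean of compute_ap over all queries.
```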

The experiments conducted with the run_retrieval.py script produce the following domain conversion mAP (%) results for the datasets and methods outlined above. Running them as described, you should expect the following numbers with the CLIP backbone:

ImageNet-R

| Method | CAR | ORI | PHO | SCU | TOY | AVG |
|---|---|---|---|---|---|---|
| Text | 0.82 | 0.63 | 0.68 | 0.78 | 0.78 | 0.74 |
| Image | 4.27 | 3.12 | 0.84 | 5.86 | 5.08 | 3.84 |
| Text + Image | 6.61 | 4.45 | 2.17 | 9.18 | 8.62 | 6.21 |
| Text × Image | 8.21 | 5.62 | 6.98 | 8.95 | 9.41 | 7.83 |
| FreeDom | 35.97 | 11.80 | 27.97 | 36.58 | 37.21 | 29.91 |

MiniDomainNet

| Method | CLIP | PAINT | PHO | SKE | AVG |
|---|---|---|---|---|---|
| Text | 0.63 | 0.52 | 0.63 | 0.51 | 0.57 |
| Image | 7.15 | 7.31 | 4.38 | 7.78 | 6.66 |
| Text + Image | 9.59 | 9.97 | 9.22 | 8.53 | 9.33 |
| Text × Image | 9.01 | 8.66 | 15.87 | 5.90 | 9.86 |
| FreeDom | 41.96 | 31.65 | 41.12 | 34.36 | 37.27 |

NICO++

| Method | AUT | DIM | GRA | OUT | ROC | WAT | AVG |
|---|---|---|---|---|---|---|---|
| Text | 1.00 | 0.99 | 1.15 | 1.23 | 1.10 | 1.05 | 1.09 |
| Image | 6.45 | 4.85 | 5.67 | 7.67 | 7.53 | 8.75 | 6.82 |
| Text + Image | 8.46 | 6.58 | 9.22 | 11.91 | 11.20 | 8.41 | 9.30 |
| Text × Image | 8.24 | 6.36 | 12.11 | 12.71 | 10.46 | 8.84 | 9.79 |
| FreeDom | 24.35 | 24.41 | 30.06 | 30.51 | 26.92 | 20.37 | 26.10 |

LTLL

| Method | TODAY | ARCHIVE | AVG |
|---|---|---|---|
| Text | 5.28 | 6.16 | 5.72 |
| Image | 8.47 | 24.51 | 16.49 |
| Text + Image | 9.60 | 26.13 | 17.86 |
| Text × Image | 16.42 | 29.90 | 23.16 |
| FreeDom | 30.95 | 35.52 | 33.24 |

Acknowledgement

NTUA thanks NVIDIA for the support with the donation of GPU hardware.

License

This repository is released under the MIT license as found in the LICENSE file.

Citation

If you find this repository useful, please consider giving it a star 🌟 and citing our work:

@inproceedings{efth2025composed,
  title={Composed Image Retrieval for Training-Free Domain Conversion},
  author={Efthymiadis, Nikos and Psomas, Bill and Laskar, Zakaria and Karantzalos, Konstantinos and Avrithis, Yannis and Chum, OndΕ™ej and Tolias, Giorgos},
  booktitle={IEEE Winter Conference on Applications of Computer Vision},
  year={2025},
  organization={IEEE}
}