# Towards Resolving Propensity Contradiction in Offline Recommender Learning
## About
This repository contains the code to replicate the experiments conducted in the paper "Towards Resolving Propensity Contradiction in Offline Recommender Learning", accepted at IJCAI 2022.
If you find this code useful in your research, please cite:
```bibtex
@inproceedings{saito2022towards,
  author = {Saito, Yuta and Nomura, Masahiro},
  title = {Towards Resolving Propensity Contradiction in Offline Recommender Learning},
  booktitle = {Proceedings of the 31st International Joint Conference on Artificial Intelligence},
  pages = {xxx-xxx},
  year = {2022},
}
```
## Dependencies
- numpy==1.19.1
- pandas==1.1.2
- optuna==0.17.0
- scikit-learn==0.23.1
- tensorflow==1.15.4
- plotly==3.10.0
- pyyaml==5.1.2
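All of these pinned versions are available from PyPI and can be installed in one shot with pip; note that `tensorflow==1.15.4` requires Python 3.7 or earlier, so a matching interpreter (e.g. in a virtual environment) is assumed:

```bash
# install the pinned dependencies; tensorflow 1.x requires Python <= 3.7
pip install numpy==1.19.1 pandas==1.1.2 optuna==0.17.0 scikit-learn==0.23.1 \
    tensorflow==1.15.4 plotly==3.10.0 pyyaml==5.1.2
```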
## Datasets
To run the experiments on real-world data, the following datasets must first be prepared:
- Download the Coat dataset and put the `train.ascii` and `test.ascii` files into the `./data/coat/` directory.
- Download the Yahoo! R3 dataset and put the `train.txt` and `test.txt` files into the `./data/yahoo/` directory.
It should be noted that we use the original Yahoo! R3 and Coat datasets, as they contain missing-completely-at-random (MCAR) test data.
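After downloading, the layout under `./data/` should match the paths above; the sketch below only creates the folders, and the four data files themselves must be placed manually:

```bash
# create the expected directory structure (run from the repository root)
mkdir -p data/coat data/yahoo
# then place the downloaded files so that the tree reads:
#   data/coat/train.ascii
#   data/coat/test.ascii
#   data/yahoo/train.txt
#   data/yahoo/test.txt
```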
## Running the code
Navigate to the `src/` directory and run the following command:
```bash
for data in yahoo coat
do
  python main.py -d $data -m damf -t
done
```
This will run the experiments conducted in Section 4 of the paper. You can see the default settings used in our experiments in the `config.yaml` file.
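Since `pyyaml` is already a dependency, the defaults can be inspected quickly from the command line; this sketch assumes only that `config.yaml` is valid YAML and sits in the directory you run it from, not any particular schema:

```bash
# pretty-print the default experimental settings from config.yaml
python -c "import yaml, pprint; pprint.pprint(yaml.safe_load(open('config.yaml')))"
```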
Once the simulations of all methods have finished, you can summarize the results reported in Table 1 by running the command below in the `./src/` directory:
```bash
python summarize_results.py -d yahoo coat
```
Then, you can find the summarized results in the `./paper_results/` directory.
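Putting the steps together, an end-to-end replication (run from the repository root, with the datasets already in place under `./data/`) is simply the two commands above in sequence:

```bash
#!/bin/bash
# end-to-end replication: Section 4 experiments, then the Table 1 summary
cd src

# train and evaluate on both real-world datasets
for data in yahoo coat
do
  python main.py -d $data -m damf -t
done

# aggregate the per-dataset results into the final summary
python summarize_results.py -d yahoo coat
```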