# BM3 (WWW'23)
PyTorch implementation for "Bootstrap Latent Representations for Multi-modal Recommendation", published at WWW'23 (official ACM version).
- Trained logs & models are stored at: https://github.com/enoche/BM3/tree/master/trained-models-logs
- :twisted_rightwards_arrows: This model is integrated into the MMRec framework.
- :point_right: Check the awesome multimodal recommendation resources.
## Overview of BM3
<p> <img src="./images/bm3.png" width="800"> </p>
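For a quick intuition before reading the paper: BM3 bootstraps latent representations by perturbing embeddings with dropout to create a target view, then aligning the online view to it through a predictor with a stop-gradient, so no negative sampling is needed. Below is a minimal, illustrative sketch of that alignment loss; the class and parameter names are ours, not the repo's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BootstrapAlignLoss(nn.Module):
    """Illustrative bootstrap-style alignment loss (names are ours, not the repo's).

    A dropout-perturbed copy of the embeddings serves as the target view; its
    gradient is blocked, and the online view is pushed toward it via cosine
    similarity, so no negative samples are required.
    """

    def __init__(self, dim: int = 64, dropout: float = 0.5):
        super().__init__()
        self.predictor = nn.Linear(dim, dim)  # one-layer predictor for the online view
        self.dropout = nn.Dropout(dropout)    # perturbation that creates the target view

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        target = self.dropout(h).detach()     # perturbed target view, stop-gradient
        online = self.predictor(h)            # online view
        # 1 - cosine similarity, averaged over the batch
        return 1.0 - F.cosine_similarity(online, target, dim=-1).mean()

if __name__ == "__main__":
    loss_fn = BootstrapAlignLoss(dim=64, dropout=0.5)
    h = torch.randn(32, 64)                   # toy batch of user/item embeddings
    print(loss_fn(h).item())
```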
## Data

Download from Google Drive: Baby/Sports/Elec
The data already contains text and image features extracted with Sentence-Transformers and a CNN.
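To sanity-check a download, you can inspect the pre-extracted features. The file names below (`image_feat.npy` / `text_feat.npy`) follow the MMRec convention and are an assumption; adjust them to what your download actually contains:

```python
import numpy as np

# Assumed file names following the MMRec convention; verify against your download.
image_feat = np.load("data/baby/image_feat.npy")
text_feat = np.load("data/baby/text_feat.npy")

print("image features:", image_feat.shape)  # expected: (num_items, image_dim)
print("text features: ", text_feat.shape)   # expected: (num_items, text_dim)
```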
## How to run
- Put your downloaded data (e.g. `baby`) under the `data` dir.
- Enter the `src` folder and run with:
  `python main.py -m BM3 -d baby`
  (a scripted equivalent is sketched after this list)
- You may specify other parameters in CMD or config with `configs/model/*.yaml` and `configs/dataset/*.yaml`.
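If you prefer launching the run from a script or notebook rather than the shell, here is a minimal equivalent of the command above; it assumes you start from the repo root and the data is already in place:

```python
import subprocess

# Run BM3 on the baby dataset using the documented CLI, executed from src/.
subprocess.run(
    ["python", "main.py", "-m", "BM3", "-d", "baby"],
    cwd="src",
    check=True,  # raise if training exits with a non-zero status
)
```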
## Best hyper-parameters for reproducibility
We report the best hyper-parameters of BM3 for reproducing the results in Table III of our paper:
| Datasets | layers | dropout | reg_weight |
|----------|--------|---------|------------|
| Baby     | 1      | 0.5     | 0.1        |
| Sports   | 1      | 0.5     | 0.01       |
| Elec     | 2      | 0.3     | 0.1        |
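For convenience, the same table as a Python mapping together with the corresponding launch commands. The key names (`n_layers`, `dropout`, `reg_weight`) and the lowercase dataset ids are assumptions modeled on MMRec-style configs; check the files under `configs/model/` and `configs/dataset/` for the exact spelling.

```python
# Best hyper-parameters from the table above, keyed by dataset id.
# Key names (n_layers / dropout / reg_weight) and the lowercase ids are
# assumptions modeled on MMRec-style configs; check configs/model/*.yaml
# and configs/dataset/*.yaml for the exact spelling.
BEST_HPARAMS = {
    "baby":   {"n_layers": 1, "dropout": 0.5, "reg_weight": 0.1},
    "sports": {"n_layers": 1, "dropout": 0.5, "reg_weight": 0.01},
    "elec":   {"n_layers": 2, "dropout": 0.3, "reg_weight": 0.1},
}

if __name__ == "__main__":
    # Print the documented launch command for each dataset.
    for dataset in BEST_HPARAMS:
        print(f"python main.py -m BM3 -d {dataset}")
```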
## Citation
```bibtex
@inproceedings{zhou2023bootstrap,
  author    = {Zhou, Xin and Zhou, Hongyu and Liu, Yong and Zeng, Zhiwei and Miao, Chunyan and Wang, Pengwei and You, Yuan and Jiang, Feijun},
  title     = {Bootstrap Latent Representations for Multi-Modal Recommendation},
  booktitle = {Proceedings of the ACM Web Conference 2023},
  pages     = {845--854},
  year      = {2023}
}
```