<div align="center"> <a href="https://github.com/enoche/MultimodalRecSys"><img width="300px" height="auto" src="https://github.com/enoche/MMRec/blob/master/images/logo.png"></a> </div>

$\text{MMRec}$: A modern <ins>M</ins>ulti<ins>M</ins>odal <ins>Rec</ins>ommendation toolbox that simplifies your research.
:point_right: Check out our comprehensive survey on multimodal recommendation (arXiv:2302.04473).

:point_right: Check out the awesome multimodal recommendation resources.
## Toolbox

<p> <img src="./images/MMRec.png" width="500"> </p>

## Supported Models
Source code is available at `src/models`.
Please consider citing our papers if this framework helps your research, thanks:
```bibtex
@inproceedings{zhou2023bootstrap,
  author = {Zhou, Xin and Zhou, Hongyu and Liu, Yong and Zeng, Zhiwei and Miao, Chunyan and Wang, Pengwei and You, Yuan and Jiang, Feijun},
  title = {Bootstrap Latent Representations for Multi-Modal Recommendation},
  booktitle = {Proceedings of the ACM Web Conference 2023},
  pages = {845--854},
  year = {2023}
}
```
```bibtex
@article{zhou2023comprehensive,
  title = {A Comprehensive Survey on Multimodal Recommender Systems: Taxonomy, Evaluation, and Future Directions},
  author = {Zhou, Hongyu and Zhou, Xin and Zeng, Zhiwei and Zhang, Lingzi and Shen, Zhiqi},
  journal = {arXiv preprint arXiv:2302.04473},
  year = {2023}
}
```
```bibtex
@inproceedings{zhou2023mmrec,
  title = {{MMRec}: Simplifying Multimodal Recommendation},
  author = {Zhou, Xin},
  booktitle = {Proceedings of the 5th ACM International Conference on Multimedia in Asia Workshops},
  pages = {1--2},
  year = {2023}
}
```