Cornac
Cornac is a comparative framework for multimodal recommender systems. It focuses on making it convenient to work with models leveraging auxiliary data (e.g., item descriptive text and images, social networks, etc.). Cornac enables fast experiments and straightforward implementations of new models. It is highly compatible with existing machine learning libraries (e.g., TensorFlow, PyTorch).
Cornac is one of the frameworks recommended by ACM RecSys 2023 for the evaluation and reproducibility of recommendation algorithms.
Quick Links
Website | Documentation | Tutorials | Examples | Models | Datasets | Paper | Preferred.AI
Installation
Cornac currently supports Python 3. There are several ways to install it:
-
From PyPI (recommended):
pip3 install cornac
-
From Anaconda:
conda install cornac -c conda-forge
-
From the GitHub source (for latest updates):
pip3 install Cython numpy scipy
pip3 install git+https://github.com/PreferredAI/cornac.git
Note:
Additional dependencies required by models are listed here.
Some algorithm implementations use OpenMP to support multi-threading. For macOS users, in order to run those algorithms efficiently, you might need to install gcc from Homebrew to have an OpenMP compiler:
brew install gcc
brew link gcc
Getting started: your first Cornac experiment
<p align="center"><i>Flow of an Experiment in Cornac</i></p>

import cornac
from cornac.eval_methods import RatioSplit
from cornac.models import MF, PMF, BPR
from cornac.metrics import MAE, RMSE, Precision, Recall, NDCG, AUC, MAP
# load the built-in MovieLens 100K and split the data based on ratio
ml_100k = cornac.datasets.movielens.load_feedback()
rs = RatioSplit(data=ml_100k, test_size=0.2, rating_threshold=4.0, seed=123)
# initialize models, here we are comparing: Biased MF, PMF, and BPR
mf = MF(k=10, max_iter=25, learning_rate=0.01, lambda_reg=0.02, use_bias=True, seed=123)
pmf = PMF(k=10, max_iter=100, learning_rate=0.001, lambda_reg=0.001, seed=123)
bpr = BPR(k=10, max_iter=200, learning_rate=0.001, lambda_reg=0.01, seed=123)
models = [mf, pmf, bpr]
# define metrics to evaluate the models
metrics = [MAE(), RMSE(), Precision(k=10), Recall(k=10), NDCG(k=10), AUC(), MAP()]
# put it together in an experiment, voilà!
cornac.Experiment(eval_method=rs, models=models, metrics=metrics, user_based=True).run()
Output:
| | MAE | RMSE | AUC | MAP | NDCG@10 | Precision@10 | Recall@10 | Train (s) | Test (s) |
|-----|--------|--------|--------|--------|---------|--------------|-----------|-----------|----------|
| MF | 0.7430 | 0.8998 | 0.7445 | 0.0548 | 0.0761 | 0.0675 | 0.0463 | 0.13 | 1.57 |
| PMF | 0.7534 | 0.9138 | 0.7744 | 0.0671 | 0.0969 | 0.0813 | 0.0639 | 2.18 | 1.64 |
| BPR | N/A | N/A | 0.8695 | 0.1042 | 0.1500 | 0.1110 | 0.1195 | 3.74 | 1.49 |
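Note that BPR reports N/A for MAE and RMSE: it is a ranking model and does not predict rating values, so only ranking metrics apply. As a rough illustration of what the ranking metrics measure, the sketch below computes Precision@k and Recall@k for one user from a ranked recommendation list, where "relevant" items are those rated at or above the `rating_threshold` (4.0 in the experiment above). This is a simplified, hypothetical sketch for intuition only; Cornac's actual implementations live in `cornac.metrics`.

```python
def precision_recall_at_k(ranked_items, relevant_items, k):
    """Precision@k and Recall@k for a single user's ranked list."""
    top_k = ranked_items[:k]
    hits = sum(1 for item in top_k if item in relevant_items)
    precision = hits / k
    recall = hits / len(relevant_items) if relevant_items else 0.0
    return precision, recall

# Toy example: 10 recommended items, 4 of which the user actually liked.
ranked = ["i3", "i7", "i1", "i9", "i4", "i2", "i8", "i5", "i6", "i0"]
liked = {"i7", "i4", "i5", "i11"}
p, r = precision_recall_at_k(ranked, liked, k=10)
# 3 of the top-10 are relevant: precision = 3/10 = 0.3, recall = 3/4 = 0.75
```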
Model serving
Here, we provide a simple way to serve a Cornac model by launching a standalone web service with Flask. It is very handy for testing or creating a demo application. First, we install the dependency:
$ pip3 install Flask
Suppose we want to serve the trained BPR model from the previous example; we first need to save it:
bpr.save("save_dir", save_trainset=True)
After that, the model can be deployed easily by running Cornac serving app as follows:
$ FLASK_APP='cornac.serving.app' \
MODEL_PATH='save_dir/BPR' \
MODEL_CLASS='cornac.models.BPR' \
flask run --host localhost --port 8080
# Running on http://localhost:8080
Here we go, our model service is now ready. Let's get top-5 item recommendations for user "63":
$ curl -X GET "http://localhost:8080/recommend?uid=63&k=5&remove_seen=false"
# Response: {"recommendations": ["50", "181", "100", "258", "286"], "query": {"uid": "63", "k": 5, "remove_seen": false}}
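On the client side, the response is plain JSON, so any HTTP client can consume it. The snippet below parses the exact payload shown above using only the standard library; in a live setting, a client would obtain the same string via e.g. `urllib.request.urlopen` against the running server.

```python
import json

# The JSON payload returned by the call above; a live client would fetch it
# from http://localhost:8080/recommend?uid=63&k=5&remove_seen=false
payload = ('{"recommendations": ["50", "181", "100", "258", "286"], '
           '"query": {"uid": "63", "k": 5, "remove_seen": false}}')

response = json.loads(payload)
top_items = response["recommendations"]  # item ids, best first
query = response["query"]                # echoes the request parameters
# top_items[0] == "50"; query["k"] == 5
```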
If we want to remove items seen during training, we need to provide the TRAIN_SET, which was saved with the model earlier, when starting the serving app. We can also leverage a WSGI server for model deployment in production. Please refer to this guide for more details.
Model A/B testing
Cornac-AB is an extension of Cornac built on the Cornac Serving API. It lets you easily create and manage A/B testing experiments to better understand your models' performance with online users.
User Interaction Solution | Recommendations Dashboard | Feedback Dashboard |
---|---|---|
<img src="assets/demo.png" alt="demo" width="250"/> | <img src="assets/recommendation-dashboard.png" alt="recommendations" width="250"/> | <img src="assets/feedback-dashboard.png" alt="feedback" width="250"/> |
Efficient retrieval with ANN search
One important aspect of deploying a recommender model is efficient retrieval via Approximate Nearest Neighbor (ANN) search in vector space. Cornac integrates several vector similarity search frameworks for ease of deployment. This example demonstrates how ANN search works seamlessly with any recommender model that supports it (e.g., matrix factorization).
Supported Framework | Cornac Wrapper | Example |
---|---|---|
spotify/annoy | AnnoyANN | quick-start, deep-dive |
meta/faiss | FaissANN | quick-start, deep-dive |
nmslib/hnswlib | HNSWLibANN | quick-start, hnsw-lib, deep-dive |
google/scann | ScaNNANN | quick-start, deep-dive |
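For intuition on what these frameworks accelerate: with matrix factorization, a user's score for an item is the inner product of their latent vectors, so top-k retrieval is a maximum-inner-product search over all item vectors. The sketch below performs that search exactly by brute force, using made-up factor values; ANN frameworks like the ones above index the item vectors to approximate this result far faster at scale.

```python
def top_k_by_inner_product(user_vec, item_vecs, k):
    """Exact top-k retrieval: score every item by inner product, sort, cut."""
    scores = [
        (item_id, sum(u * v for u, v in zip(user_vec, vec)))
        for item_id, vec in item_vecs.items()
    ]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return [item_id for item_id, _ in scores[:k]]

# Hypothetical 3-dimensional latent factors for one user and three items.
user = [0.9, -0.2, 0.4]
items = {
    "a": [0.8, 0.1, 0.3],
    "b": [-0.5, 0.9, 0.0],
    "c": [0.7, -0.3, 0.6],
}
top_k_by_inner_product(user, items, k=2)  # exact answer ANN would approximate
```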
Models
The table below lists the recommendation models/algorithms featured in Cornac. Examples are provided as quick-start, showcasing an easy-to-run script, or as deep-dive, explaining the math and intuition behind each model. Why don't you join us to lengthen the list?
Resources
- Cornac Examples
- Cornac Tutorials
- RecSys Tutorials by Preferred.AI
- Running Cornac Model with Microsoft Recommenders (BPR), (BiVAE)
- Multimodal RecSys Tutorial at TheWebConf/WWW 2023, earlier version at RecSys 2021
Contributing
This project welcomes contributions and suggestions. Before contributing, please see our contribution guidelines.
Citation
If you use Cornac in a scientific publication, we would appreciate citations to the following papers:
<details> <summary><a href="http://jmlr.org/papers/v21/19-805.html">Cornac: A Comparative Framework for Multimodal Recommender Systems</a>, Salah <i>et al.</i>, Journal of Machine Learning Research, 21(95):1–5, 2020.</summary>@article{salah2020cornac,
title={Cornac: A Comparative Framework for Multimodal Recommender Systems},
author={Salah, Aghiles and Truong, Quoc-Tuan and Lauw, Hady W},
journal={Journal of Machine Learning Research},
volume={21},
number={95},
pages={1--5},
year={2020}
}
</details>
<details>
<summary><a href="https://ieeexplore.ieee.org/abstract/document/9354572">Exploring Cross-Modality Utilization in Recommender Systems</a>, Truong <i>et al.</i>, IEEE Internet Computing, 25(4):50–57, 2021.</summary>
@article{truong2021exploring,
title={Exploring Cross-Modality Utilization in Recommender Systems},
author={Truong, Quoc-Tuan and Salah, Aghiles and Tran, Thanh-Binh and Guo, Jingyao and Lauw, Hady W},
journal={IEEE Internet Computing},
year={2021},
publisher={IEEE}
}
</details>
<details>
<summary><a href="http://jmlr.org/papers/v21/19-805.html">Multi-Modal Recommender Systems: Hands-On Exploration</a>, Truong <i>et al.</i>, ACM Conference on Recommender Systems, 2021.</summary>
@inproceedings{truong2021multi,
title={Multi-modal recommender systems: Hands-on exploration},
author={Truong, Quoc-Tuan and Salah, Aghiles and Lauw, Hady},
booktitle={Fifteenth ACM Conference on Recommender Systems},
pages={834--837},
year={2021}
}
</details>