CRSLab


Paper | Docs | 中文版 (Chinese version)

CRSLab is an open-source toolkit for building Conversational Recommender Systems (CRSs), developed in Python on top of PyTorch. It provides comprehensive benchmark models and datasets, standard evaluation protocols, and an extensible framework.

<p align="center"> <img src="https://i.loli.net/2020/12/30/6TPVG4pBg2rcDf9.png" alt="CRSLab architecture" width="400"> <br> <b>Figure 1</b>: The overall framework of CRSLab </p>

Installation

CRSLab works with the following operating systems: Linux, Windows 10, and macOS.

CRSLab requires Python version 3.7 or later.

CRSLab requires PyTorch version 1.8. If you want to use CRSLab with a GPU, please ensure that your CUDA or cudatoolkit version is 10.2 or later, and use only the torch/CUDA combinations listed in the PyTorch Geometric installation documentation so that PyTorch Geometric works correctly.

Install PyTorch

Use the commands from the PyTorch "Get Started" page (or the "Previous Versions" page) to install PyTorch. For example, on Linux and Windows 10:

```bash
# CUDA 10.2
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=10.2 -c pytorch

# CUDA 11.1
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge

# CPU Only
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cpuonly -c pytorch
```

If you want to use CRSLab with GPU, make sure the following command prints True after installation:

```bash
$ python -c "import torch; print(torch.cuda.is_available())"
>>> True
```

Install PyTorch Geometric

Ensure that at least PyTorch 1.8.0 is installed:

```bash
$ python -c "import torch; print(torch.__version__)"
>>> 1.8.0
```

Find the CUDA version PyTorch was installed with:

```bash
$ python -c "import torch; print(torch.version.cuda)"
>>> 11.1
```

For Linux:

Install the relevant packages:

```bash
conda install pyg -c pyg
```

For others:

Check the PyG installation documentation to install the relevant packages.

Install CRSLab

You can install from pip:

```bash
pip install crslab
```

OR install from source:

```bash
git clone https://github.com/RUCAIBox/CRSLab && cd CRSLab
pip install -e .
```

Quick-Start

With the source code, you can use the provided script to try out the library; it runs on CPU by default:

```bash
python run_crslab.py --config config/crs/kgsf/redial.yaml
```

The system will run data preprocessing, then train, validate, and test each model in turn, and finally report the evaluation results of the specified models.

If you want to save pre-processed datasets and training results of models, you can use the following command:

```bash
python run_crslab.py --config config/crs/kgsf/redial.yaml --save_data --save_system
```

In summary, run_crslab.py supports arguments such as --config (path to the experiment configuration file), --save_data (save the pre-processed dataset), and --save_system (save the trained system).
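To make the flags above concrete, here is a minimal, self-contained sketch of how such a launcher might parse them with Python's standard argparse module. This is illustrative only, not CRSLab's actual parser; the real run_crslab.py may define additional options.

```python
import argparse

# Illustrative sketch of the flags described above; not CRSLab's actual parser.
parser = argparse.ArgumentParser(description="CRSLab launcher (sketch)")
parser.add_argument("--config", type=str, required=True,
                    help="path to a YAML experiment config")
parser.add_argument("--save_data", action="store_true",
                    help="save the pre-processed dataset")
parser.add_argument("--save_system", action="store_true",
                    help="save the trained system")

# Parse an example command line instead of sys.argv for demonstration.
args = parser.parse_args(["--config", "config/crs/kgsf/redial.yaml", "--save_data"])
print(args.config)       # config/crs/kgsf/redial.yaml
print(args.save_data)    # True
print(args.save_system)  # False
```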

Models

In CRSLab, we unify the task description of conversational recommendation into three sub-tasks, namely recommendation (recommend user-preferred items), conversation (generate proper responses), and policy (select proper interactive actions). The recommendation and conversation sub-tasks are the core of a CRS and have been studied in most existing works. The policy sub-task is needed by recent works, through which the CRS can interact with users via a purposeful strategy. In the current release, we have implemented 19 models in four categories: CRS models, recommendation models, conversation models, and policy models.

| Category | Model | Graph Neural Network? | Pre-training Model? |
|---|---|:---:|:---:|
| CRS Model | ReDial | × | × |
| CRS Model | KBRD | √ | × |
| CRS Model | KGSF | √ | × |
| CRS Model | TG-ReDial | × | √ |
| CRS Model | INSPIRED | × | √ |
| Recommendation model | Popularity | × | × |
| Recommendation model | GRU4Rec | × | × |
| Recommendation model | SASRec | × | × |
| Recommendation model | TextCNN | × | × |
| Recommendation model | R-GCN | √ | × |
| Recommendation model | BERT | × | √ |
| Conversation model | HERD | × | × |
| Conversation model | Transformer | × | × |
| Conversation model | GPT-2 | × | √ |
| Policy model | PMI | × | × |
| Policy model | MGCG | × | × |
| Policy model | Conv-BERT | × | √ |
| Policy model | Topic-BERT | × | √ |
| Policy model | Profile-BERT | × | √ |

Among them, the CRS models integrate the recommendation model and the conversation model so that the two can improve each other, while the remaining models each address a single task.

For the recommendation, conversation, and policy sub-tasks, we have implemented the following commonly used automatic evaluation metrics:

| Category | Metrics |
|---|---|
| Recommendation Metrics | Hit@{1, 10, 50}, MRR@{1, 10, 50}, NDCG@{1, 10, 50} |
| Conversation Metrics | PPL, BLEU-{1, 2, 3, 4}, Embedding Average/Extreme/Greedy, Distinct-{1, 2, 3, 4} |
| Policy Metrics | Accuracy, Hit@{1, 3, 5} |
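As a concrete reference, the ranking metrics (Hit@k, MRR@k, NDCG@k) and the Distinct-n diversity metric can be sketched in a few lines of plain Python. This is a simplified sketch, not CRSLab's internal implementation; it assumes a single ground-truth item per turn and pre-tokenized responses.

```python
import math

def hit_at_k(ranked, target, k):
    """Hit@k: 1 if the ground-truth item appears in the top-k ranked items."""
    return 1.0 if target in ranked[:k] else 0.0

def mrr_at_k(ranked, target, k):
    """MRR@k: reciprocal rank of the ground-truth item within the top-k, else 0."""
    return 1.0 / (ranked.index(target) + 1) if target in ranked[:k] else 0.0

def ndcg_at_k(ranked, target, k):
    """NDCG@k with one relevant item: 1 / log2(rank + 1) within the top-k."""
    return 1.0 / math.log2(ranked.index(target) + 2) if target in ranked[:k] else 0.0

def distinct_n(responses, n):
    """Distinct-n: unique n-grams divided by total n-grams over all responses."""
    ngrams = [tuple(toks[i:i + n])
              for toks in responses
              for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

ranked = [3, 7, 1, 9, 4]          # item ids sorted by predicted score
print(hit_at_k(ranked, 1, 10))    # 1.0 (item 1 is in the top 10)
print(ndcg_at_k(ranked, 1, 10))   # 0.5 (rank 3 -> 1 / log2(4))
print(distinct_n([["i", "like", "it"], ["i", "like", "movies"]], 2))  # 0.75
```

Averaging these per-turn scores over the test set yields the table values reported below.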

Datasets

We have collected and preprocessed 6 commonly used human-annotated datasets, and each dataset is matched with appropriate knowledge graphs (KGs), as shown below:

| Dataset | Dialogs | Utterances | Domains | Task Definition | Entity KG | Word KG |
|---|---|---|---|---|---|---|
| ReDial | 10,006 | 182,150 | Movie | -- | DBpedia | ConceptNet |
| TG-ReDial | 10,000 | 129,392 | Movie | Topic Guide | CN-DBpedia | HowNet |
| GoRecDial | 9,125 | 170,904 | Movie | Action Choice | DBpedia | ConceptNet |
| DuRecDial | 10,200 | 156,000 | Movie, Music | Goal Plan | CN-DBpedia | HowNet |
| INSPIRED | 1,001 | 35,811 | Movie | Social Strategy | DBpedia | ConceptNet |
| OpenDialKG | 13,802 | 91,209 | Movie, Book | Path Generate | DBpedia | ConceptNet |

Performance

We have trained and tested the integrated models on the TG-ReDial dataset, which is split into training, validation, and test sets with a ratio of 8:1:1. For each conversation, we start from the first utterance and generate reply utterances or recommendations in turn with our model. We evaluate on the three sub-tasks.
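The 8:1:1 split described above can be sketched as follows. This is an illustrative helper, not CRSLab's actual data loader; the function name and seed are arbitrary choices for the example.

```python
import random

def split_8_1_1(conversations, seed=2021):
    """Shuffle a list of conversations and split it into train/valid/test (8:1:1)."""
    convs = list(conversations)
    random.Random(seed).shuffle(convs)  # seeded shuffle for reproducibility
    n_train = int(len(convs) * 0.8)
    n_valid = int(len(convs) * 0.1)
    return (convs[:n_train],
            convs[n_train:n_train + n_valid],
            convs[n_train + n_valid:])

train, valid, test = split_8_1_1(range(10000))
print(len(train), len(valid), len(test))  # 8000 1000 1000
```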

Recommendation Task

| Model | Hit@1 | Hit@10 | Hit@50 | MRR@1 | MRR@10 | MRR@50 | NDCG@1 | NDCG@10 | NDCG@50 |
|---|---|---|---|---|---|---|---|---|---|
| SASRec | 0.000446 | 0.00134 | 0.0160 | 0.000446 | 0.000576 | 0.00114 | 0.000445 | 0.00075 | 0.00380 |
| TextCNN | 0.00267 | 0.0103 | 0.0236 | 0.00267 | 0.00434 | 0.00493 | 0.00267 | 0.00570 | 0.00860 |
| BERT | 0.00722 | 0.00490 | 0.0281 | 0.00722 | 0.0106 | 0.0124 | 0.00490 | 0.0147 | 0.0239 |
| KBRD | 0.00401 | 0.0254 | 0.0588 | 0.00401 | 0.00891 | 0.0103 | 0.00401 | 0.0127 | 0.0198 |
| KGSF | 0.00535 | 0.0285 | 0.0771 | 0.00535 | 0.0114 | 0.0135 | 0.00535 | 0.0154 | 0.0259 |
| TG-ReDial | 0.00793 | 0.0251 | 0.0524 | 0.00793 | 0.0122 | 0.0134 | 0.00793 | 0.0152 | 0.0211 |

Conversation Task

| Model | BLEU@1 | BLEU@2 | BLEU@3 | BLEU@4 | Dist@1 | Dist@2 | Dist@3 | Dist@4 | Average | Extreme | Greedy | PPL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| HERD | 0.120 | 0.0141 | 0.00136 | 0.000350 | 0.181 | 0.369 | 0.847 | 1.30 | 0.697 | 0.382 | 0.639 | 472 |
| Transformer | 0.266 | 0.0440 | 0.0145 | 0.00651 | 0.324 | 0.837 | 2.02 | 3.06 | 0.879 | 0.438 | 0.680 | 30.9 |
| GPT2 | 0.0858 | 0.0119 | 0.00377 | 0.0110 | 2.35 | 4.62 | 8.84 | 12.5 | 0.763 | 0.297 | 0.583 | 9.26 |
| KBRD | 0.267 | 0.0458 | 0.0134 | 0.00579 | 0.469 | 1.50 | 3.40 | 4.90 | 0.863 | 0.398 | 0.710 | 52.5 |
| KGSF | 0.383 | 0.115 | 0.0444 | 0.0200 | 0.340 | 0.910 | 3.50 | 6.20 | 0.888 | 0.477 | 0.767 | 50.1 |
| TG-ReDial | 0.125 | 0.0204 | 0.00354 | 0.000803 | 0.881 | 1.75 | 7.00 | 12.0 | 0.810 | 0.332 | 0.598 | 7.41 |

Policy Task

| Model | Hit@1 | Hit@10 | Hit@50 | MRR@1 | MRR@10 | MRR@50 | NDCG@1 | NDCG@10 | NDCG@50 |
|---|---|---|---|---|---|---|---|---|---|
| MGCG | 0.591 | 0.818 | 0.883 | 0.591 | 0.680 | 0.683 | 0.591 | 0.712 | 0.729 |
| Conv-BERT | 0.597 | 0.814 | 0.881 | 0.597 | 0.684 | 0.687 | 0.597 | 0.716 | 0.731 |
| Topic-BERT | 0.598 | 0.828 | 0.885 | 0.598 | 0.690 | 0.693 | 0.598 | 0.724 | 0.737 |
| TG-ReDial | 0.600 | 0.830 | 0.893 | 0.600 | 0.693 | 0.696 | 0.600 | 0.727 | 0.741 |

The above results were obtained with CRSLab in preliminary experiments. Note that these algorithms were implemented and tuned based on our own understanding and experience, and may not achieve their optimal performance. If you obtain better results for a specific algorithm, please kindly let us know; we will update the table once the results are verified.

Releases

| Releases | Date | Features |
|---|---|---|
| v0.1.1 | 1/4/2021 | Basic CRSLab |
| v0.1.2 | 3/28/2021 | CRSLab |

Contributions

Please let us know if you encounter a bug or have any suggestions by filing an issue.

We welcome all contributions from bug fixes to new features and extensions.

We expect all contributions to be discussed first via the issue tracker and then submitted through pull requests.

We thank the nice contributions through PRs from @shubaoyu, @ToheartZhang.

Citing

If you find CRSLab useful for your research or development, please cite our Paper:

```bibtex
@article{crslab,
    title={CRSLab: An Open-Source Toolkit for Building Conversational Recommender System},
    author={Kun Zhou and Xiaolei Wang and Yuanhang Zhou and Chenzhan Shang and Yuan Cheng and Wayne Xin Zhao and Yaliang Li and Ji-Rong Wen},
    year={2021},
    journal={arXiv preprint arXiv:2101.00939}
}
```

Team

CRSLab was developed and is maintained by the AI Box group at Renmin University of China (RUC).

License

CRSLab is released under the MIT License.