ConvLab-2

ConvLab-2 is an open-source toolkit that enables researchers to build task-oriented dialogue systems with state-of-the-art models, perform end-to-end evaluation, and diagnose the weaknesses of systems. As the successor of ConvLab, ConvLab-2 inherits ConvLab's framework but integrates more powerful dialogue models and supports more datasets. In addition, we have developed an analysis tool and an interactive tool to assist researchers in diagnosing dialogue systems. [paper]

Updates

2022.11.30:

2022.11.14:

2021.9.13:

2021.6.18:

Installation

Requires Python >= 3.6.

Clone this repository:

git clone https://github.com/thu-coai/ConvLab-2.git

Install ConvLab-2 via pip:

cd ConvLab-2
pip install -e .
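
To sanity-check the installation, a minimal import test should succeed (any ConvLab-2 import would do):

python3 -c "import convlab2"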

Tutorials

Documentation

Our documentation is available at https://thu-coai.github.io/ConvLab-2_docs/convlab2.html.

Models

We provide the following models:

For more details about these models, you can refer to the README.md under the convlab2/$module/$model/$dataset directory, e.g., convlab2/nlu/jointBERT/multiwoz/README.md.

Supported Datasets

End-to-end Performance on MultiWOZ

Notice: The results are for commits before bdc9dba (inclusive). We will update the results after improving user policy.

We perform end-to-end evaluation (1000 dialogues) on MultiWOZ using the user simulator below (a full example is available in tests/test_end2end.py):

# Build the user simulator: BERTNLU trained on system-side utterances,
# no separate DST (the rule-based user policy tracks its own state),
# and template-based NLG for the user side.
from convlab2.nlu.jointBERT.multiwoz import BERTNLU
from convlab2.policy.rule.multiwoz import RulePolicy
from convlab2.nlg.template.multiwoz import TemplateNLG
from convlab2.dialog_agent import PipelineAgent
from convlab2.util.analysis_tool.analyzer import Analyzer

user_nlu = BERTNLU(mode='sys', config_file='multiwoz_sys_context.json', model_file='https://huggingface.co/ConvLab/ConvLab-2_models/resolve/main/bert_multiwoz_sys_context.zip')
user_dst = None
user_policy = RulePolicy(character='usr')
user_nlg = TemplateNLG(is_user=True)
user_agent = PipelineAgent(user_nlu, user_dst, user_policy, user_nlg, name='user')

analyzer = Analyzer(user_agent=user_agent, dataset='multiwoz')

# set_seed seeds random, numpy, and torch (defined in tests/test_end2end.py);
# sys_agent is the system under test (see the sketch below).
set_seed(20200202)
analyzer.comprehensive_analyze(sys_agent=sys_agent, model_name='sys_agent', total_dialog=1000)
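
The sys_agent passed to comprehensive_analyze is the system under test. Below is a minimal sketch of the first configuration in the table that follows (BERTNLU + RuleDST + RulePolicy + TemplateNLG), assuming each module's default constructor loads its pre-trained MultiWOZ model:

from convlab2.nlu.jointBERT.multiwoz import BERTNLU
from convlab2.dst.rule.multiwoz import RuleDST
from convlab2.policy.rule.multiwoz import RulePolicy
from convlab2.nlg.template.multiwoz import TemplateNLG
from convlab2.dialog_agent import PipelineAgent

# System side: BERTNLU -> RuleDST -> RulePolicy -> TemplateNLG
sys_nlu = BERTNLU()
sys_dst = RuleDST()
sys_policy = RulePolicy()
sys_nlg = TemplateNLG(is_user=False)
sys_agent = PipelineAgent(sys_nlu, sys_dst, sys_policy, sys_nlg, name='sys')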

Main metrics (refer to convlab2/evaluator/multiwoz_eval.py for more details):

Performance (the first row is the default config for each module. Empty entries are set to default config.):

| NLU | DST | Policy | NLG | Complete rate | Success rate | Book rate | Inform P/R/F1 | Turn (succ/all) |
|---|---|---|---|---|---|---|---|---|
| BERTNLU | RuleDST | RulePolicy | TemplateNLG | 90.5 | 81.3 | 91.1 | 79.7/92.6/83.5 | 11.6/12.3 |
| MILU | RuleDST | RulePolicy | TemplateNLG | 93.3 | 81.8 | 93.0 | 80.4/94.7/84.8 | 11.3/12.1 |
| BERTNLU | RuleDST | RulePolicy | SCLSTM | 48.5 | 40.2 | 56.9 | 62.3/62.5/58.7 | 11.9/27.1 |
| BERTNLU | RuleDST | MLEPolicy | TemplateNLG | 42.7 | 35.9 | 17.6 | 62.8/69.8/62.9 | 12.1/24.1 |
| BERTNLU | RuleDST | PGPolicy | TemplateNLG | 37.4 | 31.7 | 17.4 | 57.4/63.7/56.9 | 11.0/25.3 |
| BERTNLU | RuleDST | PPOPolicy | TemplateNLG | 75.5 | 71.7 | 86.6 | 69.4/85.8/74.1 | 13.1/17.8 |
| BERTNLU | RuleDST | GDPLPolicy | TemplateNLG | 49.4 | 38.4 | 20.1 | 64.5/73.8/65.6 | 11.5/21.3 |
| None | TRADE | RulePolicy | TemplateNLG | 32.4 | 20.1 | 34.7 | 46.9/48.5/44.0 | 11.4/23.9 |
| None | SUMBT | RulePolicy | TemplateNLG | 34.5 | 29.4 | 62.4 | 54.1/50.3/48.3 | 11.0/28.1 |
| BERTNLU | RuleDST | MDRG | None | 21.6 | 17.8 | 31.2 | 39.9/36.3/34.8 | 15.6/30.5 |
| BERTNLU | RuleDST | LaRL | None | 34.8 | 27.0 | 29.6 | 49.1/53.6/47.8 | 13.2/24.4 |
| None | SUMBT | LaRL | None | 32.9 | 23.7 | 25.9 | 48.6/52.0/46.7 | 12.5/24.3 |
| None | None | DAMD* | None | 39.5 | 34.3 | 51.4 | 60.4/59.8/56.3 | 15.8/29.8 |

*: end-to-end models are used directly as the sys_agent.

Module Performance on MultiWOZ

NLU

By running convlab2/nlu/evaluate.py MultiWOZ $model all:

| Model | Precision | Recall | F1 |
|---|---|---|---|
| BERTNLU | 82.48 | 85.59 | 84.01 |
| MILU | 80.29 | 83.63 | 81.92 |
| SVMNLU | 74.96 | 50.74 | 60.52 |
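
For instance, to reproduce the BERTNLU row, substitute the model name directly (assuming the repository root as the working directory):

python3 convlab2/nlu/evaluate.py MultiWOZ BERTNLU all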

DST

By running convlab2/dst/evaluate.py MultiWOZ $model:

| Model | Joint accuracy | Slot accuracy | Joint F1 |
|---|---|---|---|
| MDBT | 0.06 | 0.89 | 0.43 |
| SUMBT | 0.30 | 0.96 | 0.83 |
| TRADE | 0.40 | 0.96 | 0.84 |

Policy

Notice: The results are for commits before bdc9dba (inclusive). We will update the results after improving user policy.

By running convlab2/policy/evaluate.py --model_name $model:

| Model | Task Success Rate |
|---|---|
| MLE | 0.56 |
| PG | 0.54 |
| PPO | 0.89 |
| GDPL | 0.58 |

NLG

By running convlab2/nlg/evaluate.py MultiWOZ $model sys:

| Model | Corpus BLEU-4 |
|---|---|
| Template | 0.3309 |
| SCLSTM | 0.4884 |

Translation-train SUMBT for cross-lingual DST

Train

With ConvLab-2, you can train SUMBT on a machine-translated dataset as follows:

# train.py
from sys import argv

if __name__ == "__main__":
    if len(argv) != 2:
        print('usage: python3 train.py [multiwoz|crosswoz]')
        exit(1)
    assert argv[1] in ['multiwoz', 'crosswoz']

    # Select the SUMBT tracker for the chosen machine-translated dataset:
    # Chinese MultiWOZ or English CrossWOZ.
    if argv[1] == 'multiwoz':
        from convlab2.dst.sumbt.multiwoz_zh.sumbt import SUMBTTracker as SUMBT
    elif argv[1] == 'crosswoz':
        from convlab2.dst.sumbt.crosswoz_en.sumbt import SUMBTTracker as SUMBT

    # Train SUMBT on the selected dataset.
    sumbt = SUMBT()
    sumbt.train(True)
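
A typical invocation, e.g., to train on the Chinese translation of MultiWOZ:

python3 train.py multiwoz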

Evaluate

Execute evaluate.py (under convlab2/dst/) with the following command:

python3 evaluate.py [CrossWOZ-en|MultiWOZ-zh] [val|test|human_val]
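
For example, to evaluate on the Chinese MultiWOZ validation set:

python3 evaluate.py MultiWOZ-zh val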

Evaluation results (joint accuracy) of our pre-trained models:

| Type | CrossWOZ-en | MultiWOZ-zh |
|---|---|---|
| val | 12.4% | 48.5% |
| test | 12.4% | 46.0% |
| human_val | 10.6% | 47.4% |

The human_val option evaluates the model on the validation set translated by humans.

Note: you may want to download the pre-trained BERT models and translation-train SUMBT models that we provide.

Without modifying any code, you could:

Issues

You are welcome to create an issue if you want to request a feature, report a bug or ask a general question.

Contributions

We welcome contributions from the community.

Team

ConvLab-2 is maintained and developed by Tsinghua University Conversational AI group (THU-coai) and Microsoft Research (MSR).

We would like to thank:

Yan Fang, Zhuoer Feng, Jianfeng Gao, Qihan Guo, Kaili Huang, Minlie Huang, Sungjin Lee, Bing Li, Jinchao Li, Xiang Li, Xiujun Li, Jiexi Liu, Lingxiao Luo, Wenchang Ma, Mehrad Moradshahi, Baolin Peng, Runze Liang, Ryuichi Takanobu, Hongru Wang, Jiaxin Wen, Yaoqin Zhang, Zheng Zhang, Qi Zhu, Xiaoyan Zhu.

Citing

If you use ConvLab-2 in your research, please cite:

@inproceedings{zhu2020convlab2,
    title={ConvLab-2: An Open-Source Toolkit for Building, Evaluating, and Diagnosing Dialogue Systems},
    author={Qi Zhu and Zheng Zhang and Yan Fang and Xiang Li and Ryuichi Takanobu and Jinchao Li and Baolin Peng and Jianfeng Gao and Xiaoyan Zhu and Minlie Huang},
    year={2020},
    booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
}

@inproceedings{liu2021robustness,
    title={Robustness Testing of Language Understanding in Task-Oriented Dialog},
    author={Liu, Jiexi and Takanobu, Ryuichi and Wen, Jiaxin and Wan, Dazhen and Li, Hongguang and Nie, Weiran and Li, Cheng and Peng, Wei and Huang, Minlie},
    year={2021},
    booktitle={Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
}

License

Apache License 2.0