
Overview

This is the code for our paper NSP-BERT: A Prompt-based Zero-Shot Learner Through an Original Pre-training Task —— Next Sentence Prediction. We use the sentence-level pre-training task NSP (Next Sentence Prediction) to realize prompt-learning and to perform various downstream tasks, such as single-sentence classification, sentence-pair classification, coreference resolution, cloze-style tasks, entity linking, and entity typing.

On the FewCLUE benchmark, NSP-BERT outperforms other zero-shot methods (GPT-1-zero and PET-zero) on most of these tasks and comes close to the few-shot methods. We hope NSP-BERT can serve as an unsupervised tool that assists other language tasks or models.
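
To make the mechanism concrete, below is a rough sketch of NSP-based zero-shot classification written with HuggingFace transformers purely for illustration; it is not this repository's code (which uses bert4keras and TensorFlow 1.15), and the prompt order and scoring details are simplified assumptions.

```python
# Illustrative sketch only: zero-shot classification via the pre-trained NSP head.
# This repo itself uses bert4keras + TF 1.15; model name and prompt order here are assumptions.
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')

text = 'FIFA unveils biennial World Cup plan, UEFA threatens boycott'
labels = ['entertainment', 'sports', 'music', 'games', 'economics', 'education']
prompts = ['This is {} news'.format(label) for label in labels]

# Score every (prompt, text) pair with the NSP head; the predicted label is the
# prompt that BERT judges most likely to be followed by the input text.
with torch.no_grad():
    enc = tokenizer(prompts, [text] * len(prompts), return_tensors='pt', padding=True)
    is_next_prob = model(**enc).logits.softmax(dim=-1)[:, 0]  # index 0 = "IsNext"

print(labels[int(is_next_prob.argmax())])
```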

News

2022/8/17 New version! Zero-shot and few-shot NSP-BERT, for both English and Chinese. Baselines such as fine-tuning, NSP, and PET are also implemented: https://github.com/sunyilgdx/Prompts4Keras

2022/6/16 There will be a major update soon!!!

2021/11/12 GLUE and more English datasets have been added. These datasets can be downloaded from LM-BFF. Thanks to Gao Tianyu.

2021/10/11 We uploaded the code for several English classification datasets: AG’s News, DBPedia, Amazon, and IMDB. The accuracy of NSP-BERT on these datasets is about 81.8, 70.9, 71.9, and 70.7, respectively (tested on about 1K samples). Thanks to Shengding Hu and his KnowledgeablePromptTuning.

Guide

| Section | Description |
| --- | --- |
| Environment | The required deployment environment |
| Downloads | Download links for the model checkpoints used by NSP-BERT |
| Demos | Chinese and English demos |
| Evaluation | Evaluate NSP-BERT on different downstream tasks |
| Baselines | Baseline results on several Chinese NLP datasets (partial) |
| Model Comparison | Comparison of the models published in this repository |
| Strategy Details | Some of the strategies used in the paper |
| Discussion | Discussion and directions for future work |
| Acknowledgements | Acknowledgements |

Environment

The required environment is as follows:

Python 3.6
bert4keras 0.10.6
tensorflow-gpu 1.15.0
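
As a quick sanity check of the environment, a minimal snippet such as the following can be run (assuming the packages above are installed):

```python
# Minimal environment check for the versions listed above.
import tensorflow as tf
import bert4keras

print(tf.__version__)          # expected: 1.15.0
print(bert4keras.__version__)  # expected: 0.10.6
```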

Downloads

Models

First, download the checkpoints of the different models. The vocab.txt and config.json files are already included in this repository.

| Organization | Model Name | Model Parameters | Download Link | Tips |
| --- | --- | --- | --- | --- |
| Google | BERT-uncased | L=12 H=768 A=12 102M | Tensorflow | |
| | BERT-Chinese | L=12 H=768 A=12 102M | Tensorflow | |
| HFL | BERT-wwm | L=12 H=768 A=12 102M | Tensorflow | |
| | BERT-wwm-ext | L=12 H=768 A=12 102M | Tensorflow | |
| UER | BERT-mixed-tiny | L=3 H=384 A=6 14M | Pytorch | * |
| | BERT-mixed-Small | L=6 H=512 A=8 31M | Pytorch | * |
| | BERT-mixed-Base | L=12 H=768 A=12 102M | Pytorch | * |
| | BERT-mixed-Large | L=24 H=1024 A=16 327M | Pytorch | * |

* The UER PyTorch checkpoints need to be converted to the original TensorFlow format with UER's conversion tool.

Datasets

We use the FewCLUE datasets, DuEL2.0 (CCKS2020), and the EnEval English datasets in our experiments.

| Datasets | Download Links |
| --- | --- |
| FewCLUE | https://github.com/CLUEbenchmark/FewCLUE/tree/main/datasets |
| DuEL2.0 (CCKS2020) | https://aistudio.baidu.com/aistudio/competition/detail/83 |
| EnEval | https://github.com/ShengdingHu/KnowledgeablePromptTuning |

Put the datasets into the NSP-BERT/datasets/ directory.

Demos

Use ./demos/nsp_bert_classification_demo.py (Chinese) and ./demos/nsp_bert_classification_demo_en.py (English) for your own classification tasks. Edit the labels and samples, create your own prompt templates, and the demo will classify the samples for you.

...
label_names = ['entertainment', 'sports', 'music', 'games', 'economics', 'education']
patterns = ["This is {} news".format(label) for label in label_names]
demo_data_en = ['FIFA unveils biennial World Cup plan, UEFA threatens boycott',
               'COVID vaccines hold up against severe Delta: US data',
               'Justin Drew Bieber was born on March 1, 1994 at St. ',
               'Horizon launches latest chip to take on global rivals',
               'Twitch video gamers rise up to stop ‘hate raids’']
...
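
For reference, the following is a self-contained sketch of the scoring loop that such a demo performs, written with bert4keras. The checkpoint paths, sentence order, and maxlen are assumptions for illustration; see the demo scripts for the actual implementation.

```python
# Minimal sketch of NSP-based scoring with bert4keras; the model paths below
# are assumptions, not the exact layout used by the demo scripts.
import numpy as np
from bert4keras.models import build_transformer_model
from bert4keras.tokenizers import Tokenizer

config_path = './models/google_uncased_english/bert_config.json'     # assumed path
checkpoint_path = './models/google_uncased_english/bert_model.ckpt'  # assumed path
dict_path = './models/google_uncased_english/vocab.txt'              # assumed path

tokenizer = Tokenizer(dict_path, do_lower_case=True)
# with_nsp=True keeps the pre-trained NSP head, so the model outputs the
# two-way (IsNext / NotNext) probabilities for a sentence pair.
model = build_transformer_model(config_path, checkpoint_path, with_nsp=True)

label_names = ['entertainment', 'sports', 'music', 'games', 'economics', 'education']
patterns = ["This is {} news".format(label) for label in label_names]
text = 'FIFA unveils biennial World Cup plan, UEFA threatens boycott'

scores = []
for pattern in patterns:
    # Prompt as sentence A, sample as sentence B (the paper also studies the reverse order).
    token_ids, segment_ids = tokenizer.encode(pattern, text, maxlen=256)
    probas = model.predict([np.array([token_ids]), np.array([segment_ids])])[0]
    scores.append(probas[0])  # index 0 = "IsNext" probability

print(label_names[int(np.argmax(scores))])  # -> sports
```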

Output

Sample 0:
Original Text: FIFA unveils biennial World Cup plan, UEFA threatens boycott
Predict label: sports
Logits: [0.50525445, 0.9874593, 0.40805838, 0.9633584, 0.39732504, 0.22665949]

Sample 1:
Original Text: COVID vaccines hold up against severe Delta: US data
Predict label: economics
Logits: [0.8868228, 0.9359472, 0.795272, 0.93895626, 0.99118936, 0.86002237]

Sample 2:
Original Text: Justin Drew Bieber was born on March 1, 1994 at St. 
Predict label: music
Logits: [0.98517805, 0.97300863, 0.98871416, 0.95968705, 0.9250582, 0.9211884]
...

Evaluation

Each downstream task can be evaluated by running the corresponding Python file in the project directly.

NSP-BERT
    |- datasets
        |- clue_datasets
           |- ...
        |- DuEL 2.0
           |- dev.json
           |- kb.json
        |- enEval
           |- agnews
           |- amazon
           |- dbpedia
           |- imdb
           
    |- demos
        |- nsp_bert_classification_demo.py
        |- nsp_bert_classification_demo_en.py
    |- models
        |- uer_mixed_corpus_bert_base
           |- bert_config.json
           |- vocab.txt
           |- bert_model.ckpt...
           |- ...
    |- nsp_bert_classification.py             # Single Sentence Classification
    |- nsp_bert_sentence_pair.py              # Sentence Pair Classification
    |- nsp_bert_cloze_style.py                # Cloze-style Task
    |- nsp_bert_coreference_resolution.py     # Coreference Resolution
    |- nsp_bert_entity_linking.py             # Entity Linking and Entity Typing
    |- utils.py

| Python File | Task | Datasets |
| --- | --- | --- |
| nsp_bert_classification.py | Single Sentence Classification | EPRSTMT, TNEWS, CSLDCP, IFLYTEK<br>AG’s News, DBPedia, Amazon, IMDB |
| nsp_bert_sentence_pair.py | Sentence Pair Classification | OCNLI, BUSTM, CSL |
| nsp_bert_cloze_style.py | Cloze-style Task | ChID |
| nsp_bert_coreference_resolution.py | Coreference Resolution | CLUEWSC |
| nsp_bert_entity_linking.py | Entity Linking and Entity Typing | DuEL2.0 |

Baselines

Following FewCLUE, we choose three training scenarios: fine-tuning, few-shot, and zero-shot. The baselines use Chinese-RoBERTa-Base and Chinese-GPT-1 as backbone models.

Methods

| Scenarios | Methods |
| --- | --- |
| Fine-tuning | BERT, RoBERTa |
| Few-Shot | PET, ADAPET, P-tuning, LM-BFF, EFL |
| Zero-Shot | GPT-zero, PET-zero |

Downloads

| Organization | Model Name | Model Parameters | Download Link |
| --- | --- | --- | --- |
| huawei-noah | Chinese GPT | L=12 H=768 A=12 102M | Tensorflow |
| HFL | RoBERTa-wwm-ext | L=12 H=768 A=12 102M | Tensorflow |

Model Comparison

<br/><img src="./images/main_results.png" width="800" alt="Main Results"/><br/>

Strategy Details

<br/><img src="./images/strategies.png" width="600" alt="Strategies"/><br/>

Discussion

Acknowledgements

Citation

@misc{sun2021nspbert,
    title={NSP-BERT: A Prompt-based Zero-Shot Learner Through an Original Pre-training Task--Next Sentence Prediction},
    author={Yi Sun and Yu Zheng and Chao Hao and Hangping Qiu},
    year={2021},
    eprint={2109.03564},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
@inproceedings{sun-etal-2022-nsp,
    title = "{NSP}-{BERT}: A Prompt-based Few-Shot Learner through an Original Pre-training Task {---}{---} Next Sentence Prediction",
    author = "Sun, Yi  and
      Zheng, Yu  and
      Hao, Chao  and
      Qiu, Hangping",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.286",
    pages = "3233--3250"
}