Introduction

Source code for the EMNLP 2022 long paper Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation. In this paper, we propose Adversarial Class Augmentation (ACA), a plug-and-play method that helps continual relation extraction (CRE) models learn more robust representations; ACA is orthogonal to previous work and can be combined with existing CRE models.

Environment

pip install torch==1.8.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
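
As a quick sanity check (not part of the repository), you can confirm that the pinned PyTorch build and CUDA are visible:

import torch
print(torch.__version__)          # expect 1.8.0+cu111
print(torch.cuda.is_available())  # expect True on a CUDA 11.1 machine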

Dataset

We use two datasets in our experiments: FewRel and TACRED.

Following RP-CRE and CRL, we construct 5 CRE task sequences on each of FewRel and TACRED.
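
As an illustration, each task sequence can be obtained by shuffling the relation set with a fixed random seed and splitting it into consecutive tasks (e.g., FewRel's 80 relations into 10 tasks of 8 relations each, as in RP-CRE). The sketch below is a minimal, hypothetical example of this sampling scheme, not the repository's actual code:

import random

def split_into_tasks(relation_ids, num_tasks, seed):
    # Shuffle relations with a fixed seed, then split them into equal-sized tasks.
    rels = list(relation_ids)
    random.Random(seed).shuffle(rels)
    per_task = len(rels) // num_tasks
    return [rels[i * per_task:(i + 1) * per_task] for i in range(num_tasks)]

# e.g., five FewRel task sequences (80 relations -> 10 tasks of 8), one per seed
sequences = [split_into_tasks(range(80), num_tasks=10, seed=s) for s in range(5)]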

Backbone CRE Models

Because our proposed ACA is orthogonal to previous work, we apply ACA to two strong CRE baselines, EMAR and RP-CRE. Note that the original EMAR is built on a Bi-LSTM encoder, so we re-implement EMAR with BERT.
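
Conceptually, ACA acts as a plug-in at the initial training stage of each new task: the classifier is trained over the original relations plus a set of adversarially augmented classes, which discourages the encoder from relying on shortcut features, and the augmented classes are discarded afterwards. The toy sketch below only illustrates this plug-in structure; all names and sizes are hypothetical, and it is not the repository's implementation:

import torch
import torch.nn as nn

num_rel, num_aug, feat, hidden = 4, 2, 32, 16      # hypothetical toy sizes

encoder = nn.Linear(feat, hidden)                  # stand-in for the BERT encoder
classifier = nn.Linear(hidden, num_rel + num_aug)  # label space extended with augmented classes
opt = torch.optim.SGD([*encoder.parameters(), *classifier.parameters()], lr=0.1)

x = torch.randn(8, feat)                           # toy batch of instance features
y = torch.randint(0, num_rel + num_aug, (8,))      # labels include the augmented classes
loss = nn.functional.cross_entropy(classifier(encoder(x)), y)
opt.zero_grad(); loss.backward(); opt.step()

# After this stage, only the original num_rel classes are kept.
kept_weights = classifier.weight[:num_rel].detach()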

Run

bash bash/[dataset]/[model].sh [gpu id]
    - dataset: the dataset name:
        - fewrel / tacred
    - model: the model name:
        - emar / rp_cre (the vanilla models)
        - emar_aca / rp_cre_aca (the vanilla models with our ACA)
    - gpu id: a single GPU id, e.g., 0

For example,

bash bash/fewrel/emar.sh 0
bash bash/fewrel/emar_aca.sh 0

Citation

@inproceedings{wang2022learning,
  title = "Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation",
  author = "Wang, Peiyi and Song, Yifan and Liu, Tianyu and Lin, Binghuai and Cao, Yunbo and Li, Sujian and Sui, Zhifang",
  booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
  year = "2022",
  publisher = "Association for Computational Linguistics",
}