PyContinual (An Easy and Extendible Framework for Continual Learning)
News
🔥 If you're here for the code from our latest EMNLP23 paper, please check the developing branch. We have added continual_finetune.ipynb as a self-contained example of the soft-masking scenario. It runs well without GPUs!
🔥 Our works on continual pre-training of LMs have now been accepted at EMNLP22 and ICLR23. Check out our repo for continual pre-training / post-training!
🔥 Our latest survey on continual learning for NLP is now on arXiv. Take a look if you are interested in CL and NLP!
🔥 Are you interested in additional baselines, tasks (including extraction and generation), LMs (such as RoBERTa and BART), and efficient training methods (like fp16 and multi-node)? Check out our developing branch!
Easy to Use
You can simply change the baseline, backbone, and task, and you are ready to go.
Here is an example:
```bash
python run.py \
    --bert_model 'bert-base-uncased' \
    --backbone bert_adapter \
    --baseline ctr \
    --task asc \
    --eval_batch_size 128 \
    --train_batch_size 32 \
    --scenario til_classification \
    --idrandom 0 \
    --use_predefine_args
```

- `--backbone`: the backbone model (`bert_adapter`, or others such as `bert`, `w2v`, ...)
- `--baseline`: the continual learning baseline (`ctr`, or others such as `classic`, `ewc`, ...)
- `--task`: the task/dataset (`asc`, or others such as `dsc`, `newsgroup`, ...)
- `--scenario`: the continual learning scenario (`til_classification`, or others such as `dil_classification`, ...)
- `--idrandom`: which random task sequence to use
- `--use_predefine_args`: use the pre-defined arguments
Easy to Extend
You only need to write your own `./dataloader`, `./networks`, and `./approaches`, and you are ready to go!
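To illustrate how these three pieces fit together, here is a schematic sketch of the plug-in structure. The class names and method signatures below (`MyDataloader`, `MyNetwork`, `MyApproach`, `get`, `train`) are illustrative assumptions for this sketch, not PyContinual's actual interfaces; consult run_own.md for the real ones.

```python
# A minimal sketch of the three extension points described above.
# All names here are hypothetical placeholders, not the framework's API.

class MyDataloader:
    """Would live in ./dataloader: provides one batch stream per task."""
    def __init__(self, num_tasks):
        self.num_tasks = num_tasks

    def get(self, task_id):
        # Real code would load processed data (e.g. from ./dat) for this task.
        return [([0.0, 1.0], task_id % 2)]  # one dummy (x, y) batch


class MyNetwork:
    """Would live in ./networks: the backbone plus task-specific heads."""
    def forward(self, x, task_id):
        return sum(x)  # placeholder forward pass


class MyApproach:
    """Would live in ./approaches: the continual training loop."""
    def __init__(self, network):
        self.network = network
        self.seen_tasks = []

    def train(self, task_id, loader):
        for x, y in loader.get(task_id):
            _ = self.network.forward(x, task_id)  # loss/backward would go here
        self.seen_tasks.append(task_id)


# Tasks arrive sequentially, as in any continual learning pipeline.
loader = MyDataloader(num_tasks=3)
appr = MyApproach(MyNetwork())
for t in range(loader.num_tasks):
    appr.train(t, loader)
print(appr.seen_tasks)  # -> [0, 1, 2]
```

The point of the split is that each piece can be swapped independently: a new dataset only touches the dataloader, a new backbone only touches the network, and a new CL algorithm only touches the approach.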
Performance
<p align="center"> <br> <a href="https://github.com/ZixuanKe/PyContinual"> <img src="https://github.com/ZixuanKe/PyContinual/blob/main/docs/benchmarks.png" width="500"/> </a> <br> </p>

Introduction
Continual learning approaches have recently drawn more and more attention. This repo contains PyTorch implementations of a set of (improved) SoTA methods that share the same training and evaluation pipeline.
This repository contains the code for the following papers:
- Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning, Zixuan Ke, Bing Liu, Nianzu Ma, Hu Xu and Lei Shu, NeurIPS 2021
- CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks, Zixuan Ke, Bing Liu, Hu Xu and Lei Shu, EMNLP 2021
- Adapting BERT for Continual Learning of a Sequence of Aspect Sentiment Classification Tasks, Zixuan Ke, Hu Xu and Bing Liu, NAACL 2021
- Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks, Zixuan Ke, Bing Liu and Xingchang Huang, NeurIPS 2020 (if you only care about this model, you can also check CAT)
- Continual Learning with Knowledge Transfer for Sentiment Classification, Zixuan Ke, Bing Liu, Hao Wang and Lei Shu, ECML-PKDD 2020 (if you only care about this model, you can also check LifelongSentClass)
- 40+ baselines and variants (and the list keeps "continually" growing!)
Features
- Datasets: It currently supports Language Datasets (Document/Sentence/Aspect Sentiment Classification, Natural Language Inference, Topic Classification) and Image Datasets (CelebA, CIFAR10, CIFAR100, FashionMNIST, F-EMNIST, MNIST, VLCS)
- Scenarios: It currently supports Task Incremental Learning and Domain Incremental Learning
- Training Modes: It currently supports single-GPU training. It can also be adapted for multi-node distributed training and mixed-precision training.
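For the mixed-precision option, a minimal sketch of how a PyTorch training step can be wrapped with `torch.autocast` and a gradient scaler is shown below. This is a generic AMP pattern, not code from this repo; the model, optimizer, and data are dummy stand-ins, and the snippet falls back to a plain full-precision step when no GPU is available.

```python
import torch
from torch import nn

# Dummy model, optimizer, and batch standing in for the real pipeline.
model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))

use_cuda = torch.cuda.is_available()
# GradScaler is a no-op when enabled=False, so this also runs on CPU.
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

# Forward pass in reduced precision (only when CUDA is available).
with torch.autocast(device_type="cuda" if use_cuda else "cpu",
                    enabled=use_cuda):
    loss = nn.functional.cross_entropy(model(x), y)

# Scale the loss to avoid underflow in fp16 gradients, then step.
scaler.scale(loss).backward()
scaler.step(opt)
scaler.update()
```

Multi-node distributed training would additionally wrap the model in `torch.nn.parallel.DistributedDataParallel` and launch one process per GPU.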
Architecture
- `./res`: all results are saved in this folder
- `./dat`: processed data
- `./data`: raw data
- `./dataloader`: dataloaders for the different datasets
- `./approaches`: code for training
- `./networks`: code for the network architectures
- `./data_seq`: some reference task sequences (e.g. `asc_random`)
- `./tools`: code for preparing the data
Setup
- If you want to run the existing systems, please see run_exist.md
- If you want to expand the framework with your own model, please see run_own.md
- If you want to see the full list of baselines and variants, please see baselines.md
Reference
If you use this code, parts of it, or developments from it, please consider citing the references below.
```bibtex
@inproceedings{ke2021achieve,
  title={Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning},
  author={Ke, Zixuan and Liu, Bing and Ma, Nianzu and Xu, Hu and Shu, Lei},
  booktitle={NeurIPS},
  year={2021}
}

@inproceedings{ke2021contrast,
  title={CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks},
  author={Ke, Zixuan and Liu, Bing and Xu, Hu and Shu, Lei},
  booktitle={EMNLP},
  year={2021}
}

@inproceedings{ke2021adapting,
  title={Adapting BERT for Continual Learning of a Sequence of Aspect Sentiment Classification Tasks},
  author={Ke, Zixuan and Xu, Hu and Liu, Bing},
  booktitle={NAACL},
  pages={4746--4755},
  year={2021}
}

@inproceedings{ke2020continualmixed,
  title={Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks},
  author={Ke, Zixuan and Liu, Bing and Huang, Xingchang},
  booktitle={NeurIPS},
  volume={33},
  year={2020}
}

@inproceedings{ke2020continual,
  title={Continual Learning with Knowledge Transfer for Sentiment Classification},
  author={Ke, Zixuan and Liu, Bing and Wang, Hao and Shu, Lei},
  booktitle={ECML-PKDD},
  year={2020}
}
```
Contact
Please drop an email to Zixuan Ke, Xingchang Huang, or Nianzu Ma if you have any questions regarding the code. We thank Bing Liu, Hu Xu, and Lei Shu for their valuable comments and opinions.