Autoregressive Structured Prediction with Language Models

This repository contains the PyTorch implementation and pre-trained models for ASP, described in the paper Autoregressive Structured Prediction with Language Models.

<p align="right" width="100%"> Links: <a href="https://github.com/eth-nlped">ETH-NLPED lab</a>, <a href="https://github.com/rycolab">Rycolab</a> </p>

Setup

1. Clone this repo:

git clone https://github.com/lyutyuh/ASP.git
cd ASP
export ASP=$PWD # setting environment variable

2. Prepare the environment

2.1 Create virtual environment with:

<details> <summary> <code> pip </code> </summary>

  python -m venv <path_to_venv>/asp    # create a new environment (asp)
  source <path_to_venv>/asp/bin/activate
  pip install -r requirements.txt

</details>

or

<details> <summary> <code> conda </code> </summary>

  conda env create -f environment.yml    # create a new environment (asp)

</details>

Download and preprocess the datasets

<details> <summary> <code> named entity recognition </code> </summary>

CoNLL-03

  wget https://polybox.ethz.ch/index.php/s/bFf8vJBonIT7sr8/download -O ./data/conll03_ner.zip
  unzip ./data/conll03_ner.zip -d ./data
  rm ./data/conll03_ner.zip
  python ./data/conll03_ner/conll03_to_json.py
  python ./data/t5minimize_ner.py ./data/conll03_ner ./data/conll03_ner

OntoNotes V5

Coming soon!

</details> <details> <summary> <code> end-to-end relation extraction </code> </summary>

CoNLL-04

  wget https://polybox.ethz.ch/index.php/s/Lk44AwhOeDSeZTh/download -O ./data/conll04_ere.zip
  unzip ./data/conll04_ere.zip -d ./data
  rm ./data/conll04_ere.zip
  python ./data/t5minimize_ere.py ./data/conll04_ere/ ./data/conll04_ere

ACE-05

ACE-05 is not a publicly available dataset. Please follow https://github.com/luanyi/DyGIE/tree/master/preprocessing to obtain the dataset JSON files {train,dev,test}.json and copy them to ./data/ace05_ere/.

Then:

  python ./data/ace05_ere/ace05_to_json.py
  python ./data/t5minimize_ere.py ./data/ace05_ere ./data/ace05_ere
</details> <details> <summary> <code> coreference resolution </code> </summary>

CoNLL-12 (OntoNotes)

OntoNotes is not a publicly available dataset. Please follow http://conll.cemantix.org/2012/data.html and https://catalog.ldc.upenn.edu/LDC2013T19 to obtain the files {train,dev,test}.english.v4_gold_conll and copy them to ./data/ontonotes_coref/.

Then:

  python ./data/t5minimize_coref.py ./data/ontonotes_coref/ ./data/ontonotes_coref/
</details>

Tasks

For each task in {ner,ere,coref}, run:

  python run_{task}.py <config_name> 0    # the last argument is the GPU id

Please find the available <config_name> values in the corresponding {ner,ere,coref}.conf file under configs.

Running on New Datasets

1. Prepare the data as JSON files {train,dev,test}.json under ./data/<newdataset>/, in the following format:

[{
    "tokens": ["John", "Wilkes", "Booth", ",", "who", "assassinated", "President", "Lincoln", ",", "was", "an", "actor", "."], 
    "entities": [{"type": "Peop", "start": 0, "end": 3}, {"type": "Peop", "start": 6, "end": 8}], 
    "relations": [{"type": "Kill", "head": 0, "tail": 1}] # Not necessary for NER
}, ...]
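As a reading of the example above (the field semantics are inferred from the example, not documented here): entity spans are token offsets with an inclusive start and exclusive end, and relation head/tail are indices into the entities list. A minimal sketch that recovers the surface strings under that assumption:

```python
# One record in the data format above, embedded inline for illustration.
record = {
    "tokens": ["John", "Wilkes", "Booth", ",", "who", "assassinated",
               "President", "Lincoln", ",", "was", "an", "actor", "."],
    "entities": [{"type": "Peop", "start": 0, "end": 3},
                 {"type": "Peop", "start": 6, "end": 8}],
    "relations": [{"type": "Kill", "head": 0, "tail": 1}],  # omit for NER
}

def entity_text(rec, idx):
    """Recover the surface string of the idx-th entity from its token span
    (start inclusive, end exclusive)."""
    ent = rec["entities"][idx]
    return " ".join(rec["tokens"][ent["start"]:ent["end"]])

for rel in record["relations"]:
    print(f'{entity_text(record, rel["head"])} --{rel["type"]}--> '
          f'{entity_text(record, rel["tail"])}')
```

For the record above this prints `John Wilkes Booth --Kill--> President Lincoln`.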

The dataset also needs a types file <newdataset>_types.json:

{
    "entities": {
        "Loc": {"short": "Loc", "verbose": "Location"}, 
        "Org": {"short": "Org", "verbose": "Organization"}, 
        "Peop": {"short": "Peop", "verbose":"People"}, 
        "Other": {"short": "Other", "verbose": "Other"}
    }, 
    "relations": { # Not necessary for NER
        "Work_For": {"short": "Work", "verbose": "Work for", "symmetric": false}, 
        "Kill": {"short": "Kill", "verbose": "Kill", "symmetric": false}, 
        "OrgBased_In": {"short": "OrgBI", "verbose": "Organization based in", "symmetric": false}, 
        "Live_In": {"short": "Live", "verbose": "Live in", "symmetric": false}, 
        "Located_In": {"short": "LocIn", "verbose": "Located in", "symmetric": false}
    }
}

and run

  python ./data/t5minimize_ere.py ./data/<newdataset>/ ./data/<newdataset>/
  python ./data/t5minimize_coref.py ./data/<newdataset>/ ./data/<newdataset>/
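Before running the preprocessing scripts, it can help to verify that every type used in the data files is declared in the types file. This check is illustrative, not part of the repo; the inline dictionaries stand in for the two JSON files:

```python
def undeclared_types(records, types):
    """Collect entity/relation type names that appear in the data records
    but are missing from the types mapping."""
    missing = set()
    for rec in records:
        missing |= {e["type"] for e in rec["entities"]} - set(types["entities"])
        missing |= {r["type"] for r in rec.get("relations", [])} - set(types["relations"])
    return missing

# Inline stand-ins for <newdataset>/train.json and <newdataset>_types.json:
types = {
    "entities": {"Peop": {"short": "Peop", "verbose": "People"}},
    "relations": {"Kill": {"short": "Kill", "verbose": "Kill", "symmetric": False}},
}
records = [{
    "tokens": ["Booth", "killed", "Lincoln", "."],
    "entities": [{"type": "Peop", "start": 0, "end": 1},
                 {"type": "Peop", "start": 2, "end": 3}],
    "relations": [{"type": "Kill", "head": 0, "tail": 1}],
}]

print(undeclared_types(records, types))  # set() means everything is declared
```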

2. Prepare the configuration

Add a new entry in the corresponding .conf file under configs, pointing at the new dataset directory: data_dir = ${ASP}/data/<newdataset>/
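Such an entry might look like the following sketch (the entry name and the `${flant5_base}` base block it inherits from are illustrative; in practice, copy an existing entry from the .conf file and adjust it):

```hocon
my_new_dataset = ${flant5_base} {
  data_dir = ${ASP}/data/<newdataset>/
}
```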

Pre-trained models

Use the following command to load a pre-trained model and evaluate it on the corresponding task. <config_name> refers to the experiment name given in the .conf file under configs.

  python evaluate_<task>.py <config_name> <checkpoint_name> <gpu_id>

1. Coreference resolution

| config_name | checkpoint_name | dataset | link | params |
|---|---|---|---|---|
| flant5_base | tliu/asp-coref-flan-t5-base | CoNLL-2012 (OntoNotes) | link | 220 M |
| flant5_large | tliu/asp-coref-flan-t5-large | CoNLL-2012 (OntoNotes) | link | 770 M |
| flant5_xl | tliu/asp-coref-flan-t5-xl | CoNLL-2012 (OntoNotes) | link | 3 B |
| t0_3b | tliu/asp-coref-t0-3b | CoNLL-2012 (OntoNotes) | link | 3 B |

2. Named entity recognition (NER)

| config_name | checkpoint_name | dataset | link | params |
|---|---|---|---|---|
| flant5_base | tliu/asp-ner-flan-t5-base | CoNLL-03 NER | link | 220 M |
| flant5_large | tliu/asp-ner-flan-t5-large | CoNLL-03 NER | link | 770 M |

3. End-to-end relation extraction (ERE)

| config_name | checkpoint_name | dataset | link | params |
|---|---|---|---|---|
| flant5_base_conll04 | tliu/asp-re-flan-t5-base | CoNLL-04 RE | link | 220 M |
| flant5_large_conll04 | tliu/asp-re-flan-t5-large | CoNLL-04 RE | link | 770 M |
| flant5_xl_conll04 | tliu/asp-re-flan-t5-xl | CoNLL-04 RE | link | 3 B |

Citation

@inproceedings{liu-etal-2022-autoregressive,
    title={Autoregressive Structured Prediction with Language Models},
    author={Tianyu Liu and Yuchen Jiang and Nicholas Monath and Ryan Cotterell and Mrinmaya Sachan},
    year={2022},
    url={https://arxiv.org/abs/2210.14698},
    eprint={2210.14698},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}