AMR-parser

Code for our EMNLP 2019 paper,

Core Semantic First: A Top-down Approach for AMR Parsing. [paper][bib]

Deng Cai and Wai Lam.

Requirements

python2 == Python 2.7

python3 == Python 3.6

sh setup.sh (installs the dependencies)
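
A quick sanity check before installing, assuming both interpreters are on your PATH:

    python2 --version   # should print Python 2.7.x
    python3 --version   # should print Python 3.6.x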

Preprocessing

In the directory preprocessing:

  1. Make a directory, for example preprocessing/2017, and put the files train.txt, dev.txt, and test.txt in it. The format of these files should be the same as that of our example file preprocessing/data/dev.txt (an illustrative entry is shown after this list).
  2. sh go.sh (you may make necessary changes in convertingAMR.java)
  3. python2 preprocess.py
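
For reference, the expected input follows the standard AMR sembank layout: each entry consists of comment lines (e.g., # ::id and # ::snt carrying the sentence) followed by the graph in PENMAN notation, with entries separated by blank lines. The entry below is purely illustrative; preprocessing/data/dev.txt is the authoritative example.

    # ::id example.1
    # ::snt The boy wants to go.
    (w / want-01
          :ARG0 (b / boy)
          :ARG1 (g / go-02
                :ARG0 b))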

Training

In the directory parser:

  1. python3 extract.py && mv *vocab *table ../preprocessing/2017/ builds the vocabularies for the dataset in ../preprocessing/2017 (you may need to make changes in extract.py and in the command line; see the sketch after this list).
  2. sh train.sh starts training. Be patient! Checkpoints are saved in the directory ckpt by default (you may need to make changes in train.sh).
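
If your data lives somewhere other than ../preprocessing/2017, only the mv target in step 1 changes. A sketch for a hypothetical directory preprocessing/2020:

    cd parser
    python3 extract.py && mv *vocab *table ../preprocessing/2020/
    sh train.sh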

Testing

In the directory parser:

  1. sh work.sh (you should make necessary changes in work.sh)

Evaluation

In the directory amr-evaluation-tool-enhanced:

  1. python2 smatch/smatch.py --help lists all options. A large portion of the code under this directory is borrowed from ChunchuanLv/amr-evaluation-tool-enhanced; we add the following options:
  --weighted           whether to use weighted smatch or not
  --levels LEVELS      how deep to evaluate; -1 means unlimited, i.e., the full graph
  --max_size MAX_SIZE  only consider AMR graphs of size <= MAX_SIZE; -1 means no limit
  --min_size MIN_SIZE  only consider AMR graphs of size >= MIN_SIZE; -1 means no limit

For example:

  1. To calculate the smatch-weighted metric in our paper:

    python2 smatch/smatch.py --pr -f parsed_data golden_data --weighted

  2. To calculate the smatch-core metric in our paper:

    python2 smatch/smatch.py --pr -f parsed_data golden_data --levels 4
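
The size options compose with the same invocation. For instance, a hypothetical run that only scores graphs of size at most 30 (an arbitrary cutoff; see --help for how size is measured):

    python2 smatch/smatch.py --pr -f parsed_data golden_data --max_size 30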

Pretrained Model

We release our pretrained model on Google Drive.

To use the pretrained model, move the vocabulary files under [Google Drive]/vocabs to preprocessing/2017/ and adjust work.sh accordingly (set --load_path to point to [Google Drive]/model.ckpt), as sketched below.
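
Concretely, the steps look like this, where drive/ is a hypothetical path standing in for wherever you unpacked the Google Drive download:

    # hypothetical: the download was unpacked to ./drive
    mv drive/vocabs/* preprocessing/2017/
    # then edit parser/work.sh so that --load_path points at the checkpoint, e.g.
    #   --load_path ../drive/model.ckpt   (hypothetical path)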

We also provide the exact model output reported in our paper. The output file and the corresponding reference file are in the legacy folder.
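
To re-score the released output with the evaluation tool, point smatch.py at those two files (the filenames below are placeholders; use the actual names in the legacy folder):

    python2 smatch/smatch.py --pr -f legacy/parsed_output legacy/reference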

Citation

If you find the code useful, please cite our paper.

@inproceedings{cai-lam-2019-core,
    title = "Core Semantic First: A Top-down Approach for {AMR} Parsing",
    author = "Cai, Deng  and
      Lam, Wai",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/D19-1393",
    pages = "3790--3800",
}

Contact

For any questions, please drop an email to Deng Cai.