GNN-LM

Introduction

This repository contains the code for the following recent research from Shannon.AI.

GNN-LM: Language Modeling based on Global Contexts via GNN

Yuxian Meng, Shi Zong, Xiaoya Li, Xiaofei Sun, Tianwei Zhang, Fei Wu, Jiwei Li

If you find this repository helpful, please cite the following:

@article{meng2021gnn,
  title={GNN-LM: Language Modeling based on Global Contexts via GNN},
  author={Meng, Yuxian and Zong, Shi and Li, Xiaoya and Sun, Xiaofei and Zhang, Tianwei and Wu, Fei and Li, Jiwei},
  journal={arXiv preprint arXiv:2110.08743},
  year={2021}
}

Results

WikiText-103

| Model       | # Params | Test ppl |
|-------------|----------|----------|
| base LM     | 247M     | 18.7     |
| + GNN       | 274M     | 16.8     |
| + GNN + KNN | 274M     | 14.8     |

One Billion Word

| Model       | # Params | Test ppl |
|-------------|----------|----------|
| base LM     | 1.03B    | 23.0     |
| + GNN       | 1.05B    | 22.7     |
| + GNN + KNN | 1.05B    | 22.5     |

Enwik8

| Model       | # Params | Test BPC |
|-------------|----------|----------|
| base LM     | 41M      | 1.06     |
| + GNN       | 48M      | 1.04     |
| + GNN + KNN | 48M      | 1.03     |

Requirements

A Note about Hardware

Experiments for this paper were conducted on machines with 500GB of RAM, NVIDIA V100 32GB GPUs, and flash storage (SSDs). Saving the WikiText-103 datastore requires 400GB of disk space. The speed of saving the datastore, building the FAISS index, and evaluating the nearest-neighbor language model depends heavily on the amount of RAM available for each job. Some of these steps can be sped up by running them in parallel, which we leave to users so they can best tailor the pipeline to their setup.
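
For orientation, building an approximate FAISS index over the saved datastore keys typically follows the pattern below. This is a minimal sketch under assumed sizes, file names, and index parameters (IVF-PQ); it is not the exact configuration used by the scripts in this repository.

```python
import numpy as np
import faiss

# Illustrative sizes only -- not the exact values used by this repository.
N, d = 1_000_000, 1024

# Keys saved earlier as a memmap (see the note on memmaps below).
keys = np.memmap('dstore_keys.npy', dtype=np.float16, mode='r', shape=(N, d))

# Train an IVF-PQ index on a random sample of keys, then add the rest in
# chunks so only a small slice of the memmap is held in RAM at any time.
quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFPQ(quantizer, d, 4096, 64, 8)  # 4096 lists, 64x8-bit PQ codes

sample = np.random.choice(N, size=min(N, 500_000), replace=False)
index.train(keys[sample].astype(np.float32))

for start in range(0, N, 100_000):
    index.add(keys[start:start + 100_000].astype(np.float32))

faiss.write_index(index, 'dstore.index')
```

Training on a sample and adding keys in chunks keeps peak memory bounded, which matters when the datastore is hundreds of gigabytes.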

If you are working with a remote cluster, please note that we use memmaps for saving the datastore. This allows us to keep the data on disk while accessing it by loading small chunks into memory, depending on the available RAM. This means there are a large number of disk seeks. In order to prevent slowing down your entire cluster, we suggest always reading/writing this data to/from local disks (as opposed to NFS directories), and flash storage is best for faster access.
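
As a rough sketch of that memmap pattern (the file paths, shapes, and dtypes here are assumptions for illustration, not the exact ones produced by this repository's scripts):

```python
import numpy as np

# Illustrative sizes only -- the real values depend on the dataset and base LM.
dstore_size = 103_000_000   # roughly the WikiText-103 token count
dim = 1024                  # hidden-state dimension of the base LM

# Write the datastore to a local disk (not an NFS directory) as memmaps.
keys = np.memmap('/local/dstore_keys.npy', dtype=np.float16, mode='w+',
                 shape=(dstore_size, dim))
vals = np.memmap('/local/dstore_vals.npy', dtype=np.int64, mode='w+',
                 shape=(dstore_size, 1))

# ... fill keys/vals batch by batch while running the base LM over the corpus ...

# Reopen read-only later and touch only small chunks at a time, so the OS
# pages in just the slices that are actually accessed.
keys_r = np.memmap('/local/dstore_keys.npy', dtype=np.float16, mode='r',
                   shape=(dstore_size, dim))
chunk = np.array(keys_r[0:4096])   # explicit copy of one chunk into RAM
```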

Preparing the Data & Pretrained Models

KNN Search and Feature Quantization

Training/Evaluation GNN-LM

TODOs

Acknowledgements

For the KNN baselines, we forked the knnlm repository at commit fb6b50e48136b2c201f4768005474dc90e7791df, which we gratefully acknowledge.