Neural Symbolic Reasoning on Knowledge Graph: RuleGuider

PyTorch implementation of our EMNLP 2020 paper: Learning Collaborative Agents with Rule Guidance for Knowledge Graph Reasoning.

We propose a neural symbolic method for knowledge graph reasoning that leverages symbolic rules.

<p align="center"><img src="figs/model.png" width="500"/></p>

Walk-based models have shown their advantages in knowledge graph (KG) reasoning by achieving decent performance while providing interpretable decisions. However, the sparse reward signals offered by the KG during traversal are often insufficient to guide a sophisticated walk-based reinforcement learning (RL) model. An alternative approach is to use traditional symbolic methods (e.g., rule induction), which achieve good performance but can be hard to generalize due to the limitation of symbolic representation. In this paper, we propose RuleGuider, which leverages high-quality rules generated by symbolic-based methods to provide reward supervision for walk-based agents. Experiments on benchmark datasets show that RuleGuider improves the performance of walk-based models without losing interpretability.

If you find the repository or RuleGuider helpful, please cite the following paper:

```bibtex
@inproceedings{lei2020ruleguider,
  title={Learning Collaborative Agents with Rule Guidance for Knowledge Graph Reasoning},
  author={Lei, Deren and Jiang, Gangrong and Gu, Xiaotao and Sun, Kexuan and Mao, Yuning and Ren, Xiang},
  booktitle={EMNLP},
  year={2020}
}
```

Installation

Install PyTorch (>= 1.4.0) following the instructions on the PyTorch website. Our code is written in Python 3.

Run the following commands to install the required packages:

```bash
pip3 install -r requirements.txt
```

Data Preparation

Unpack the data files:

```bash
unzip data.zip
```

This generates three dataset folders in the ./data directory. Our experiments use the datasets fb15k-237, wn18rr, and nell-995.
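For reference, the dataset paths the steps below expect can be listed as follows (a sketch only; it prints the paths named in this README and does not inspect the filesystem):

```shell
# Print the dataset directories expected under ./data after unzipping.
for d in fb15k-237 wn18rr nell-995; do
  echo "data/$d"
done
```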

Rule Mining

Training

  1. Train embedding-based models:
     ```bash
     ./experiment-emb.sh configs/<dataset>-<model>.sh --train <gpu-ID>
     ```
  2. Pretrain the relation agent using top rules:
     ```bash
     ./experiment-pretrain.sh configs/<dataset>-rs.sh --train <gpu-ID> <rule-path> --model point.rs.<embedding-model>
     ```
  3. Jointly train the relation agent and the entity agent with reward shaping:
     ```bash
     ./experiment-rs.sh configs/<dataset>-rs.sh --train <gpu-ID> <rule-path> --model point.rs.<embedding-model> --checkpoint_path <pretrain-checkpoint-path>
     ```
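As a concrete illustration of how the placeholders expand, the sketch below substitutes hypothetical values (dataset fb15k-237, embedding model conve, GPU 0, and a rule file at rules/fb15k-237.rules — all illustrative assumptions, not values prescribed by the repository) and echoes the resulting commands rather than running them:

```shell
# Illustrative placeholder substitution for the training steps.
# All values below are assumptions for demonstration purposes.
dataset=fb15k-237
emb=conve
gpu=0
rules=rules/fb15k-237.rules   # hypothetical output of the rule mining step

echo "./experiment-emb.sh configs/${dataset}-${emb}.sh --train ${gpu}"
echo "./experiment-pretrain.sh configs/${dataset}-rs.sh --train ${gpu} ${rules} --model point.rs.${emb}"
echo "./experiment-rs.sh configs/${dataset}-rs.sh --train ${gpu} ${rules} --model point.rs.${emb} --checkpoint_path <pretrain-checkpoint-path>"
```

The `<pretrain-checkpoint-path>` placeholder is left as-is because it is produced by the pretraining step.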


Evaluation

  1. Evaluate embedding-based models:
     ```bash
     ./experiment-emb.sh configs/<dataset>-<model>.sh --inference <gpu-ID>
     ```
  2. Evaluate the pretrained relation agent:
     ```bash
     ./experiment-pretrain.sh configs/<dataset>-rs.sh --inference <gpu-ID> <rule-path> --model point.rs.<embedding-model> --checkpoint_path <pretrain-checkpoint-path>
     ```
  3. Evaluate the final result:
     ```bash
     ./experiment-rs.sh configs/<dataset>-rs.sh --inference <gpu-ID> <rule-path> --model point.rs.<embedding-model> --checkpoint_path <checkpoint-path>
     ```
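For example, a final-result evaluation with hypothetical values (dataset wn18rr, embedding model conve, GPU 0, and illustrative rule and checkpoint paths — none of these are prescribed by the repository) would look like the command assembled below:

```shell
# Hypothetical final-evaluation invocation; every value is an
# illustrative assumption. The command is echoed, not executed.
dataset=wn18rr
emb=conve
cmd="./experiment-rs.sh configs/${dataset}-rs.sh --inference 0 rules/${dataset}.rules --model point.rs.${emb} --checkpoint_path checkpoints/${dataset}-rs.ckpt"
echo "$cmd"
```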