HAKE: Hierarchy-Aware Knowledge Graph Embedding

This is the code for the paper Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction. Zhanqiu Zhang, Jianyu Cai, Yongdong Zhang, Jie Wang. AAAI 2020. arXiv
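For orientation, the distance functions that HAKE and ModE optimize can be sketched as below. This is a minimal illustration following the formulas in the paper, not the repository's exact implementation; the function names, argument names, and the exact way the modulus/phase weights enter are assumptions.

```python
import torch

def hake_distance(h_mod, r_mod, t_mod, h_phase, r_phase, t_phase,
                  modulus_weight=1.0, phase_weight=0.5):
    """Distance of a triple under HAKE: modulus part plus weighted phase part.

    All arguments are tensors of shape (..., dim); phases are in radians.
    """
    # Modulus part: || h_m * r_m - t_m ||_2
    mod_dist = torch.norm(h_mod * r_mod - t_mod, p=2, dim=-1)
    # Phase part: || sin((h_p + r_p - t_p) / 2) ||_1
    phase_dist = torch.norm(torch.sin((h_phase + r_phase - t_phase) / 2), p=1, dim=-1)
    # A smaller distance means a more plausible triple; the model scores
    # triples with the negative distance (offset by a margin during training).
    return modulus_weight * mod_dist + phase_weight * phase_dist

def mode_distance(h, r, t):
    """ModE keeps only the modulus part, with relation entries allowed to be negative."""
    return torch.norm(h * r - t, p=2, dim=-1)
```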

Dependencies

Results

The results of HAKE and the baseline model ModE on WN18RR, FB15k-237, and YAGO3-10 are as follows; a short sketch of how MRR and HITS@N are computed appears after the tables.

WN18RR

|      | MRR | HITS@1 | HITS@3 | HITS@10 |
|------|-----|--------|--------|---------|
| ModE | 0.472 | 0.427 | 0.486 | 0.564 |
| HAKE | 0.496 ± 0.001 | 0.452 | 0.516 | 0.582 |

FB15k-237

|      | MRR | HITS@1 | HITS@3 | HITS@10 |
|------|-----|--------|--------|---------|
| ModE | 0.341 | 0.244 | 0.380 | 0.534 |
| HAKE | 0.346 ± 0.001 | 0.250 | 0.381 | 0.542 |

YAGO3-10

|      | MRR | HITS@1 | HITS@3 | HITS@10 |
|------|-----|--------|--------|---------|
| ModE | 0.510 | 0.421 | 0.562 | 0.660 |
| HAKE | 0.546 ± 0.001 | 0.462 | 0.596 | 0.694 |
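MRR and HITS@N are the standard filtered ranking metrics for link prediction. As a rough sketch of how they are obtained from per-triple ranks (the function below is illustrative, not the evaluation code in this repository):

```python
def mrr_and_hits(ranks, ks=(1, 3, 10)):
    """Compute MRR and HITS@k from filtered ranks of the true entities.

    ranks[i] is the filtered rank of the correct entity for the i-th test
    triple (1 means the correct entity was ranked first).
    """
    n = len(ranks)
    mrr = sum(1.0 / r for r in ranks) / n
    hits = {k: sum(1 for r in ranks if r <= k) / n for k in ks}
    return mrr, hits

# Example: three test triples whose correct entities are ranked 1, 2, and 12.
print(mrr_and_hits([1, 2, 12]))
# (0.527..., {1: 0.333..., 3: 0.666..., 10: 0.666...})
```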

Running the code

Usage

bash runs.sh {train | valid | test} {ModE | HAKE} {wn18rr | FB15k-237 | YAGO3-10} <gpu_id> \
<save_id> <train_batch_size> <negative_sample_size> <hidden_dim> <gamma> <alpha> \
<learning_rate> <num_train_steps> <test_batch_size> [modulus_weight] [phase_weight]

Remark: [modulus_weight] and [phase_weight] apply only to the HAKE model.

To reproduce the results of HAKE and ModE, run the following commands.

HAKE

# WN18RR
bash runs.sh train HAKE wn18rr 0 0 512 1024 500 6.0 0.5 0.00005 80000 8 0.5 0.5

# FB15k-237
bash runs.sh train HAKE FB15k-237 0 0 1024 256 1000 9.0 1.0 0.00005 100000 16 3.5 1.0

# YAGO3-10
bash runs.sh train HAKE YAGO3-10 0 0 1024 256 500 24.0 1.0 0.0002 180000 4 1.0 0.5

ModE

# WN18RR
bash runs.sh train ModE wn18rr 0 0 512 1024 500 6.0 0.5 0.0001 80000 8 --no_decay

# FB15k-237
bash runs.sh train ModE FB15k-237 0 0 1024 256 1000 9.0 1.0 0.0001 100000 16

# YAGO3-10
bash runs.sh train ModE YAGO3-10 0 0 1024 256 500 24.0 1.0 0.0002 80000 4

Visualization

To plot entity embeddings on a 2D plane (Figure 4 in our paper), please refer to this issue.
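If you just want a quick approximation before following that issue, one possible approach is to place each entity in polar coordinates, using its (averaged) modulus part as the radius and its (averaged) phase part as the angle. The sketch below is a hypothetical illustration under that assumption, not the procedure used to produce Figure 4; the array names and the per-entity aggregation are ours.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_entities_2d(moduli, phases, labels=None):
    """Scatter entities on a 2D plane from their modulus and phase embeddings.

    moduli: (num_entities, dim) array of modulus embeddings
    phases: (num_entities, dim) array of phase embeddings, in radians
    """
    r = np.abs(moduli).mean(axis=1)      # one radius per entity (assumed aggregation)
    theta = phases.mean(axis=1)          # one angle per entity (assumed aggregation)
    x, y = r * np.cos(theta), r * np.sin(theta)

    plt.scatter(x, y, s=10)
    if labels is not None:
        for xi, yi, name in zip(x, y, labels):
            plt.annotate(name, (xi, yi), fontsize=8)
    plt.gca().set_aspect("equal")
    plt.show()
```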

Citation

If you find this code useful, please consider citing the following paper.

@inproceedings{zhang2020learning,
  title={Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction},
  author={Zhang, Zhanqiu and Cai, Jianyu and Zhang, Yongdong and Wang, Jie},
  booktitle={Thirty-Fourth {AAAI} Conference on Artificial Intelligence},
  pages={3065--3072},
  publisher={{AAAI} Press},
  year={2020}
}

Acknowledgement

Our code builds on the implementation of RotatE. Thanks to its authors for their contributions.

Other Repositories

If you are interested in our work, you may find the following paper useful.

Duality-Induced Regularizer for Tensor Factorization Based Knowledge Graph Completion. Zhanqiu Zhang, Jianyu Cai, Jie Wang. NeurIPS 2020. [paper] [code]