<h1 align="center"> CompGCN </h1>
<h4 align="center">Composition-Based Multi-Relational Graph Convolutional Networks</h4>
<p align="center">
  <a href="https://iclr.cc/"><img src="http://img.shields.io/badge/ICLR-2020-4b44ce.svg"></a>
  <a href="https://arxiv.org/abs/1911.03082"><img src="http://img.shields.io/badge/Paper-PDF-red.svg"></a>
  <a href="https://iclr.cc/virtual/poster_BylA_C4tPr.html"><img src="http://img.shields.io/badge/Video-ICLR-green.svg"></a>
  <a href="https://medium.com/@mgalkin/knowledge-graphs-iclr-2020-f555c8ef10e3"><img src="http://img.shields.io/badge/Blog-Medium-B31B1B.svg"></a>
  <a href="https://github.com/malllabiisc/CompGCN/blob/master/LICENSE"><img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg"></a>
</p>

<h2 align="center">
  Overview of CompGCN
  <img align="center" src="./overview.png" alt="...">
</h2>

Given node and relation embeddings, CompGCN performs a composition operation φ(·) over each edge in the neighborhood of a central node (e.g., Christopher Nolan above). The composed embeddings are then convolved with specific filters W_O and W_I for the original and inverse relations, respectively. We omit the self-loop in the diagram for clarity. The messages from all the neighbors are then aggregated to obtain an updated embedding of the central node. The relation embeddings are also transformed using a separate weight matrix. Please refer to the paper for details.
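
To make the update above concrete, here is a minimal PyTorch sketch of a single CompGCN-style layer, written for this README rather than taken from the repository's code: `compose` implements the three composition operators φ(·) discussed in the paper (subtraction, multiplication, and circular correlation), and `CompGCNLayerSketch` applies the direction-specific weights W_O and W_I plus a self-loop weight before aggregating the messages and projecting the relation embeddings. All class, function, and argument names here are illustrative assumptions.

```python
import torch
import torch.nn as nn


def compose(ent: torch.Tensor, rel: torch.Tensor, opn: str = "corr") -> torch.Tensor:
    """Composition phi(e_s, e_r) over an edge: the paper uses subtraction,
    element-wise multiplication, or circular correlation."""
    if opn == "sub":    # TransE-style subtraction
        return ent - rel
    if opn == "mult":   # DistMult-style element-wise product
        return ent * rel
    if opn == "corr":   # HolE-style circular correlation, computed via FFT
        return torch.fft.irfft(
            torch.conj(torch.fft.rfft(ent, dim=-1)) * torch.fft.rfft(rel, dim=-1),
            n=ent.shape[-1], dim=-1)
    raise ValueError(f"unknown composition operator: {opn}")


class CompGCNLayerSketch(nn.Module):
    """One CompGCN-style update: compose neighbor and relation embeddings,
    transform them with direction-specific weights (W_O for original edges,
    W_I for inverse edges, plus a self-loop weight), aggregate the messages
    at each node, and project the relation embeddings with W_rel."""

    def __init__(self, dim: int, opn: str = "corr"):
        super().__init__()
        self.opn = opn
        self.w_out = nn.Linear(dim, dim, bias=False)   # W_O: original direction
        self.w_in = nn.Linear(dim, dim, bias=False)    # W_I: inverse direction
        self.w_loop = nn.Linear(dim, dim, bias=False)  # self-loop weight
        self.w_rel = nn.Linear(dim, dim, bias=False)   # W_rel: relation update

    def forward(self, ent_emb, rel_emb, edge_index, edge_type):
        # ent_emb: [N, d], rel_emb: [R, d], edge_index: [2, E] (src, dst),
        # edge_type: [E] relation ids. Inverse edges reuse the same triples
        # with the roles of src and dst swapped.
        src, dst = edge_index
        n, d = ent_emb.shape

        # Messages along original edges: W_O * phi(e_src, e_rel), sent to dst.
        msg_out = self.w_out(compose(ent_emb[src], rel_emb[edge_type], self.opn))
        # Messages along inverse edges: W_I * phi(e_dst, e_rel), sent to src.
        msg_in = self.w_in(compose(ent_emb[dst], rel_emb[edge_type], self.opn))

        # Sum-aggregate incoming messages at each node (a degree-based
        # normalization could be added here).
        agg = torch.zeros(n, d).index_add_(0, dst, msg_out)
        agg = agg.index_add_(0, src, msg_in)

        # Add a self-loop term (the paper composes with a learned self-loop
        # relation; simplified here) and apply a non-linearity.
        new_ent = torch.tanh(agg + self.w_loop(ent_emb))
        # Relation embeddings get their own linear transformation.
        new_rel = self.w_rel(rel_emb)
        return new_ent, new_rel


# Tiny usage example with random embeddings and two triples.
ent = torch.randn(4, 8)                      # 4 entities, dimension 8
rel = torch.randn(3, 8)                      # 3 relations
edge_index = torch.tensor([[0, 1], [2, 3]])  # edges 0 -> 2 and 1 -> 3
edge_type = torch.tensor([0, 2])             # their relation ids
new_ent, new_rel = CompGCNLayerSketch(dim=8, opn="corr")(ent, rel, edge_index, edge_type)
```

The released implementation differs in its details (sparse message passing, normalization, and batching); the sketch only mirrors the structure described above.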

Dependencies:
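
The code is implemented in PyTorch and requires Python 3. The remaining Python dependencies are listed in the repository's `requirements.txt` (file name assumed here) and can be installed with `pip install -r requirements.txt`.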

Dataset:
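
The paper evaluates link prediction on the standard FB15k-237 and WN18RR knowledge graph benchmarks; please refer to the paper for the node and graph classification datasets and the exact splits used.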

Training model:
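
Training is launched from the repository's main training script, with command-line flags selecting the composition operator (subtraction, multiplication, or circular correlation) and the link prediction score function (TransE, DistMult, or ConvE, as in the paper). The flag names are repository-specific and are not reproduced here; consult the script's help output for the exact options.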

Citation:

Please cite the following paper if you use this code in your work.

@inproceedings{
    vashishth2020compositionbased,
    title={Composition-based Multi-Relational Graph Convolutional Networks},
    author={Shikhar Vashishth and Soumya Sanyal and Vikram Nitin and Partha Talukdar},
    booktitle={International Conference on Learning Representations},
    year={2020},
    url={https://openreview.net/forum?id=BylA_C4tPr}
}

For any clarification, comments, or suggestions, please create an issue or contact Shikhar.