<h1 align="left"> Poisoning Knowledge Graph Embeddings <br> via Relation Inference Patterns </h1> <p align="left"> <a href="https://aclanthology.org/2021.acl-long.147/"><img src="https://img.shields.io/badge/ACL--2021-Abstract-blueviolet.svg"></a> <a href="https://aclanthology.org/2021.acl-long.147.pdf"><img src="https://img.shields.io/badge/ACL--2021-Paper-red.svg"></a> <a href="https://underline.io/lecture/25460-poisoning-knowledge-graph-embeddings-via-relation-inference-patterns"><img src="https://img.shields.io/badge/ACL--2021-Video-yellow.svg"></a> <a href="https://aclanthology.org/2021.acl-long.147/"><img src="https://img.shields.io/badge/ACL--2021-Citation-9cf.svg"></a> <a href="https://underline.io/events/167/sessions?eventSessionId=5520"><img src="https://img.shields.io/badge/ACL--2021-Session--4D-green.svg"></a> </p> <p align="left"> <a href="https://arxiv.org/abs/2111.06345"><img src="http://img.shields.io/badge/Paper-Pre--print-orange.svg"></a> <a href="https://github.com/PeruBhardwaj/InferenceAttack/blob/master/LICENSE"><img src="https://img.shields.io/badge/License-MIT-blue.svg"></a> </p> <h4 align="left">This is the code repository to accompany the ACL 2021 paper on poisoning attacks on KGE models. <br> The work is a part of my PhD study at Trinity College Dublin and is funded by Accenture Labs and ADAPT Centre. <br> For any questions or feedback, add an issue or email me at: bhardwap at tcd dot ie</h4>

## Related Publications

  1. Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods (EMNLP 2021) - <a href="https://aclanthology.org/2021.emnlp-main.648/">Paper</a>, <a href="https://github.com/PeruBhardwaj/AttributionAttack">Codebase</a>
  2. Adversarial Robustness of Representation Learning for Knowledge Graphs (PhD Thesis) - <a href="http://www.tara.tcd.ie/bitstream/handle/2262/101176/Thesis_PeruBhardwaj.pdf">Link</a>

## Overview

The figure illustrates a composition-based adversarial attack on missing link prediction for fraud detection. The knowledge graph contains two types of entities: Person and BankAccount. The target triple to predict is (Karl, affiliated with, Joe the mobster). The original KGE model predicts this triple as true, i.e. it assigns the triple a higher rank than the synthetic negative triples. A malicious attacker, however, can add adversarial triples (in purple) that connect Karl with a non-suspicious person Bob through a composition pattern. The poisoned KGE model then predicts the target triple as false.

Thus, the proposed adversarial attacks are based on a reformulation of the problem of poisoning attacks on KGE models for missing link prediction. Instead of directly degrading the rank of the target triple, the attacker aims to improve the rank of a decoy triple. To do so, they exploit the inductive abilities of KGE models, which are expressed through connectivity patterns such as symmetry, inversion, and composition. This reformulation also helps to explain the behaviour of KGE models: the more effective an attack that relies on a given inference pattern, the more sensitive the KGE model is to that pattern. The sketch below illustrates the composition case from the figure.
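As a concrete illustration, here is a minimal sketch of how adversarial triples could be constructed for the composition pattern. This is plain Python written for this description, not code from this repository; the triple layout, relation names, and the choice of dummy entity are all assumptions made for the example.

```python
# Conceptual sketch of a composition-based poisoning step (illustrative only;
# not the InferenceAttack codebase). The composition pattern states that
# r1(a, x) and r2(x, c) together imply r(a, c), so adding the two premise
# triples encourages a KGE model to rank the decoy triple (a, r, c) highly.

from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def composition_adversaries(decoy: Triple, r1: str, r2: str,
                            dummy: str) -> List[Triple]:
    """Return the two adversarial triples whose composition implies `decoy`."""
    a, _, c = decoy
    return [(a, r1, dummy), (dummy, r2, c)]

# Decoy triple from the fraud-detection example: make Karl look affiliated
# with the non-suspicious person Bob instead of Joe the mobster.
decoy = ("Karl", "affiliated with", "Bob")
adversarial = composition_adversaries(decoy, r1="friends with",
                                      r2="colleague of", dummy="Eve")
print(adversarial)
# [('Karl', 'friends with', 'Eve'), ('Eve', 'colleague of', 'Bob')]
```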

## Reproducing the results

### Setup

Experiments reported in the paper were run in the conda environment specified by `inference_attack.yml`.
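A minimal setup sketch using that file; the environment name `inference_attack` is an assumption based on the filename, so adjust it to whatever the `name:` field in the YAML actually says:

```bash
# Create the environment from the provided specification file.
conda env create -f inference_attack.yml
# Assumes the YAML's `name:` field is `inference_attack`; adjust if it differs.
conda activate inference_attack
```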

### Usage

## References

Parts of this codebase are based on code from the following repositories:

## Citation

```bibtex
@inproceedings{bhardwaj-etal-2021-poisoning,
    title = "Poisoning Knowledge Graph Embeddings via Relation Inference Patterns",
    author = "Bhardwaj, Peru  and
      Kelleher, John  and
      Costabello, Luca  and
      O{'}Sullivan, Declan",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.147",
    pages = "1875--1888",
}
```