<h1 align="center"> CSProm-KG </h1>
<h4 align="center">Dipping PLMs Sauce: Bridging Structure and Text for Effective Knowledge Graph Completion via Conditional Soft Prompting</h4>
<h2 align="center"> Overview of CSProm-KG </h2>
<img align="center" src="./overview.png" alt="Overview of CSProm-KG">

This repository contains the source code of our ACL 2023 Findings paper:

"Dipping PLMs Sauce: Bridging Structure and Text for Effective Knowledge Graph Completion via Conditional Soft Prompting".

## Dependencies

## Dataset

## Pretrained Checkpoint

To enable quick evaluation, we provide the trained model checkpoints. Download the checkpoint folders to `./checkpoint/` and run the evaluation command for the corresponding dataset.
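
Before running evaluation, you can sanity-check that a downloaded checkpoint loads correctly. The snippet below is only an illustrative sketch: the file name and internal layout are placeholders and depend on the dataset-specific folder you place under `./checkpoint/`.

```python
# Hypothetical sanity check for a downloaded checkpoint; the actual file name
# and layout depend on the dataset-specific folder placed under ./checkpoint/.
import torch

ckpt_path = "./checkpoint/WN18RR/model.ckpt"  # placeholder path, adjust to the real file
state = torch.load(ckpt_path, map_location="cpu")
print(type(state))  # typically a dict of tensors or a training-framework checkpoint dict
if isinstance(state, dict):
    print(list(state.keys())[:10])  # inspect the first few keys
```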

The results are:

| Dataset     | MRR      | H@1    | H@3    | H@10   |
|-------------|----------|--------|--------|--------|
| WN18RR      | 0.572660 | 52.06% | 59.00% | 67.79% |
| FB15k-237   | 0.357701 | 26.90% | 39.07% | 53.55% |
| Wikidata5m  | 0.379789 | 34.32% | 39.91% | 44.57% |
| ICEWS14     | 0.627971 | 54.74% | 67.73% | 77.30% |
| ICEWS05-15  | 0.626890 | 54.27% | 67.84% | 78.22% |
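
For reference, MRR is the mean reciprocal rank of the gold entity across test triples, and H@k is the fraction of test triples whose gold entity is ranked within the top k. A minimal sketch of these metrics computed from a list of ranks (generic definitions, not the repository's evaluation code):

```python
from typing import Sequence

def mrr(ranks: Sequence[int]) -> float:
    """Mean reciprocal rank; `ranks` holds the 1-based rank of the gold entity per test triple."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks: Sequence[int], k: int) -> float:
    """Fraction of test triples whose gold entity is ranked within the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

ranks = [1, 3, 2, 15, 1]  # toy example
print(f"MRR: {mrr(ranks):.4f}, H@1: {hits_at_k(ranks, 1):.2%}, H@10: {hits_at_k(ranks, 10):.2%}")
```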

## Training and Testing

## Citation

If you use our work or find it helpful, please cite:

@inproceedings{chen-etal-2023-dipping,
    title = "Dipping {PLM}s Sauce: Bridging Structure and Text for Effective Knowledge Graph Completion via Conditional Soft Prompting",
    author = "Chen, Chen  and
      Wang, Yufei  and
      Sun, Aixin  and
      Li, Bing  and
      Lam, Kwok-Yan",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.findings-acl.729",
    pages = "11489--11503",
    abstract = "Knowledge Graph Completion (KGC) often requires both KG structural and textual information to be effective. Pre-trained Language Models (PLMs) have been used to learn the textual information, usually under the fine-tune paradigm for the KGC task. However, the fine-tuned PLMs often overwhelmingly focus on the textual information and overlook structural knowledge. To tackle this issue, this paper proposes CSProm-KG (Conditional Soft Prompts for KGC) which maintains a balance between structural information and textual knowledge. CSProm-KG only tunes the parameters of Conditional Soft Prompts that are generated by the entities and relations representations. We verify the effectiveness of CSProm-KG on three popular static KGC benchmarks WN18RR, FB15K-237 and Wikidata5M, and two temporal KGC benchmarks ICEWS14 and ICEWS05-15. CSProm-KG outperforms competitive baseline models and sets new state-of-the-art on these benchmarks. We conduct further analysis to show (i) the effectiveness of our proposed components, (ii) the efficiency of CSProm-KG, and (iii) the flexibility of CSProm-KG.",
}
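
The abstract above summarizes the core idea: the PLM stays frozen, and only conditional soft prompts, generated from the structural entity and relation representations, are tuned. The sketch below is our own illustrative rendering of that idea (module names and dimensions are assumptions, not the repository's implementation):

```python
import torch
import torch.nn as nn

class ConditionalSoftPrompt(nn.Module):
    """Illustrative sketch: soft prompt vectors generated from entity/relation embeddings.

    The frozen PLM would receive these prompt vectors prepended to its input embeddings;
    only the structural embeddings and this generator are trained.
    """

    def __init__(self, num_entities: int, num_relations: int,
                 struct_dim: int = 200, plm_dim: int = 768, prompt_len: int = 4):
        super().__init__()
        self.ent_emb = nn.Embedding(num_entities, struct_dim)   # structural entity embeddings
        self.rel_emb = nn.Embedding(num_relations, struct_dim)  # structural relation embeddings
        # maps an (entity, relation) pair to `prompt_len` prompt vectors in the PLM space
        self.generator = nn.Linear(2 * struct_dim, prompt_len * plm_dim)
        self.prompt_len, self.plm_dim = prompt_len, plm_dim

    def forward(self, ent_ids: torch.Tensor, rel_ids: torch.Tensor) -> torch.Tensor:
        cond = torch.cat([self.ent_emb(ent_ids), self.rel_emb(rel_ids)], dim=-1)
        prompts = self.generator(cond)                      # (batch, prompt_len * plm_dim)
        return prompts.view(-1, self.prompt_len, self.plm_dim)

# Toy usage: the resulting prompts would be concatenated with the PLM's token
# embeddings while the PLM parameters themselves remain frozen.
module = ConditionalSoftPrompt(num_entities=100, num_relations=10)
prompt = module(torch.tensor([3]), torch.tensor([1]))
print(prompt.shape)  # torch.Size([1, 4, 768])
```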