# ChronoKGE

ChronoKGE is a knowledge graph embedding framework that eases time-focused research in representation learning.


<br>

## Requirements

- [Python 3.6+](https://www.python.org/downloads/)
- [PyTorch 1.8+](https://pytorch.org/get-started/locally/)
- [Optuna 2.9+](https://optuna.readthedocs.io/en/latest/installation.html)
- [Selenium 3.14+](https://selenium-python.readthedocs.io/installation.html)

<br>

## Installation

Install dependencies:

```shell
pip3 install -r requirements.txt
```
<br>

## Usage

### Run

`run` executes the experiment with the given parameters.

Via command line:

```shell
python3 -m chrono_kge [<model-args>] run [<run-args>]
```

For more details, see model arguments and run arguments.

Via YAML config:

```shell
python3 -m chrono_kge -mc <main_config> run -rc <run_config>
```
<br>

### Tune

`tune` performs parameter tuning with the specified number of trials.

Via command line:

```shell
python3 -m chrono_kge [<model-args>] tune [<tune-args>]
```

For more details, see model arguments and tune arguments.

Via YAML config:

```shell
python3 -m chrono_kge -mc <main_config> tune -tc <tune_config>
```
<br>

## Examples

### Example 1

Run with default parameters (model=lowfer-tnt, kg=icews14, dim=300, lr=0.01) via YAML config:

```shell
python3 -m chrono_kge -mc "config/main/default.yaml" run -rc "config/run/default.yaml"
```

### Example 2

Run with default parameters (model=lowfer-tnt, kg=icews14, dim=300, lr=0.01) via the command line:

```shell
python3 -m chrono_kge -m "t-lowfer" -d "icews14" -am 1 -mm 1 run -lr 0.01 -ed 300
```
<br>

## Optional arguments

### 1. Model arguments

`-m MODEL, --model MODEL`<br> Learning model.<br> Supported models: `lowfer`, `tlowfer`.<br> Default: `tlowfer`.

`-d DATASET, --dataset DATASET`<br> Dataset to use.<br> Supported datasets: see knowledge graphs below.<br> Default: `icews14`.

`-e EPOCHS, --epochs EPOCHS`<br> Number of training epochs.<br> Default: `1000`.

`-am AUGMENT_MODE, --aug_mode AUGMENT_MODE`<br> Augmentation mode.<br> Supported modes: see augmentation modes below.<br> Default: `0`.

`-rm REG_MODE, --reg_mode REG_MODE`<br> Regularization mode.<br> Supported modes: see regularization modes below.<br> Default: `0`.

`-mm MODULATION_MODE, --mod_mode MODULATION_MODE`<br> Modulation mode.<br> Supported modes: see modulation modes below.<br> Default: `0`.

`-em ENC_MODE, --enc_mode ENC_MODE`<br> Encoding mode.<br> Supported modes: see encoding modes below.<br> Default: `0`.

`-c, --cuda`<br> Whether to use CUDA (GPU) or not (CPU).<br> Default: CPU.

`--save`<br> Whether to save results.<br> Default: `False`.

<br>

### Augmentation modes

0: None<br> 1: Reverse triples<br> 2: Back translation (pre)<br> 3: Back translation (ad-hoc)<br> 4: Reverse triples + Back translation (pre)<br>

<br>

2: Augmentation using precomputed translations.

3: Ad-hoc back translation using the free Google Translate service.<br> High confidence, max. 2 translations, language order zh-cn, es, de, en.<br> Supported KBs: ICEWS14, ICEWS05-15, ICEWS18.
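Mode 1 (reverse triples) is the standard inverse-relation augmentation in KGE training: for every quadruple (s, r, o, t), an inverse quadruple (o, r⁻¹, s, t) is added, with r⁻¹ commonly encoded as r + |R|. A minimal sketch of the idea (illustrative only, not ChronoKGE's actual implementation):

```python
def add_reverse_quads(quads, num_relations):
    """Augment (subject, relation, object, time) quadruples with
    inverse quadruples (object, relation + num_relations, subject, time)."""
    augmented = list(quads)
    for s, r, o, t in quads:
        augmented.append((o, r + num_relations, s, t))
    return augmented

# Toy graph with 2 relations (ids 0 and 1); inverses get ids 2 and 3.
quads = [(0, 0, 1, 5), (1, 1, 2, 7)]
print(add_reverse_quads(quads, num_relations=2))
# [(0, 0, 1, 5), (1, 1, 2, 7), (1, 2, 0, 5), (2, 3, 1, 7)]
```

Doubling the relation vocabulary this way lets the model score both query directions (s, r, ?) and (?, r, o) with a single scoring function.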

<br>

### Regularization modes

0: None<br> 1: Omega (embedding regularization)<br> 2: Lambda (time regularization)<br> 3: Omega + Lambda<br>

* Tensor norms: Omega: p=3, Lambda: p=4
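The two penalties can be sketched as follows, assuming TNTComplEx-style regularizers (a plain-Python illustration; the exact terms in ChronoKGE may differ): Omega penalizes the p=3 norm of the embedding vectors, while Lambda penalizes p=4 differences between embeddings of adjacent timestamps, encouraging temporal smoothness.

```python
def omega_penalty(embeddings, p=3):
    """Omega (embedding regularization): sum of elementwise |x|^p
    over all embedding vectors, with p=3."""
    return sum(sum(abs(x) ** p for x in vec) for vec in embeddings)

def lambda_penalty(time_embeddings, p=4):
    """Lambda (time regularization): penalize differences between
    embeddings of consecutive timestamps, with p=4."""
    total = 0.0
    for prev, curr in zip(time_embeddings, time_embeddings[1:]):
        total += sum(abs(a - b) ** p for a, b in zip(curr, prev))
    return total

# Sketch of use: loss = task_loss + w_om * omega_penalty(E) + w_la * lambda_penalty(T)
E = [[0.1, -0.2], [0.3, 0.0]]              # entity/relation embeddings
T = [[0.0, 0.0], [0.1, 0.1], [0.2, 0.2]]   # per-timestamp embeddings
print(omega_penalty(E), lambda_penalty(T))
```

Mode 3 simply adds both weighted penalties to the training loss.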

<br>

### Modulation modes

1. Time Modulation (T)

Extends LowFER with dynamic relations.<br> mode=0

2. Time-no-Time Modulation (TNT)

Extends LowFER with dynamic and static relations.<br> mode=1
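The difference between the two modes can be sketched as follows (an illustration of the general idea from the paper, not ChronoKGE's exact code): T modulates the relation embedding elementwise with a time embedding, while TNT adds a second, time-independent relation embedding on top of the dynamic part.

```python
def modulate_t(rel, time):
    """Time modulation (T): dynamic relation r_t = r * t (elementwise)."""
    return [r * t for r, t in zip(rel, time)]

def modulate_tnt(rel_dyn, rel_static, time):
    """Time-no-Time modulation (TNT): dynamic part r * t plus a
    time-independent (static) relation embedding."""
    return [rd * t + rs for rd, rs, t in zip(rel_dyn, rel_static, time)]

# Toy 3-dimensional embeddings.
print(modulate_t([1.0, 2.0, 3.0], [0.5, 0.5, 2.0]))
# [0.5, 1.0, 6.0]
print(modulate_tnt([1.0, 2.0, 3.0], [1.0, 0.0, -1.0], [0.5, 0.5, 2.0]))
# [1.5, 1.0, 5.0]
```

The static term lets TNT fall back to a purely relational score for facts whose validity does not depend on time.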

<br>

## Benchmark results

Results for semantic matching models on ICEWS14 and ICEWS05-15.

### ICEWS14

| Method | MRR | H@10 | H@3 | H@1 |
|---|---|---|---|---|
| DE-DistMult | 0.501 | 0.708 | 0.569 | 0.392 |
| DE-SimplE | 0.526 | 0.725 | 0.592 | 0.418 |
| TComplEx | 0.560 | 0.730 | 0.610 | 0.470 |
| TNTComplEx | 0.560 | 0.740 | 0.610 | 0.460 |
| TuckERT | 0.594 | 0.731 | 0.640 | 0.518 |
| TuckERTNT | 0.604 | 0.753 | 0.655 | 0.521 |
| LowFER-T | 0.584 | 0.734 | 0.630 | 0.505 |
| LowFER-TNT | 0.586 | 0.735 | 0.632 | 0.507 |
| LowFER-CFB | 0.623 | 0.757 | 0.671 | 0.549 |
| LowFER-FTP | 0.617 | 0.765 | 0.665 | 0.537 |
<br>

### ICEWS05-15

| Method | MRR | H@10 | H@3 | H@1 |
|---|---|---|---|---|
| DE-DistMult | 0.484 | 0.718 | 0.546 | 0.366 |
| DE-SimplE | 0.513 | 0.748 | 0.578 | 0.392 |
| TComplEx | 0.580 | 0.760 | 0.640 | 0.490 |
| TNTComplEx | 0.600 | 0.780 | 0.650 | 0.500 |
| TuckERT | 0.627 | 0.769 | 0.674 | 0.550 |
| TuckERTNT | 0.638 | 0.783 | 0.686 | 0.559 |
| LowFER-T | 0.559 | 0.714 | 0.605 | 0.476 |
| LowFER-TNT | 0.562 | 0.717 | 0.608 | 0.480 |
| LowFER-CFB | 0.638 | 0.791 | 0.690 | 0.555 |
| LowFER-FTP | 0.625 | 0.792 | 0.681 | 0.534 |
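The reported metrics are the standard link-prediction measures: MRR is the mean reciprocal rank of the correct entity over all test queries, and H@k is the fraction of queries whose correct entity is ranked within the top k. A quick sketch of how they are computed from a list of ranks:

```python
def mrr(ranks):
    """Mean reciprocal rank of the correct entities (ranks start at 1)."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at(ranks, k):
    """Fraction of queries whose correct entity is ranked in the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

ranks = [1, 2, 5, 10]
print(round(mrr(ranks), 3))   # 0.45
print(hits_at(ranks, 3))      # 0.5
```

In the filtered setting typically used for these benchmarks, other known-true entities are removed from the candidate list before the rank of the correct entity is taken.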
<br>

## Additional benchmarks

For an exhaustive summary of related benchmark results, visit TKGC Benchmark Results.

## Citation

If you find our work useful, please consider citing:

```bibtex
@inproceedings{dikeoulias-etal-2022-temporal,
    title = "Temporal Knowledge Graph Reasoning with Low-rank and Model-agnostic Representations",
    author = {Dikeoulias, Ioannis  and
      Amin, Saadullah  and
      Neumann, G{\"u}nter},
    booktitle = "Proceedings of the 7th Workshop on Representation Learning for NLP",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.repl4nlp-1.12",
    doi = "10.18653/v1/2022.repl4nlp-1.12",
    pages = "111--120",
}
```