
<p align="center"> <img src="docs/source/logo.png" height="150"> </p> <h1 align="center"> PyKEEN </h1> <p align="center"> <a href="https://github.com/pykeen/pykeen/actions/workflows/common.yml"> <img src="https://github.com/pykeen/pykeen/actions/workflows/common.yml/badge.svg" alt="GitHub Actions"> </a> <a href='https://opensource.org/licenses/MIT'> <img src='https://img.shields.io/badge/License-MIT-blue.svg' alt='License'/> </a> <a href="https://zenodo.org/badge/latestdoi/242672435"> <img src="https://zenodo.org/badge/242672435.svg" alt="DOI"> </a> <a href="https://optuna.org"> <img src="https://img.shields.io/badge/Optuna-integrated-blue" alt="Optuna integrated" height="20"> </a> <a href="https://pytorchlightning.ai"> <img src="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white" alt="PyTorch Lightning"> </a> <a href="https://github.com/astral-sh/ruff"> <img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json" alt="Ruff" style="max-width:100%;"> </a> <a href=".github/CODE_OF_CONDUCT.md"> <img src="https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg" alt="Contributor Covenant"> </a> </p> <p align="center"> <b>PyKEEN</b> (<b>P</b>ython <b>K</b>nowl<b>E</b>dge <b>E</b>mbeddi<b>N</b>gs) is a Python package designed to train and evaluate knowledge graph embedding models (incorporating multi-modal information). </p> <p align="center"> <a href="#installation">Installation</a> • <a href="#quickstart">Quickstart</a> • <a href="#datasets">Datasets (37)</a> • <a href="#inductive-datasets">Inductive Datasets (5)</a> • <a href="#models">Models (40)</a> • <a href="#supporters">Support</a> • <a href="#citation">Citation</a> </p>

Installation

The latest stable version of PyKEEN requires Python 3.9+. It can be downloaded and installed from PyPI with:

pip install pykeen

The latest version of PyKEEN can be installed directly from the source code on GitHub with:

pip install git+https://github.com/pykeen/pykeen.git

More information about installation (e.g., development mode, Windows installation, Colab, Kaggle, extras) can be found in the installation documentation.

Quickstart

This example shows how to train a knowledge graph embedding model on a dataset and evaluate it.

The fastest way to get up and running is to use the pipeline function. It provides a high-level entry point into the extensible functionality of this package. The following example shows how to train and evaluate the TransE model on the Nations dataset. By default, the training loop uses the stochastic local closed world assumption (sLCWA) training approach and evaluates with rank-based evaluation.

from pykeen.pipeline import pipeline

result = pipeline(
    model='TransE',
    dataset='nations',
)

The results are returned in an instance of the PipelineResult dataclass that has attributes for the trained model, the training loop, the evaluation, and more. See the tutorials on using your own dataset, understanding the evaluation, and making novel link predictions.
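Once the pipeline has finished, the result object can be inspected and saved. The following minimal sketch assumes attribute and method names such as `result.training`, `result.metric_results.get_metric`, and `result.save_to_directory`, as well as the `'hits@10'` metric key; check the PipelineResult API documentation for the authoritative names.

```python
from pykeen.pipeline import pipeline

result = pipeline(model='TransE', dataset='nations')

# The trained model and the training triples factory are attributes of the result.
model = result.model
training = result.training

# Look up an evaluation metric by name (here: hits@10 from rank-based evaluation).
print(result.metric_results.get_metric('hits@10'))

# Persist the trained model, metrics, and metadata to a directory.
result.save_to_directory('nations_transe')
```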

PyKEEN is designed to be extensible: datasets, models, interactions, losses, regularizers, training loops, negative samplers, stoppers, evaluators, and result trackers can all be selected by name or swapped for custom implementations, as sketched below.
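For example, most components can be selected by name through keyword arguments of the pipeline function. The snippet below is only a sketch: the component names are taken from the tables later in this README, and the concrete hyper-parameter values are placeholders.

```python
from pykeen.pipeline import pipeline

# Sketch: mix and match components by name; all of the names used here appear
# in the component tables below (models, losses, training loops, samplers, ...).
result = pipeline(
    dataset='nations',
    model='TransE',
    model_kwargs=dict(embedding_dim=128),
    loss='softplus',
    training_loop='slcwa',
    negative_sampler='bernoulli',
    optimizer='Adam',
    training_kwargs=dict(num_epochs=100),
    evaluator='rankbased',
    stopper='early',
)
```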

The full documentation can be found at https://pykeen.readthedocs.io.

Implementation

Below are the datasets, representations, interactions, models, losses, regularizers, training loops, negative samplers, stoppers, evaluators, metrics, and result trackers implemented in PyKEEN.

Datasets

The following 37 datasets are built into PyKEEN. The citation for each dataset corresponds to either the paper describing the dataset, the first paper published using the dataset with knowledge graph embedding models, or the URL for the dataset if neither of the first two is available. If you want to use a custom dataset, see the Bring Your Own Dataset tutorial (a minimal sketch also follows the table below). If you have a suggestion for another dataset to include in PyKEEN, please let us know here.

| Name | Documentation | Citation | Entities | Relations | Triples |
|------|---------------|----------|----------|-----------|---------|
| Aristo-v4 | pykeen.datasets.AristoV4 | Chen et al., 2021 | 42016 | 1593 | 279425 |
| BioKG | pykeen.datasets.BioKG | Walsh et al., 2019 | 105524 | 17 | 2067997 |
| Clinical Knowledge Graph | pykeen.datasets.CKG | Santos et al., 2020 | 7617419 | 11 | 26691525 |
| CN3l Family | pykeen.datasets.CN3l | Chen et al., 2017 | 3206 | 42 | 21777 |
| CoDEx (large) | pykeen.datasets.CoDExLarge | Safavi et al., 2020 | 77951 | 69 | 612437 |
| CoDEx (medium) | pykeen.datasets.CoDExMedium | Safavi et al., 2020 | 17050 | 51 | 206205 |
| CoDEx (small) | pykeen.datasets.CoDExSmall | Safavi et al., 2020 | 2034 | 42 | 36543 |
| ConceptNet | pykeen.datasets.ConceptNet | Speer et al., 2017 | 28370083 | 50 | 34074917 |
| Countries | pykeen.datasets.Countries | Bouchard et al., 2015 | 271 | 2 | 1158 |
| Commonsense Knowledge Graph | pykeen.datasets.CSKG | Ilievski et al., 2020 | 2087833 | 58 | 4598728 |
| DB100K | pykeen.datasets.DB100K | Ding et al., 2018 | 99604 | 470 | 697479 |
| DBpedia50 | pykeen.datasets.DBpedia50 | Shi et al., 2017 | 24624 | 351 | 34421 |
| Drug Repositioning Knowledge Graph | pykeen.datasets.DRKG | gnn4dr/DRKG | 97238 | 107 | 5874257 |
| FB15k | pykeen.datasets.FB15k | Bordes et al., 2013 | 14951 | 1345 | 592213 |
| FB15k-237 | pykeen.datasets.FB15k237 | Toutanova et al., 2015 | 14505 | 237 | 310079 |
| Global Biotic Interactions | pykeen.datasets.Globi | Poelen et al., 2014 | 404207 | 39 | 1966385 |
| Hetionet | pykeen.datasets.Hetionet | Himmelstein et al., 2017 | 45158 | 24 | 2250197 |
| Kinships | pykeen.datasets.Kinships | Kemp et al., 2006 | 104 | 25 | 10686 |
| Nations | pykeen.datasets.Nations | ZhenfengLei/KGDatasets | 14 | 55 | 1992 |
| NationsL | pykeen.datasets.NationsLiteral | pykeen/pykeen | 14 | 55 | 1992 |
| OGB BioKG | pykeen.datasets.OGBBioKG | Hu et al., 2020 | 93773 | 51 | 5088434 |
| OGB WikiKG2 | pykeen.datasets.OGBWikiKG2 | Hu et al., 2020 | 2500604 | 535 | 17137181 |
| OpenBioLink | pykeen.datasets.OpenBioLink | Breit et al., 2020 | 180992 | 28 | 4563407 |
| OpenBioLink LQ | pykeen.datasets.OpenBioLinkLQ | Breit et al., 2020 | 480876 | 32 | 27320889 |
| OpenEA Family | pykeen.datasets.OpenEA | Sun et al., 2020 | 15000 | 248 | 38265 |
| PharMeBINet | pykeen.datasets.PharMeBINet | Königs et al., 2022 | 2869407 | 208 | 15883653 |
| PharmKG | pykeen.datasets.PharmKG | Zheng et al., 2020 | 188296 | 39 | 1093236 |
| PharmKG8k | pykeen.datasets.PharmKG8k | Zheng et al., 2020 | 7247 | 28 | 485787 |
| PrimeKG | pykeen.datasets.PrimeKG | Chandak et al., 2022 | 129375 | 30 | 8100498 |
| Unified Medical Language System | pykeen.datasets.UMLS | ZhenfengLei/KGDatasets | 135 | 46 | 6529 |
| WD50K (triples) | pykeen.datasets.WD50KT | Galkin et al., 2020 | 40107 | 473 | 232344 |
| Wikidata5M | pykeen.datasets.Wikidata5M | Wang et al., 2019 | 4594149 | 822 | 20624239 |
| WK3l-120k Family | pykeen.datasets.WK3l120k | Chen et al., 2017 | 119748 | 3109 | 1375406 |
| WK3l-15k Family | pykeen.datasets.WK3l15k | Chen et al., 2017 | 15126 | 1841 | 209041 |
| WordNet-18 | pykeen.datasets.WN18 | Bordes et al., 2014 | 40943 | 18 | 151442 |
| WordNet-18 (RR) | pykeen.datasets.WN18RR | Toutanova et al., 2015 | 40559 | 11 | 92583 |
| YAGO3-10 | pykeen.datasets.YAGO310 | Mahdisoltani et al., 2015 | 123143 | 37 | 1089000 |
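As mentioned above, you can also bring your own dataset instead of using a built-in one. Below is a minimal sketch assuming hypothetical tab-separated triple files `train.tsv` and `test.tsv`; see the Bring Your Own Dataset tutorial for the full range of options.

```python
from pykeen.pipeline import pipeline
from pykeen.triples import TriplesFactory

# Hypothetical paths to tab-separated (head, relation, tail) files.
training = TriplesFactory.from_path('train.tsv')
testing = TriplesFactory.from_path(
    'test.tsv',
    # Reuse the training vocabulary so that entity/relation IDs stay consistent.
    entity_to_id=training.entity_to_id,
    relation_to_id=training.relation_to_id,
)

result = pipeline(training=training, testing=testing, model='TransE')
```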

Inductive Datasets

The following 5 inductive datasets are built into PyKEEN.

| Name | Documentation | Citation |
|------|---------------|----------|
| ILPC2022 Large | pykeen.datasets.ILPC2022Large | Galkin et al., 2022 |
| ILPC2022 Small | pykeen.datasets.ILPC2022Small | Galkin et al., 2022 |
| FB15k-237 | pykeen.datasets.InductiveFB15k237 | Teru et al., 2020 |
| NELL | pykeen.datasets.InductiveNELL | Teru et al., 2020 |
| WordNet-18 (RR) | pykeen.datasets.InductiveWN18RR | Teru et al., 2020 |

Representations

The following 20 representations are implemented by PyKEEN.

| Name | Reference |
|------|-----------|
| Backfill | pykeen.nn.BackfillRepresentation |
| Text Encoding | pykeen.nn.BiomedicalCURIERepresentation |
| Combined | pykeen.nn.CombinedRepresentation |
| Embedding | pykeen.nn.Embedding |
| Featurized Message Passing | pykeen.nn.FeaturizedMessagePassingRepresentation |
| Low Rank Embedding | pykeen.nn.LowRankRepresentation |
| NodePiece | pykeen.nn.NodePieceRepresentation |
| Partition | pykeen.nn.PartitionRepresentation |
| R-GCN | pykeen.nn.RGCNRepresentation |
| Simple Message Passing | pykeen.nn.SimpleMessagePassingRepresentation |
| CompGCN | pykeen.nn.SingleCompGCNRepresentation |
| Subset Representation | pykeen.nn.SubsetRepresentation |
| Tensor-Train | pykeen.nn.TensorTrainRepresentation |
| Text Encoding | pykeen.nn.TextRepresentation |
| Tokenization | pykeen.nn.TokenizationRepresentation |
| Transformed | pykeen.nn.TransformedRepresentation |
| Typed Message Passing | pykeen.nn.TypedMessagePassingRepresentation |
| Visual | pykeen.nn.VisualRepresentation |
| Wikidata Text Encoding | pykeen.nn.WikidataTextRepresentation |
| Wikidata Visual | pykeen.nn.WikidataVisualRepresentation |

Interactions

The following 34 interactions are implemented by PyKEEN; a brief usage sketch follows the table.

| Name | Reference | Citation |
|------|-----------|----------|
| AutoSF | pykeen.nn.AutoSFInteraction | Zhang et al., 2020 |
| BoxE | pykeen.nn.BoxEInteraction | Abboud et al., 2020 |
| ComplEx | pykeen.nn.ComplExInteraction | Trouillon et al., 2016 |
| ConvE | pykeen.nn.ConvEInteraction | Dettmers et al., 2018 |
| ConvKB | pykeen.nn.ConvKBInteraction | Nguyen et al., 2018 |
| Canonical Tensor Decomposition | pykeen.nn.CPInteraction | Lacroix et al., 2018 |
| CrossE | pykeen.nn.CrossEInteraction | Zhang et al., 2019 |
| DistMA | pykeen.nn.DistMAInteraction | Shi et al., 2019 |
| DistMult | pykeen.nn.DistMultInteraction | Yang et al., 2014 |
| ER-MLP | pykeen.nn.ERMLPInteraction | Dong et al., 2014 |
| ER-MLP (E) | pykeen.nn.ERMLPEInteraction | Sharifzadeh et al., 2019 |
| HolE | pykeen.nn.HolEInteraction | Nickel et al., 2016 |
| KG2E | pykeen.nn.KG2EInteraction | He et al., 2015 |
| LineaRE | pykeen.nn.LineaREInteraction | Peng et al., 2020 |
| MultiLinearTucker | pykeen.nn.MultiLinearTuckerInteraction | Tucker et al., 1966 |
| MuRE | pykeen.nn.MuREInteraction | Balažević et al., 2019 |
| NTN | pykeen.nn.NTNInteraction | Socher et al., 2013 |
| PairRE | pykeen.nn.PairREInteraction | Chao et al., 2020 |
| ProjE | pykeen.nn.ProjEInteraction | Shi et al., 2017 |
| QuatE | pykeen.nn.QuatEInteraction | Zhang et al., 2019 |
| RESCAL | pykeen.nn.RESCALInteraction | Nickel et al., 2011 |
| RotatE | pykeen.nn.RotatEInteraction | Sun et al., 2019 |
| Structured Embedding | pykeen.nn.SEInteraction | Bordes et al., 2011 |
| SimplE | pykeen.nn.SimplEInteraction | Kazemi et al., 2018 |
| TorusE | pykeen.nn.TorusEInteraction | Ebisu et al., 2018 |
| TransD | pykeen.nn.TransDInteraction | Ji et al., 2015 |
| TransE | pykeen.nn.TransEInteraction | Bordes et al., 2013 |
| TransF | pykeen.nn.TransFInteraction | Feng et al., 2016 |
| Transformer | pykeen.nn.TransformerInteraction | Galkin et al., 2020 |
| TransH | pykeen.nn.TransHInteraction | Wang et al., 2014 |
| TransR | pykeen.nn.TransRInteraction | Lin et al., 2015 |
| TripleRE | pykeen.nn.TripleREInteraction | Yu et al., 2021 |
| TuckER | pykeen.nn.TuckERInteraction | Balažević et al., 2019 |
| Unstructured Model | pykeen.nn.UMInteraction | Bordes et al., 2014 |
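Interaction functions can also be used on their own as score functions over plain tensors, independently of a full model. The sketch below assumes that an interaction module can be called directly with head, relation, and tail representations whose trailing dimension is the embedding dimension; consult the pykeen.nn documentation for the exact calling convention.

```python
import torch

from pykeen.nn import TransEInteraction

# TransE scores a triple by the negative p-norm of h + r - t; p selects the norm.
interaction = TransEInteraction(p=2)

# Assumption for this sketch: call with (head, relation, tail) tensors of shape
# (batch_size, embedding_dim) to obtain one score per triple.
h, r, t = (torch.randn(4, 64) for _ in range(3))
scores = interaction(h, r, t)
print(scores.shape)
```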

Models

The following 40 models are implemented by PyKEEN.

| Name | Model | Citation |
|------|-------|----------|
| AutoSF | pykeen.models.AutoSF | Zhang et al., 2020 |
| BoxE | pykeen.models.BoxE | Abboud et al., 2020 |
| Canonical Tensor Decomposition | pykeen.models.CP | Lacroix et al., 2018 |
| CompGCN | pykeen.models.CompGCN | Vashishth et al., 2020 |
| ComplEx | pykeen.models.ComplEx | Trouillon et al., 2016 |
| ComplEx Literal | pykeen.models.ComplExLiteral | Kristiadi et al., 2018 |
| ConvE | pykeen.models.ConvE | Dettmers et al., 2018 |
| ConvKB | pykeen.models.ConvKB | Nguyen et al., 2018 |
| CooccurrenceFiltered | pykeen.models.CooccurrenceFilteredModel | Berrendorf et al., 2022 |
| CrossE | pykeen.models.CrossE | Zhang et al., 2019 |
| DistMA | pykeen.models.DistMA | Shi et al., 2019 |
| DistMult | pykeen.models.DistMult | Yang et al., 2014 |
| DistMult Literal | pykeen.models.DistMultLiteral | Kristiadi et al., 2018 |
| DistMult Literal (Gated) | pykeen.models.DistMultLiteralGated | Kristiadi et al., 2018 |
| ER-MLP | pykeen.models.ERMLP | Dong et al., 2014 |
| ER-MLP (E) | pykeen.models.ERMLPE | Sharifzadeh et al., 2019 |
| Fixed Model | pykeen.models.FixedModel | Berrendorf et al., 2021 |
| HolE | pykeen.models.HolE | Nickel et al., 2016 |
| InductiveNodePiece | pykeen.models.InductiveNodePiece | Galkin et al., 2021 |
| InductiveNodePieceGNN | pykeen.models.InductiveNodePieceGNN | Galkin et al., 2021 |
| KG2E | pykeen.models.KG2E | He et al., 2015 |
| MuRE | pykeen.models.MuRE | Balažević et al., 2019 |
| NTN | pykeen.models.NTN | Socher et al., 2013 |
| NodePiece | pykeen.models.NodePiece | Galkin et al., 2021 |
| PairRE | pykeen.models.PairRE | Chao et al., 2020 |
| ProjE | pykeen.models.ProjE | Shi et al., 2017 |
| QuatE | pykeen.models.QuatE | Zhang et al., 2019 |
| R-GCN | pykeen.models.RGCN | Schlichtkrull et al., 2018 |
| RESCAL | pykeen.models.RESCAL | Nickel et al., 2011 |
| RotatE | pykeen.models.RotatE | Sun et al., 2019 |
| SimplE | pykeen.models.SimplE | Kazemi et al., 2018 |
| Structured Embedding | pykeen.models.SE | Bordes et al., 2011 |
| TorusE | pykeen.models.TorusE | Ebisu et al., 2018 |
| TransD | pykeen.models.TransD | Ji et al., 2015 |
| TransE | pykeen.models.TransE | Bordes et al., 2013 |
| TransF | pykeen.models.TransF | Feng et al., 2016 |
| TransH | pykeen.models.TransH | Wang et al., 2014 |
| TransR | pykeen.models.TransR | Lin et al., 2015 |
| TuckER | pykeen.models.TuckER | Balažević et al., 2019 |
| Unstructured Model | pykeen.models.UM | Bordes et al., 2014 |

Losses

The following 15 losses are implemented by PyKEEN.

| Name | Reference | Description |
|------|-----------|-------------|
| Adversarially weighted binary cross entropy (with logits) | pykeen.losses.AdversarialBCEWithLogitsLoss | An adversarially weighted BCE loss. |
| Binary cross entropy (after sigmoid) | pykeen.losses.BCEAfterSigmoidLoss | The numerically unstable version of explicit Sigmoid + BCE loss. |
| Binary cross entropy (with logits) | pykeen.losses.BCEWithLogitsLoss | The binary cross entropy loss. |
| Cross entropy | pykeen.losses.CrossEntropyLoss | The cross entropy loss that evaluates the cross entropy after softmax output. |
| Double Margin | pykeen.losses.DoubleMarginLoss | A limit-based scoring loss with separate margins for positive and negative elements from [sun2018]. |
| Focal | pykeen.losses.FocalLoss | The focal loss proposed by [lin2018]. |
| InfoNCE loss with additive margin | pykeen.losses.InfoNCELoss | The InfoNCE loss with additive margin proposed by [wang2022]. |
| Margin ranking | pykeen.losses.MarginRankingLoss | The pairwise hinge loss (i.e., margin ranking loss). |
| Mean squared error | pykeen.losses.MSELoss | The mean squared error loss. |
| Self-adversarial negative sampling | pykeen.losses.NSSALoss | The self-adversarial negative sampling loss function proposed by [sun2019]. |
| Pairwise logistic | pykeen.losses.PairwiseLogisticLoss | The pairwise logistic loss. |
| Pointwise Hinge | pykeen.losses.PointwiseHingeLoss | The pointwise hinge loss. |
| Soft margin ranking | pykeen.losses.SoftMarginRankingLoss | The soft pairwise hinge loss (i.e., soft margin ranking loss). |
| Softplus | pykeen.losses.SoftplusLoss | The pointwise logistic loss (i.e., softplus loss). |
| Soft Pointwise Hinge | pykeen.losses.SoftPointwiseHingeLoss | The soft pointwise hinge loss. |

Regularizers

The following 6 regularizers are implemented by PyKEEN.

| Name | Reference | Description |
|------|-----------|-------------|
| combined | pykeen.regularizers.CombinedRegularizer | A convex combination of regularizers. |
| lp | pykeen.regularizers.LpRegularizer | A simple L_p norm based regularizer. |
| no | pykeen.regularizers.NoRegularizer | A regularizer which does not perform any regularization. |
| normlimit | pykeen.regularizers.NormLimitRegularizer | A regularizer which formulates a soft constraint on a maximum norm. |
| orthogonality | pykeen.regularizers.OrthogonalityRegularizer | A regularizer for the soft orthogonality constraints from [wang2014]. |
| powersum | pykeen.regularizers.PowerSumRegularizer | A simple x^p based regularizer. |

Training Loops

The following 3 training loops are implemented in PyKEEN; a sketch of using one directly follows the table.

| Name | Reference | Description |
|------|-----------|-------------|
| lcwa | pykeen.training.LCWATrainingLoop | A training loop that is based upon the local closed world assumption (LCWA). |
| slcwa | pykeen.training.SLCWATrainingLoop | A training loop that uses the stochastic local closed world assumption (sLCWA) training approach. |
| symmetriclcwa | pykeen.training.SymmetricLCWATrainingLoop | A "symmetric" LCWA training loop that scores heads and tails at once. |
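For finer-grained control than the pipeline function offers, a training loop can also be used directly. The following is a minimal sketch assuming the constructor and train() arguments shown here (model, triples_factory, optimizer, num_epochs, batch_size); argument names may differ slightly between PyKEEN versions.

```python
from torch.optim import Adam

from pykeen.datasets import Nations
from pykeen.models import TransE
from pykeen.training import SLCWATrainingLoop

dataset = Nations()
model = TransE(triples_factory=dataset.training)

# Sketch: wire the model, the training triples, and an optimizer into a loop
# that trains under the stochastic local closed world assumption (sLCWA).
training_loop = SLCWATrainingLoop(
    model=model,
    triples_factory=dataset.training,
    optimizer=Adam(params=model.get_grad_params()),
)
training_loop.train(triples_factory=dataset.training, num_epochs=5, batch_size=256)
```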

Negative Samplers

The following 3 negative samplers are implemented in PyKEEN.

| Name | Reference | Description |
|------|-----------|-------------|
| basic | pykeen.sampling.BasicNegativeSampler | A basic negative sampler. |
| bernoulli | pykeen.sampling.BernoulliNegativeSampler | An implementation of the Bernoulli negative sampling approach proposed by [wang2014]. |
| pseudotyped | pykeen.sampling.PseudoTypedNegativeSampler | A sampler that accounts for which entities co-occur with a relation. |

Stoppers

The following 2 stoppers are implemented in PyKEEN.

| Name | Reference | Description |
|------|-----------|-------------|
| early | pykeen.stoppers.EarlyStopper | A harness for early stopping. |
| nop | pykeen.stoppers.NopStopper | A stopper that does nothing. |

Evaluators

The following 5 evaluators are implemented in PyKEEN; a sketch of evaluating a trained model follows the table.

| Name | Reference | Description |
|------|-----------|-------------|
| classification | pykeen.evaluation.ClassificationEvaluator | An evaluator that uses classification metrics. |
| macrorankbased | pykeen.evaluation.MacroRankBasedEvaluator | Macro-average rank-based evaluation. |
| ogb | pykeen.evaluation.OGBEvaluator | A sampled, rank-based evaluator that applies a custom OGB evaluation. |
| rankbased | pykeen.evaluation.RankBasedEvaluator | A rank-based evaluator for KGE models. |
| sampledrankbased | pykeen.evaluation.SampledRankBasedEvaluator | A rank-based evaluator using sampled negatives instead of all negatives. |
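An evaluator can likewise be applied to an already trained model. The sketch below assumes the evaluate() arguments shown here (mapped_triples and additional_filter_triples for filtered rank-based evaluation) and the 'hits@10' metric key; treat it as an outline rather than the authoritative API.

```python
from pykeen.datasets import Nations
from pykeen.evaluation import RankBasedEvaluator
from pykeen.pipeline import pipeline

dataset = Nations()
result = pipeline(dataset=dataset, model='TransE', training_kwargs=dict(num_epochs=5))

# Sketch: filtered rank-based evaluation on the test triples, where known
# training/validation triples are removed from the candidate set.
evaluator = RankBasedEvaluator(filtered=True)
metrics = evaluator.evaluate(
    model=result.model,
    mapped_triples=dataset.testing.mapped_triples,
    additional_filter_triples=[
        dataset.training.mapped_triples,
        dataset.validation.mapped_triples,
    ],
)
print(metrics.get_metric('hits@10'))
```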

Metrics

The following 44 metrics are implemented in PyKEEN.

| Name | Interval | Direction | Description | Type |
|------|----------|-----------|-------------|------|
| Accuracy | $[0, 1]$ | 📈 | The ratio of the number of correct classifications to the total number. | Classification |
| Area Under The Receiver Operating Characteristic Curve | $[0, 1]$ | 📈 | The area under the receiver operating characteristic curve. | Classification |
| Average Precision Score | $[0, 1]$ | 📈 | The average precision across different thresholds. | Classification |
| Balanced Accuracy Score | $[0, 1]$ | 📈 | The average of recall obtained on each class. | Classification |
| Diagnostic Odds Ratio | $[0, ∞)$ | 📈 | The ratio of positive and negative likelihood ratio. | Classification |
| F1 Score | $[0, 1]$ | 📈 | The harmonic mean of precision and recall. | Classification |
| False Discovery Rate | $[0, 1]$ | 📉 | The proportion of predicted positives which are true negatives. | Classification |
| False Negative Rate | $[0, 1]$ | 📉 | The probability that a truly positive triple is predicted negative. | Classification |
| False Omission Rate | $[0, 1]$ | 📉 | The proportion of predicted negatives which are true positives. | Classification |
| False Positive Rate | $[0, 1]$ | 📉 | The probability that a truly negative triple is predicted positive. | Classification |
| Fowlkes Mallows Index | $[0, 1]$ | 📈 | The Fowlkes Mallows index. | Classification |
| Informedness | $[-1, 1]$ | 📈 | The informedness metric. | Classification |
| Matthews Correlation Coefficient | $[-1, 1]$ | 📈 | The Matthews correlation coefficient (MCC). | Classification |
| Negative Likelihood Ratio | $[0, ∞)$ | 📉 | The ratio of the false negative rate to the true negative rate. | Classification |
| Negative Predictive Value | $[0, 1]$ | 📈 | The proportion of predicted negatives which are true negatives. | Classification |
| Number of Scores | $[0, ∞)$ | 📈 | The number of scores. | Classification |
| Positive Likelihood Ratio | $[0, ∞)$ | 📈 | The ratio of the true positive rate to the false positive rate. | Classification |
| Positive Predictive Value | $[0, 1]$ | 📈 | The proportion of predicted positives which are true positives. | Classification |
| Prevalence Threshold | $[0, ∞)$ | 📉 | The prevalence threshold. | Classification |
| Threat Score | $[0, 1]$ | 📈 | The ratio of true positives to the number of true positives, false negatives, and false positives. | Classification |
| True Negative Rate | $[0, 1]$ | 📈 | The probability that a truly negative triple is predicted negative. | Classification |
| True Positive Rate | $[0, 1]$ | 📈 | The probability that a truly positive triple is predicted positive. | Classification |
| Adjusted Arithmetic Mean Rank (AAMR) | $[0, 2)$ | 📉 | The mean over all ranks divided by its expected value. | Ranking |
| Adjusted Arithmetic Mean Rank Index (AAMRI) | $[-1, 1]$ | 📈 | The re-indexed adjusted mean rank (AAMR). | Ranking |
| Adjusted Geometric Mean Rank Index (AGMRI) | $(\frac{-E[f]}{1-E[f]}, 1]$ | 📈 | The re-indexed adjusted geometric mean rank (AGMRI). | Ranking |
| Adjusted Hits at K | $(\frac{-E[f]}{1-E[f]}, 1]$ | 📈 | The re-indexed adjusted hits at K. | Ranking |
| Adjusted Inverse Harmonic Mean Rank | $(\frac{-E[f]}{1-E[f]}, 1]$ | 📈 | The re-indexed adjusted MRR. | Ranking |
| Geometric Mean Rank (GMR) | $[1, ∞)$ | 📉 | The geometric mean over all ranks. | Ranking |
| Harmonic Mean Rank (HMR) | $[1, ∞)$ | 📉 | The harmonic mean over all ranks. | Ranking |
| Hits @ K | $[0, 1]$ | 📈 | The relative frequency of ranks not larger than a given k. | Ranking |
| Inverse Arithmetic Mean Rank (IAMR) | $(0, 1]$ | 📈 | The inverse of the arithmetic mean over all ranks. | Ranking |
| Inverse Geometric Mean Rank (IGMR) | $(0, 1]$ | 📈 | The inverse of the geometric mean over all ranks. | Ranking |
| Inverse Median Rank | $(0, 1]$ | 📈 | The inverse of the median over all ranks. | Ranking |
| Mean Rank (MR) | $[1, ∞)$ | 📉 | The arithmetic mean over all ranks. | Ranking |
| Mean Reciprocal Rank (MRR) | $(0, 1]$ | 📈 | The inverse of the harmonic mean over all ranks. | Ranking |
| Median Rank | $[1, ∞)$ | 📉 | The median over all ranks. | Ranking |
| z-Geometric Mean Rank (zGMR) | $(-∞, ∞)$ | 📈 | The z-scored geometric mean rank. | Ranking |
| z-Hits at K | $(-∞, ∞)$ | 📈 | The z-scored hits at K. | Ranking |
| z-Mean Rank (zMR) | $(-∞, ∞)$ | 📈 | The z-scored mean rank. | Ranking |
| z-Mean Reciprocal Rank (zMRR) | $(-∞, ∞)$ | 📈 | The z-scored mean reciprocal rank. | Ranking |

Trackers

The following 8 trackers are implemented in PyKEEN.

| Name | Reference | Description |
|------|-----------|-------------|
| console | pykeen.trackers.ConsoleResultTracker | A tracker that prints directly to the console. |
| csv | pykeen.trackers.CSVResultTracker | Tracking results to a CSV file. |
| json | pykeen.trackers.JSONResultTracker | Tracking results to a JSON lines file. |
| mlflow | pykeen.trackers.MLFlowResultTracker | A tracker for MLflow. |
| neptune | pykeen.trackers.NeptuneResultTracker | A tracker for Neptune.ai. |
| python | pykeen.trackers.PythonResultTracker | A tracker which stores everything in Python dictionaries. |
| tensorboard | pykeen.trackers.TensorBoardResultTracker | A tracker for TensorBoard. |
| wandb | pykeen.trackers.WANDBResultTracker | A tracker for Weights and Biases. |

Experimentation

Reproduction

PyKEEN includes a set of curated experimental settings for reproducing past landmark experiments. They can be accessed and run like:

pykeen experiments reproduce tucker balazevic2019 fb15k

The three arguments are the model name, the reference, and the dataset. The output directory can optionally be set with -d.

Ablation

PyKEEN includes the ability to specify ablation studies using the hyper-parameter optimization module. They can be run like:

pykeen experiments ablation ~/path/to/config.json
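Because ablation studies build on the hyper-parameter optimization module, a single Optuna-backed HPO run can also be started directly from Python. The following is a minimal sketch using pykeen.hpo.hpo_pipeline; the number of trials and the output directory name are placeholders.

```python
from pykeen.hpo import hpo_pipeline

# Sketch: run a small Optuna-backed hyper-parameter search for TransE on Nations.
hpo_result = hpo_pipeline(
    n_trials=30,
    dataset='Nations',
    model='TransE',
)
hpo_result.save_to_directory('hpo_transe_nations')
```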

Large-scale Reproducibility and Benchmarking Study

We used PyKEEN to perform a large-scale reproducibility and benchmarking study, which is described in our article:

@article{ali2020benchmarking,
  author={Ali, Mehdi and Berrendorf, Max and Hoyt, Charles Tapley and Vermue, Laurent and Galkin, Mikhail and Sharifzadeh, Sahand and Fischer, Asja and Tresp, Volker and Lehmann, Jens},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={Bringing Light Into the Dark: A Large-scale Evaluation of Knowledge Graph Embedding Models under a Unified Framework},
  year={2021},
  pages={1-1},
  doi={10.1109/TPAMI.2021.3124805}
}

We have made all code, experimental configurations, results, and analyses that led to our interpretations available at https://github.com/pykeen/benchmarking.

Contributing

Contributions, whether filing an issue, making a pull request, or forking, are appreciated. See CONTRIBUTING.md for more information on getting involved.

If you have questions, please use the GitHub discussions feature at https://github.com/pykeen/pykeen/discussions/new.

Acknowledgements

Supporters

This project has been supported by several organizations (in alphabetical order).

Funding

The development of PyKEEN has been funded by the following grants:

| Funding Body | Program | Grant |
|--------------|---------|-------|
| DARPA | Young Faculty Award (PI: Benjamin Gyori) | W911NF2010255 |
| DARPA | Automating Scientific Knowledge Extraction (ASKE) | HR00111990009 |
| German Federal Ministry of Education and Research (BMBF) | Maschinelles Lernen mit Wissensgraphen (MLWin) | 01IS18050D |
| German Federal Ministry of Education and Research (BMBF) | Munich Center for Machine Learning (MCML) | 01IS18036A |
| Innovation Fund Denmark (Innovationsfonden) | Danish Center for Big Data Analytics driven Innovation (DABAI) | Grand Solutions |

Logo

The PyKEEN logo was designed by Carina Steinborn.

Citation

If you have found PyKEEN useful in your work, please consider citing our article:

@article{ali2021pykeen,
    author = {Ali, Mehdi and Berrendorf, Max and Hoyt, Charles Tapley and Vermue, Laurent and Sharifzadeh, Sahand and Tresp, Volker and Lehmann, Jens},
    journal = {Journal of Machine Learning Research},
    number = {82},
    pages = {1--6},
    title = {{PyKEEN 1.0: A Python Library for Training and Evaluating Knowledge Graph Embeddings}},
    url = {http://jmlr.org/papers/v22/20-825.html},
    volume = {22},
    year = {2021}
}