<div align="center">
    <img src="https://github.com/zjukg/MEAformer/blob/main/IMG/MEAformer7.png" alt="Logo" width="400">
</div>

# MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid
<!-- <div align="center"> <img src="https://github.com/zjukg/MEAformer/blob/main/IMG/MEAformer.jpg" width="95%" height="auto" /> </div> -->
<p align="center"><i><b>Click to see the Video</b></i></p>

This paper introduces MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, which dynamically predicts the mutual correlation coefficients among modalities for more fine-grained entity-level modality fusion and alignment.
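To make the idea concrete, here is a minimal, illustrative PyTorch sketch of entity-level modality fusion with dynamically predicted per-modality weights. It is a simplification under our own naming (class and variable names are ours, not the repository's); see `model/MEAformer.py` for the actual implementation.

```python
import torch
import torch.nn as nn

class ModalityFusionSketch(nn.Module):
    """Toy sketch of transformer-based entity-level modality fusion."""
    def __init__(self, dim: int = 300, num_heads: int = 2):
        super().__init__()
        # One transformer layer lets every modality token attend to the others.
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads)
        self.score = nn.Linear(dim, 1)  # per-modality confidence score

    def forward(self, modal_embs: torch.Tensor):
        # modal_embs: [batch, num_modalities, dim] -- one token per modality
        # (e.g. graph / relation / attribute / visual embeddings of an entity).
        hidden = self.encoder(modal_embs.transpose(0, 1)).transpose(0, 1)
        weights = torch.softmax(self.score(hidden), dim=1)  # [batch, M, 1]
        fused = (weights * modal_embs).sum(dim=1)           # weighted fusion
        return fused, weights.squeeze(-1)

fused, w = ModalityFusionSketch()(torch.randn(8, 4, 300))
print(fused.shape, w.shape)  # torch.Size([8, 300]) torch.Size([8, 4])
```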
## 🔔 News
- `2024-03`: Our paper "NativE: Multi-modal Knowledge Graph Completion in the Wild" [Repo] is accepted by SIGIR 2024!
- `2024-02`: We preprint our survey "Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey" [Repo].
- `2024-02`: We release the [Repo] for our paper "Unleashing the Power of Imbalanced Modality Information for Multi-modal Knowledge Graph Completion", COLING 2024.
- `2024-02`: We preprint our paper "ASGEA: Exploiting Logic Rules from Align-Subgraphs for Entity Alignment" [Repo].
- `2024-01`: Our paper "Revisit and Outstrip Entity Alignment: A Perspective of Generative Models" [Repo] is accepted by ICLR 2024!
- `2023-07`: We release the [Repo] for our paper "Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment" [Slide], ISWC 2023.
- `2023-04`: We release the complete code and data for MEAformer! [Slide] [Video], ACM MM 2023.
## 🔬 Dependencies

```bash
pip install -r requirement.txt
```
#### Details
- Python (>= 3.7)
- PyTorch (>= 1.6.0)
- numpy (>= 1.19.2)
- Transformers (== 4.21.3)
- easydict (>= 1.10)
- unidecode (>= 1.3.6)
- tensorboard (>= 2.11.0)
## 🚀 Train

- Quick start: using the script file (`run.sh`):

```bash
>> cd MEAformer
>> bash run.sh
```
- Optional: using the bash commands directly:

```bash
>> cd MEAformer
# -----------------------
# ---- non-iterative ----
# -----------------------
# ---- w/o surface ----
# FBDB15K
>> bash run_meaformer.sh 1 FBDB15K norm 0.8 0
>> bash run_meaformer.sh 1 FBDB15K norm 0.5 0
>> bash run_meaformer.sh 1 FBDB15K norm 0.2 0
# FBYG15K
>> bash run_meaformer.sh 1 FBYG15K norm 0.8 0
>> bash run_meaformer.sh 1 FBYG15K norm 0.5 0
>> bash run_meaformer.sh 1 FBYG15K norm 0.2 0
# DBP15K
>> bash run_meaformer.sh 1 DBP15K zh_en 0.3 0
>> bash run_meaformer.sh 1 DBP15K ja_en 0.3 0
>> bash run_meaformer.sh 1 DBP15K fr_en 0.3 0
# ---- w/ surface ----
# DBP15K
>> bash run_meaformer.sh 1 DBP15K zh_en 0.3 1
>> bash run_meaformer.sh 1 DBP15K ja_en 0.3 1
>> bash run_meaformer.sh 1 DBP15K fr_en 0.3 1
# -----------------------
# ------ iterative ------
# -----------------------
# ---- w/o surface ----
# FBDB15K
>> bash run_meaformer_il.sh 1 FBDB15K norm 0.8 0
>> bash run_meaformer_il.sh 1 FBDB15K norm 0.5 0
>> bash run_meaformer_il.sh 1 FBDB15K norm 0.2 0
# FBYG15K
>> bash run_meaformer_il.sh 1 FBYG15K norm 0.8 0
>> bash run_meaformer_il.sh 1 FBYG15K norm 0.5 0
>> bash run_meaformer_il.sh 1 FBYG15K norm 0.2 0
# DBP15K
>> bash run_meaformer_il.sh 1 DBP15K zh_en 0.3 0
>> bash run_meaformer_il.sh 1 DBP15K ja_en 0.3 0
>> bash run_meaformer_il.sh 1 DBP15K fr_en 0.3 0
# ---- w/ surface ----
# DBP15K
>> bash run_meaformer_il.sh 1 DBP15K zh_en 0.3 1
>> bash run_meaformer_il.sh 1 DBP15K ja_en 0.3 1
>> bash run_meaformer_il.sh 1 DBP15K fr_en 0.3 1
```
❗Tip: open `run_meaformer.sh` or `run_meaformer_il.sh` to modify the parameters or the training target.
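For reference, the positional arguments appear to map as annotated below. This reading is inferred from the commands above (e.g., the fourth argument matches the seed-alignment ratios used in the paper), so it is an assumption; please verify it against the variables inside the scripts.

```bash
# Assumed meaning of the positional arguments (verify in run_meaformer.sh):
#   $1  GPU count / id                     e.g. 1
#   $2  dataset                            FBDB15K | FBYG15K | DBP15K
#   $3  split or language pair             norm | zh_en | ja_en | fr_en
#   $4  seed-alignment ratio               0.8 / 0.5 / 0.2 (FB*), 0.3 (DBP15K)
#   $5  surface-form flag                  0 = w/o surface, 1 = w/ surface
>> bash run_meaformer.sh 1 DBP15K zh_en 0.3 1
```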
## 🎯 Results
$\bf{H@1}$ performance under the settings **w/o surface & non-iterative**, following UMAEA. We modified part of MSNEA so that it uses only the attribute types themselves rather than the content of attribute values (see the issues for details):
Method | $\bf{DBP15K_{ZH-EN}}$ | $\bf{DBP15K_{JA-EN}}$ | $\bf{DBP15K_{FR-EN}}$ |
---|---|---|---|
MSNEA | .609 | .541 | .557 |
EVA | .683 | .669 | .686 |
MCLEA | .726 | .719 | .719 |
MEAformer | .772 | .764 | .771 |
UMAEA | .800 | .801 | .818 |
## 📚 Dataset

- ❗NOTE: Download the data from GoogleDrive (1.26G) and unzip it so that the files match the following hierarchy:

```
ROOT
├── data
│   └── mmkg
└── code
    └── MEAformer
```
- Case-analysis Jupyter script: GoogleDrive (180M), based on the raw entity images (needs to be unzipped). We hope it gives you a good understanding of this dataset.
- [Optional] The raw relations & attributes appearing in DBP15K, together with the cases from MEAformer, can be downloaded from Huggingface (150M).
- [Optional] The raw entity images appearing in DBP15K can be downloaded from Baidu Cloud Drive (50GB) with the pass code `mmea`. All images are saved as title-image pairs in dictionaries and can be accessed with the following code:
```python
import pickle

# Each pickle maps an entity title (a DBpedia URL) to its raw image.
zh_images = pickle.load(open("eva_image_resources/dbp15k/zh_dbp15k_link_img_dict_full.pkl", "rb"))
print(zh_images["http://zh.dbpedia.org/resource/香港有線電視"].size)
```
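The precomputed visual features shipped under `pkls/` (see the data tree below) were presumably derived from such raw images. As an illustration only, here is how comparable features could be produced with a pretrained vision encoder; the choice of torchvision's ResNet-152 is our assumption, not necessarily the encoder the authors used (check `src/data.py` for the actual feature format).

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Hypothetical feature extractor; the real *_id_img_feature_dict.pkl files
# may come from a different encoder.
resnet = models.resnet152(pretrained=True)
resnet.fc = torch.nn.Identity()  # keep the 2048-d pooled feature
resnet.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def image_feature(img: Image.Image) -> torch.Tensor:
    """Map one PIL image to a single feature vector."""
    return resnet(preprocess(img.convert("RGB")).unsqueeze(0)).squeeze(0)
```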
#### Code Path
<details>
<summary>Click to expand</summary>

```
MEAformer
├── config.py
├── main.py
├── requirement.txt
├── run_meaformer.sh
├── run_meaformer_il.sh
├── run.sh
├── model
│   ├── __init__.py
│   ├── layers.py
│   ├── MEAformer_loss.py
│   ├── MEAformer.py
│   ├── MEAformer_tools.py
│   └── Tool_model.py
├── src
│   ├── __init__.py
│   ├── distributed_utils.py
│   ├── data.py
│   └── utils.py
└── torchlight
    ├── __init__.py
    ├── logger.py
    ├── metric.py
    └── utils.py
```

</details>
#### Data Path
<details>
<summary>Click to expand</summary>

```
mmkg
├── DBP15K
│   ├── fr_en
│   │   ├── ent_ids_1
│   │   ├── ent_ids_2
│   │   ├── ill_ent_ids
│   │   ├── training_attrs_1
│   │   ├── training_attrs_2
│   │   ├── triples_1
│   │   └── triples_2
│   ├── ja_en
│   │   ├── ent_ids_1
│   │   ├── ent_ids_2
│   │   ├── ill_ent_ids
│   │   ├── training_attrs_1
│   │   ├── training_attrs_2
│   │   ├── triples_1
│   │   └── triples_2
│   ├── translated_ent_name
│   │   ├── dbp_fr_en.json
│   │   ├── dbp_ja_en.json
│   │   └── dbp_zh_en.json
│   └── zh_en
│       ├── ent_ids_1
│       ├── ent_ids_2
│       ├── ill_ent_ids
│       ├── training_attrs_1
│       ├── training_attrs_2
│       ├── triples_1
│       └── triples_2
├── FBDB15K
│   └── norm
│       ├── ent_ids_1
│       ├── ent_ids_2
│       ├── ill_ent_ids
│       ├── training_attrs_1
│       ├── training_attrs_2
│       ├── triples_1
│       └── triples_2
├── FBYG15K
│   └── norm
│       ├── ent_ids_1
│       ├── ent_ids_2
│       ├── ill_ent_ids
│       ├── training_attrs_1
│       ├── training_attrs_2
│       ├── triples_1
│       └── triples_2
├── embedding
│   └── glove.6B.300d.txt
├── pkls
│   ├── dbpedia_wikidata_15k_dense_GA_id_img_feature_dict.pkl
│   ├── dbpedia_wikidata_15k_norm_GA_id_img_feature_dict.pkl
│   ├── FBDB15K_id_img_feature_dict.pkl
│   ├── FBYG15K_id_img_feature_dict.pkl
│   ├── fr_en_GA_id_img_feature_dict.pkl
│   ├── ja_en_GA_id_img_feature_dict.pkl
│   └── zh_en_GA_id_img_feature_dict.pkl
├── MEAformer
└── dump
```

</details>
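As a quick sanity check, the feature pickles under `pkls/` can be inspected directly. A minimal sketch, assuming each file maps an entity id to a precomputed visual feature vector; confirm the exact layout in `src/data.py`:

```python
import pickle

# Assumed layout: {entity_id: feature_vector}; verify against src/data.py.
with open("data/mmkg/pkls/FBDB15K_id_img_feature_dict.pkl", "rb") as f:
    img_features = pickle.load(f)

print(f"{len(img_features)} entities with visual features")
eid = next(iter(img_features))
print(eid, getattr(img_features[eid], "shape", None))
```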
## 🤝 Cite

Please consider citing this paper if you use the code or data from our work. Thanks a lot :)
```bibtex
@inproceedings{DBLP:conf/mm/ChenCZGFHZGPSC23,
author = {Zhuo Chen and
Jiaoyan Chen and
Wen Zhang and
Lingbing Guo and
Yin Fang and
Yufeng Huang and
Yichi Zhang and
Yuxia Geng and
Jeff Z. Pan and
Wenting Song and
Huajun Chen},
title = {MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality
Hybrid},
booktitle = {{ACM} Multimedia},
pages = {3317--3327},
publisher = {{ACM}},
year = {2023}
}
```
## 💡 Acknowledgement
We appreciate MCLEA, MSNEA, EVA, MMEA and many other related works for their open-source contributions.
<a href="https://info.flagcounter.com/VOlE"><img src="https://s11.flagcounter.com/count2/VOlE/bg_FFFFFF/txt_000000/border_F7F7F7/columns_6/maxflags_12/viewers_3/labels_0/pageviews_0/flags_0/percent_0/" alt="Flag Counter" border="0"></a>