<!--<div align="center"> <img src="https://github.com/zjukg/UMAEA/blob/main/IMG/UMAEA2.jpg" alt="Logo" width="600"> </div>-->
Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment
<div align="center"> <img src="https://github.com/zjukg/UMAEA/blob/main/IMG/case.jpg" width="70%" height="auto" /> </div>

In the face of modality incompleteness, some models succumb to overfitting the modality noise and exhibit performance oscillations or declines at high modality-missing rates. This indicates that including additional multi-modal data can sometimes adversely affect EA. To address these challenges, we introduce UMAEA, a robust multi-modal entity alignment approach designed to tackle uncertainly missing and ambiguous visual modalities.
News

- 2024-02: We preprint our survey Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey [Repo].
- 2024-02: We release the [Repo] for our paper Unleashing the Power of Imbalanced Modality Information for Multi-modal Knowledge Graph Completion, COLING 2024.
- 2024-02: We preprint our paper ASGEA: Exploiting Logic Rules from Align-Subgraphs for Entity Alignment [Repo].
- 2024-01: Our paper Revisit and Outstrip Entity Alignment: A Perspective of Generative Models [Repo] is accepted by ICLR 2024!
- 2023-10: We preprint our paper Universal Multi-modal Entity Alignment via Iteratively Fusing Modality Similarity Paths!
- 2023-07: We release the [Repo] for our paper Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment! [Slide], ISWC 2023.
- 2023-04: We release the complete code and data for MEAformer! [Slide] [Video], ACM MM 2023.
Dependencies
pip install -r requirement.txt
Details
- Python (>= 3.7)
- PyTorch (>= 1.6.0)
- numpy (>= 1.19.2)
- Transformers (== 4.21.3)
- easydict (>= 1.10)
- unidecode (>= 1.3.6)
- tensorboard (>= 2.11.0)
Train
- Quick start: using the script file (run.sh)
>> cd UMAEA
>> bash run.sh
- Optional: using the bash command
- Model training recommendation: for more stable and efficient model training, we suggest using the code without CMMI (w/o CMMI) initially. If you plan to use this model as a baseline, we also recommend using the script without CMMI to directly measure the model's performance in an end-to-end scenario.
# Command Details:
# Bash file / GPU / Dataset / Data Split / R_{sa} / R_{img}
# Begin:
# ---------- R_{img} = 0.4 & iter. & w/o CMMI ----------
>> bash run_umaea_00.sh 0 OEA_D_W_15K_V1 norm 0.2 0.4
>> bash run_umaea_00.sh 0 OEA_D_W_15K_V2 norm 0.2 0.4
>> bash run_umaea_00.sh 0 OEA_EN_FR_15K_V1 norm 0.2 0.4
>> bash run_umaea_00.sh 0 OEA_EN_DE_15K_V1 norm 0.2 0.4
>> bash run_umaea_00.sh 0 DBP15K fr_en 0.3 0.4
>> bash run_umaea_00.sh 0 DBP15K ja_en 0.3 0.4
>> bash run_umaea_00.sh 0 DBP15K zh_en 0.3 0.4
# ---------- R_{img} = 0.6 & non-iter. & w/o CMMI ----------
>> bash run_umaea_0.sh 0 OEA_D_W_15K_V1 norm 0.2 0.6
>> bash run_umaea_0.sh 0 OEA_D_W_15K_V2 norm 0.2 0.6
>> bash run_umaea_0.sh 0 OEA_EN_FR_15K_V1 norm 0.2 0.6
>> bash run_umaea_0.sh 0 OEA_EN_DE_15K_V1 norm 0.2 0.6
>> bash run_umaea_0.sh 0 DBP15K fr_en 0.3 0.6
>> bash run_umaea_0.sh 0 DBP15K ja_en 0.3 0.6
>> bash run_umaea_0.sh 0 DBP15K zh_en 0.3 0.6
# --------- R_{img} = 0.1 & non-iter. & w/ CMMI ---------
>> bash run_umaea_012.sh 0 OEA_D_W_15K_V1 norm 0.2 0.1
>> bash run_umaea_012.sh 0 OEA_D_W_15K_V2 norm 0.2 0.1
>> bash run_umaea_012.sh 0 OEA_EN_FR_15K_V1 norm 0.2 0.1
>> bash run_umaea_012.sh 0 OEA_EN_DE_15K_V1 norm 0.2 0.1
>> bash run_umaea_012.sh 0 DBP15K fr_en 0.3 0.1
>> bash run_umaea_012.sh 0 DBP15K ja_en 0.3 0.1
>> bash run_umaea_012.sh 0 DBP15K zh_en 0.3 0.1
# --------- R_{img} = 0.2 & iter. & w/ CMMI ---------
>> bash run_umaea_012012.sh 0 OEA_D_W_15K_V1 norm 0.2 0.2
>> bash run_umaea_012012.sh 0 OEA_D_W_15K_V2 norm 0.2 0.2
>> bash run_umaea_012012.sh 0 OEA_EN_FR_15K_V1 norm 0.2 0.2
>> bash run_umaea_012012.sh 0 OEA_EN_DE_15K_V1 norm 0.2 0.2
>> bash run_umaea_012012.sh 0 DBP15K fr_en 0.3 0.2
>> bash run_umaea_012012.sh 0 DBP15K ja_en 0.3 0.2
>> bash run_umaea_012012.sh 0 DBP15K zh_en 0.3 0.2
Tips: you can open the run_umaea_X.sh file to modify the parameters or the training target.

- stage_epoch: the number of epochs in each stage (1 / 2-1 / 2-2), e.g., "250,0,0" (see the sketch below)
- il_stage_epoch: the number of epochs in each iterative stage (1 / 2-1 / 2-2), e.g., "0,0,0"
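As a hedged illustration of what the comma-separated values mean (the parsing below is an assumption made for clarity, not a copy of the repository's config.py):

```python
# Minimal sketch, assuming each comma-separated number corresponds to one of the
# three training phases (stage 1 / 2-1 / 2-2); see config.py for the actual parsing.
stage_epoch = "250,0,0"       # non-iterative stages
il_stage_epoch = "0,0,0"      # iterative stages

epochs_per_stage = [int(x) for x in stage_epoch.split(",")]        # [250, 0, 0]
il_epochs_per_stage = [int(x) for x in il_stage_epoch.split(",")]  # [0, 0, 0]
print(epochs_per_stage, il_epochs_per_stage)
```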
Standard Results
$\bf{H@1}$ performance under the settings w/o surface & non-iterative. We modified part of MSNEA so that it uses only the attribute types rather than the content of attribute values (see issues for details):
Method | $\bf{DBP15K_{ZH-EN}}$ | $\bf{DBP15K_{JA-EN}}$ | $\bf{DBP15K_{FR-EN}}$ |
---|---|---|---|
MSNEA | .609 | .541 | .557 |
EVA | .683 | .669 | .686 |
MCLEA | .726 | .719 | .719 |
MEAformer | .772 | .764 | .771 |
UMAEA | .800 | .801 | .818 |
Dataset (MMEA-UMVM)
To create our MMEA-UMVM (uncertainly missing visual modality) datasets, we perform random image dropping on the MMEA datasets. Specifically, we randomly discard entity images to achieve varying degrees of visual modality missing, ranging from 0.05 to the maximum $R_{img}$ of the raw datasets with a step of 0.05 or 0.1. In total, we obtain 97 data splits, as follows:
Dataset | $R_{img}$ |
---|---|
$DBP15K_{ZH-EN}$ | $0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.45, 0.5, 0.55, 0.6, 0.7, 0.75, 0.7829~(STD)$ |
$DBP15K_{JA-EN}$ | $0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.45, 0.5, 0.55, 0.6, 0.7, 0.7032~(STD)$ |
$DBP15K_{FR-EN}$ | $0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.45, 0.5, 0.55, 0.6, 0.6758~(STD)$ |
$OpenEA_{EN-FR}$ | $0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.45, 0.5, 0.55, 0.6, 0.7, 0.8, 0.9, 0.95, 1.0~(STD)$ |
$OpenEA_{EN-DE}$ | $0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.45, 0.5, 0.55, 0.6, 0.7, 0.8, 0.9, 0.95, 1.0~(STD)$ |
$OpenEA_{D-W-V1}$ | $0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.45, 0.5, 0.55, 0.6, 0.7, 0.8, 0.9, 0.95, 1.0~(STD)$ |
$OpenEA_{D-W-V2}$ | $0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.45, 0.5, 0.55, 0.6, 0.7, 0.8, 0.9, 0.95, 1.0~(STD)$ |
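For reference, here is a minimal sketch of how such a split could be produced from a full id-to-image-feature dictionary. It is an illustrative assumption, not the official preprocessing script; the file names follow the pkl files listed in the data tree below:

```python
import pickle
import random

def drop_images(full_pkl, out_pkl, r_img, seed=2023):
    """Keep a random fraction r_img of entity image features; drop the rest."""
    id2feat = pickle.load(open(full_pkl, "rb"))
    kept = set(random.Random(seed).sample(sorted(id2feat), int(len(id2feat) * r_img)))
    pickle.dump({i: f for i, f in id2feat.items() if i in kept}, open(out_pkl, "wb"))

# e.g. an R_img = 0.4 split for OEA_D_W_15K_V1:
# drop_images("mmkg/OpenEA/pkl/OEA_D_W_15K_V1_id_img_feature_dict.pkl",
#             "mmkg/OpenEA/pkl/OEA_D_W_15K_V1_id_img_feature_dict_0.4.pkl", 0.4)
```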
Download:
- [ Option ] The raw Multi-OpenEA images are available at Baidu Cloud Drive with the pass code aoo1. We only keep the RANK NO.1 image for each entity.
- Case analysis Jupyter script: GoogleDrive (180M), based on the raw images of the DBP15K entities (needs to be unzipped).
- [ Option ] The raw images of the entities appearing in DBP15K can be downloaded from Baidu Cloud Drive (50GB) with the pass code mmea. All images are saved as title-image pairs in dictionaries and can be accessed with the following code:
import pickle

# Load the Chinese-side DBP15K title-to-image dictionary and inspect one entry
zh_images = pickle.load(open("eva_image_resources/dbp15k/zh_dbp15k_link_img_dict_full.pkl", 'rb'))
print(zh_images["http://zh.dbpedia.org/resource/香港有線電視"].size)
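If the stored values are PIL images (which the .size call above suggests), a single image can presumably be written back to disk as in this small sketch:

```python
# Assumes the dictionary values are PIL.Image objects; adjust if the pickle
# stores raw bytes or arrays instead.
first_title, first_img = next(iter(zh_images.items()))
first_img.save("example.png")  # write one image back to disk
print(first_title)
```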
- The training data is available at GoogleDrive (6.09G). Unzip it so that the files satisfy the following file hierarchy (a minimal extraction sketch follows the tree):
ROOT
├── data
│   └── mmkg
└── code
    └── UMAEA
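For example, assuming the archive is downloaded as mmkg.zip (the archive name here is an assumption), the layout above can be produced roughly as follows:

```python
import os
import zipfile

# Sketch only: extract the downloaded archive into ROOT/data/ so that
# ROOT/data/mmkg/... sits next to ROOT/code/UMAEA.
os.makedirs("ROOT/data", exist_ok=True)
with zipfile.ZipFile("mmkg.zip") as zf:
    zf.extractall("ROOT/data")
```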
Code Path
<details> <summary>Click</summary>

UMAEA
├── config.py
├── main.py
├── requirement.txt
├── run.sh
├── run_umaea_00.sh
├── run_umaea_012012.sh
├── run_umaea_012.sh
├── run_umaea_0.sh
├── model
│   ├── __init__.py
│   ├── layers.py
│   ├── Tool_model.py
│   ├── UMAEA_loss.py
│   ├── UMAEA.py
│   └── UMAEA_tools.py
├── src
│   ├── data.py
│   ├── __init__.py
│   └── utils.py
├── torchlight
│   ├── __init__.py
│   ├── logger.py
│   ├── metric.py
│   └── utils.py
└── tree.txt
</details>
Data Path
<details> <summary>Click</summary>

mmkg
├── dump
├── DBP15K
│   ├── fr_en
│   │   ├── ent_ids_1
│   │   ├── ent_ids_2
│   │   ├── ill_ent_ids
│   │   ├── training_attrs_1
│   │   ├── training_attrs_2
│   │   ├── triples_1
│   │   └── triples_2
│   ├── ja_en
│   │   ├── ent_ids_1
│   │   ├── ent_ids_2
│   │   ├── ill_ent_ids
│   │   ├── training_attrs_1
│   │   ├── training_attrs_2
│   │   ├── triples_1
│   │   └── triples_2
│   ├── translated_ent_name
│   │   ├── dbp_fr_en.json
│   │   ├── dbp_ja_en.json
│   │   ├── dbp_zh_en.json
│   │   ├── srprs_de_en.json
│   │   └── srprs_fr_en.json
│   └── zh_en
│       ├── ent_ids_1
│       ├── ent_ids_2
│       ├── ill_ent_ids
│       ├── training_attrs_1
│       ├── training_attrs_2
│       ├── triples_1
│       └── triples_2
├── OpenEA
│   ├── OEA_D_W_15K_V1
│   │   ├── ent_ids_1
│   │   ├── ent_ids_2
│   │   ├── ill_ent_ids
│   │   ├── rel_ids
│   │   ├── training_attrs_1
│   │   ├── training_attrs_2
│   │   ├── triples_1
│   │   └── triples_2
│   ├── OEA_D_W_15K_V2
│   │   ├── ent_ids_1
│   │   ├── ent_ids_2
│   │   ├── ill_ent_ids
│   │   ├── rel_ids
│   │   ├── training_attrs_1
│   │   ├── training_attrs_2
│   │   ├── triples_1
│   │   └── triples_2
│   ├── OEA_EN_DE_15K_V1
│   │   ├── ent_ids_1
│   │   ├── ent_ids_2
│   │   ├── ill_ent_ids
│   │   ├── rel_ids
│   │   ├── training_attrs_1
│   │   ├── training_attrs_2
│   │   ├── triples_1
│   │   └── triples_2
│   ├── OEA_EN_FR_15K_V1
│   │   ├── ent_ids_1
│   │   ├── ent_ids_2
│   │   ├── ill_ent_ids
│   │   ├── rel_ids
│   │   ├── training_attrs_1
│   │   ├── training_attrs_2
│   │   ├── triples_1
│   │   └── triples_2
│   └── pkl
│       ├── OEA_D_W_15K_V1_id_img_feature_dict_0.05.pkl
│       ├── OEA_D_W_15K_V1_id_img_feature_dict_0.15.pkl
│       ├── OEA_D_W_15K_V1_id_img_feature_dict_0.1.pkl
│       ├── OEA_D_W_15K_V1_id_img_feature_dict_0.2.pkl
│       ├── OEA_D_W_15K_V1_id_img_feature_dict_0.3.pkl
│       ├── OEA_D_W_15K_V1_id_img_feature_dict_0.45.pkl
│       ├── OEA_D_W_15K_V1_id_img_feature_dict_0.4.pkl
│       ├── OEA_D_W_15K_V1_id_img_feature_dict_0.55.pkl
│       ├── OEA_D_W_15K_V1_id_img_feature_dict_0.5.pkl
│       ├── OEA_D_W_15K_V1_id_img_feature_dict_0.6.pkl
│       ├── OEA_D_W_15K_V1_id_img_feature_dict_0.75.pkl
│       ├── OEA_D_W_15K_V1_id_img_feature_dict_0.7.pkl
│       ├── OEA_D_W_15K_V1_id_img_feature_dict_0.8.pkl
│       ├── OEA_D_W_15K_V1_id_img_feature_dict_0.95.pkl
│       ├── OEA_D_W_15K_V1_id_img_feature_dict_0.9.pkl
│       ├── OEA_D_W_15K_V1_id_img_feature_dict.pkl
│       ├── OEA_D_W_15K_V2_id_img_feature_dict_0.05.pkl
│       ├── OEA_D_W_15K_V2_id_img_feature_dict_0.15.pkl
│       ├── OEA_D_W_15K_V2_id_img_feature_dict_0.1.pkl
│       ├── OEA_D_W_15K_V2_id_img_feature_dict_0.2.pkl
│       ├── OEA_D_W_15K_V2_id_img_feature_dict_0.3.pkl
│       ├── OEA_D_W_15K_V2_id_img_feature_dict_0.45.pkl
│       ├── OEA_D_W_15K_V2_id_img_feature_dict_0.4.pkl
│       ├── OEA_D_W_15K_V2_id_img_feature_dict_0.55.pkl
│       ├── OEA_D_W_15K_V2_id_img_feature_dict_0.5.pkl
│       ├── OEA_D_W_15K_V2_id_img_feature_dict_0.6.pkl
│       ├── OEA_D_W_15K_V2_id_img_feature_dict_0.75.pkl
│       ├── OEA_D_W_15K_V2_id_img_feature_dict_0.7.pkl
│       ├── OEA_D_W_15K_V2_id_img_feature_dict_0.8.pkl
│       ├── OEA_D_W_15K_V2_id_img_feature_dict_0.95.pkl
│       ├── OEA_D_W_15K_V2_id_img_feature_dict_0.9.pkl
│       ├── OEA_D_W_15K_V2_id_img_feature_dict.pkl
│       ├── OEA_EN_DE_15K_V1_id_img_feature_dict_0.05.pkl
│       ├── OEA_EN_DE_15K_V1_id_img_feature_dict_0.15.pkl
│       ├── OEA_EN_DE_15K_V1_id_img_feature_dict_0.1.pkl
│       ├── OEA_EN_DE_15K_V1_id_img_feature_dict_0.2.pkl
│       ├── OEA_EN_DE_15K_V1_id_img_feature_dict_0.3.pkl
│       ├── OEA_EN_DE_15K_V1_id_img_feature_dict_0.45.pkl
│       ├── OEA_EN_DE_15K_V1_id_img_feature_dict_0.4.pkl
│       ├── OEA_EN_DE_15K_V1_id_img_feature_dict_0.55.pkl
│       ├── OEA_EN_DE_15K_V1_id_img_feature_dict_0.5.pkl
│       ├── OEA_EN_DE_15K_V1_id_img_feature_dict_0.6.pkl
│       ├── OEA_EN_DE_15K_V1_id_img_feature_dict_0.75.pkl
│       ├── OEA_EN_DE_15K_V1_id_img_feature_dict_0.7.pkl
│       ├── OEA_EN_DE_15K_V1_id_img_feature_dict_0.8.pkl
│       ├── OEA_EN_DE_15K_V1_id_img_feature_dict_0.95.pkl
│       ├── OEA_EN_DE_15K_V1_id_img_feature_dict_0.9.pkl
│       ├── OEA_EN_DE_15K_V1_id_img_feature_dict.pkl
│       ├── OEA_EN_FR_15K_V1_id_img_feature_dict_0.05.pkl
│       ├── OEA_EN_FR_15K_V1_id_img_feature_dict_0.15.pkl
│       ├── OEA_EN_FR_15K_V1_id_img_feature_dict_0.1.pkl
│       ├── OEA_EN_FR_15K_V1_id_img_feature_dict_0.2.pkl
│       ├── OEA_EN_FR_15K_V1_id_img_feature_dict_0.3.pkl
│       ├── OEA_EN_FR_15K_V1_id_img_feature_dict_0.45.pkl
│       ├── OEA_EN_FR_15K_V1_id_img_feature_dict_0.4.pkl
│       ├── OEA_EN_FR_15K_V1_id_img_feature_dict_0.55.pkl
│       ├── OEA_EN_FR_15K_V1_id_img_feature_dict_0.5.pkl
│       ├── OEA_EN_FR_15K_V1_id_img_feature_dict_0.6.pkl
│       ├── OEA_EN_FR_15K_V1_id_img_feature_dict_0.75.pkl
│       ├── OEA_EN_FR_15K_V1_id_img_feature_dict_0.7.pkl
│       ├── OEA_EN_FR_15K_V1_id_img_feature_dict_0.8.pkl
│       ├── OEA_EN_FR_15K_V1_id_img_feature_dict_0.95.pkl
│       ├── OEA_EN_FR_15K_V1_id_img_feature_dict_0.9.pkl
│       └── OEA_EN_FR_15K_V1_id_img_feature_dict.pkl
├── pkls
│   ├── fr_en_GA_id_img_feature_dict_0.05.pkl
│   ├── fr_en_GA_id_img_feature_dict_0.15.pkl
│   ├── fr_en_GA_id_img_feature_dict_0.1.pkl
│   ├── fr_en_GA_id_img_feature_dict_0.2.pkl
│   ├── fr_en_GA_id_img_feature_dict_0.3.pkl
│   ├── fr_en_GA_id_img_feature_dict_0.45.pkl
│   ├── fr_en_GA_id_img_feature_dict_0.4.pkl
│   ├── fr_en_GA_id_img_feature_dict_0.55.pkl
│   ├── fr_en_GA_id_img_feature_dict_0.5.pkl
│   ├── fr_en_GA_id_img_feature_dict_0.6.pkl
│   ├── fr_en_GA_id_img_feature_dict_0.7.pkl
│   ├── fr_en_GA_id_img_feature_dict.pkl
│   ├── ja_en_GA_id_img_feature_dict_0.05.pkl
│   ├── ja_en_GA_id_img_feature_dict_0.15.pkl
│   ├── ja_en_GA_id_img_feature_dict_0.1.pkl
│   ├── ja_en_GA_id_img_feature_dict_0.2.pkl
│   ├── ja_en_GA_id_img_feature_dict_0.3.pkl
│   ├── ja_en_GA_id_img_feature_dict_0.45.pkl
│   ├── ja_en_GA_id_img_feature_dict_0.4.pkl
│   ├── ja_en_GA_id_img_feature_dict_0.55.pkl
│   ├── ja_en_GA_id_img_feature_dict_0.5.pkl
│   ├── ja_en_GA_id_img_feature_dict_0.6.pkl
│   ├── ja_en_GA_id_img_feature_dict.pkl
│   ├── zh_en_GA_id_img_feature_dict_0.05.pkl
│   ├── zh_en_GA_id_img_feature_dict_0.15.pkl
│   ├── zh_en_GA_id_img_feature_dict_0.1.pkl
│   ├── zh_en_GA_id_img_feature_dict_0.2.pkl
│   ├── zh_en_GA_id_img_feature_dict_0.3.pkl
│   ├── zh_en_GA_id_img_feature_dict_0.45.pkl
│   ├── zh_en_GA_id_img_feature_dict_0.4.pkl
│   ├── zh_en_GA_id_img_feature_dict_0.55.pkl
│   ├── zh_en_GA_id_img_feature_dict_0.5.pkl
│   ├── zh_en_GA_id_img_feature_dict_0.6.pkl
│   ├── zh_en_GA_id_img_feature_dict_0.75.pkl
│   ├── zh_en_GA_id_img_feature_dict_0.7.pkl
│   └── zh_en_GA_id_img_feature_dict.pkl
├── UMAEA
└── save
</details>
Cite
Please consider citing this paper if you use the code or data from our work. Thanks a lot :)
@inproceedings{DBLP:conf/semweb/ChenGFZCPLCZ23,
author = {Zhuo Chen and
Lingbing Guo and
Yin Fang and
Yichi Zhang and
Jiaoyan Chen and
Jeff Z. Pan and
Yangning Li and
Huajun Chen and
Wen Zhang},
title = {Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal
Entity Alignment},
booktitle = {{ISWC}},
series = {Lecture Notes in Computer Science},
volume = {14265},
pages = {121--139},
publisher = {Springer},
year = {2023}
}