MNRE

Resources and code for the ICME 2021 paper "MNRE: A Challenge Multimodal Dataset for Neural Relation Extraction with Visual Evidence in Social Media Posts"

Overview

This project presents a new task, multimodal neural relation extraction, together with a dataset (MNRE) for model evaluation. The MNRE task requires an understanding of both vision and language. We envision that well-designed methods and resources for this challenge will push multimodal alignment toward a higher semantic level.

<img src="pic/Intro.png" width="400">

This is an example of multimodal relation extraction on Twitter. The sentence contains three entities: "JFK", "Obama", and "Harvard". The task of relation extraction is to identify the relation of each entity pair. Lacking context, previous text-only methods misclassify the relation between "JFK" and "Harvard" as "Residence" and the relation between "JFK" and "Obama" as "Spouse". With the visual concepts "bachelor cap" and "gown", however, the relation between "JFK" and "Harvard" can be correctly identified as "Graduated at". Likewise, the relation between "JFK" and "Obama" can be identified as "Alumni" with the guidance of the visual objects depicting a campus.

Data Statistics

Data Statistics Compared to Previous NRE Dataset

| Dataset | # Image | # Word | # Sentence | # Entity | # Relation | # Instance |
|---|---|---|---|---|---|---|
| SemEval-2010 Task 8 | - | 205k | 10,717 | 21,434 | 9 | 8,853 |
| ACE 2003-2004 | - | 297k | 12,783 | 46,108 | 24 | 16,771 |
| TACRED | - | 1,823k | 53,791 | 152,527 | 41 | 21,773 |
| FewRel | - | 1,397k | 56,109 | 72,124 | 100 | 70,000 |
| MNRE | 9,201 | 258k | 9,201 | 30,970 | 23 | 15,485 |

This is version 2 of the MNRE dataset. We refined and merged some relation categories for clearer semantics. The dataset contains 15,485 samples and 9,201 images covering 23 relation categories. We split the data into training, development, and test sets with 12,247, 1,624, and 1,614 samples, respectively.

Category Distribution

<img src="pic/statistic.png" width="600">

We tag relation types according to the types of the entities involved. For example, relations between two persons can be classified as "alumni", "couple", "relative", and so on.
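
As an illustration of this entity-type-conditioned scheme, the sketch below maps entity-type pairs to candidate relation labels and uses the mapping to restrict the label space for a given pair. The label strings and the helper `candidate_relations` are hypothetical, not the dataset's exact label set; consult the released files for the actual 23 relation names.

```python
# Hypothetical sketch: restrict candidate relations by entity-type pair.
# The label strings below are illustrative only.
CANDIDATES = {
    ("person", "person"): ["alumni", "couple", "relative", "peer"],
    ("person", "organization"): ["member_of"],
    ("person", "location"): ["place_of_residence", "place_of_birth"],
}

def candidate_relations(head_type, tail_type):
    """Return the relation labels valid for this entity-type pair."""
    return CANDIDATES.get((head_type, tail_type), ["none"])

print(candidate_relations("person", "person"))
# ['alumni', 'couple', 'relative', 'peer']
```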

Data Collection

We build the original corpus from three sources: two existing multimodal named entity recognition datasets, Twitter15 and Twitter17, and data crawled from Twitter.

We use a pretrained ELMo-based NER tagger to extract entities and their corresponding types.
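
The repo does not pin down the tagger's exact interface, so here is a minimal, hedged sketch of the post-processing step only: converting a tagger's BIO output into (entity, type, span) triples. The tags are assumed to follow the standard BIO scheme, and `bio_to_entities` is an illustrative helper, not code from this repo.

```python
def bio_to_entities(tokens, tags):
    """Convert BIO tags (e.g. from an ELMo-based NER tagger) into
    (entity_text, entity_type, (start, end)) triples, end exclusive."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" flushes the last span
        if tag.startswith("B-") or tag == "O" or (
            tag.startswith("I-") and tag[2:] != etype
        ):
            if start is not None:
                entities.append((" ".join(tokens[start:i]), etype, (start, i)))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return entities

tokens = ["JFK", "and", "Obama", "at", "Harvard"]
tags = ["B-PER", "O", "B-PER", "O", "B-ORG"]
print(bio_to_entities(tokens, tags))
# [('JFK', 'PER', (0, 1)), ('Obama', 'PER', (2, 3)), ('Harvard', 'ORG', (4, 5))]
```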

Data Usage

Our processed textual relations are in ./mnre_txt/; the image data can be downloaded here.

Each sentence is split into several instances (one per relation). Each line contains:

- 'token': the text, preprocessed by a tokenizer
- 'h': the head entity and its position in the sentence
- 't': the tail entity and its position in the sentence
- 'image_id': the ID of the corresponding image (the images can be downloaded via the link above)
- 'relation': the relation and the entity categories
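
For a concrete sense of the format, here is a minimal loading sketch. It assumes each line of a file in ./mnre_txt/ is a JSON object with the five keys above, and that 'h' and 't' store an entity name plus token positions; the file name `mnre_train.txt` and the exact inner layout of 'h'/'t' are assumptions, so inspect the released files for the precise schema.

```python
import json

def load_mnre(path):
    """Read one JSON instance per line from a MNRE text file."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Hedged usage example: file name and the 'name' field inside 'h'/'t'
# are assumptions, not confirmed by this README.
for ex in load_mnre("./mnre_txt/mnre_train.txt")[:3]:
    head, tail = ex["h"], ex["t"]
    print(ex["image_id"], head["name"], "->", tail["name"], ":", ex["relation"])
```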

Case Study

<img src="pic/case1.png" width="900">

Four examples illustrating the effectiveness of visual information for relation extraction. The first row shows that visual objects and their attributes can help identify relations. Furthermore, person-to-person and person-to-object interactions can also provide clues for classifying relations.

Citation

If you find this repo helpful, please cite the following:

@inproceedings{zheng2021mnre,
  title={MNRE: A Challenge Multimodal Dataset for Neural Relation Extraction with Visual Evidence in Social Media Posts},
  author={Zheng, Changmeng and Wu, Zhiwei and Feng, Junhao and Fu, Ze and Cai, Yi},
  booktitle={2021 IEEE International Conference on Multimedia and Expo (ICME)},
  pages={1--6},
  year={2021},
  organization={IEEE}
}