# Convolutional Recurrent Neural Networks for Relation Extraction
A deep learning approach to the relation extraction challenge (SemEval-2010 Task #8: Multi-Way Classification of Semantic Relations Between Pairs of Nominals) using convolutional recurrent neural networks.
<p align="center"> <img width="700" height="400" src="https://user-images.githubusercontent.com/8953934/39967385-05995058-56f5-11e8-8080-73d8098cab6b.JPG"> </p>

## Experimental results
Model | Test Data Accuracy | F1 Score |
---|---|---|
CRNN-Max | 73% | 74.28 |
CRNN-Att | 65.95% | 70.14 |
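The two rows differ only in pooling: CRNN-Max collapses the per-timestep features with max-over-time, while CRNN-Att uses a learned attention weighting. Below is a minimal NumPy sketch of the two pooling choices, with illustrative shapes; it is not the repository's actual TensorFlow code.

```python
import numpy as np

def max_pooling(H):
    """Max-over-time pooling: H is (seq_len, dim); returns a (dim,) vector."""
    return H.max(axis=0)

def attention_pooling(H, w):
    """Attention pooling: score each time step with a learned vector w (dim,),
    softmax the scores over time, and return the weighted average of H."""
    scores = H @ w                        # (seq_len,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                  # softmax over time steps
    return alpha @ H                      # weighted sum -> (dim,)

rng = np.random.default_rng(0)
H = rng.normal(size=(70, 100))            # e.g. 70 time steps, 100-dim features
w = rng.normal(size=100)
print(max_pooling(H).shape, attention_pooling(H, w).shape)  # (100,) (100,)
```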
## Usage
### Train

- Train data is located in `SemEval2010_task8_all_data/SemEval2010_task8_training/TRAIN_FILE.TXT`.
- Display the help message:

```
$ python train.py --help

optional arguments:
  -h, --help            show this help message and exit
  --train_dir TRAIN_DIR
                        Path of train data
  --dev_sample_percentage DEV_SAMPLE_PERCENTAGE
                        Percentage of the training data to use for validation
  --max_sentence_length MAX_SENTENCE_LENGTH
                        Max sentence length in train(98)/test(70) data
                        (Default: 100)
  --word2vec WORD2VEC   Word2vec file with pre-trained embeddings
  --text_embedding_dim TEXT_EMBEDDING_DIM
                        Dimensionality of word embedding (Default: 300)
  --layers LAYERS       Size of RNN output, n_o (Default: 100)
  --dropout_keep_prob DROPOUT_KEEP_PROB
                        Dropout keep probability (Default: 0.5)
  --pooling_type POOLING_TYPE
                        Pooling method, max or att (Default: max)
  --l2_reg_lambda L2_REG_LAMBDA
                        L2 regularization lambda (Default: 3.0)
  --f1 F1               f1 filter size (Default: 2)
  --f2 F2               f2 filter size (Default: 5)
  --n_channels N_CHANNELS
                        Number of channels (output vector size), n_c
                        (Default: 100)
  --batch_size BATCH_SIZE
                        Batch Size (Default: 64)
  --num_epochs NUM_EPOCHS
                        Number of training epochs (Default: 100)
  --display_every DISPLAY_EVERY
                        Number of iterations to display training info
  --evaluate_every EVALUATE_EVERY
                        Evaluate model on dev set after this many steps
  --checkpoint_every CHECKPOINT_EVERY
                        Save model after this many steps
  --num_checkpoints NUM_CHECKPOINTS
                        Number of checkpoints to store
  --learning_rate LEARNING_RATE
                        Learning rate to start with (Default: 1e-3)
```
- Train example:

```bash
$ python train.py --train_dir "TRAIN_FILE.TXT"
```
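The flags above can be combined; for example, to train the attention-pooling variant with pre-trained word2vec embeddings (the embedding file name below is a placeholder for whatever binary you have locally):

```bash
$ python train.py --train_dir "TRAIN_FILE.TXT" \
                  --word2vec "GoogleNews-vectors-negative300.bin" \
                  --pooling_type att
```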
### Evaluation
- Test data is located in `SemEval2010_task8_all_data/SemEval2010_task8_testing_keys/TEST_FILE_FULL.TXT`.
- You must pass the `checkpoint_dir` argument, the path to the checkpoint (trained model) directory, as in the example below.
- Evaluation example:

```bash
$ python eval.py --checkpoint_dir "runs/1523902663/checkpoints"
```
### Official Evaluation of SemEval-2010 Task #8
- After running the evaluation as in the example above, you will get `prediction.txt` and `answer.txt` in the `result` directory.
- Install perl.
- Move to `SemEval2010_task8_all_data/SemEval2010_task8_scorer-v1.2`:

```bash
$ cd SemEval2010_task8_all_data/SemEval2010_task8_scorer-v1.2
```
- Check your prediction file format:

```bash
$ perl semeval2010_task8_format_checker.pl ../../result/prediction.txt
```
- Score your prediction:

```bash
$ perl semeval2010_task8_scorer-v1.2.pl ../../result/prediction.txt ../../result/answer.txt
```
- The scorer shows three evaluation results for the prediction. The official one, `(9+1)-WAY EVALUATION TAKING DIRECTIONALITY INTO ACCOUNT -- OFFICIAL`, is the last. See the scorer's README for more details.
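Both files follow the answer-key format the scorer expects: one line per test sentence, with the sentence id, a tab, and a relation label including directionality (or `Other`). Illustrative lines (ids and labels made up):

```
8001	Message-Topic(e1,e2)
8002	Product-Producer(e2,e1)
8003	Other
```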
## SemEval-2010 Task #8
- Given: a pair of nominals
- Goal: recognize the semantic relation between these nominals.
- Example:
- "There were apples, <U>pears</U> and oranges in the <U>bowl</U>." <br> → CONTENT-CONTAINER(pears, bowl)
- “The cup contained <U>tea</U> from dried <U>ginseng</U>.” <br> → ENTITY-ORIGIN(tea, ginseng)
## The Inventory of Semantic Relations
- Cause-Effect(CE): An event or object leads to an effect (those cancers were caused by radiation exposures)
- Instrument-Agency(IA): An agent uses an instrument (phone operator)
- Product-Producer(PP): A producer causes a product to exist (a factory manufactures suits)
- Content-Container(CC): An object is physically stored in a delineated area of space (a bottle full of honey was weighed)
- Entity-Origin(EO): An entity is coming or is derived from an origin, e.g., position or material (letters from foreign countries)
- Entity-Destination(ED): An entity is moving towards a destination (the boy went to bed)
- Component-Whole(CW): An object is a component of a larger whole (my apartment has a large kitchen)
- Member-Collection(MC): A member forms a nonfunctional part of a collection (there are many trees in the forest)
- Message-Topic(MT): An act of communication, written or spoken, is about a topic (the lecture was about semantics)
- OTHER: If none of the above nine relations appears to be suitable.
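Taking directionality into account, the nine relations double into 18 directed classes, and OTHER brings the label set to 19, which is what the (9+1)-way official evaluation above refers to. A quick sketch of that label set:

```python
# The nine directed relations listed above, plus OTHER.
RELATIONS = [
    "Cause-Effect", "Instrument-Agency", "Product-Producer",
    "Content-Container", "Entity-Origin", "Entity-Destination",
    "Component-Whole", "Member-Collection", "Message-Topic",
]

# Each relation is directed: (e1,e2) and (e2,e1) count as distinct classes.
LABELS = [f"{r}({d})" for r in RELATIONS for d in ("e1,e2", "e2,e1")]
LABELS.append("Other")
print(len(LABELS))  # 19
```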
## Dataset Distribution
- SemEval-2010 Task #8 Dataset [Download]
Relation | Train Data | Test Data | Total Data |
---|---|---|---|
Cause-Effect | 1,003 (12.54%) | 328 (12.07%) | 1,331 (12.42%) |
Instrument-Agency | 504 (6.30%) | 156 (5.74%) | 660 (6.16%) |
Product-Producer | 717 (8.96%) | 231 (8.50%) | 948 (8.85%) |
Content-Container | 540 (6.75%) | 192 (7.07%) | 732 (6.83%) |
Entity-Origin | 716 (8.95%) | 258 (9.50%) | 974 (9.09%) |
Entity-Destination | 845 (10.56%) | 292 (10.75%) | 1,137 (10.61%) |
Component-Whole | 941 (11.76%) | 312 (11.48%) | 1,253 (11.69%) |
Member-Collection | 690 (8.63%) | 233 (8.58%) | 923 (8.61%) |
Message-Topic | 634 (7.92%) | 261 (9.61%) | 895 (8.35%) |
Other | 1,410 (17.63%) | 454 (16.71%) | 1,864 (17.39%) |
Total | 8,000 (100.00%) | 2,717 (100.00%) | 10,717 (100.00%) |
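The counts above can be reproduced from the raw files. In the SemEval data each example spans a few lines: a numbered, tab-separated sentence line, then the relation label on the line directly below it. A minimal counting sketch under that assumption:

```python
from collections import Counter

def relation_counts(path):
    """Count relation labels in a SemEval-2010 Task 8 file, assuming the
    label line directly follows each numbered sentence line."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        lines = [line.strip() for line in f]
    for prev, line in zip(lines, lines[1:]):
        if prev and prev[0].isdigit() and "\t" in prev:  # sentence line: 1\t"..."
            counts[line.split("(")[0]] += 1              # strip directionality
    return counts

counts = relation_counts("SemEval2010_task8_all_data/"
                         "SemEval2010_task8_training/TRAIN_FILE.TXT")
total = sum(counts.values())
for rel, n in counts.most_common():
    print(f"{rel:20s} {n:5d} ({100 * n / total:.2f}%)")
```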