LeNER-Br: a Dataset for Named Entity Recognition in Brazilian Legal Text

This repository holds the dataset and source code described in the paper below, the result of a collaboration between two institutions of the University of Brasília: NEXT (Núcleo de P&D para Excelência e Transformação do Setor Público) and CiC (Departamento de Ciência da Computação).

@InProceedings{luz_etal_propor2018,
  author = {Pedro H. {Luz de Araujo} and Te\'{o}filo E. {de Campos} and
            Renato R. R. {de Oliveira} and Matheus Stauffer and
            Samuel Couto and Paulo Bermejo},
  title = {{LeNER-Br}: a Dataset for Named Entity Recognition in {Brazilian} Legal Text},
  booktitle = {International Conference on the Computational Processing of Portuguese ({PROPOR})},
  publisher = {Springer},
  series = {Lecture Notes in Computer Science ({LNCS})},
  pages = {313--323},
  year = {2018},
  month = {September 24-26},
  address = {Canela, RS, Brazil},
  doi = {10.1007/978-3-319-99722-3_32},
  url = {https://teodecampos.github.io/LeNER-Br/},
}

We also provide the LSTM-CRF model described in the paper, which achieved an average F1 score of 92.53% at the token level and 86.61% at the entity level on the test set.

The sections below describe the requirements, the dataset, and the model files.

We kindly request that users cite our paper in any publication that results from the use of our source code, dataset, or pre-trained models.

Note: although this GitHub repository was created in May 2020 to increase the visibility of this project, the dataset and source code have been available on the authors' website since September 2018.

Requirements

  1. Python 3.6
  2. pip

LeNER-Br Dataset

The dataset is provided in CoNLL format and is split into train, development, and test sets. The script textToConll.py converts a raw text file into the same format:

python textToConll.py path/to/txtfile
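
The exact column layout is defined by the dataset files themselves; as a rough sketch, assuming the usual CoNLL convention of one whitespace-separated token/tag pair per line with blank lines between sentences, such a file can be read with a few lines of Python (the path in the usage comment is only an example):

def read_conll(path):
    """Read a CoNLL-style file into a list of (tokens, tags) sentence pairs.

    Assumes one "token tag" pair per line and blank lines between sentences;
    adjust the parsing if the dataset files carry additional columns.
    """
    sentences, tokens, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:                      # blank line marks a sentence boundary
                if tokens:
                    sentences.append((tokens, tags))
                    tokens, tags = [], []
                continue
            parts = line.split()
            tokens.append(parts[0])           # first column: the token
            tags.append(parts[-1])            # last column: the NER tag
    if tokens:                                # flush a trailing sentence with no final blank line
        sentences.append((tokens, tags))
    return sentences

# Example call (hypothetical path):
# for tokens, tags in read_conll("leNER-Br/train/train.conll"):
#     print(list(zip(tokens, tags)))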

Model

The model code is adapted from an existing sequence-tagging implementation and implements an NER model in TensorFlow (LSTM + CRF + character embeddings). All modified code files are marked as such at the top. The section below summarizes how to use the model. For more in-depth explanations of how to use the model and change its configuration, refer to the README of the original implementation.
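
For orientation only, the core of that architecture (word embeddings fed to a bidirectional LSTM whose per-token scores are then combined by a CRF layer) can be sketched with the TensorFlow 1.x API as below. This is a simplified illustration rather than the repository's actual code: it omits the character-level embeddings described in the paper, and all sizes are placeholder values.

import tensorflow as tf  # TensorFlow 1.x API, as used by the original implementation

# Placeholder sizes for illustration only
n_words, dim_word, hidden_size, n_tags = 10000, 100, 100, 13

word_ids = tf.placeholder(tf.int32, [None, None], name="word_ids")    # (batch, max_len)
sequence_lengths = tf.placeholder(tf.int32, [None], name="seq_len")   # true sentence lengths
labels = tf.placeholder(tf.int32, [None, None], name="labels")        # gold tag ids

# Word embeddings (the full model also concatenates char-level LSTM features)
emb_matrix = tf.get_variable("word_embeddings", [n_words, dim_word], tf.float32)
word_emb = tf.nn.embedding_lookup(emb_matrix, word_ids)

# Bidirectional LSTM over each sentence
cell_fw = tf.nn.rnn_cell.LSTMCell(hidden_size)
cell_bw = tf.nn.rnn_cell.LSTMCell(hidden_size)
(out_fw, out_bw), _ = tf.nn.bidirectional_dynamic_rnn(
    cell_fw, cell_bw, word_emb, sequence_length=sequence_lengths, dtype=tf.float32)
context = tf.concat([out_fw, out_bw], axis=-1)

# Per-token tag scores, then a CRF layer scores whole tag sequences
logits = tf.layers.dense(context, n_tags)
log_likelihood, transition_params = tf.contrib.crf.crf_log_likelihood(
    logits, labels, sequence_lengths)
loss = tf.reduce_mean(-log_likelihood)
train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)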

Evaluation

First install the model dependencies and train the model:

pip install -r requirements.txt
python train.py

To evaluate the trained model on the train, development, or test split:

python evaluate.py train
python evaluate.py dev
python evaluate.py test

To compute per-class scores on each split:

python classScores.py train
python classScores.py dev
python classScores.py test

To run the model over a raw text file:

python evaluateText.py path/to/txtfile

or to tag individual sentences:

python evaluateSentence.py
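
A note on the two scores reported above: token-level F1 counts each non-O tag independently, whereas entity-level F1 only credits a prediction when the whole entity span (type and boundaries) matches the gold annotation. The sketch below is not part of the repository; it assumes BIO-tagged sequences and uses illustrative tag names:

def bio_spans(tags):
    """Extract (entity_type, start, end) spans from a BIO tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):     # sentinel "O" closes the last span
        if tag.startswith("B-") or tag == "O" or (tag.startswith("I-") and tag[2:] != etype):
            if etype is not None:
                spans.append((etype, start, i))
            start, etype = (i, tag[2:]) if tag.startswith("B-") else (None, None)
        # a well-formed I- tag simply extends the current span
    return spans

def f1(correct, predicted, gold):
    precision = correct / predicted if predicted else 0.0
    recall = correct / gold if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def token_and_entity_f1(gold_tags, pred_tags):
    # Token level: count matching non-O tags position by position
    tok_correct = sum(g == p != "O" for g, p in zip(gold_tags, pred_tags))
    tok_gold = sum(t != "O" for t in gold_tags)
    tok_pred = sum(t != "O" for t in pred_tags)
    # Entity level: a prediction counts only if type and boundaries both match
    gold_spans, pred_spans = set(bio_spans(gold_tags)), set(bio_spans(pred_tags))
    ent_correct = len(gold_spans & pred_spans)
    return f1(tok_correct, tok_pred, tok_gold), f1(ent_correct, len(pred_spans), len(gold_spans))

gold = ["B-PESSOA", "I-PESSOA", "O", "B-LEGISLACAO", "I-LEGISLACAO"]
pred = ["B-PESSOA", "I-PESSOA", "O", "B-LEGISLACAO", "O"]
print(token_and_entity_f1(gold, pred))   # token F1 exceeds entity F1: one span boundary is wrong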