<h2 align="center"> HateBR - Offensive Language and Hate Speech Dataset in Brazilian Portuguese </h2> </br> <p align="justify"> HateBR is the first large-scale expert annotated dataset of Brazilian Instagram comments for abusive language detection on the web and social media. The HateBR was collected from Brazilian Instagram comments of politicians and manually annotated by specialists. It is composed of 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive comments), offensiveness-level (highly, moderately, and slightly offensive messages), and 9 (nine) hate speech targets (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology for the dictatorship, antisemitism, and fatphobia). Each comment was annotated by three different annotators and achieved high inter-annotator agreement. Furthermore, baseline experiments were implemented reaching 85% of the F1-score outperforming the current literature dataset baselines for the Portuguese language. We hope that the proposed expert annotated dataset may foster research on hate speech detection in the Natural Language Processing area. </p><p align="justify"> This repository contains the corpus and the best models presented in the paper (see section "citing"). <b>HateBr.csv file</b> provides 4 (four) columns as described above: </p>
- 1st column: Instagram comments.
- 2nd column: Offensive language classification: offensive versus non-offensive comments.
- 3rd column: Offensiveness-level classification: highly, moderately, or slightly offensive comments.
- 4th column: Hate speech classification, divided into 9 (nine) hate speech targets: antisemitism, apology for the dictatorship, fatphobia, homophobia, partyism, racism, religious intolerance, sexism, and xenophobia. Comments that are offensive but contain no hate speech are also labeled.
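
A minimal loading sketch with pandas (the column names used below are illustrative assumptions; use the header actually present in `HateBr.csv`):

```python
import pandas as pd

# Load the corpus; the column names below are assumed for illustration only.
df = pd.read_csv("HateBr.csv")
df.columns = ["comment", "offensive", "offensiveness_level", "hate_speech"]

# Class balance of the binary layer (expected: 3,500 offensive / 3,500 non-offensive).
print(df["offensive"].value_counts())

# Distribution of the offensiveness-level layer (0 = non-offensive, 1-3 = slightly to highly offensive).
print(df["offensiveness_level"].value_counts())
```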
The following tables describe in detail the labels for each annotation layer (the same numeric codes are shown as Python mappings after the tables):
<div align="center"> <table> <tr><th>Offensive Language</th><th>Offensiveness Levels</th><th>Hate Speech</th></tr> <tr><td>class | label | total |
---|---|---|
offensive | 1 | 3,500 |
non-offensive | 0 | 3,500 |
Total | 7,000 |
class | label | total |
---|---|---|
highly | 3 | 778 |
moderately | 2 | 1,044 |
slightly | 1 | 1,678 |
non-offensive | 0 | 3,500 |
Total | 7,000 |
class | label | total |
---|---|---|
antisemitism | 1 | 2 |
apology for the dictatorship | 2 | 32 |
fatphobia | 3 | 27 |
homophobia | 4 | 17 |
partyism | 5 | 496 |
racism | 6 | 8 |
religious intolerance | 7 | 47 |
sexism | 8 | 97 |
xenophobia | 9 | 1 |
offensive & non-hate speech | -1 | 2,773 |
non-offensive | 0 | 3,500 |
Total | 7,000 |
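
For convenience, the label codes above can be written out as plain Python mappings (a sketch that simply mirrors the tables and can be used to decode the label columns):

```python
# Numeric label codes, mirroring the tables above.
OFFENSIVE = {0: "non-offensive", 1: "offensive"}

OFFENSIVENESS_LEVEL = {
    0: "non-offensive",
    1: "slightly offensive",
    2: "moderately offensive",
    3: "highly offensive",
}

HATE_SPEECH_TARGET = {
    0: "non-offensive",
    -1: "offensive & non-hate speech",
    1: "antisemitism",
    2: "apology for the dictatorship",
    3: "fatphobia",
    4: "homophobia",
    5: "partyism",
    6: "racism",
    7: "religious intolerance",
    8: "sexism",
    9: "xenophobia",
}
```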
In addition, we provide baseline machine learning results for both tasks: offensive language detection and hate speech detection. The best-performing models are available here as .pkl files. File names follow the pattern `[classification (offensive or hate)]_[representation (ngram or tfidf)]_[algorithm (nb, svm, mlp, or lr)].pkl`. For example, the file `offensive_tfidf_svm.pkl` contains the offensive language detection model trained on a TF-IDF representation with the support vector machine algorithm.
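
A minimal usage sketch, assuming each .pkl stores a fitted scikit-learn estimator (whether it is a full pipeline including the vectorizer, or only the classifier, is an assumption to verify against the released files):

```python
import pickle

# Load the best offensive language detection model (TF-IDF + SVM).
with open("offensive_tfidf_svm.pkl", "rb") as f:
    model = pickle.load(f)

# Assumption: the pickle holds a full scikit-learn pipeline (vectorizer + classifier),
# so raw comments can be passed directly. If only the classifier was pickled, apply
# the matching TF-IDF vectorizer to the text first.
comments = ["comentário de exemplo"]
print(model.predict(comments))  # 1 = offensive, 0 = non-offensive
```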
<h2 align="center"> Citing </h2>

If you use HateBR, please cite:

    @article{Vargas_Carvalho_Pardo_Benevenuto_2024,
      author  = {Vargas, Francielle and Carvalho, Isabelle and Pardo, Thiago A. S. and Benevenuto, Fabrício},
      title   = {Context-aware and expert data resources for Brazilian Portuguese hate speech detection},
      journal = {Natural Language Processing},
      year    = {2024},
      pages   = {1--22},
      doi     = {10.1017/nlp.2024.18},
      url     = {https://www.cambridge.org/core/journals/natural-language-processing/article/contextaware-and-expert-data-resources-for-brazilian-portuguese-hate-speech-detection/7D9019ED5471CD16E320EBED06A6E923#},
    }