<h2 align="center"> A Benchmark Dataset for Sentence-Level Factuality and Media Bias Prediction</h2>

<p align="justify">Automated fact-checking and news credibility verification at scale require accurate prediction of news factuality and media bias. We introduce FactNews, a large sentence-level dataset composed of 6,191 sentences expertly annotated according to the factuality and media bias definitions proposed by AllSides. We used FactNews to assess the overall reliability of news sources by formulating two text classification problems: predicting the sentence-level factuality of news reporting and the bias of media outlets. Our experiments show that biased sentences contain more words than factual sentences and exhibit a predominance of emotional language. Accordingly, this fine-grained analysis of the subjectivity and impartiality of news articles yields promising results for predicting the reliability of an entire media outlet. Finally, given the severity of fake news and political polarization in Brazil, and the lack of research for Portuguese, both the dataset and the baselines are provided for Brazilian Portuguese. The following tables detail the FactNews labels, documents, and stories:</p>

<div align="center">

Factual | Quotes | Biased | Total sentences | Total news stories | Total news documents |
---|---|---|---|---|---|
4,242 | 1,391 | 558 | 6,191 | 100 | 300 |
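The counts above imply a strongly imbalanced label distribution. As a quick sanity check, a minimal snippet (using only the counts from the table, no FactNews files) that derives the per-label shares:

```python
# Sentence counts per label, taken from the FactNews table above.
counts = {"factual": 4242, "quotes": 1391, "biased": 558}

total = sum(counts.values())  # should match the 6,191 total sentences
shares = {label: round(100 * n / total, 1) for label, n in counts.items()}

print(total)   # 6191
print(shares)  # roughly 68.5% factual, 22.5% quotes, 9.0% biased
```

This imbalance (biased sentences are under 10% of the data) is worth keeping in mind when interpreting the baseline F1 scores reported below.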
Media 1 | Media 2 | Media 3 |
---|---|---|
Folha de São Paulo | Estadão | O Globo |
Sentence-Level Media Bias Prediction | Sentence-Level Factuality Prediction |
---|---|
67% (F1-Score) by fine-tuned mBERT (cased) | 88% (F1-Score) by fine-tuned mBERT (cased) |
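The baselines above report F1 scores for the two sentence classification tasks. As an illustration of how such predictions can be scored, here is a minimal, dependency-free macro-F1 computation; the toy gold/predicted labels are hypothetical examples, not taken from FactNews:

```python
def f1_per_class(y_true, y_pred, label):
    # Precision, recall, and F1 for a single class label.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
    predicted = sum(1 for p in y_pred if p == label)
    gold = sum(1 for t in y_true if t == label)
    if tp == 0:
        return 0.0
    precision, recall = tp / predicted, tp / gold
    return 2 * precision * recall / (precision + recall)

def macro_f1(y_true, y_pred):
    # Unweighted mean of per-class F1, so minority classes count equally.
    labels = sorted(set(y_true))
    return sum(f1_per_class(y_true, y_pred, l) for l in labels) / len(labels)

# Hypothetical sentence-level labels for demonstration only.
gold = ["factual", "factual", "biased", "quote", "factual", "biased"]
pred = ["factual", "biased",  "biased", "quote", "factual", "factual"]
print(round(macro_f1(gold, pred), 3))  # 0.722
```

Macro-averaging is a reasonable choice here because the biased class is heavily outnumbered by factual sentences, and a micro average would be dominated by the majority class.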