Fake News Corpus

This is an open source dataset composed of millions of news articles, mostly scraped from a curated list of 1001 domains from http://www.opensources.co/. Because that list contains few reliable websites, articles from the NYTimes and the WebHose English News Articles dataset have additionally been included to better balance the classes. The corpus is mainly intended for training deep learning algorithms for fake news recognition. The dataset is still a work in progress; for now, the public version includes only 9,408,908 articles (745 out of 1001 domains).

Downloading

https://github.com/several27/FakeNewsCorpus/releases/tag/v1.0

How was the corpus created?

The corpus was created by scraping (using scrapy) all the domains listed by http://www.opensources.co/. The raw HTML content was then processed with the newspaper library to extract the article text along with some additional fields (listed below). Each article was assigned the same label as its source domain. All the source code is available at FakeNewsRecognition and will be made more “usable” in the next few months.
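The actual pipeline used scrapy and the newspaper library; as a rough standard-library sketch of the extraction step only, the idea is to pull the title and paragraph text out of raw HTML:

```python
# Toy illustration of article extraction, NOT the corpus tooling:
# the real pipeline used the newspaper library. This minimal parser
# collects the <title> text and the text of every <p> element.
from html.parser import HTMLParser


class ArticleExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.title = ""
        self.paragraphs = []
        self._tag = None  # tag whose text we are currently collecting

    def handle_starttag(self, tag, attrs):
        if tag in ("title", "p"):
            self._tag = tag

    def handle_endtag(self, tag):
        if tag == self._tag:
            self._tag = None

    def handle_data(self, data):
        if self._tag == "title":
            self.title += data
        elif self._tag == "p":
            self.paragraphs.append(data.strip())


html = (
    "<html><head><title>Example story</title></head>"
    "<body><p>First paragraph.</p><p>Second paragraph.</p></body></html>"
)
extractor = ArticleExtractor()
extractor.feed(html)
content = " ".join(extractor.paragraphs)
```

A real extractor such as newspaper also handles boilerplate removal, authors, and publication dates, which is why it was used instead of a hand-rolled parser.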

Formatting

The corpus is formatted as a CSV in which each row is a single article together with its metadata, including a unique article id and the type label described below.

Available types

More information on http://www.opensources.co.

| Type | Tag | Count (so far) | Description |
|------|-----|----------------|-------------|
| Fake News | fake | 928,083 | Sources that entirely fabricate information, disseminate deceptive content, or grossly distort actual news reports. |
| Satire | satire | 146,080 | Sources that use humor, irony, exaggeration, ridicule, and false information to comment on current events. |
| Extreme Bias | bias | 1,300,444 | Sources that come from a particular point of view and may rely on propaganda, decontextualized information, and opinions distorted as facts. |
| Conspiracy Theory | conspiracy | 905,981 | Sources that are well-known promoters of kooky conspiracy theories. |
| State News | state | 0 | Sources in repressive states operating under government sanction. |
| Junk Science | junksci | 144,939 | Sources that promote pseudoscience, metaphysics, naturalistic fallacies, and other scientifically dubious claims. |
| Hate News | hate | 117,374 | Sources that actively promote racism, misogyny, homophobia, and other forms of discrimination. |
| Clickbait | clickbait | 292,201 | Sources that provide generally credible content, but use exaggerated, misleading, or questionable headlines, social media descriptions, and/or images. |
| Proceed With Caution | unreliable | 319,830 | Sources that may be reliable but whose contents require further verification. |
| Political | political | 2,435,471 | Sources that provide generally verifiable information in support of certain points of view or political orientations. |
| Credible | reliable | 1,920,139 | Sources that circulate news and information in a manner consistent with traditional and ethical practices in journalism. (Remember: even credible sources sometimes rely on clickbait-style headlines or occasionally make mistakes. No news organization is perfect, which is why a healthy news diet consists of multiple sources of information.) |
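The per-type counts above can be reproduced by streaming the CSV and tallying the label column. A minimal sketch, shown on a tiny inline sample (the column names here are assumptions, not a documented schema):

```python
# Stream a FakeNewsCorpus-style CSV row by row and count the labels.
# Reading row by row matters because the real file holds millions of
# articles and will not fit comfortably in memory.
import csv
import io
from collections import Counter

# Tiny in-memory stand-in for the real corpus file.
sample = io.StringIO(
    "id,type,content\n"
    "1,fake,Entirely fabricated story\n"
    "2,reliable,A normal news report\n"
    "3,fake,Another fabricated story\n"
)

counts = Counter(row["type"] for row in csv.DictReader(sample))
```

On the real file you would replace the `StringIO` object with `open(path, newline="")` and iterate the same way.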

List of domains

You can find the full list of domains in websites.csv.

Limitations

The dataset was not manually filtered, so some labels might be incorrect and some URLs might point to other pages on a website rather than to actual articles. However, because the corpus is intended for training machine learning algorithms, these problems should not pose a practical issue.
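If the non-article URLs do matter for a particular use case, a crude heuristic filter can weed out many of them. This is purely an illustration, not part of the corpus tooling: it treats URLs with an empty or very shallow path (likely homepages or category pages) as non-articles.

```python
# Hypothetical heuristic: article URLs usually carry a slug or a dated,
# multi-segment path, while homepages and section pages usually do not.
# This will misclassify some URLs; it is a sketch, not a validator.
from urllib.parse import urlparse


def looks_like_article(url: str) -> bool:
    path = urlparse(url).path.strip("/")
    return len(path.split("/")) >= 2 or "-" in path


looks_like_article("http://example.com/")                    # → False (homepage)
looks_like_article("http://example.com/politics")            # → False (section page)
looks_like_article("http://example.com/2017/05/some-story")  # → True
```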

Additionally, once the dataset is finalised (for now, only about 80% has been cleaned and published), I do not intend to update it, so it may quickly become outdated for purposes other than content-based algorithms. However, any contributions are welcome!

Contributing

Because I am currently the only person working on this corpus, I would really appreciate any contributions. If you find wrong labels, weirdly formatted content, or URLs that do not point to articles, feel free to open an issue with the problem and the exact article id, and I will do my best to respond promptly. Because of its size, the corpus could not be hosted on GitHub, so unfortunately, for now, pull requests cannot be used to work collaboratively on the data. However, I am open to any ideas 🙂

Acknowledgments