
GNN-based Fake News Detection


Installation | Datasets | Models | PyG Example | DGL Example | Benchmark | Intro Video | How to Contribute

This repo includes the PyTorch Geometric (PyG) implementations of a series of Graph Neural Network (GNN) based fake news detection models. All GNN models are implemented and evaluated under the User Preference-aware Fake News Detection (UPFD) framework, which instantiates the fake news detection problem as a graph classification task.

You can make a reproducible run on Code Ocean without manual configuration.

The UPFD dataset and its example usage are also available in the official PyTorch Geometric repo.

We welcome contributions of results for existing models and state-of-the-art (SOTA) results for new models based on our dataset. You can check the benchmark hosted by Papers With Code for SOTA models and their performance.

If you use the code in your project, please cite the following paper:

SIGIR'21 (PDF)

@inproceedings{dou2021user,
  title={User Preference-aware Fake News Detection},
  author={Dou, Yingtong and Shu, Kai and Xia, Congying and Yu, Philip S. and Sun, Lichao},
  booktitle={Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  year={2021}
}

Installation

Install via PyG

Our dataset has been integrated into the official PyTorch Geometric library. Please follow the installation instructions of PyTorch Geometric to install the latest version of PyG, and check the code example below for dataset usage.
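If PyG is installed, loading the dataset takes only a few lines. The sketch below assumes the `torch_geometric.datasets.UPFD` signature (`root`, `name`, `feature`, `split`) and the `torch_geometric.loader.DataLoader` import path of recent PyG versions; older versions expose `DataLoader` under `torch_geometric.data`. The option-checking helper `check_upfd_options` is a hypothetical convenience, not part of the library:

```python
def check_upfd_options(name, feature, split):
    # Hypothetical helper: the valid options mirror the dataset
    # description (two news sources, four feature types, three splits).
    if name not in {"politifact", "gossipcop"}:
        raise ValueError(f"unknown dataset name: {name!r}")
    if feature not in {"profile", "spacy", "bert", "content"}:
        raise ValueError(f"unknown feature type: {feature!r}")
    if split not in {"train", "val", "test"}:
        raise ValueError(f"unknown split: {split!r}")
    return name, feature, split


def load_upfd(root="data", name="politifact", feature="bert", split="train"):
    # Assumes PyG is installed; UPFD downloads the data on first use.
    from torch_geometric.datasets import UPFD
    from torch_geometric.loader import DataLoader

    check_upfd_options(name, feature, split)
    dataset = UPFD(root=root, name=name, feature=feature, split=split)
    return DataLoader(dataset, batch_size=128, shuffle=True)
```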

Install via DGL

Our dataset has been integrated into the official Deep Graph Library (DGL). Please follow the installation instructions of DGL to install the latest version of DGL, and check the docstring of the dataset class for dataset usage.
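A similar sketch for DGL, assuming the `dgl.data.FakeNewsDataset(name, feature_name)` constructor from the DGL dataset docs; `FEATURE_DIMS` and `load_fakenews` are illustrative helpers, not part of DGL:

```python
# Node-feature dimensionalities as described in the Datasets section.
FEATURE_DIMS = {"profile": 10, "spacy": 300, "bert": 768, "content": 310}


def load_fakenews(name="gossipcop", feature_name="profile"):
    # Assumes DGL is installed; the data is downloaded on first use.
    # See the FakeNewsDataset docstring for the attributes it exposes.
    import dgl.data

    if feature_name not in FEATURE_DIMS:
        raise ValueError(f"unknown feature type: {feature_name!r}")
    return dgl.data.FakeNewsDataset(name, feature_name)
```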

Manually Install

To run the code in this repo, you need to have Python>=3.6, PyTorch>=1.6, and PyTorch-Geometric>=1.6.1. Please follow the installation instructions of PyTorch-Geometric to install PyG.
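As a quick sanity check before running the code, you can compare installed versions against these minimums. `meets_minimum` is a hypothetical helper that does a rough numeric comparison, not a full PEP 440 parser:

```python
def meets_minimum(installed, required):
    # Compare dotted version strings numerically, e.g. '1.10.0' >= '1.6'.
    # Local suffixes such as '+cu101' are ignored.
    def parse(v):
        return tuple(int(p) for p in v.split("+")[0].split("."))
    return parse(installed) >= parse(required)


# Minimum versions stated above.
REQUIREMENTS = {"python": "3.6", "torch": "1.6", "torch_geometric": "1.6.1"}
```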

Other dependencies can be installed using the following commands:

git clone https://github.com/safe-graph/GNN-FakeNews.git
cd GNN-FakeNews
pip install -r requirements.txt

Datasets

If you have installed the latest version of PyG or DGL, you can use their built-in dataloaders to download and load the UPFD dataset.

If you install the project manually, you need to download the dataset (1.2GB) via the links below and unzip the corresponding data into the \data\{dataset_name}\raw\ directory, where dataset_name is politifact or gossipcop.

Google Drive: https://drive.google.com/drive/folders/1OslTX91kLEYIi2WBnwuFtXsVz5SS_XeR?usp=sharing

Baidu Disk: https://pan.baidu.com/s/1NFtuwzmpAezNcJzlSlduSw Password: fj43
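The expected layout after unzipping can be sketched as below; the README shows the path with backslashes, while this helper builds the platform-native form. `raw_data_dir` is an illustrative helper, not code from this repo:

```python
from pathlib import Path


def raw_data_dir(dataset_name, root="data"):
    # Expected extraction target for a manual install:
    # data/{dataset_name}/raw/ with dataset_name in {politifact, gossipcop}.
    if dataset_name not in {"politifact", "gossipcop"}:
        raise ValueError(f"unknown dataset: {dataset_name!r}")
    return Path(root) / dataset_name / "raw"
```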

The dataset includes fake and real news propagation networks on Twitter, built according to fact-check information from Politifact and Gossipcop. The news retweet graphs were originally extracted by FakeNewsNet. We crawled nearly 20 million historical tweets from users who participated in fake news propagation in FakeNewsNet to generate the node features in the dataset.

The statistics of the dataset are shown below:

| Data | #Graphs | #Fake News | #Total Nodes | #Total Edges | #Avg. Nodes per Graph |
|------------|-------|-------|---------|---------|-----|
| Politifact | 314   | 157   | 41,054  | 40,740  | 131 |
| Gossipcop  | 5,464 | 2,732 | 314,262 | 308,798 | 58  |

Due to the Twitter policy, we could not release the crawled user historical tweets publicly. To get the corresponding Twitter user information, you can refer to the news lists and the node_id-twitter_id mappings under \data. The two xxx_id_twitter_mapping.pkl files contain dictionaries whose keys are the node_ids in the datasets and whose values are the corresponding Twitter user_ids; for a news node, the value is its news id in the FakeNewsNet dataset. Similarly, the two xxx_id_time_mapping.pkl files map each node_id to its corresponding tweet timestamp, in UNIX timestamp format. Note that news nodes do not have timestamps, even in the original FakeNewsNet dataset; you can either retrieve one from Twitter or use the most recent retweet time as an approximation. In the UPFD project, we used Tweepy and the Twitter Developer API to get the user information; the crawler code can be found at \utils\twitter_crawler.py.
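The mapping files are ordinary pickled dictionaries, so loading one is straightforward. `load_id_mapping` and `to_datetime` are hypothetical helpers; the file-name patterns are the ones described above:

```python
import pickle
from datetime import datetime, timezone


def load_id_mapping(path):
    # The *_id_twitter_mapping.pkl and *_id_time_mapping.pkl files are
    # plain pickled dicts keyed by node_id.
    with open(path, "rb") as f:
        return pickle.load(f)


def to_datetime(unix_ts):
    # Convert a UNIX timestamp from an *_id_time_mapping.pkl file to UTC.
    return datetime.fromtimestamp(unix_ts, tz=timezone.utc)
```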

We incorporate four node feature types in the dataset. The 768-dimensional bert and 300-dimensional spacy features are encoded using pretrained BERT and the spaCy word2vec model, respectively. The 10-dimensional profile feature is obtained from a Twitter account's profile; you can refer to profile_feature.py for the profile feature extraction. The 310-dimensional content feature is the concatenation of a 300-dimensional user comment word2vec (spaCy) embedding and the 10-dimensional profile feature.
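The composition of the content feature can be sketched as plain vector concatenation; `build_content_feature` is an illustrative helper, not code from this repo:

```python
def build_content_feature(comment_word2vec, profile_feature):
    # Compose the 310-d content feature: a 300-d spaCy word2vec embedding
    # of the user's comments concatenated with the 10-d profile feature.
    if len(comment_word2vec) != 300:
        raise ValueError("expected a 300-dimensional word2vec embedding")
    if len(profile_feature) != 10:
        raise ValueError("expected a 10-dimensional profile feature")
    return list(comment_word2vec) + list(profile_feature)
```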

Each graph is a hierarchical, tree-structured graph where the root node represents the news and the leaf nodes are Twitter users who retweeted it. A user node has an edge to the news node if the user retweeted the news tweet, and two user nodes have an edge if one user retweeted the news tweet from the other. The following figure shows the UPFD framework, including the dataset construction details. You can refer to the paper for more details about the dataset.

<p align="center"> <br> <a href="https://github.com/safe-graph/GNN-FakeNews"> <img src="https://github.com/safe-graph/GNN-FakeNews/blob/main/overview.png" width="1000"/> </a> <br> </p>
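The tree structure can be illustrated with a small helper that turns "who retweeted from whom" records into an edge list. `build_propagation_edges` and its input format are hypothetical, for illustration only:

```python
def build_propagation_edges(retweet_sources):
    # Build the edge list of a news propagation tree. Node 0 is the news
    # root; retweet_sources[user] is the node that user retweeted from
    # (0 means the user retweeted the news tweet directly).
    edges = []
    for user, source in retweet_sources.items():
        edges.append((source, user))
    return edges
```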

Models

All GNN-based fake news detection models are under the \gnn_model directory. You can fine-tune each model according to the arguments specified in its argument parser. The implemented models are as follows:

Since the UPFD framework is built upon PyG, you can easily try other graph classification models, such as GIN and HGP-SL, on our dataset.

How to Contribute

You are welcome to submit your model code, hyperparameters, and results to this repo by creating a pull request. After verifying the results, we will add your model to the repo and update the benchmark with your results. Please email ytongdou@gmail.com for other inquiries.