DTGB: A Comprehensive Benchmark for Dynamic Text-Attributed Graphs (Accepted by NeurIPS 2024)

Dataset

All eight dynamic text-attributed graphs provided by DTGB can be downloaded from here.

<img width="1230" alt="image" src="https://github.com/zjs123/DTGB/assets/17922610/2f714dd7-7928-4eed-8e55-8e1fa947e463">

Data Format

Each graph is stored as three files.
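As a minimal illustration of working with a graph split across an edge list and accompanying text tables, here is a hedged sketch; the column names and file contents below are invented for the example and are not DTGB's actual schema.

```python
import csv
import io

# Hypothetical edge list: source node, destination node, relation, timestamp.
edge_list_csv = """src,dst,rel,time
0,1,0,100
1,2,1,105
"""
# Hypothetical node-text table mapping node ids to their text attributes.
node_text_csv = """node_id,text
0,user profile A
1,user profile B
2,user profile C
"""

# Parse the edge list into a list of dicts, one per temporal edge.
edges = list(csv.DictReader(io.StringIO(edge_list_csv)))
# Build a lookup from node id to its text attribute.
node_text = {row["node_id"]: row["text"]
             for row in csv.DictReader(io.StringIO(node_text_csv))}

print(len(edges), node_text["0"])
```

In practice the same pattern applies with the real files opened from disk instead of in-memory strings.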

Usage

Reproduce the Results

Future Link Prediction Task

python train_link_prediction.py --dataset_name GDELT --model_name DyGFormer --patch_size 2 --max_input_sequence_length 64 --num_runs 5 --gpu 0 --use_feature no
python train_link_prediction.py --dataset_name GDELT --model_name DyGFormer --patch_size 2 --max_input_sequence_length 64 --num_runs 5 --gpu 0 --use_feature Bert
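The two commands above differ only in the --use_feature flag (presumably toggling whether the Bert text embeddings are used); a small dry-run loop, shown with echo so nothing is actually executed, can sweep both settings:

```shell
# Dry-run sketch: print the link-prediction command for each feature setting.
for feat in no Bert; do
  cmd="python train_link_prediction.py --dataset_name GDELT --model_name DyGFormer --patch_size 2 --max_input_sequence_length 64 --num_runs 5 --gpu 0 --use_feature $feat"
  echo "$cmd"
done
```

Dropping the echo (running "$cmd" directly, or pasting the printed lines) runs the actual sweep.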

Destination Node Retrieval Task

After obtaining the best checkpoint on the Future Link Prediction task, the Hits@k metrics of the Destination Node Retrieval task can be reproduced by running:

python evaluate_node_retrieval.py --dataset_name GDELT --model_name DyGFormer --patch_size 2 --max_input_sequence_length 64 --negative_sample_strategy random --num_runs 5 --gpu 0  --use_feature no
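For reference, Hits@k simply measures the fraction of test queries whose ground-truth destination node is ranked within the top k candidates. A minimal sketch of the metric (the function name and input format here are ours, not the repository's):

```python
def hits_at_k(ranks, k):
    """Fraction of queries whose true destination ranks in the top k.

    `ranks` holds the 1-based rank of the ground-truth destination node
    among all candidates for each test edge (hypothetical input format).
    """
    return sum(1 for r in ranks if r <= k) / len(ranks)

# Example: true destinations ranked 1st, 3rd, 12th, and 2nd among candidates.
ranks = [1, 3, 12, 2]
print(hits_at_k(ranks, 1))   # 0.25
print(hits_at_k(ranks, 3))   # 0.75
print(hits_at_k(ranks, 10))  # 0.75
```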

Edge Classification Task

python train_edge_classification.py --dataset_name GDELT --model_name DyGFormer --patch_size 2 --max_input_sequence_length 64 --num_runs 5 --gpu 0 --use_feature no

Textual Relation Generation Task

After obtaining the LLM_train.pkl and LLM_test.pkl files, you can reproduce the performance of the original LLMs by running:

python LLM_eval.py -config_path=LLM_configs/vicuna_7b_qlora_uncensored.yaml -model=raw

Then, to get the Bert_score metrics, change the file path in LLM_metric.py and run:

python LLM_metric.py

To fine-tune the LLMs, run:

python LLM_train.py LLM_configs/vicuna_7b_qlora_uncensored.yaml

and then reproduce the performance of the fine-tuned LLMs by running:

python LLM_eval.py -config_path=LLM_configs/vicuna_7b_qlora_uncensored.yaml -model=lora

Contact

For any questions or suggestions, please open an issue or contact us at zjss12358@gmail.com.

Acknowledgments

The code and model implementations are adapted from the DyGLib project. Thanks for their great contributions!

Reference

@article{zhang2024dtgb,
  title={DTGB: A Comprehensive Benchmark for Dynamic Text-Attributed Graphs},
  author={Zhang, Jiasheng and Chen, Jialin and Yang, Menglin and Feng, Aosong and Liang, Shuang and Shao, Jie and Ying, Rex},
  journal={arXiv preprint arXiv:2406.12072},
  year={2024}
}