CuriousLLM: Elevating Multi-Document QA with Reasoning-Infused Knowledge Graph Prompting
In this work, we propose a novel reasoning-infused LLM traversal agent that generates question-based queries to guide the search through a knowledge graph. The overall logic is outlined below; for a detailed explanation, please refer to our paper.
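At a high level, the agent alternates between asking a follow-up question about what evidence is still missing and hopping to the neighboring passage that best matches that question. Below is a minimal, illustrative Python sketch of this loop; ask_followup (the curious LLM) and embed (the passage encoder) are hypothetical helpers, and the real logic lives in kgp_main.py and KGP/:

import numpy as np

def traverse(question, graph, seed_passages, ask_followup, embed, hops=2):
    """Collect evidence by walking the passage graph, guided by LLM follow-up questions."""
    evidence = list(seed_passages)
    frontier = list(seed_passages)
    for _ in range(hops):
        # The curious LLM reasons over the evidence so far and asks what is still missing.
        followup = ask_followup(question, evidence)
        query_vec = embed(followup)
        # Candidate next hops: unvisited neighbors of the current frontier.
        candidates = {n for node in frontier for n in graph.neighbors(node)} - set(evidence)
        if not candidates:
            break
        # Hop to the neighbor whose text best matches the follow-up question.
        best = max(candidates,
                   key=lambda n: float(np.dot(query_vec, embed(graph.nodes[n]["text"]))))
        evidence.append(best)
        frontier = [best]
    return evidence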
Environment Setup
1. Clone the project
git clone https://github.com/zukangy/KGP-CuriousLLM.git
cd KGP-CuriousLLM
2. Virtual Environment (pyenv on macOS)
We recommend installing pyenv; see the installation guide here.
- In a terminal
# Install Python 3.8 (only needs to be run once)
pyenv install 3.8.16
cd KGP-CuriousLLM/
pyenv local 3.8.16
python3.8 -m venv .env --copies
- Activate the environment
. .env/bin/activate
Or
source .env/bin/activate
Scripts Breakdown
- KGP-CuriousLLM/create_dirs.py: create the empty folders needed for file management.
cd KGP-CuriousLLM/
python create_dirs.py
- KGP-CuriousLLM/MDR_main.py: Train an MDR encoder for passage embedding; the embeddings will be used in the knowledge graph construction. Model configuration is in configs/MDR.yml. A sketch of the training objective follows the commands below.
cd KGP-CuriousLLM/
python MDR_main.py
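For intuition, MDR-style encoders are commonly trained with a contrastive objective in which each question's gold passage is the positive and the other passages in the batch act as negatives. A minimal sketch of one such step, assuming a roberta-base encoder with mean pooling (the actual objective, negatives, and multi-hop chaining are set in configs/MDR.yml):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # assumed base encoder
encoder = AutoModel.from_pretrained("roberta-base")

def encode(texts):
    """Mean-pooled sentence embeddings."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

def contrastive_step(questions, gold_passages, optimizer, temperature=0.05):
    """One training step: each question should score highest on its own passage."""
    q = F.normalize(encode(questions), dim=-1)
    p = F.normalize(encode(gold_passages), dim=-1)
    logits = q @ p.T / temperature  # off-diagonal entries act as in-batch negatives
    loss = F.cross_entropy(logits, torch.arange(len(questions)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()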
- KGP-CuriousLLM/MDR_embedding_main.py: Generate passage embeddings from test_docs.json with the trained MDR model (sketch after the commands below).
cd KGP-CuriousLLM/
python MDR_embedding_main.py
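A minimal sketch of the embedding pass, reusing encode() from the sketch above; the schema of test_docs.json and the output path are assumptions, so adjust them to the real files:

import json
import numpy as np
import torch

with open("test_docs.json") as f:
    passages = json.load(f)  # assumed: a flat list of passage strings

chunks = []
with torch.no_grad():
    for i in range(0, len(passages), 64):  # batch to bound memory
        chunks.append(encode(passages[i:i + 64]))
np.save("passage_embeddings.npy", torch.cat(chunks).numpy())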
- KGP-CuriousLLM/kg_construct_main.py: Construct a KG for either HotpotQA or 2WikiMQA (see the kNN sketch after the commands below).
cd KGP-CuriousLLM/
python kg_construct_main.py
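One common recipe for linking passages into a KG is k-nearest-neighbor edges over the passage embeddings; the sketch below illustrates that idea with networkx. The actual edge criterion used in kg_construct_main.py may differ (see its config):

import networkx as nx
import numpy as np

def build_knn_graph(passages, embeddings, k=5):
    """Link each passage to its k most similar passages by cosine similarity."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    g = nx.Graph()
    for i, text in enumerate(passages):
        g.add_node(i, text=text)
    for i in range(len(passages)):
        for j in np.argsort(-sims[i])[1:k + 1]:  # rank 0 is the passage itself
            g.add_edge(i, int(j), weight=float(sims[i, j]))
    return g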
- Fine-tune a curious Mistral-7B model in the MLX framework using QLoRA.
  - The scripts below only work on Apple Silicon Macs.
  - 4-bit quantization is recommended for Macs with limited RAM.
  - An equivalent Hugging Face implementation should be straightforward using the Trainer class; see the sketch after the commands below.
  - For reference, training took roughly 8 hours with the specs in the yaml file.
cd KGP-CuriousLLM/
# Optional: quantize the model to 8-bit.
python quantize_mistral_main.py
# Fine-tune Mistral; modify config.yml to resume training from a checkpoint.
python ft_mistral_main.py
# To perform grid search on the test set for evaluation
python grid_search_mistral_main.py
# To generate metrics
python Evaluations/eval_followup_llm.py
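For reference, a rough Hugging Face equivalent of the QLoRA fine-tune (requires a CUDA GPU for bitsandbytes 4-bit loading; the dataset file, hyperparameters, and target modules below are placeholders, not the values from our yaml):

import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

# Load the base model in 4-bit and attach LoRA adapters.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True,
                                           bnb_4bit_compute_dtype=torch.bfloat16),
)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM",
                                         target_modules=["q_proj", "v_proj"]))

dataset = load_dataset("json", data_files="followup_qa.jsonl")["train"]  # placeholder file
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                      remove_columns=dataset.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="mistral-curious-qlora",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1,
                           learning_rate=2e-4,
                           bf16=True,
                           logging_steps=50),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()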
- Finally, run the graph-traversal experiments to collect evidence.
  - ./configs/kgp/ has config files for all methods in the paper. Please adjust line 15 of kgp_main.py to run a different experiment.
  - This isn't a complete pipeline: the script only collects evidence, which keeps downstream analysis simple. You are welcome to complete the entire pipeline by piecing the scripts together.
  - Output will be saved to ./DATA/KG/evidence/[hotpot_evidence_100 or wiki_evidence_100].
cd KGP-CuriousLLM/
python kgp_main.py
- Train a T5 model for graph traversal (sketch below).
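As a rough illustration of how a seq2seq traversal agent is used at inference time, the sketch below prompts a T5 model for the next piece of evidence to look for; the prompt format and checkpoint are assumptions, not our trained model:

from transformers import T5ForConditionalGeneration, T5Tokenizer

t5_tokenizer = T5Tokenizer.from_pretrained("t5-base")  # assumed checkpoint
t5_model = T5ForConditionalGeneration.from_pretrained("t5-base")

def next_hop_query(question, visited_passages):
    """Ask the model what evidence to look for next, given the passages seen so far."""
    prompt = f"question: {question} context: {' '.join(visited_passages)}"
    input_ids = t5_tokenizer(prompt, return_tensors="pt", truncation=True).input_ids
    output = t5_model.generate(input_ids, max_new_tokens=64)
    return t5_tokenizer.decode(output[0], skip_special_tokens=True)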
- Generate answers based on the evidence collected from the experiments (sketch after the commands below).
# For GPT
# Please modify the data_path and save_path arguments in the script if needed.
python KGP/LLMs/GPT/generate_answer_gpt.py
# For Mistral-7B
python KGP/LLMs/Mistral/generate_answer.py
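Both scripts implement the same reader step: condition the LLM on the question plus the collected evidence and generate an answer. A minimal sketch with the OpenAI Python client (the model name and prompt are illustrative, not the exact ones used in the scripts):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer(question, evidence, model="gpt-3.5-turbo"):
    """Generate a final answer conditioned only on the collected evidence."""
    context = "\n".join(f"- {p}" for p in evidence)
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Answer the question using only the given evidence."},
            {"role": "user", "content": f"Evidence:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content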
Data
- Data, KGs, and passage embeddings are provided here.
- Additionally, please refer to KGP-T5 for the raw data, as well as other datasets we haven't tested in our experiments.
Citation
If you find our work on CuriousLLM or Follow-upQA useful, please cite us:
@misc{yang2024curiousllm,
  title={CuriousLLM: Elevating Multi-Document QA with Reasoning-Infused Knowledge Graph Prompting},
  author={Zukang Yang and Zixuan Zhu},
  year={2024},
  eprint={2404.09077},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}