
The weights of our instruction-tuned model are uploaded here.

TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation is available at https://arxiv.org/abs/2305.00447.

A line in evaluate.py was accidentally deleted; it has now been restored.

We introduce a novel framework (TALLRec) that enables the efficient and effective adaptation of LLMs to recommendation tasks.

Main results

| Model          | movie (16-shot) | movie (64-shot) | movie (256-shot) | book (16-shot) | book (64-shot) | book (256-shot) |
|----------------|-----------------|-----------------|------------------|----------------|----------------|-----------------|
| GRU            | 49.07           | 49.87           | 52.89            | 48.95          | 49.64          | 49.86           |
| Caser          | 49.68           | 51.06           | 54.20            | 49.84          | 49.72          | 49.57           |
| SASRec         | 50.43           | 50.48           | 52.25            | 49.48          | 50.06          | 50.20           |
| DROS           | 50.76           | 51.54           | 54.07            | 49.28          | 49.13          | 49.13           |
| GRU-BERT       | 50.85           | 51.65           | 53.44            | 50.07          | 49.64          | 49.79           |
| DROS-BERT      | 50.21           | 51.71           | 53.94            | 50.07          | 48.98          | 50.20           |
| TALLRec (ours) | 67.24           | 67.48           | 71.98            | 56.36          | 60.39          | 64.38           |

Table 1. AUC results of the baseline models and our framework on the movie and book scenarios under few-shot sample sizes of 16, 64, and 256.

Train TALLRec based on LLaMA-7B:

bash ./shell/instruct_7B.sh  gpu_id random_seed

If you want to run it in your own environment, you will need to adjust the shell script accordingly.

After training, evaluate the best checkpoint (selected on the validation set) on the test set:

bash ./shell/evaluate.sh  gpu_id  output_dir

If you want to run it in your own environment, you will need to adjust the shell script accordingly.

Note that we automatically detect the result files for all seeds and sample sizes in the output_dir directory, and then aggregate these results into a single output_dir.json file.
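A minimal sketch of what this aggregation step could look like. The file layout, key names, and the `aggregate_results` helper are assumptions for illustration only, not the repo's actual code:

```python
import glob
import json
import os


def aggregate_results(output_dir: str) -> dict:
    """Collect every per-seed result JSON in output_dir and merge them
    into a single output_dir.json file next to the directory.

    Assumption: each run writes one <name>.json file into output_dir;
    the real evaluate.sh may use a different layout.
    """
    merged = {}
    for path in sorted(glob.glob(os.path.join(output_dir, "*.json"))):
        # Use the file name (e.g. a seed/sample identifier) as the key.
        name = os.path.splitext(os.path.basename(path))[0]
        with open(path) as f:
            merged[name] = json.load(f)
    # Write the combined results to "<output_dir>.json".
    with open(output_dir.rstrip("/") + ".json", "w") as f:
        json.dump(merged, f, indent=2)
    return merged
```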

Our project is developed based on the Alpaca_lora repo; thanks to its authors for their contributions.

For "Environment setting sharing for CUDA 12.0", please see here.