LLaRA

| Model           | MovieLens ValidRatio | MovieLens HitRatio@1 | Steam ValidRatio | Steam HitRatio@1 | LastFM ValidRatio | LastFM HitRatio@1 |
| --------------- | -------------------- | -------------------- | ---------------- | ---------------- | ----------------- | ----------------- |
| LLaRA (GRU4Rec) | 0.9684               | 0.4000               | 0.9840           | 0.4916           | 0.9672            | 0.4918            |
| LLaRA (Caser)   | 0.9684               | 0.4211               | 0.9519           | 0.4621           | 0.9754            | 0.4836            |
| LLaRA (SASRec)  | 0.9789               | 0.4526               | 0.9958           | 0.5051           | 0.9754            | 0.5246            |
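In the table above, ValidRatio is the fraction of generations that name an item from the candidate set at all, and HitRatio@1 is the fraction whose top prediction matches the ground-truth item. A minimal sketch of how these two metrics can be computed, using hypothetical prediction data (the item names and candidate sets below are illustrative, not from the datasets):

```python
def valid_ratio(preds, candidate_sets):
    """Fraction of predictions that name an item from their candidate set."""
    valid = sum(p in cands for p, cands in zip(preds, candidate_sets))
    return valid / len(preds)

def hit_ratio_at_1(preds, targets):
    """Fraction of predictions that exactly match the ground-truth item."""
    hits = sum(p == t for p, t in zip(preds, targets))
    return hits / len(preds)

# Hypothetical model outputs for four test sessions:
preds      = ["Heat", "Alien", "Up", "Jaws"]
targets    = ["Heat", "Brazil", "Up", "Rocky"]
candidates = [{"Heat", "Alien"}, {"Alien", "Brazil"},
              {"Up", "Jaws"}, {"Heat", "Up"}]

print(valid_ratio(preds, candidates))   # 0.75 ("Jaws" is not in its candidate set)
print(hit_ratio_at_1(preds, targets))   # 0.5  (two exact matches out of four)
```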
Preparation
  1. Prepare the environment:

    git clone https://github.com/ljy0ustc/LLaRA.git
    cd LLaRA
    pip install -r requirements.txt
    
  2. Download the pre-trained Llama-2-7B model from Hugging Face (https://huggingface.co/meta-llama/Llama-2-7b-hf).

  3. Download the data and checkpoints.

  4. Place the data and checkpoints:

    Put the data under the data/ref/ directory and the checkpoints under the checkpoints/ directory.
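The model download in step 2 can be done with the Hugging Face CLI; a minimal sketch, assuming `huggingface_hub` is installed and your account has accepted the Llama-2 license (the `--local-dir` target here is illustrative):

```shell
# Log in with a Hugging Face access token that has been granted Llama-2 access.
huggingface-cli login
# Fetch the model weights into a local directory of your choice.
huggingface-cli download meta-llama/Llama-2-7b-hf --local-dir ./Llama-2-7b-hf
```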

Train LLaRA

Train LLaRA with a single A100 GPU on the MovieLens dataset:

sh train_movielens.sh

Train LLaRA with a single A100 GPU on the Steam dataset:

sh train_steam.sh

Train LLaRA with a single A100 GPU on the LastFM dataset:

sh train_lastfm.sh

Note: set the llm_path argument in each script to the directory path of your Llama-2 model.
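The llm_path argument is set inside the shell scripts themselves; a sketch of the kind of line to edit (the entry-point name and everything besides llm_path are hypothetical here; check the actual script contents):

```shell
# Hypothetical excerpt of train_movielens.sh: point llm_path at your local
# Llama-2-7b-hf directory; the script's other arguments are omitted.
python main.py --llm_path /path/to/Llama-2-7b-hf  # ...plus the script's other arguments
```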

Evaluate LLaRA

Test LLaRA with a single A100 GPU on the MovieLens dataset:

sh test_movielens.sh

Test LLaRA with a single A100 GPU on the Steam dataset:

sh test_steam.sh

Test LLaRA with a single A100 GPU on the LastFM dataset:

sh test_lastfm.sh