LLaRA

  • 🔥 2024.3: Our paper is accepted by SIGIR'24! Thanks to all collaborators! 🎉🎉
  • 🔥 2024.3: Our datasets and checkpoints are released on Hugging Face.
Preparation
  1. Prepare the environment:

    git clone https://github.com/ljy0ustc/LLaRA.git
    cd LLaRA
    pip install -r requirements.txt
  2. Prepare the pre-trained LLaMA2-7B model from Hugging Face (https://huggingface.co/meta-llama/Llama-2-7b-hf).

  3. Download the data and checkpoints.

  4. Prepare the data and checkpoints:

    Put the data in the data/ref/ directory and the checkpoints in the checkpoints/ directory.
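The steps above can be sketched as follows (the download locations are hypothetical placeholders; substitute the paths where you saved the files):

```shell
# Create the directory layout the scripts expect
mkdir -p data/ref checkpoints

# Sketch: move the downloaded data and checkpoints into place
# (replace the source paths with your actual download locations)
# mv ~/Downloads/llara_data/* data/ref/
# mv ~/Downloads/llara_checkpoints/* checkpoints/
```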

Train LLaRA

Train LLaRA with a single A100 GPU on the MovieLens dataset:

sh train_movielens.sh

Train LLaRA with a single A100 GPU on the Steam dataset:

sh train_steam.sh

Note: set the llm_path argument to the directory path of your local LLaMA2 model.
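For example (the model path below is a hypothetical placeholder; edit the llm_path argument inside train_movielens.sh / train_steam.sh to match it):

```shell
# Hypothetical location of the LLaMA2-7B weights downloaded from Hugging Face;
# replace with your own path before running the training scripts.
LLM_PATH="${HOME}/models/Llama-2-7b-hf"
echo "llm_path set to: ${LLM_PATH}"
```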

Evaluate LLaRA

Test LLaRA with a single A100 GPU on the MovieLens dataset:

sh test_movielens.sh

Test LLaRA with a single A100 GPU on the Steam dataset:

sh test_steam.sh
