---
title: Lora Cerebras Gpt2.7b Alpaca Shortprompt
emoji: 🐨
colorFrom: yellow
colorTo: pink
sdk: gradio
sdk_version: 3.23.0
app_file: app.py
pinned: false
license: apache-2.0
---

# 🦙🐕🧠 Cerebras-GPT2.7B LoRA Alpaca ShortPrompt

Open In Colab Open In Spaces

Scripts to finetune Cerebras-GPT 2.7B on the Alpaca dataset, along with inference demos.

<img src="https://user-images.githubusercontent.com/1486609/229048081-57629025-cf4e-4771-9872-f10ee90751b1.gif" width="400" />

## 📈 Warnings

The model tends to be fairly coherent, but it also hallucinates a lot of factually incorrect responses. Avoid using it for anything that requires factual correctness.

## 📚 Instructions

1. Use a machine with an NVIDIA GPU with 12–24 GB of VRAM.

2. Set up the environment:

   ```bash
   conda create -n cerberas-lora python=3.10
   conda activate cerberas-lora
   conda install -y cuda -c nvidia/label/cuda-11.7.0
   conda install -y pytorch=1.13.1 pytorch-cuda=11.7 -c pytorch
   ```

3. Clone the repo and install the requirements:

   ```bash
   git clone https://github.com/lxe/cerebras-lora-alpaca.git && cd cerebras-lora-alpaca
   pip install -r requirements.txt
   ```

4. Run the inference demo:

   ```bash
   python app.py
   ```
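The demo feeds Alpaca-style instruction prompts to the model. The exact template lives in `app.py`; as a rough illustration (the function name and wording below are hypothetical, not taken from this repo), a "short prompt" Alpaca-style builder might look like:

```python
def build_prompt(instruction: str, inp: str = "") -> str:
    """Build an Alpaca-style prompt. A "short" variant drops the long
    preamble and keeps only the section headers the model was tuned on.
    Note: illustrative only -- see app.py for the template actually used."""
    if inp:
        return (
            "### Instruction:\n" + instruction + "\n\n"
            "### Input:\n" + inp + "\n\n"
            "### Response:\n"
        )
    return "### Instruction:\n" + instruction + "\n\n### Response:\n"

print(build_prompt("Name three primary colors."))
```

The model's completion is then whatever it generates after the `### Response:` header.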

To reproduce the finetuning results, do the following:

1. Install Jupyter and run it:

   ```bash
   pip install jupyter
   jupyter notebook
   ```

2. Navigate to the inference.ipynb notebook and test out the inference demo.

3. Navigate to the finetune.ipynb notebook and reproduce the finetuning results.
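For orientation before opening the notebook: LoRA finetuning wraps the base model with low-rank adapter weights via the Hugging Face `peft` library. The sketch below shows the general shape of such a setup; the hyperparameter values are common Alpaca-LoRA defaults and are assumptions, not necessarily what finetune.ipynb uses.

```python
# Illustrative LoRA configuration sketch -- consult finetune.ipynb for
# the actual hyperparameters used in this repo.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Cerebras-GPT checkpoints use the GPT-2 architecture, whose fused
# attention projection layer is named "c_attn".
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-2.7B")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank update matrices (assumed)
    lora_alpha=16,              # scaling factor (assumed)
    lora_dropout=0.05,          # dropout on the adapter path (assumed)
    target_modules=["c_attn"],  # GPT-2-style attention projection
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Because only the small adapter matrices receive gradients, the 2.7B-parameter base model can be finetuned within the 12–24 GB VRAM budget mentioned above.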

## 📝 License

Apache 2.0