# Instructive Decoding: Instruction-Tuned Large Language Models are Self-Refiner from Noisy Instructions
<a href="https://arxiv.org/abs/2311.00233"><img src="https://img.shields.io/badge/Paper-arXiv:2311.00233-Green"></a> <a href="https://colab.research.google.com/drive/1bHczXzppIF-AouiPyE8H89CQ9gL_0Xa2?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
This is the official repository containing the implementation of the research paper "Instructive Decoding: Instruction-Tuned Large Language Models are Self-Refiner from Noisy Instructions". We provide a tutorial in our Colab Notebook.<br/>
Accepted to ICLR 2024 (Spotlight) [Link] <br/> Accepted to the Instruction Workshop @ NeurIPS 2023 [Link]
Taehyeon Kim*, Joonkee Kim*, Gihun Lee*, Se-Young Yun <br/> *: Equal Contribution
<!-- [Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions](https://arxiv.org/abs/2311.00233) -->

## Introduction
<p align="center"> <img src="./figures/overview.png" width="1394"/> </p>

**TL;DR:** The paper presents Instructive Decoding (ID), a method that enhances instruction following in language models by using "noisy instructions" to refine the model's understanding of and adherence to the task. Tested across multiple models and tasks, ID consistently improves performance, especially in generalizing to new tasks, without needing extra training or parameter updates.
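The core computation contrasts two forward passes: one conditioned on the original instruction and one on a noisy variant. The sketch below is our own minimal illustration of that idea, assuming the combined score takes the form `logits_original + eps * logits_noisy` (consistent with the negative `eps` recommended below); the model name and prompts are illustrative, and the actual implementation lives in `src/*_generator.py`.

```python
# Minimal sketch of the Instructive Decoding logit combination (illustrative only).
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-small"  # any instruction-tuned seq2seq model, used here only for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).eval()

instruction = "Classify the sentiment of the sentence as positive or negative."
noisy_instruction = "sentence the of sentiment the Classify"  # e.g. a shuffled ("noisy") variant
x = "The movie was a delightful surprise."
eps = -0.3  # negative values contrast against the noisy run

enc = tokenizer(f"{instruction}\n{x}", return_tensors="pt")
enc_noisy = tokenizer(f"{noisy_instruction}\n{x}", return_tensors="pt")

decoder_ids = torch.tensor([[model.config.decoder_start_token_id]])
with torch.no_grad():
    for _ in range(16):  # greedy decoding with the combined logits
        logits = model(**enc, decoder_input_ids=decoder_ids).logits[:, -1, :]
        logits_noisy = model(**enc_noisy, decoder_input_ids=decoder_ids).logits[:, -1, :]
        next_id = (logits + eps * logits_noisy).argmax(dim=-1, keepdim=True)
        decoder_ids = torch.cat([decoder_ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break

print(tokenizer.decode(decoder_ids[0], skip_special_tokens=True))
```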
## Getting Started
### Environment Setup
1. **Create a Conda Environment:**

   Use Conda to create a new environment specifically for this project. This keeps dependencies organized and avoids conflicts with other projects. Run the following commands in your terminal:

   ```bash
   conda create -n id python=3.9
   conda activate id
   ```
2. **Install Required Packages:**

   Next, install all the necessary packages. We've listed the required dependencies in `requirements.txt`. To install them, simply execute:

   ```bash
   pip install -r requirements.txt
   ```
### Data Preparation
1. **Create the Directories:**

   Set up the directory structure for downloading and storing the datasets. Run these commands in your terminal:

   ```bash
   mkdir -p data/downloads
   mkdir -p data
   ```
2. **Super-NaturalInstructions Dataset:**

   Clone the Super-NaturalInstructions dataset (Link) and organize it into the correct directory:

   ```bash
   git clone https://github.com/allenai/natural-instructions.git data/downloads/natural-instructions
   mkdir -p data/supni
   mv data/downloads/natural-instructions/tasks data/downloads/natural-instructions/splits data/supni/
   rm -rf data/downloads/natural-instructions
   ```
3. **MMLU Dataset:**

   Download and extract the MMLU dataset:

   ```bash
   wget -O data/downloads/mmlu_data.tar https://people.eecs.berkeley.edu/~hendrycks/data.tar
   mkdir -p data/mmlu
   tar -xvf data/downloads/mmlu_data.tar -C data/mmlu
   rm -rf data/downloads/mmlu_data.tar
   ```
Then, you will have a directory structure as follows:

```
Instructive-Decoding
├── data
│   ├── supni
│   │   ├── splits
│   │   └── tasks
│   └── mmlu
│       ├── test
│       └── ...
├── scripts
│   ├── run_sni.sh
│   ├── run_mmlu.sh
│   └── ...
├── src
│   ├── run_eval.py
│   ├── base_generator.py
│   └── ...
├── requirements.txt
└── ...
```
## How to Use
### Prepare the Pretrained Weights
We utilized various models in our paper. You can load these models directly from the Huggingface Hub or use specific weights as required. Here are the relevant links and information:
- **Tk-Instruct Models:**
- **Additional Models:**
- **Custom Tk-Large Model:**
  - For the Tk-Large model, we trained our own version using the Tk-Instruct repository.
- **Open-Instruct (OpenSNI-7B):**
  - For specific weights related to open-instruct (OpenSNI-7B), refer to open-instruct on GitHub.
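As an example, an instruction-tuned checkpoint can be pulled straight from the Huggingface Hub with `transformers`; the Hub id below is our assumption, so substitute the variant you actually need:

```python
# Illustrative only: load an instruction-tuned encoder-decoder checkpoint from the Huggingface Hub.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "allenai/tk-instruct-3b-def"  # assumed Hub id; swap in the Tk-Instruct variant you need
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
```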
### Run Experiments
To customize and experiment with your own noisy instructions, modify the instructions in the `inst_aware_batchify` function within `xxx_generator.py`.
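As a rough illustration of what such a customization might look like (the function names and perturbations below are our own, not the repository's), a noisy variant can be as simple as shuffling or truncating the original instruction before it is packed into the batch:

```python
# Illustrative noisy-instruction perturbations; adapt them inside inst_aware_batchify in the generator you use.
import random

def shuffled_instruction(instruction: str, seed: int = 0) -> str:
    """Randomly permute the words of the original instruction."""
    words = instruction.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

def truncated_instruction(instruction: str, keep_ratio: float = 0.5) -> str:
    """Keep only the first part of the instruction."""
    words = instruction.split()
    return " ".join(words[: max(1, int(len(words) * keep_ratio))])

print(shuffled_instruction("Classify the sentiment of the sentence as positive or negative."))
print(truncated_instruction("Classify the sentiment of the sentence as positive or negative."))
```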
To reproduce our results, execute the following scripts in your terminal:

```bash
bash scripts/run_sni.sh
bash scripts/run_mmlu.sh
```
### Key Arguments Explained
- `noisy`: Determines the decoding method to be used.
  - If this is set, the script employs Instructive Decoding, which uses both the original and the noisy instruction.
  - If this is not set, the script performs standard decoding, using only the original instruction without any noisy variants.
- `neg_type`: Specifies the type of noisy instruction to be used.
  - It allows you to choose from a range of predefined noisy instruction variants, each designed to test different aspects of the model's instruction-following capabilities.
- `eps`: A crucial hyperparameter for Instructive Decoding. We recommend using `-0.3`.
  - It represents the balance factor between predictions guided by the original instruction and those influenced by the noisy instruction.
  - A higher value of `eps` gives more weight to the influence of the noisy instruction, while a lower value leans more towards the original instruction (see the short numeric illustration after this list).
- `is_decoder`: Defines the architecture of the model in use.
  - If this is set, the model is a decoder-only transformer.
  - If this is not set, the model uses an encoder-decoder architecture.
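To make the role of `eps` concrete, here is a tiny made-up numeric example, assuming the combined score takes the form `logits_original + eps * logits_noisy` (consistent with the recommended negative value):

```python
# Made-up logits illustrating how eps trades off the original and noisy predictions.
import torch

logits_original = torch.tensor([2.0, 1.8, 0.5])  # scores under the original instruction
logits_noisy = torch.tensor([2.5, 0.2, 0.1])     # scores under the noisy instruction

for eps in (0.0, -0.3, -1.0):
    combined = logits_original + eps * logits_noisy
    # With eps = 0.0 the first token wins; with negative eps, the token that is
    # favored mainly because of the noisy instruction is penalized and the winner flips.
    print(f"eps={eps}: argmax={combined.argmax().item()}, combined={combined.tolist()}")
```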
## Misc.
Feel free to cite us:

```bibtex
@article{instructivedecoding,
  title={Instructive Decoding: Instruction-Tuned Large Language Models are Self-Refiner from Noisy Instructions},
  author={Kim, Taehyeon and Kim, Joonkee and Lee, Gihun and Yun, Se-Young},
  journal={arXiv preprint arXiv:2311.00233},
  year={2023}
}
```