Search Augmented Instruction Learning (SAIL)

<div align="center"> <img src="images/sail-7b-logo.png" width="280px">

Towards Robust Grounded Language Modeling [DEMO] | [WEB]

Hongyin Luo, Yung-Sung Chuang, Yuan Gong, Tianhua Zhang,

Yoon Kim, Xixin Wu, Danny Fox, Helen Meng, James Glass

</div>

Contents

- About the Project
- Reproducing SAIL Models
- Use the Pretrained SAIL-7b Model
- Contact

About the Project

Highlights:

- By fine-tuning the LLaMA-7B model with a search-augmented corpus, our SAIL-7B model outperforms ChatGPT and Vicuna-13B on instruction following! [Scoring against GPT-4 and ChatGPT]

- Our method also benefits AI for social good! The SAIL-7B model outperforms LLaMA-7B and Vicuna-13B on hate speech detection, stereotype recognition, and search-grounded fact-checking tasks. [Fact-checking example]

Reproducing SAIL Models

We construct a search-augmented instruction training set in two steps:

1. Collecting search results for each training instruction.
2. Constructing the training corpus by combining the instructions with their search results.

We provide the collected search results, and a complete training corpus can be constructed by simply running

```bash
bash data_prep.sh
```
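For reference, each entry in the constructed corpus pairs an instruction with retrieved search results and a target response. The snippet below is illustrative only; the field names are assumptions, not the script's actual schema, so inspect the output of `data_prep.sh` for the real format.

```python
# Illustrative only: an assumed shape for one search-augmented training example.
# The real field names and layout are defined by data_prep.sh.
example = {
    "instruction": "When was the Eiffel Tower completed?",
    "search_results": [
        "(1) The Eiffel Tower was completed in March 1889 for the World's Fair.",
        "(2) The Louvre is the most-visited museum in the world.",
    ],
    "response": "The Eiffel Tower was completed in March 1889.",
}
```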

Note that this process includes running a 350M-parameter language model (RoBERTa- or DeBERTa-based). This option can be switched in the `data_prep.sh` file.
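The sketch below illustrates how such a small scoring model can label each retrieved passage as informative or distracting for a given instruction. It is a minimal sketch, not the code invoked by `data_prep.sh`: the checkpoint name, candidate labels, and hypothesis template are assumptions.

```python
# Minimal sketch of scoring search results with a small NLI-style model.
# The checkpoint and labels are assumptions; data_prep.sh configures the real ones.
from transformers import pipeline

scorer = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",  # example DeBERTa NLI checkpoint
)

instruction = "When was the Eiffel Tower completed?"
passages = [
    "The Eiffel Tower was completed in March 1889 for the World's Fair.",
    "The Louvre is the most-visited museum in the world.",
]

for passage in passages:
    # Ask whether the passage is informative or distracting for this instruction.
    result = scorer(
        passage,
        candidate_labels=["informative", "distracting"],
        hypothesis_template="This passage is {} for answering: " + instruction,
    )
    print(result["labels"][0], "|", passage)
```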

The constructed training set can be used to fine-tune LLaMA-based models with FastChat. If any tokenization error occurs, try replacing the corresponding FastChat files with the code files we provide in this repository:

The training parameters are provided in `train.sh`.

Use the Pretrained SAIL-7b Model

The pretrained SAIL-7B model is based on LLaMA, so use of the model and demo should comply with LLaMA's GPL-3.0 license.

Demo

We built a live demo on Hugging Face Spaces with Gradio. The demo times out after 1 minute, so it cannot process very long inputs. With the demo, you can test the instruction-following ability of SAIL-7B with or without search augmentation.
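If you prefer to run the model locally instead of on the Space, the snippet below is a minimal generation sketch using Hugging Face Transformers. It assumes you have already reconstructed full SAIL-7B weights (see Weights below); the local path and the prompt template are assumptions, so adjust them to match the demo's actual format.

```python
# Minimal local-generation sketch; the model path and prompt template are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/sail-7b"  # hypothetical local checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")  # requires accelerate

# Optional search augmentation: prepend retrieved passages to the instruction.
search_results = "(1) Turandot is an opera by Giacomo Puccini, left unfinished at his death in 1924."
prompt = (
    f"Search results:\n{search_results}\n\n"
    "Instruction:\nWho wrote the opera Turandot?\n\n"
    "Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```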

Weights

We plan to release the delta weights of the pretrained model before July.
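Since LLaMA weights cannot be redistributed directly, the released files will be delta weights that must be added to the original LLaMA-7B weights. The sketch below shows one way to do this, assuming the deltas are released as a Hugging Face checkpoint of simple additive offsets (as with Vicuna-style deltas); the paths and the delta format are assumptions until the weights are released.

```python
# Sketch of reconstructing SAIL-7B from base LLaMA-7B plus additive delta weights.
# Paths and the delta format are assumptions; follow the official release instructions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("path/to/llama-7b", torch_dtype=torch.float16)
delta = AutoModelForCausalLM.from_pretrained("path/to/sail-7b-delta", torch_dtype=torch.float16)

# Add each delta tensor onto the corresponding base tensor in place.
delta_state = delta.state_dict()
for name, param in base.state_dict().items():
    param += delta_state[name]

base.save_pretrained("path/to/sail-7b")
AutoTokenizer.from_pretrained("path/to/sail-7b-delta").save_pretrained("path/to/sail-7b")
```

FastChat also ships a `fastchat.model.apply_delta` utility that performs this kind of reconstruction; whether it applies directly to the SAIL-7B deltas will depend on the released format.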

Contact

If you have any questions, submit an issue or contact hyluo AT mit DOT edu.