# Crosslingual Generalization through Multitask Finetuning
This repository provides an overview of all components used to create BLOOMZ, mT0 and xP3, introduced in the paper Crosslingual Generalization through Multitask Finetuning. Link to 25min video on the paper by Samuel Albanie; link to 4min video on the paper by Niklas Muennighoff.
## Data
<table>
  <tr>
    <th>Name</th>
    <th>Explanation</th>
    <th>Example models</th>
  </tr>
  <tr>
    <td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></td>
    <td>Mixture of 17 tasks in 277 languages with English prompts</td>
    <td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
  </tr>
  <tr>
    <td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td>
    <td>Mixture of 13 training tasks in 46 languages with English prompts</td>
    <td><a href=https://huggingface.co/bigscience/bloomz>BLOOMZ</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mT0-13B</a></td>
  </tr>
  <tr>
    <td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td>
    <td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
    <td><a href=https://huggingface.co/bigscience/bloomz-mt>BLOOMZ-MT</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mT0-13B-MT</a></td>
  </tr>
  <tr>
    <td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td>
    <td>xP3 + our evaluation datasets, adding 3 additional tasks for a total of 16 tasks in 46 languages with English prompts</td>
    <td></td>
  </tr>
  <tr>
    <td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td>
    <td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
    <td><a href=https://huggingface.co/bigscience/bloomz>BLOOMZ</a></td>
  </tr>
  <tr>
    <td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td>
    <td>Re-preprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
    <td><a href=https://huggingface.co/bigscience/bloomz-p3>BLOOMZ-P3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mT0-13B-P3</a></td>
  </tr>
</table>
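All of the datasets above are hosted on the Hugging Face Hub and can be inspected with the `datasets` library. Below is a minimal sketch for xP3; the `"en"` config name and the `inputs`/`targets` field names are taken from the dataset card layout and may differ for the other datasets, so treat them as assumptions.

```python
from datasets import load_dataset

# Stream one language config of xP3 to avoid downloading the full split.
# Assumption: xP3 exposes per-language configs (e.g. "en") with "inputs"/"targets" fields.
ds = load_dataset("bigscience/xP3", "en", split="train", streaming=True)

example = next(iter(ds))
print(example["inputs"])   # the prompted input text
print(example["targets"])  # the expected completion
```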
## Models

<table>
  <tr>
    <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.</th>
  </tr>
  <tr>
    <td>Parameters</td>
    <td>300M</td>
    <td>580M</td>
    <td>1.2B</td>
    <td>3.7B</td>
    <td>13B</td>
    <td>560M</td>
    <td>1.1B</td>
    <td>1.7B</td>
    <td>3B</td>
    <td>7.1B</td>
    <td>176B</td>
  </tr>
  <tr>
    <td>Finetuned Model</td>
    <td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
    <td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
    <td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
    <td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
    <td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
    <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
  </tr>
  <tr>
    <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
  </tr>
  <tr>
    <td>Finetuned Model</td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
  </tr>
  <tr>
    <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
  </tr>
  <tr>
    <td>Finetuned Model</td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
  </tr>
  <tr>
    <th colspan="12">Original pretrained checkpoints. Not recommended.</th>
  </tr>
  <tr>
    <td>Pretrained Model</td>
    <td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
    <td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
    <td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
    <td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
    <td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
    <td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
    <td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
    <td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
    <td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
    <td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
    <td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
  </tr>
</table>
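All released checkpoints work out of the box with the transformers library. Below is a minimal inference sketch for the smallest BLOOMZ checkpoint; for the mT0 family use `AutoModelForSeq2SeqLM` instead of `AutoModelForCausalLM`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-560m"  # smallest BLOOMZ model; larger checkpoints work the same way

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("Translate to English: Je t'aime.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```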
## Create xP3(x)

We have processed & uploaded xP3. If you want to recreate it, follow these steps:
- Get promptsource: for xP3mt, `git clone -b xp3mt https://github.com/Muennighoff/promptsource.git`; for xP3, `git clone -b tr13 https://github.com/Muennighoff/promptsource.git`. Then install it with `cd promptsource; pip install -e .`
- Get packages: `pip install -q datasets iso-639`
- Get the creation script & edit it if necessary:
    - For xP3mt, set `USE_ENGLISH_PROMPTS = False` at the beginning
    - For xP3, set `USE_ENGLISH_PROMPTS = True` at the beginning
- Run the script, e.g. via `python prepare_xp3.py` or a SLURM script
For the new extension of xP3, xP3x, the process is largely the same except:
- Install the `xp3x` branch instead, i.e. `pip install git+https://github.com/Muennighoff/promptsource.git@xp3x`
- The creation script is in this repository & named `create_xp3x.py`.
xP3x is a superset of xP3, so unless you want to reproduce the paper, we recommend always using xP3x (or xP3mt if you want machine-translated prompts).
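Both creation scripts rely on promptsource templates to turn raw dataset examples into (input, target) text pairs. The sketch below shows how a single template is applied to a hand-written XNLI-style example; the available dataset/template names depend on the promptsource branch you installed, so the `("xnli", "en")` lookup is an assumption to illustrate the API.

```python
from promptsource.templates import DatasetTemplates

# Load the prompt templates registered for the English XNLI subset (assumed to exist in the branch).
templates = DatasetTemplates("xnli", "en")
template = templates[templates.all_template_names[0]]  # pick any available template

example = {
    "premise": "The cat sat on the mat.",
    "hypothesis": "An animal is resting.",
    "label": 0,  # 0 = entailment in XNLI
}

prompt, target = template.apply(example)
print(prompt)
print(target)
```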
## Train models
### BLOOMZ
- Download the pretrained model checkpoint, which is of shape PP=12, TP=4, DP=4. If you'd like to reshape the model, you will also need to download the universal checkpoint. If you want to continue finetuning, you should use our finetuned checkpoint, which is of shape PP=72, TP=1, DP=4.
- Set up the training code: `git clone -b t0loading https://github.com/bigscience-workshop/Megatron-DeepSpeed` & follow its setup guide to create an environment with the necessary packages.
- Download the Megatron-DeepSpeed processed xP3megds, or re-preprocess it for Megatron-DeepSpeed yourself by downloading xP3, removing the `merged_{lang}.jsonl` files & preprocessing it using the script here.
- Set up & run the training script: We use SLURM scripts available at bigscience-workshop/bigscience/train/tr13-mtf, referred to as `xp3capmixnewcodelonglossseq`. E.g. this is the script launched to train bloomz. Important parts of the script to modify are:
    - `#SBATCH` variables, such as nodes, gpus, time, etc. Our SLURM guide is here.
    - `source $six_ALL_CCFRWORK/start-tr13f-6B3-ml-t0`, which you should point to your own conda environment set up via Megatron-DeepSpeed.
    - PATH environment variables, notably `TRAIN_DATA_PATH` & `VALID_DATA_PATH`, which point to files listing your processed training and validation data. We provide our files in this repository (`xp3capmixnewcodelong_train.txt` & `xp3capmixnewcodelong_validation.txt`), but you will likely want to change the paths inside. The percentages per language are based on how much each language makes up in xP3, with code being slightly upsampled.
    - `PP_SIZE=72`, `TP_SIZE=1` & the batch-size variables specifying the layout. This will depend on the hardware available to you. If you change them, you may have to reshape the model. For reshaping you need to use the universal checkpoint and the `--universal` flag in the script. We recommend saving a new checkpoint right after & then continuing training without `--universal`, which will be faster.
    - If you want to restart from a saved checkpoint (e.g. after training a few steps like above), make sure to remove the `--no-load-optim` & `--reset-progress` flags.
- After training, you can convert the checkpoint to transformers format using the script here (a quick sanity check for the converted model is sketched below).
Helpful resources:
### mT0
Follow the finetuning instructions here, making sure to use pretrained mT5 models & the xP3 dataset.
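The released mT0 models were trained with the T5X setup linked above. As a rough illustration of the same recipe (multitask finetuning of mT5 on xP3 input/target pairs), here is a minimal, non-equivalent sketch using Hugging Face transformers; the xP3 `"en"` config name, the hyperparameters and the tiny data slice are illustrative assumptions, not the paper's settings.

```python
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

checkpoint = "google/mt5-small"  # smallest mT5 checkpoint, for illustration only
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Assumption: xP3 exposes per-language configs with "inputs"/"targets" fields.
# This downloads the full "en" config; the tiny slice just keeps the sketch cheap.
ds = load_dataset("bigscience/xP3", "en", split="train[:1000]")

def tokenize(batch):
    model_inputs = tokenizer(batch["inputs"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["targets"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = ds.map(tokenize, batched=True, remove_columns=ds.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="mt0-small-xp3-sketch",
        per_device_train_batch_size=8,
        learning_rate=1e-4,  # illustrative only
        max_steps=1000,      # illustrative only
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```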
Helpful resources:
## Evaluate models
Evaluation results are all available at https://huggingface.co/datasets/bigscience/evaluation-results under the respective models. Below we explain how to run the evaluations.
### Rank Evaluation
We evaluate the models via rank evaluation on XCOPA, XNLI, XStoryCloze & XWinograd (a minimal sketch of the ranking procedure follows the list below):
- Get the promptsource fork: `git clone -b xp3mt https://github.com/Muennighoff/promptsource.git` & `cd promptsource; pip install -e .`
- Get the t-zero fork: `git clone -b muennighoff/upgrdps https://github.com/Muennighoff/t-zero.git` & `cd t-zero; pip install -e .`
- Download the model & run the evaluation script, for example for bloomz.
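For context, rank evaluation scores every candidate answer of a multiple-choice instance by the log-likelihood the model assigns to it given the prompt, and picks the highest-scoring candidate; the t-zero fork above does this with proper prompt handling and batching. Below is a toy standalone sketch with a made-up COPA-style example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-560m"  # small checkpoint, for illustration
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
model.eval()

def candidate_logprob(context: str, candidate: str) -> float:
    """Sum of log-probabilities the model assigns to `candidate` given `context`.

    Simplification: assumes tokenizing `context` alone yields a prefix of the
    tokenization of `context + candidate`; the t-zero code handles this more carefully.
    """
    ctx_len = tokenizer(context, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(context + candidate, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    # Position i predicts token i+1; only score the candidate's tokens.
    return sum(log_probs[i, full_ids[0, i + 1]].item()
               for i in range(ctx_len - 1, full_ids.shape[1] - 1))

prompt = "The man broke his toe. What was the cause? "
choices = ["He got a hole in his sock.", "He dropped a hammer on his foot."]
scores = [candidate_logprob(prompt, c) for c in choices]
print(choices[scores.index(max(scores))])
```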
### Generation Evaluation
We evaluate generation on translation & summarization during training for validation (a toy example of scoring generated translations follows the list below):
- Get the promptsource fork: `git clone -b xp3mt https://github.com/Muennighoff/promptsource` & `cd promptsource; pip install -e .`
- Get bigscience-workshop/lm-evaluation-harness: `git clone https://github.com/bigscience-workshop/lm-evaluation-harness`. The script for the 7.1B model, for example, is here.
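For orientation, translation validation boils down to prompting the model, decoding only the continuation, and scoring it against references; the real runs use the lm-evaluation-harness tasks and scripts above. Below is a toy sketch with made-up validation pairs, scored with sacrebleu.

```python
import sacrebleu
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-560m"  # small checkpoint, for illustration
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Made-up validation pairs purely for illustration.
sources = ["Je t'aime.", "La vie est belle."]
references = ["I love you.", "Life is beautiful."]

hypotheses = []
for src in sources:
    prompt = f"Translate to English: {src}\nTranslation:"
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=32)
    # Strip the prompt tokens and decode only the generated continuation.
    continuation = tokenizer.decode(out[0, inputs.input_ids.shape[1]:], skip_special_tokens=True)
    hypotheses.append(continuation.strip())

print(sacrebleu.corpus_bleu(hypotheses, [references]).score)
```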
We also evaluate code generation on HumanEval:
- Get the code evaluation code: `git clone https://github.com/loubnabnl/bloom-code-evaluation` & go through its setup.
- Set `prepend_eos` to `False` in `code_eval.py`, i.e. change `complete_code(model, tokenizer, prompt, num_completions=1, prepend_eos=True, **gen_kwargs)` to `complete_code(model, tokenizer, prompt, num_completions=1, prepend_eos=False, **gen_kwargs)`.
- Download the model & run the evaluation script, swapping out MODEL_CKPT for your path; for example, for bloomz use this.
## Plots & Tables
### Plots
- Figure 1: `plotstables/xp3_taxonomy.drawio` & `plotstables/xp3_taxonomy.pdf`
- Figure 2: `plotstables/xp3_languages.ipynb` & colab
- Figure 3: `plotstables/xp3_variants.pdf` & drawings
- Figure 4: `plotstables/xp3_generalization_bar.pdf` & colab
- Figure 5: `plotstables/lang_generalization` & colab
- Figure 6: `plotstables/scale.pdf` & colab
- Figure 7: `plotstables/validation.pdf` & colab
- Figure 8: `plotstables/pretraining_sizes.pdf` & colab
- Figure 9: `plotstables/english_task_generalization.pdf` & colab
- Figure 10: `plotstables/task_generalization.pdf` & colab
- Figure 11: `plotstables/roots_xp3_languages.pdf` & colab requiring some of the files in `plotstables/contamination`
- Figure 12: `plotstables/examples/bloom_code_example.py` & `plotstables/examples/bloom_code_light.pdf` & `plotstables/examples/bloomz_code_light.pdf`; the raw code files can be found here & here
- Figure 13 - Figure 16: `plotstables/examples/*.pdf` & `plotstables/examples/generations.drawio`
### Tables
- Table 1: Colab & Colab for complex version
- Table 2: Adapted from the Codex paper
- Table 3: Manual
- Table 4: `plotstables/compute_codegen_len.ipynb` for generations & `plotstables/countcode.py` for xP3
- Table 5: Manual
- Table 6: Manual
- Table 7: `plotstables/levenshtein.py`
- Table 8: Same as Table 1 with languages swapped from L1 to L2
- Table 9: Colab
- Table 10: Colab
- Prompt Appendix: https://github.com/albanie/prompt_formatting_in_latex
## Citation
```bibtex
@article{muennighoff2022crosslingual,
  title={Crosslingual generalization through multitask finetuning},
  author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
  journal={arXiv preprint arXiv:2211.01786},
  year={2022}
}
```