GINC small-scale in-context learning dataset

GINC (Generative In-Context learning Dataset) is a small-scale synthetic dataset for studying in-context learning. The pretraining data is generated by a mixture of HMMs, and the in-context learning prompt examples are also generated from HMMs (either from the same mixture or not). The prompt examples are out-of-distribution with respect to the pretraining data, since each prompt is a concatenation of independently sampled examples separated by delimiters, a structure that does not match how the pretraining documents are generated. We provide code to generate GINC-style datasets with varying vocabulary sizes, numbers of HMMs, and other parameters.

Quickstart

Please create a conda environment or virtualenv using the information in conda-env.yml, then install the included transformers library by going into the transformers/ directory and running pip install -e . there. Modify consts.sh to change the default output locations and to insert code that activates your environment of choice. Run scripts/runner.sh to launch all the experiments via sbatch.
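
A minimal sketch of these steps, assuming a conda workflow; the environment name ginc below is an assumption, so use whatever name conda-env.yml actually defines:

```bash
# Create and activate the environment from conda-env.yml
# (the environment name "ginc" is an assumption; check the yml file)
conda env create -f conda-env.yml
conda activate ginc

# Install the included transformers library in editable mode
cd transformers
pip install -e .
cd ..

# Edit consts.sh first (output locations, environment activation), then
# launch all experiments; runner.sh submits the jobs via sbatch
bash scripts/runner.sh
```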

Explore the data

The default dataset has vocabulary size 50, and the pretraining data is generated as a mixture of 5 HMMs. The provided scripts in scripts (mainly generate.sh) allow you to generate more instances of the GINC dataset with different parameters. All datasets needed for the experiments in scripts/runner.sh are generated automatically when the script runs. The pretraining dataset is in data/GINC_trans0.1_start10.0_nsymbols50_nvalues10_nslots10_vic0.9_nhmms10/train.json, while the in-context prompts are in data/GINC_trans0.1_start10.0_nsymbols50_nvalues10_nslots10_vic0.9_nhmms10/id_prompts_randomsample_*.json. Note that "values" corresponds to "entities" and "slots" corresponds to "properties" in the terminology of the paper (cited below).
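
To take a quick look at the default dataset from the shell, using the default paths listed above:

```bash
# Default dataset directory (the directory name encodes the generation parameters)
DATA_DIR=data/GINC_trans0.1_start10.0_nsymbols50_nvalues10_nslots10_vic0.9_nhmms10

# Peek at the beginning of the pretraining corpus
head -c 1000 "$DATA_DIR/train.json"; echo

# List the in-context prompt files
ls "$DATA_DIR"/id_prompts_randomsample_*.json
```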

What does the data look like?

An example dataset is provided in the data directory; it contains both an example pretraining dataset and an example set of in-context prompts.

This repo contains the experiments for the paper An Explanation of In-context Learning as Implicit Bayesian Inference. If you find this repo useful, please cite:

@inproceedings{xie2022incontext,
  title={An Explanation of In-context Learning as Implicit Bayesian Inference},
  author={Sang Michael Xie and Aditi Raghunathan and Percy Liang and Tengyu Ma},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2022}
}