Measuring Massive Multitask Language Understanding

This is the repository for Measuring Massive Multitask Language Understanding by Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt (ICLR 2021).

This repository contains OpenAI API evaluation code, and the test data is available for download here.
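
For orientation, here is a minimal sketch of how the downloaded per-subject CSVs can be turned into few-shot prompts for evaluation. The file paths, the CSV layout (question, four choices, answer letter), and the prompt wording are assumptions based on the released data format, not an excerpt from this repository's evaluation script:

```python
import csv

def load_subject(path):
    """Load one subject CSV; rows are assumed to be
    [question, choice_A, choice_B, choice_C, choice_D, answer_letter]."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.reader(f))

def format_example(row, include_answer=True):
    """Render one question in multiple-choice prompt form."""
    question, a, b, c, d, answer = row
    lines = [question]
    for letter, choice in zip("ABCD", (a, b, c, d)):
        lines.append(f"{letter}. {choice}")
    lines.append(f"Answer: {answer}" if include_answer else "Answer:")
    return "\n".join(lines)

def build_prompt(dev_rows, test_row, subject):
    """Few-shot prompt: dev examples with answers, then the unanswered test question."""
    header = (f"The following are multiple choice questions (with answers) "
              f"about {subject}.\n\n")
    shots = "\n\n".join(format_example(r) for r in dev_rows)
    return header + shots + "\n\n" + format_example(test_row, include_answer=False)

if __name__ == "__main__":
    dev = load_subject("data/dev/astronomy_dev.csv")[:5]   # 5-shot
    test = load_subject("data/test/astronomy_test.csv")
    print(build_prompt(dev, test[0], "astronomy"))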

Test Leaderboard

If you want to have your model added to the leaderboard, please reach out to us or submit a pull request.

Results on the test (accuracy, %):

| Model | Authors | Humanities | Social Sciences | STEM | Other | Average |
|-------|---------|-----------:|----------------:|-----:|------:|--------:|
| Chinchilla (70B, few-shot) | Hoffmann et al., 2022 | 63.6 | 79.3 | 54.9 | 73.9 | 67.5 |
| Gopher (280B, few-shot) | Rae et al., 2021 | 56.2 | 71.9 | 47.4 | 66.1 | 60.0 |
| GPT-3 (175B, fine-tuned) | Brown et al., 2020 | 52.5 | 63.9 | 41.4 | 57.9 | 53.9 |
| flan-T5-xl | Chung et al., 2022 | 46.3 | 57.7 | 39.0 | 55.1 | 49.3 |
| UnifiedQA | Khashabi et al., 2020 | 45.6 | 56.6 | 40.2 | 54.6 | 48.9 |
| GPT-3 (175B, few-shot) | Brown et al., 2020 | 40.8 | 50.4 | 36.7 | 48.8 | 43.9 |
| GPT-3 (6.7B, fine-tuned) | Brown et al., 2020 | 42.1 | 49.2 | 35.1 | 46.9 | 43.2 |
| flan-T5-large | Chung et al., 2022 | 39.1 | 49.1 | 33.2 | 47.4 | 41.9 |
| flan-T5-base | Chung et al., 2022 | 34.0 | 38.1 | 27.6 | 37.0 | 34.2 |
| GPT-2 | Radford et al., 2019 | 32.8 | 33.3 | 30.2 | 33.1 | 32.4 |
| flan-T5-small | Chung et al., 2022 | 29.9 | 30.9 | 27.5 | 29.7 | 29.5 |
| Random Baseline | N/A | 25.0 | 25.0 | 25.0 | 25.0 | 25.0 |
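
For context, the category columns pool the benchmark's 57 subjects into the four groups above, and the overall score pools all questions, so subjects with more questions carry more weight. A minimal sketch of that aggregation, assuming per-subject (correct, total) counts; the subject names, mapping, and numbers below are hypothetical:

```python
# Hypothetical subject-to-category mapping (the real benchmark has 57 subjects).
SUBJECT_TO_CATEGORY = {
    "philosophy": "Humanities",
    "sociology": "Social Sciences",
    "astronomy": "STEM",
    "nutrition": "Other",
}

# Hypothetical (questions correct, total questions) per subject.
per_subject = {
    "philosophy": (128, 311),
    "sociology": (102, 245),
    "astronomy": (56, 152),
    "nutrition": (150, 306),
}

def aggregate(per_subject, mapping):
    """Pool per-subject counts into per-category and overall accuracy (%)."""
    cat_correct, cat_total = {}, {}
    for subject, (correct, total) in per_subject.items():
        cat = mapping[subject]
        cat_correct[cat] = cat_correct.get(cat, 0) + correct
        cat_total[cat] = cat_total.get(cat, 0) + total
    scores = {c: 100 * cat_correct[c] / cat_total[c] for c in cat_correct}
    scores["Average"] = 100 * sum(cat_correct.values()) / sum(cat_total.values())
    return scores

print(aggregate(per_subject, SUBJECT_TO_CATEGORY))
```

The random baseline is 25.0 in every column because each question has four answer choices.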

Citation

If you find this useful in your research, please consider citing the test and also the ETHICS dataset it draws from:

@article{hendryckstest2021,
  title={Measuring Massive Multitask Language Understanding},
  author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
  journal={Proceedings of the International Conference on Learning Representations (ICLR)},
  year={2021}
}

@article{hendrycks2021ethics,
  title={Aligning AI With Shared Human Values},
  author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},
  journal={Proceedings of the International Conference on Learning Representations (ICLR)},
  year={2021}
}