KLUE: Korean Language Understanding Evaluation

KLUE is introduced to advance Korean NLP. Korean pre-trained language models (PLMs) have appeared to tackle Korean NLP problems, as PLMs have brought significant performance gains on NLP problems in other languages. Despite the proliferation of Korean PLMs, however, no proper evaluation dataset had been released publicly. The lack of such a benchmark dataset limits fair comparison between models and hinders further progress on model architectures.

Along with the benchmark tasks and data, we provide suitable evaluation metrics and fine-tuning recipes for pre-trained language models for each task. We furthermore release the PLMs, KLUE-BERT and KLUE-RoBERTa, to help reproduce the baseline models on KLUE and thereby facilitate future research.

See our paper for more details.

Design Principles

In designing the Korean Language Understanding Evaluation (KLUE) benchmark, we aim to make KLUE:

  1. cover diverse tasks and corpora;
  2. be accessible to everyone without any restriction;
  3. include accurate and unambiguous annotations;
  4. mitigate AI ethical issues.

Benchmark Datasets

The KLUE benchmark is composed of 8 tasks:

  1. Topic Classification (TC)
  2. Semantic Textual Similarity (STS)
  3. Natural Language Inference (NLI)
  4. Named Entity Recognition (NER)
  5. Relation Extraction (RE)
  6. Dependency Parsing (DP)
  7. Machine Reading Comprehension (MRC)
  8. Dialogue State Tracking (DST)

See the wiki for dataset descriptions.
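For quick experimentation, the benchmark data can also be pulled with the Hugging Face `datasets` library. A minimal sketch, assuming the `klue` dataset on the Hub with its `ynat` (topic classification) configuration:

```python
# Minimal sketch: load one KLUE task via the Hugging Face datasets library.
# Assumes the "klue" dataset on the Hub and its "ynat" (topic classification) config.
from datasets import load_dataset

ynat = load_dataset("klue", "ynat")
print(ynat)              # DatasetDict with train/validation splits
print(ynat["train"][0])  # a single news headline with its topic label
```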

NOTE: In the paper, we describe in more detail how our four principles guided the creation of KLUE, from task selection, corpus selection, annotation protocols, and evaluation metrics to baseline construction.

KLUE-PLMs

We have trained two models: KLUE-BERT and KLUE-RoBERTa.

| Model | Embedding size | Hidden size | # Layers | # Heads |
| --- | --- | --- | --- | --- |
| KLUE-BERT-base | 768 | 768 | 12 | 12 |
| KLUE-RoBERTa-small | 768 | 768 | 6 | 12 |
| KLUE-RoBERTa-base | 768 | 768 | 12 | 12 |
| KLUE-RoBERTa-large | 1024 | 1024 | 24 | 16 |

NOTE: All the pretrained models are available on the Hugging Face Model Hub. Check https://huggingface.co/klue.
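The released checkpoints load directly with the `transformers` library. A minimal sketch using `klue/bert-base`, the Hub name of KLUE-BERT-base:

```python
# Minimal sketch: load KLUE-BERT-base from the Hugging Face Model Hub.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModel.from_pretrained("klue/bert-base")

inputs = tokenizer("한국어 언어 이해 평가 벤치마크", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # [1, sequence_length, 768]
```

The other checkpoints follow the same naming scheme, e.g. `klue/roberta-base` and `klue/roberta-large`.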

Baseline Scores

Evaluation results of our PLMs and other baselines on the KLUE benchmark. **Bold** marks the best score across all models, and *italics* mark the best score among BASE models.

| Model | TC (F1) | STS (Pearson's r) | STS (F1) | NLI (ACC) | NER (entity F1) | NER (char F1) | RE (F1) | RE (AUPRC) | DP (UAS) | DP (LAS) | MRC (EM) | MRC (ROUGE) | DST (JGA) | DST (Slot F1) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mBERT-base | 81.55 | 84.66 | 76.00 | 73.20 | 76.50 | 89.23 | 57.88 | 53.82 | 90.30 | 86.66 | 44.66 | 55.92 | 35.46 | 88.63 |
| XLM-R-base | 83.52 | 89.16 | 82.01 | 77.33 | 80.37 | 92.12 | 57.46 | 54.98 | 89.20 | 87.69 | 27.48 | 53.93 | 39.82 | 89.61 |
| XLM-R-large | **86.06** | 92.97 | 85.86 | 85.93 | 82.27 | **93.22** | 58.39 | 61.15 | 92.71 | **88.70** | 35.99 | 66.77 | 41.20 | 89.80 |
| KR-BERT-base | 84.58 | 88.61 | 81.07 | 77.17 | 74.58 | 90.13 | 62.74 | 60.94 | 89.92 | 87.48 | 48.28 | 58.54 | 45.33 | 90.70 |
| koELECTRA-base | 84.59 | 92.46 | 84.84 | *85.63* | **86.11** | *92.56* | 62.85 | 58.94 | 92.90 | 87.77 | 59.82 | 66.05 | 41.58 | 89.60 |
| KLUE-BERT-base | *85.73* | 90.85 | 82.84 | 81.63 | 83.97 | 91.39 | 66.44 | 66.17 | 89.96 | 88.05 | 62.32 | 68.51 | 46.64 | 91.61 |
| KLUE-RoBERTa-small | 84.98 | 91.54 | 85.16 | 79.33 | 83.65 | 91.14 | 60.89 | 58.96 | 90.04 | 88.14 | 57.32 | 62.70 | 46.62 | 91.44 |
| KLUE-RoBERTa-base | 85.07 | *92.50* | *85.40* | 84.83 | 84.60 | 91.44 | *67.65* | *68.55* | *93.04* | *88.32* | *68.67* | *73.98* | *47.49* | *91.64* |
| KLUE-RoBERTa-large | 85.69 | **93.35** | **86.63** | **89.17** | 85.00 | 91.86 | **71.13** | **72.98** | **93.48** | 88.36 | **75.58** | **80.59** | **50.22** | **92.23** |
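These scores come from the per-task fine-tuning recipes described in the paper. The sketch below is not that exact recipe; it is a generic `transformers` Trainer loop for the topic classification task (YNAT), with illustrative hyperparameters only:

```python
# Generic fine-tuning sketch for KLUE topic classification (YNAT).
# Not the paper's exact recipe; hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("klue", "ynat")
tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")

def tokenize(batch):
    # YNAT classifies news headlines, stored in the "title" field.
    return tokenizer(batch["title"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "klue/bert-base", num_labels=7)  # YNAT has 7 topic labels

args = TrainingArguments(
    output_dir="ynat-baseline",
    per_device_train_batch_size=32,
    learning_rate=5e-5,
    num_train_epochs=3,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"],
                  tokenizer=tokenizer)
trainer.train()
```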

Leaderboard

https://klue-benchmark.com

Submission Guideline

See https://aistages-prod-server-public.s3.amazonaws.com/app/Competitions/000065/data/klue_code.tar.gz

Members

Researchers

Sungjoon Park, Jihyung Moon, Sungdong Kim, Won Ik Cho, Jiyoon Han, Jangwon Park, Chisung Song, Junseong Kim, Youngsook Song, Taehwan Oh, Joohong Lee, Juhyun Oh, Sungwon Ryu, Younghoon Jeong, Inkwon Lee, Sangwoo Seo, Dongjun Lee, Hyunwoo Kim, Myeonghwa Lee, Seongbo Jang, Seungwon Do, Sunkyoung Kim, Kyungtae Lim, Jongwon Lee, Kyumin Park, Jamin Shin, Seonghyun Kim, Lucy Park

Advisors

Alice Oh, Jung-Woo Ha, Kyunghyun Cho

Sponsors

Organizers

Reference

@misc{park2021klue,
      title={KLUE: Korean Language Understanding Evaluation},
      author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
      year={2021},
      eprint={2105.09680},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

License

This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.

<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />