
Korean-Sentence-Embedding

Korean-Sentence-Embedding provides pre-trained Korean sentence embedding models that can be downloaded and used immediately, along with an environment for training your own models.

Quick tour

Note <br> All pre-trained models are uploaded to the Hugging Face Model Hub: https://huggingface.co/BM-K

```python
import torch
from transformers import AutoModel, AutoTokenizer

def cal_score(a, b):
    # Cosine similarity between two (batches of) embeddings, scaled to 0-100.
    if len(a.shape) == 1: a = a.unsqueeze(0)
    if len(b.shape) == 1: b = b.unsqueeze(0)

    a_norm = a / a.norm(dim=1)[:, None]
    b_norm = b / b.norm(dim=1)[:, None]
    return torch.mm(a_norm, b_norm.transpose(0, 1)) * 100

model = AutoModel.from_pretrained('BM-K/KoSimCSE-roberta-multitask')  # or 'BM-K/KoSimCSE-bert-multitask'
tokenizer = AutoTokenizer.from_pretrained('BM-K/KoSimCSE-roberta-multitask')  # or 'BM-K/KoSimCSE-bert-multitask'

sentences = ['치타가 들판을 가로 질러 먹이를 쫓는다.',    # "A cheetah chases prey across the field."
             '치타 한 마리가 먹이 뒤에서 달리고 있다.',   # "A cheetah is running behind its prey."
             '원숭이 한 마리가 드럼을 연주한다.']          # "A monkey is playing a drum."

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
embeddings, _ = model(**inputs, return_dict=False)  # last hidden state, shape: (3, seq_len, hidden)

# embeddings[i][0] is the [CLS] token embedding of the i-th sentence.
score01 = cal_score(embeddings[0][0], embeddings[1][0])  # 84.09
# '치타가 들판을 가로 질러 먹이를 쫓는다.' @ '치타 한 마리가 먹이 뒤에서 달리고 있다.'
score02 = cal_score(embeddings[0][0], embeddings[2][0])  # 23.21
# '치타가 들판을 가로 질러 먹이를 쫓는다.' @ '원숭이 한 마리가 드럼을 연주한다.'
```
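Since cal_score normalizes its inputs and uses a matrix product, it also accepts a whole batch of embeddings at once. The short sketch below reuses the variables from the example above to compute every pairwise score in one call:

```python
# [CLS] embeddings of all three sentences, shape: (3, hidden_size)
cls_embeddings = embeddings[:, 0]

# 3x3 matrix of pairwise similarity scores (the diagonal is 100).
score_matrix = cal_score(cls_embeddings, cls_embeddings)
```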

Update history

- Updates on Mar.08.2023
- Updates on Feb.24.2023
- Updates on Nov.15.2022
- Updates on Oct.27.2022
- Updates on Oct.21.2022
- Updates on Jun.01.2022
- Updates on May.23.2022
- Updates on Mar.01.2022
- Updates on Feb.11.2022
- Updates on Jan.26.2022

Baseline Models

Baseline models used for Korean sentence embedding - KLUE-PLMs

| Model | Embedding size | Hidden size | # Layers | # Heads |
|---|---|---|---|---|
| KLUE-BERT-base | 768 | 768 | 12 | 12 |
| KLUE-RoBERTa-base | 768 | 768 | 12 | 12 |

Warning <br> Large pre-trained models need a lot of GPU memory to train
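The KLUE baselines are standard Hugging Face checkpoints, so a custom training run typically starts by loading one of them. A minimal sketch, assuming the public Hub model IDs klue/bert-base and klue/roberta-base (not part of this repository):

```python
from transformers import AutoModel, AutoTokenizer

# Load a KLUE baseline encoder as the starting point for sentence-embedding training.
model_name = "klue/roberta-base"  # or "klue/bert-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)
```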

Available Models

  1. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks [SBERT]-[EMNLP 2019]
  2. SimCSE: Simple Contrastive Learning of Sentence Embeddings [SimCSE]-[EMNLP 2021]
  3. Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models [Sentence-T5]-[ACL findings 2022]
  4. DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings [DiffCSE]-[NAACL 2022]
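The models above differ in how they are trained, but each one reduces a sequence of token embeddings to a single sentence vector, either by taking the [CLS] token (as in the quick-tour example) or by mean pooling. Below is a minimal sketch of mask-aware mean pooling, the strategy popularized by Sentence-BERT; it is illustrative only and not code from this repository:

```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    # Average the token embeddings of each sentence, ignoring padding
    # positions by weighting with the attention mask.
    mask = attention_mask.unsqueeze(-1).float()        # (batch, seq_len, 1)
    summed = (last_hidden_state * mask).sum(dim=1)     # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)           # (batch, 1)
    return summed / counts                             # (batch, hidden)
```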

Datasets

Setups


KoSentenceBERT

KoSimCSE

KoSentenceT5

KoDiffCSE

Performance-supervised

| Model | Average | Cosine Pearson | Cosine Spearman | Euclidean Pearson | Euclidean Spearman | Manhattan Pearson | Manhattan Spearman | Dot Pearson | Dot Spearman |
|---|---|---|---|---|---|---|---|---|---|
| KoSBERT<sub>SKT</sub> | 77.40 | 78.81 | 78.47 | 77.68 | 77.78 | 77.71 | 77.83 | 75.75 | 75.22 |
| KoSBERT | 80.39 | 82.13 | 82.25 | 80.67 | 80.75 | 80.69 | 80.78 | 77.96 | 77.90 |
| KoSRoBERTa | 81.64 | 81.20 | 82.20 | 81.79 | 82.34 | 81.59 | 82.20 | 80.62 | 81.25 |
| KoSentenceBART | 77.14 | 79.71 | 78.74 | 78.42 | 78.02 | 78.40 | 78.00 | 74.24 | 72.15 |
| KoSentenceT5 | 77.83 | 80.87 | 79.74 | 80.24 | 79.36 | 80.19 | 79.27 | 72.81 | 70.17 |
| KoSimCSE-BERT<sub>SKT</sub> | 81.32 | 82.12 | 82.56 | 81.84 | 81.63 | 81.99 | 81.74 | 79.55 | 79.19 |
| KoSimCSE-BERT | 83.37 | 83.22 | 83.58 | 83.24 | 83.60 | 83.15 | 83.54 | 83.13 | 83.49 |
| KoSimCSE-RoBERTa | 83.65 | 83.60 | 83.77 | 83.54 | 83.76 | 83.55 | 83.77 | 83.55 | 83.64 |
| KoSimCSE-BERT-multitask | 85.71 | 85.29 | 86.02 | 85.63 | 86.01 | 85.57 | 85.97 | 85.26 | 85.93 |
| KoSimCSE-RoBERTa-multitask | 85.77 | 85.08 | 86.12 | 85.84 | 86.12 | 85.83 | 86.12 | 85.03 | 85.99 |
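The columns follow the usual KorSTS evaluation protocol: for each similarity function (cosine, Euclidean, Manhattan, dot product), the Pearson and Spearman correlations between the model's sentence-pair similarities and the gold labels are reported, scaled by 100. A minimal sketch for the two cosine columns, assuming paired sentence embeddings and gold scores are already available (illustrative only, not the repository's evaluation script):

```python
import torch.nn.functional as F
from scipy.stats import pearsonr, spearmanr

def cosine_sts_metrics(emb_a, emb_b, gold):
    # emb_a, emb_b: (num_pairs, hidden) sentence embeddings; gold: gold STS labels.
    sims = F.cosine_similarity(emb_a, emb_b).cpu().numpy()
    return pearsonr(sims, gold)[0] * 100, spearmanr(sims, gold)[0] * 100
```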

Performance-unsupervised

| Model | Average | Cosine Pearson | Cosine Spearman | Euclidean Pearson | Euclidean Spearman | Manhattan Pearson | Manhattan Spearman | Dot Pearson | Dot Spearman |
|---|---|---|---|---|---|---|---|---|---|
| KoSRoBERTa-base | N/A | N/A | 48.96 | N/A | N/A | N/A | N/A | N/A | N/A |
| KoSRoBERTa-large | N/A | N/A | 51.35 | N/A | N/A | N/A | N/A | N/A | N/A |
| KoSimCSE-BERT | 74.08 | 74.92 | 73.98 | 74.15 | 74.22 | 74.07 | 74.07 | 74.15 | 73.14 |
| KoSimCSE-RoBERTa | 75.27 | 75.93 | 75.00 | 75.28 | 75.01 | 75.17 | 74.83 | 75.95 | 75.01 |
| KoDiffCSE-RoBERTa | 77.17 | 77.73 | 76.96 | 77.21 | 76.89 | 77.11 | 76.81 | 77.74 | 76.97 |

Downstream tasks

License

This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.

<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />

References

@misc{park2021klue,
    title={KLUE: Korean Language Understanding Evaluation},
    author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jung-Woo Ha and Kyunghyun Cho},
    year={2021},
    eprint={2105.09680},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

@inproceedings{gao2021simcse,
   title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
   author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
   booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
   year={2021}
}

@article{ham2020kornli,
  title={KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding},
  author={Ham, Jiyeon and Choe, Yo Joong and Park, Kyubyong and Choi, Ilji and Soh, Hyungjoon},
  journal={arXiv preprint arXiv:2004.03289},
  year={2020}
}

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "http://arxiv.org/abs/1908.10084",
}

@inproceedings{chuang2022diffcse,
   title={{DiffCSE}: Difference-based Contrastive Learning for Sentence Embeddings},
   author={Chuang, Yung-Sung and Dangovski, Rumen and Luo, Hongyin and Zhang, Yang and Chang, Shiyu and Soljacic, Marin and Li, Shang-Wen and Yih, Wen-tau and Kim, Yoon and Glass, James},
   booktitle={Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
   year={2022}
}