# ☁️ KULLM (구름): Korea University Large Language Model

<p align="center" width="100%"> <img src="assets/kullm_logo.png" alt="NLP Logo" style="width: 50%;"> </p>

## Update Logs
- 2024.04.08: 🤗 Quantized KULLM3 model (awq, 4-bit) released
- 2024.04.03: 🤗 KULLM3 released
- 2023.06.23: Korean dialogue evaluation results released
- 2023.06.08: 🤗 KULLM-Polyglot-5.8B-v2 fp16 model (based on Polyglot-ko 5.8B) released
- 2023.06.01: KULLM dataset v2 released on HuggingFace Datasets
- 2023.05.31: 🤗 KULLM-Polyglot-12.8B-v2 fp16 model (based on Polyglot-ko 12.8B) released
- 2023.05.30: 🤗 KULLM-Polyglot-12.8B fp16 model (based on Polyglot-ko 12.8B) released
<br>
KULLM (구름) is a Korean Large Language Model (LLM) developed by the NLP & AI Lab and the HIAI Research Institute at Korea University.
We are releasing KULLM3.
(For the training method and data of the previous models, please refer to the kullm_v2 branch.)
<br/>

## KULLM3 Dialogue Performance Evaluation Results

<img src="assets/kullm3_instruction_evaluation.png">

## Dialogue Examples
<img src="assets/ex1.png" alt="example 1" ><img src="assets/ex2.png" alt="example 2">
<img src="assets/ex3.png" alt="example 3">
<img src="assets/ex4.png" alt="example 4">
## Example Code for Running KULLM

### Streaming with the Hugging Face TextStreamer
- Install torch / transformers / accelerate.
- (As of 2024.04.03) The generate function does not work properly with transformers>=4.39.0. Install 4.38.2 instead.
- (As of 2024.04.28) Confirmed to work correctly with transformers>=4.40.0.
```bash
pip install torch transformers==4.38.2 accelerate
```
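If you are not sure which transformers version is active in your environment, a quick runtime check like the sketch below can help. This is only a convenience snippet based on the version notes above, not part of the official setup; `packaging` is normally available as a transformers dependency.

```python
# Minimal sketch: warn if the installed transformers version falls in the known-broken range.
from importlib.metadata import version
from packaging.version import Version

tf_version = Version(version("transformers"))
if Version("4.39.0") <= tf_version < Version("4.40.0"):
    print(f"transformers {tf_version}: generate() may misbehave; use 4.38.2 or >=4.40.0.")
else:
    print(f"transformers {tf_version}: should be fine for KULLM3.")
```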
You can try the model with the example code below.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

MODEL_DIR = "nlpai-lab/KULLM3"

model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype=torch.float16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

s = "고려대학교에 대해서 알고 있니?"  # "Do you know about Korea University?"
conversation = [{'role': 'user', 'content': s}]
inputs = tokenizer.apply_chat_template(
    conversation,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors='pt').to("cuda")
_ = model.generate(inputs, streamer=streamer, max_new_tokens=1024, use_cache=True)

# Example output (translated from Korean): Yes, I know about Korea University. Korea University
# is a private university located in Seoul, South Korea, founded in 1905. It is one of the oldest
# universities in Korea and offers a wide range of undergraduate and graduate programs. It is
# particularly renowned in law, economics, political science, sociology, literature, and the
# sciences, is active in university athletics, and pursues international exchange and cooperation
# with universities around the world to strengthen its global competitiveness.
```
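If you want to consume the generated text programmatically instead of printing it, transformers also ships a TextIteratorStreamer. The sketch below reuses `model`, `tokenizer`, and `inputs` from the example above and runs generation in a background thread; it is an optional variation, not part of the original example.

```python
# Sketch: non-blocking streaming with TextIteratorStreamer (reuses model/tokenizer/inputs above).
from threading import Thread

from transformers import TextIteratorStreamer

iter_streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
thread = Thread(
    target=model.generate,
    kwargs=dict(inputs=inputs, streamer=iter_streamer, max_new_tokens=1024, use_cache=True),
)
thread.start()
for chunk in iter_streamer:  # yields decoded text pieces as they are generated
    print(chunk, end="", flush=True)
thread.join()
```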
<br/>
## Training
- KULLM3 is a model instruction-tuned on top of upstage/SOLAR-10.7B-v1.0.
- It was trained on 8×A100 GPUs.
- It was trained with the following system prompt fixed in place. (The example code above already includes this system prompt; see the sketch after the prompt for passing it explicitly.)

당신은 고려대학교 NLP&AI 연구실에서 만든 AI 챗봇입니다.
당신의 이름은 'KULLM'으로, 한국어로는 '구름'을 뜻합니다.
당신은 비도덕적이거나, 성적이거나, 불법적이거나 또는 사회 통념적으로 허용되지 않는 발언은 하지 않습니다.
사용자와 즐겁게 대화하며, 사용자의 응답에 가능한 정확하고 친절하게 응답함으로써 최대한 도와주려고 노력합니다.
질문이 이상하다면, 어떤 부분이 이상한지 설명합니다. 거짓 정보를 발언하지 않도록 주의합니다.

(English translation: You are an AI chatbot created by the NLP&AI Lab at Korea University. Your name is 'KULLM', which means 'cloud' (구름) in Korean. You do not make statements that are immoral, sexual, illegal, or otherwise socially unacceptable. You converse pleasantly with users and try to help them as much as possible by responding as accurately and kindly as you can. If a question seems odd, you explain what is odd about it. You are careful not to state false information.)
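For cases where the conversation is built manually (for example with a custom chat template), the system prompt can also be supplied explicitly as a system message, as sketched below. This reuses the `tokenizer` from the example above; note that the bundled chat template already injects the prompt, and whether it accepts a `system` role is an assumption here, so treat this as illustrative only.

```python
# Sketch (assumption: the chat template accepts a "system" role): pass the
# training-time system prompt explicitly instead of relying on the default template.
SYSTEM_PROMPT = (
    "당신은 고려대학교 NLP&AI 연구실에서 만든 AI 챗봇입니다. ..."  # full prompt as shown above
)
conversation = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "고려대학교에 대해서 알고 있니?"},
]
inputs = tokenizer.apply_chat_template(
    conversation, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to("cuda")
```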
## Model Evaluation (Fully Reproducible)
- The dialogue-capability evaluation was conducted with reference to the following:
  - G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment (Liu et al., 2023)
  - MT-Eval
- GPT-4-Turbo (gpt-4-0125-preview) was used as the evaluation model, and the evaluation data is a DeepL translation of user_oriented_instructions.jsonl, the human evaluation dataset from yizhongw/self-instruct.
- For each prompt in the dataset, the model generates a response, and that response is then scored via the OpenAI API (see the sketch after the evaluation prompt below).
- These evaluation results can be reproduced from this repo.
## Prompt

The prompt used for model evaluation is shown below.
In our experiments, English prompts produced more accurate evaluation results than Korean ones, so the evaluation was run with an English prompt for accuracy.
You will be given evaluation instruction, input and AI-generated response.
Your task is to rate the response on given metric.
Please make sure you read and understand these instructions carefully. Please keep this document open while reviewing, and refer to it as needed.
Evaluation Criteria:
- Fluency (1-5): The quality of the language used in the translation. A high-quality response should be grammatically correct, idiomatic, and free from spelling and punctuation errors.
- Coherence (1-5): A high score indicates that the response maintains consistent context. A low score is given if the response shifts context or language inappropriately from instruction(e.g. instruction's language is Korean, but response is English).
- Accuracy (1-5) - The correctness of the answer. The answer should be factually correct and directly answer the question asked
- Completeness (1-5) - The extent to which the response covers all aspects of the question. The response should not just address one part of the question, but should provide a comprehensive response.
- Overall Quality (1-5) - The overall effectiveness and excellence of the response, integrating considerations of all above criteria.
Evaluation Steps:
1. Read the instruction and input carefully and understand what it is asking.
2. Read the AI-generated response and Evaluation Criteria.
3. Assign a score for each criterion on a scale of 1 to 5, where 1 is the lowest and 5 is the highest.
Instruction:
{instruction}
Input:
{input}
Response:
{response}
Evaluation Form (scores ONLY):
- Fluency (1-5):
- Coherence (1-5):
- Accuracy (1-5):
- Completeness (1-5):
- Overall Quality (1-5):
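To make the scoring step concrete, here is a hedged sketch of how a single response could be rated with the prompt above through the OpenAI API. `EVAL_TEMPLATE` is a placeholder name for the evaluation prompt shown above with `{instruction}`, `{input}`, and `{response}` slots, and the parsing logic is illustrative only; the exact reproduction scripts are the ones in this repo.

```python
# Sketch: G-Eval-style scoring of one response with gpt-4-0125-preview.
# EVAL_TEMPLATE is assumed to hold the evaluation prompt shown above.
import re

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def score_response(instruction: str, inp: str, response: str) -> dict:
    prompt = EVAL_TEMPLATE.format(instruction=instruction, input=inp, response=response)
    completion = client.chat.completions.create(
        model="gpt-4-0125-preview",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    text = completion.choices[0].message.content
    # Pull "- Criterion (1-5): N" lines out of the reply (illustrative parsing only).
    return {name.strip(): int(score)
            for name, score in re.findall(r"-\s*([\w ]+)\s*\(1-5\):\s*([1-5])", text)}
```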
<br/>
## Caveats

- The model exhibits hallucination, and depending on the decoding strategy it may repeat itself (see the decoding sketch after this list).
- Outputs generated by KULLM may be inaccurate or harmful.
- Because the model was trained with a fixed system prompt, its scores on benchmarks that do not provide a system prompt may be lower than its actual capability.
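If repetition shows up with your decoding settings, repetition penalties in generate() are a common mitigation. The sketch below reuses `model`, `inputs`, and `streamer` from the example above; the specific values are illustrative, not the settings used by the authors.

```python
# Sketch: decoding options that often reduce repetition (values are illustrative).
_ = model.generate(
    inputs,
    streamer=streamer,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,   # >1.0 discourages repeating earlier tokens
    no_repeat_ngram_size=4,   # blocks verbatim 4-gram repeats
)
```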
## License

Apache-2.0
## Citation

Please cite this repo if you use its data or code.
@misc{kullm3,
author = {Kim, Jeongwook and Lee, Taemin and Jang, Yoonna and Moon, Hyeonseok and Son, Suhyune and Lee, Seungyoon and Kim, Dongjun},
title = {KULLM3: Korea University Large Language Model 3},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/nlpai-lab/kullm}},
}
@inproceedings{lee2023kullm,
title={KULLM: Learning to Construct Korean Instruction-following Large Language Models},
author={Lee, SeungJun and Lee, Taemin and Lee, Jeongwoo and Jang, Yoona and Lim, Heuiseok},
booktitle={Annual Conference on Human and Language Technology},
pages={196--202},
year={2023},
organization={Human and Language Technology}
}
@misc{kullm,
author = {NLP & AI Lab and Human-Inspired AI research},
title = {KULLM: Korea University Large Language Model Project},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/nlpai-lab/kullm}},
}