# chat_templates

NOTICE: I may not actively update or maintain this repository, as recent HF LLMs usually implement their chat templates in the tokenizer_config.json file. I am glad to see this common practice, which facilitates out-of-the-box use of these powerful models.
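For example, you can check whether a model already ships its own chat template before reaching for the files in this repo. A minimal sketch (the model name is only a placeholder):

```python
from transformers import AutoTokenizer

toker = AutoTokenizer.from_pretrained("YOUR_MODEL_NAME")

# Tokenizers that ship a chat template in tokenizer_config.json expose it here;
# if it is None, you can assign one of the .jinja templates from this repo.
if toker.chat_template is None:
    print("No built-in chat template; consider using one from this repo.")
else:
    print("Built-in chat template found:")
    print(toker.chat_template)
```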

This repository contains proper chat templates (or input formats) for instruction-tuned large language models (LLMs), to support the `chat_template` feature of `transformers`. If you would like to add more chat templates, feel free to open a pull request.

If you find this repo useful, please cite it:

```bibtex
@misc{zheng-2024-chat-templates,
  author = {Zheng, Chujie},
  title = {Chat Templates for HuggingFace Large Language Models},
  year = {2024},
  howpublished = {\url{https://github.com/chujiezheng/chat_templates}}
}
```

## Updates

## What's Contained in This Repo?

## Usage Examples

Important NOTE: As mentioned in this issue, the messages should contain at least one user message. It is strongly not recommended to pass only a system message, as this may result in unexpected outputs (the models are not trained this way).
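A minimal illustration of the recommended structure:

```python
# Recommended: the conversation contains at least one user message
messages = [
    {'role': 'system', 'content': 'This is a system prompt.'},
    {'role': 'user', 'content': 'This is the first user input.'},
]

# Discouraged: a system message alone may lead to unexpected outputs
# messages = [{'role': 'system', 'content': 'This is a system prompt.'}]
```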

<details> <summary><b>Example 1: Meta-Llama-3-8B-Instruct</b></summary>

This example checks whether the jinja file is correctly implemented.

```python
from transformers import AutoTokenizer

toker = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", token="YOUR_OWN_TOKEN")
messages = [
    {'role': 'system', 'content': 'This is a system prompt.'},
    {'role': 'user', 'content': 'This is the first user input.'},
    {'role': 'assistant', 'content': 'This is the first assistant response.'},
    {'role': 'user', 'content': 'This is the second user input.'},
]
print('###### Default (yet Correct) Chat Template ######')
print(toker.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))

print('###### Corrected Chat Template ######')
# Load the template from this repo and strip the indentation and newlines
# that only exist for readability of the .jinja file
chat_template = open('./chat_templates/llama-3-instruct.jinja').read()
chat_template = chat_template.replace('    ', '').replace('\n', '')
toker.chat_template = chat_template
print(toker.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```

Expected output:

```
###### Default (yet Correct) Chat Template ######
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

This is a system prompt.<|eot_id|><|start_header_id|>user<|end_header_id|>

This is the first user input.<|eot_id|><|start_header_id|>assistant<|end_header_id|>

This is the first assistant response.<|eot_id|><|start_header_id|>user<|end_header_id|>

This is the second user input.<|eot_id|><|start_header_id|>assistant<|end_header_id|>


###### Corrected Chat Template ######
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

This is a system prompt.<|eot_id|><|start_header_id|>user<|end_header_id|>

This is the first user input.<|eot_id|><|start_header_id|>assistant<|end_header_id|>

This is the first assistant response.<|eot_id|><|start_header_id|>user<|end_header_id|>

This is the second user input.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
</details> <details> <summary><b>Example 2: Mistral-7B-Instruct-v0.2</b></summary>

Mistral-instruct (and similarly gemma-it) does not natively support the system message, so passing one with the default chat template raises an error.

```python
from transformers import AutoTokenizer

toker = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
messages = [
    {'role': 'system', 'content': 'This is a system prompt.'},
    {'role': 'user', 'content': 'This is the first user input.'},
    {'role': 'assistant', 'content': 'This is the first assistant response.'},
    {'role': 'user', 'content': 'This is the second user input.'},
]
print('###### Default (but Improper) Chat Template ######')
# The default template raises an error when a system message is passed:
# print(toker.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))

print('###### Corrected Chat Template ######')
# Load the template from this repo and strip the readability whitespace
chat_template = open('./chat_templates/mistral-instruct.jinja').read()
chat_template = chat_template.replace('    ', '').replace('\n', '')
toker.chat_template = chat_template
print(toker.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```

Expected output:

```
###### Default (but Error-Raising) Chat Template ######
jinja2.exceptions.TemplateError: Conversation roles must alternate user/assistant/user/assistant/...
###### Corrected Chat Template ######
<s>[INST] This is a system prompt.

This is the first user input. [/INST] This is the first assistant response. </s>[INST] This is the second user input. [/INST]
```
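If you would rather keep the default template, one common workaround (which is effectively what the corrected template does) is to fold the system prompt into the first user message. A minimal sketch; the helper name is just an illustration:

```python
def merge_system_into_first_user(messages):
    """Prepend a leading system message to the first user message, for
    templates that do not support the system role (e.g. mistral-instruct)."""
    if messages and messages[0]['role'] == 'system' and len(messages) > 1:
        system, first_user, rest = messages[0], messages[1], messages[2:]
        merged = {'role': 'user',
                  'content': system['content'] + '\n\n' + first_user['content']}
        return [merged] + rest
    return messages

# The merged conversation alternates user/assistant, so the default template works:
print(toker.apply_chat_template(
    merge_system_into_first_user(messages), tokenize=False, add_generation_prompt=True))
```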
</details> <details> <summary><b>Example 3: vicuna-7b-v1.5</b></summary>

NOTE: In FastChat, vicuna does not add line breaks between the roles' messages. However, I found that adding line breaks leads to slightly better performance (especially for the v1.5 version).

Also, I found that vicuna-7/13/33b-v1.3 may not work well when given a system message different from its default one, so I would recommend using vicuna-7/13b-v1.5 instead.

```python
from transformers import AutoTokenizer

toker = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")
messages = [
    {'role': 'system', 'content': 'This is a system prompt.'},
    {'role': 'user', 'content': 'This is the first user input.'},
    {'role': 'assistant', 'content': 'This is the first assistant response.'},
    {'role': 'user', 'content': 'This is the second user input.'},
]
print('###### Default (but Improper) Chat Template ######')
# The default template produces llama-2-chat style formatting, not vicuna's format
print(toker.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))

print('###### Corrected Chat Template ######')
# Load the template from this repo and strip the readability whitespace
chat_template = open('./chat_templates/vicuna.jinja').read()
chat_template = chat_template.replace('    ', '').replace('\n', '')
toker.chat_template = chat_template
print(toker.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```

Expected output:

```
###### Default (but Improper) Chat Template ######
<s>[INST] <<SYS>>
This is a system prompt.
<</SYS>>

This is the first user input. [/INST] This is the first assistant response. </s><s>[INST] This is the second user input. [/INST]
###### Corrected Chat Template ######
<s>This is a system prompt.

USER: This is the first user input.
ASSISTANT: This is the first assistant response.</s>
USER: This is the second user input.
ASSISTANT:
```
</details>

## Supported Models

NOTE: The list below is not exhaustive; each entry also covers the other-sized models in the same model family.

<details> <summary><b>granite-3.0-instruct</b></summary> </details>
<details> <summary><b>Llama-3.2-Instruct</b></summary> </details>
<details> <summary><b>Llama-3.1-Instruct</b></summary> </details>
<details> <summary><b>Llama-3-Instruct</b></summary> </details>
<details> <summary><b>Llama-2-Chat, CodeLlama-Instruct</b></summary> </details>
<details> <summary><b>Qwen2-Instruct, Qwen1.5-Chat</b></summary> </details>
<details> <summary><b>Mistral-Instruct</b></summary> </details>
<details> <summary><b>Phi-3-Instruct</b></summary> </details>
<details> <summary><b>Yi-1.5-Chat, Yi-Chat</b></summary> </details>
<details> <summary><b>gemma-it, gemma-2-it</b></summary> </details>
<details> <summary><b>Llama3-ChatQA-1.5</b></summary> </details>
<details> <summary><b>openchat-3.5, Starling-LM</b></summary> </details>
<details> <summary><b>zephyr</b></summary> </details>
<details> <summary><b>vicuna</b></summary> </details>
<details> <summary><b>Orca-2</b></summary> </details>
<details> <summary><b>falcon-instruct</b></summary> </details>
<details> <summary><b>SOLAR-Instruct</b></summary> </details>
<details> <summary><b>Alpaca</b></summary> </details>
<details> <summary><b>AmberChat</b></summary> </details>
<details> <summary><b>saiga</b></summary> </details>
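Any of the templates above can be applied the same way as in the usage examples. A generic helper sketch (the file paths follow this repo's layout; the function name is just an illustration):

```python
from transformers import AutoTokenizer

def load_chat_template(tokenizer, jinja_path):
    """Read a .jinja template from this repo, strip the whitespace that only
    exists for readability, and attach it to the tokenizer."""
    with open(jinja_path) as f:
        chat_template = f.read().replace('    ', '').replace('\n', '')
    tokenizer.chat_template = chat_template
    return tokenizer

# Example (model and template path are illustrative):
# toker = load_chat_template(
#     AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5"),
#     './chat_templates/vicuna.jinja')
```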

## Star History

Star History Chart