# Democratizing Medical LLMs for Many More Languages
Covering 12 major languages (English, Chinese, French, Hindi, Spanish, Arabic, Russian, Japanese, Korean, German, Italian, and Portuguese) and 38 minor languages so far.
<p align="center">
📃 <a href="https://arxiv.org/abs/2410.10626" target="_blank">Paper</a> • 🌐 <a href="" target="_blank">Demo</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEDataset" target="_blank">ApolloMoEDataset</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEBench" target="_blank">ApolloMoEBench</a> • 🤗 <a href="https://huggingface.co/collections/FreedomIntelligence/apollomoe-and-apollo2-670ddebe3bb1ba1aebabbf2c" target="_blank">Models</a> • 🌐 <a href="https://github.com/FreedomIntelligence/Apollo" target="_blank">Apollo</a>
</p>

## Update
- [2024.10.15] The ApolloMoE repo is published!
## Language Coverage
12 major languages and 38 minor languages.
<details>
<summary>Click to view the Language Coverage</summary>
</details>

## Architecture
<details>
<summary>Click to view the MoE routing image</summary>
</details>
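For intuition about the routing step in the (collapsed) figure above, below is a generic top-k MoE router sketch in PyTorch. It is illustrative only, not the ApolloMoE implementation; per the paper, the Post-MoE design routes tokens to language-family experts.

```python
# Toy top-k MoE router: illustrative only, NOT the actual ApolloMoE routing.
import torch
import torch.nn as nn

class ToyTopKRouter(nn.Module):
    def __init__(self, hidden_size: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)
        self.top_k = top_k

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (batch, seq_len, hidden_size)
        logits = self.gate(hidden_states)                  # (batch, seq_len, num_experts)
        probs = logits.softmax(dim=-1)
        topk_probs, topk_idx = probs.topk(self.top_k, dim=-1)
        topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)  # renormalize
        return topk_probs, topk_idx  # per-token expert weights and indices
```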
## Results

### Dense
🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-0.5B" target="_blank">Apollo2-0.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-1.5B" target="_blank">Apollo2-1.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-2B" target="_blank">Apollo2-2B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-3.8B" target="_blank">Apollo2-3.8B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-7B" target="_blank">Apollo2-7B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-9B" target="_blank">Apollo2-9B</a>
<details>
<summary>Click to view the Dense Models Results</summary>
</details>

### Post-MoE
🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-MoE-0.5B" target="_blank">Apollo-MoE-0.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-MoE-1.5B" target="_blank">Apollo-MoE-1.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-MoE-7B" target="_blank">Apollo-MoE-7B</a>
<details>
<summary>Click to view the Post-MoE Models Results</summary>
</details>

## Usage Format
### Apollo2
- 0.5B, 1.5B, 7B: `User:{query}\nAssistant:{response}<|endoftext|>`
- 2B, 9B: `User:{query}\nAssistant:{response}<eos>`
- 3.8B: `<|user|>\n{query}<|end|><|assistant|>\n{response}<|end|>`
### Apollo-MoE
- 0.5B, 1.5B, 7B: `User:{query}\nAssistant:{response}<|endoftext|>` (see the formatting sketch below)
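As a concrete illustration of the templates above, here is a minimal sketch (the helper is hypothetical, not part of this repo) that formats a query for the 0.5B/1.5B/7B variants:

```python
# Hypothetical helper: wrap a query in the User/Assistant template
# used by the 0.5B/1.5B/7B variants.
def build_prompt(query: str) -> str:
    return f"User:{query}\nAssistant:"

print(build_prompt("What are common symptoms of influenza?"))
```

At inference time the model's completion plays the role of `{response}` and terminates with the variant's end token (`<|endoftext|>` here).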
## Dataset & Evaluation
- Dataset: 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEDataset" target="_blank">ApolloMoEDataset</a>

  <details><summary>Click to expand</summary>

  The complete data is stored in `ApolloMoEDataset.json`, while a sample is shown in `ApolloMoEDataset_sample.json`.

  </details>
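  A minimal sketch of fetching the sample file mentioned above; the file name comes from this README, and we assume the standard `huggingface_hub` download API:

  ```python
  # Sketch: download and inspect the sample file (assumes huggingface_hub is installed).
  import json
  from huggingface_hub import hf_hub_download

  path = hf_hub_download(
      repo_id="FreedomIntelligence/ApolloMoEDataset",
      filename="ApolloMoEDataset_sample.json",
      repo_type="dataset",
  )
  with open(path, encoding="utf-8") as f:
      sample = json.load(f)
  print(type(sample), len(sample))
  ```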
- Evaluation: 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEBench" target="_blank">ApolloMoEBench</a>
  <details><summary>Click to expand</summary>

  - EN:
    - MedQA-USMLE
    - MedMCQA
    - PubMedQA: not used in the paper because its results fluctuated too much.
    - MMLU-Medical
      - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
  - ZH:
    - MedQA-MCMLE
    - CMB-single: not used in the paper
      - Randomly sampled 2,000 single-answer multiple-choice questions.
    - CMMLU-Medical
      - Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology
    - CExam: not used in the paper
      - Randomly sampled 2,000 multiple-choice questions.
  - ES: Head_qa
  - FR:
    - Frenchmedmcqa
    - MMLU_FR
      - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
  - HI: MMLU_HI
    - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
  - AR: MMLU_AR
    - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
  - JA: IgakuQA
  - KO: KorMedMCQA
  - IT:
    - MedExpQA
    - MMLU_IT
      - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
  - DE: BioInstructQA (German part)
  - PT: BioInstructQA (Portuguese part)
  - RU: RuMedBench

  </details>
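  Most of the above are single-answer multiple-choice sets, so per-language scores reduce to plain accuracy. A minimal sketch of such a scorer (a hypothetical helper, not the repo's evaluation code, which lives in the scripts under Results Reproduction below):

  ```python
  # Hypothetical scorer: single-answer multiple-choice accuracy over answer letters.
  def accuracy(predictions: list[str], golds: list[str]) -> float:
      correct = sum(p.strip().upper() == g.strip().upper()
                    for p, g in zip(predictions, golds))
      return correct / len(golds)

  print(accuracy(["A", "c", "B"], ["A", "C", "D"]))  # 0.666...
  ```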
## Model Download and Inference

We take Apollo-MoE-0.5B as an example.
- Log in to Hugging Face:

  ```bash
  huggingface-cli login --token $HUGGINGFACE_TOKEN
  ```
- Download the model to a local directory:

  ```python
  import os
  from huggingface_hub import snapshot_download

  local_model_dir = os.path.join('/path/to/models/dir', 'Apollo-MoE-0.5B')
  snapshot_download(repo_id="FreedomIntelligence/Apollo-MoE-0.5B", local_dir=local_model_dir)
  ```
- Inference example:

  ```python
  import os
  from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

  local_model_dir = os.path.join('/path/to/models/dir', 'Apollo-MoE-0.5B')

  model = AutoModelForCausalLM.from_pretrained(local_model_dir, trust_remote_code=True)
  tokenizer = AutoTokenizer.from_pretrained(local_model_dir, trust_remote_code=True)

  generation_config = GenerationConfig.from_pretrained(
      local_model_dir,
      pad_token_id=tokenizer.pad_token_id,
      num_return_sequences=1,
      max_new_tokens=7,
      min_new_tokens=2,
      do_sample=False,
      temperature=1.0,
      top_k=50,
      top_p=1.0,
  )

  inputs = tokenizer(
      'Answer directly.\nThe capital of Mongolia is Ulaanbaatar.\n'
      'The capital of Iceland is Reykjavik.\nThe capital of Australia is',
      return_tensors='pt',
  )
  inputs = inputs.to(model.device)
  pred = model.generate(**inputs, generation_config=generation_config)
  print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
  ```
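  The prompt above is a plain completion check. For medical Q&A you would normally wrap the query in the template from the Usage Format section; a minimal sketch reusing the objects above (the example query is ours):

  ```python
  # Sketch: generate with the documented User/Assistant template (reuses model/tokenizer above).
  query = "What are common symptoms of influenza?"
  prompt = f"User:{query}\nAssistant:"
  inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
  pred = model.generate(**inputs, max_new_tokens=128)
  print(tokenizer.decode(pred[0], skip_special_tokens=True))
  ```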
## Results Reproduction
### (Optional) Custom Model as Base
<details><summary>Click to expand</summary>

```bash
cp /path/to/your/configuration_upcycling_qwen2_moe.py /path/to/src/variants/moe_initilization/configuration_upcycling_qwen2_moe.py
cp /path/to/your/modeling_upcycling_qwen2_moe.py /path/to/src/variants/moe_initilization/modeling_upcycling_qwen2_moe.py
cd /path/to/src/variants/moe_initilization
bash convert.sh
```

</details>
### Full-finetune on Base Model
<details><summary>Click to expand</summary>

We take Apollo2-7B and Apollo-MoE-0.5B as examples.
- Download and extract the data:
  - Download the Dataset and Benchmark first.
  - Extract the major- or minor-language portion according to your needs:

  ```bash
  bash 0.extract_data.sh
  ```
- Prepare test and dev data for a specific model:
  - Create test data with the model's special tokens:

  ```bash
  bash "1.data_process_test&dev.sh"
  ```
- Prepare training data for a specific model (create tokenized data in advance):
  - You can adjust the training data order and the number of training epochs in this step:

  ```bash
  bash 2.data_process_train.sh
  ```
- Train the model:
  - If you want to train on multiple nodes, please refer to ./src/sft/training_config/zero_multi.yaml:

  ```bash
  bash 3.single_node_train.sh
  ```
- Evaluate your model: generate scores for the benchmark:

  ```bash
  bash 4.eval.sh
  ```

</details>
## Citation

Please use the following citation if you intend to use our dataset for training or evaluation:
```bibtex
@misc{zheng2024efficientlydemocratizingmedicalllms,
      title={Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts},
      author={Guorui Zheng and Xidong Wang and Juhao Liang and Nuo Chen and Yuping Zheng and Benyou Wang},
      year={2024},
      eprint={2410.10626},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.10626},
}
```