# MedJamba
### Multilingual Medical Model Based On Jamba
<a href="https://arxiv.org/abs/2403.03640" target="_blank">Paper</a> • <a href="https://apollo.llmzoo.com/" target="_blank">Demo</a> • <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> • <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a>
## Update

- [2024.04.25] The MedJamba model is published!
## Results
<a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B" target="_blank">Apollo-0.5B</a> • <a href="https://huggingface.co/FreedomIntelligence/Apollo-1.8B" target="_blank">Apollo-1.8B</a> • <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B" target="_blank">Apollo-2B</a> • <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B" target="_blank">Apollo-6B</a> • <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B" target="_blank">Apollo-7B</a> • <a href="https://huggingface.co/FreedomIntelligence/Apollo-34B" target="_blank">Apollo-34B</a> • <a href="https://huggingface.co/FreedomIntelligence/Apollo-72B" target="_blank">Apollo-72B</a>

<a href="https://huggingface.co/FreedomIntelligence/Apollo-MedJamba" target="_blank">MedJamba</a>

<a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B-GGUF" target="_blank">Apollo-0.5B-GGUF</a> • <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B-GGUF" target="_blank">Apollo-2B-GGUF</a> • <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B-GGUF" target="_blank">Apollo-6B-GGUF</a> • <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF" target="_blank">Apollo-7B-GGUF</a>
## Dataset & Evaluation
- Dataset: <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> (a parsing sketch follows the list below)

  <details><summary>Click to expand</summary>

  - Zip File
  - Data category
    - Pretrain:
      - json_name: {data_source}_{language}_{data_type}.json
      - data_type: medicalBook, medicalGuideline, medicalPaper, medicalWeb(from online forum), medicalWiki, qa(generated QA from text)
      - language: en(English), zh(Chinese), es(Spanish), fr(French), hi(Hindi)
      - data item:
        - data_type==text: list of strings
          `[ "string1", "string2", ... ]`
        - data_type==qa: list of QA pairs (each a list of strings)
          `[ [ "q1", "a1", "q2", "a2", ... ], ... ]`
    - SFT:
      - json_name: {data_source}_{language}.json
      - data_type: code, general, math, medicalExam, medicalPatient
      - data item: list of QA pairs (each a list of strings)
        `[ [ "q1", "a1", "q2", "a2", ... ], ... ]`

  </details>
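For orientation, a minimal sketch of reading both file layouts. The paths are hypothetical examples that follow the naming patterns above, not files guaranteed to exist under those exact names:

```python
import json

# Hypothetical example paths -- adjust to wherever the ApolloCorpus zip is
# extracted; names follow {data_source}_{language}_{data_type}.json (pretrain)
# and {data_source}_{language}.json (SFT).
pretrain_file = "ApolloCorpus/pretrain/medicalBook_en_text.json"
sft_file = "ApolloCorpus/sft/medicalExam_en.json"

# Pretrain text files: a flat list of strings.
with open(pretrain_file, encoding="utf-8") as f:
    passages = json.load(f)
print(passages[0])

# SFT (and pretrain qa) files: a list of records, each a flattened
# [q1, a1, q2, a2, ...] conversation, so walk a record two items at a time.
with open(sft_file, encoding="utf-8") as f:
    records = json.load(f)
for q, a in zip(records[0][::2], records[0][1::2]):
    print("Q:", q)
    print("A:", a)
```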
- Evaluation: <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> (a loading sketch follows the list below)

  <details><summary>Click to expand</summary>

  - EN:
    - MedQA-USMLE
    - MedMCQA
    - PubMedQA: not used in the paper because its results fluctuated too much.
    - MMLU-Medical
      - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
  - ZH:
    - MedQA-MCMLE
    - CMB-single: not used in the paper
      - Randomly sampled 2,000 single-answer multiple-choice questions.
    - CMMLU-Medical
      - Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology
    - CExam: not used in the paper
      - Randomly sampled 2,000 multiple-choice questions.
  - ES: Head_qa
  - FR: Frenchmedmcqa
  - HI: MMLU_HI
    - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
  - AR: MMLU_Ara
    - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine

  </details>
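To eyeball the benchmark before running the reproduction scripts below, a minimal loading sketch with the Hugging Face `datasets` library; it assumes the Hub repo resolves with its default configuration, which is an assumption rather than something documented here:

```python
from datasets import load_dataset

# Assumption: the repo loads with a default configuration; if it defines
# multiple configs, pass a config name explicitly.
bench = load_dataset("FreedomIntelligence/XMedbench")
print(bench)  # inspect splits and fields before wiring up evaluation
```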
## Results reproduction

<details><summary>Click to expand</summary>

1. Download the dataset for the project:

   ```bash
   bash 0.download_data.sh
   ```
2. Prepare test and dev data for a specific model:

   - Create the test data with the model's special tokens; you can use ./util/check.ipynb to check a model's special tokens (a scripted version is sketched below).

   ```bash
   # quote the script name so the '&' is not parsed by the shell
   bash '1.data_process_test&dev.sh'
   ```
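   As a rough stand-in for ./util/check.ipynb, a minimal sketch that prints a tokenizer's special tokens; the checkpoint id is only an example:

   ```python
   from transformers import AutoTokenizer

   # Example checkpoint -- substitute whichever model you are preparing data for.
   tok = AutoTokenizer.from_pretrained("FreedomIntelligence/Apollo-MedJamba")
   print("special tokens:", tok.special_tokens_map)
   print("bos/eos ids:", tok.bos_token_id, tok.eos_token_id)
   ```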
3. Prepare train data for a specific model (create tokenized data in advance):

   - You can adjust the training data order and the number of training epochs in this step.

   ```bash
   bash 2.data_process_train.sh
   ```
4. Train the model:

   - For multi-node training, refer to ./scripts/multi_node_train_*.sh.

   ```bash
   pip install "causal-conv1d>=1.2.0"
   pip install mamba-ssm
   ```

   Node 0:

   ```bash
   bash ./scripts/3.multinode_train_jamba_rank0.sh
   ```

   ... Node 4:

   ```bash
   bash ./scripts/3.multinode_train_jamba_rank4.sh
   ```
5. Evaluate your model: generate scores for the benchmark.

   ```bash
   bash 4.eval.sh
   ```
6. Evaluate your model: chat with your checkpoints from the command line.

   ```bash
   python ./src/evaluate/cli_demo.py --model_name='./ckpts/your/path/tfmr'
   ```

</details>
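Outside the repo's cli_demo.py, a minimal inference sketch with plain transformers; it assumes a recent transformers release with Jamba support plus the mamba kernels installed in step 4, and the plain-text prompt here is a guess rather than the template the repo applies:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/Apollo-MedJamba"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Jamba is large; bf16 + device_map spreads it across GPUs
    device_map="auto",
)

prompt = "What are the first-line treatments for type 2 diabetes?"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
# decode only the newly generated tokens, skipping the echoed prompt
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```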
## To do

- Long-context capability evaluation and a new Long-Med benchmark
## Acknowledgment
## Citation

Please use the following citation if you intend to use our dataset for training or evaluation:

```bibtex
@misc{wang2024apollo,
      title={Apollo: Lightweight Multilingual Medical LLMs towards Democratizing Medical AI to 6B People},
      author={Xidong Wang and Nuo Chen and Junyin Chen and Yan Hu and Yidong Wang and Xiangbo Wu and Anningzhe Gao and Xiang Wan and Haizhou Li and Benyou Wang},
      year={2024},
      eprint={2403.03640},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```