# HuatuoGPT-Vision, Towards Injecting Medical Visual Knowledge into Multimodal LLMs at Scale

<div align="center"> <h5> 📃 <a href="https://arxiv.org/abs/2406.19280" target="_blank">Paper</a> • 🖥️ <a href="https://vision.huatuogpt.cn/#/" target="_blank">Demo</a> </h5> </div> <div align="center"> <h4> 📚 <a href="https://huggingface.co/datasets/FreedomIntelligence/PubMedVision" target="_blank">PubMedVision</a> </h4> </div> <div align="center"> <h4> 🤗 <a href="https://huggingface.co/FreedomIntelligence/HuatuoGPT-Vision-34B" target="_blank">HuatuoGPT-Vision-34B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/HuatuoGPT-Vision-7B">HuatuoGPT-Vision-7B</a> </h4> </div>

## ✨ Updates

## 🩻 PubMedVision

### Data Download

| Dataset | Size | Link |
| --- | --- | --- |
| PubMedVision | 1,294,062 | HF Link |
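
If you want to inspect the data before training, the sketch below loads the annotations with the Hugging Face `datasets` library. The configuration name `PubMedVision_Alignment_VQA` is an assumption; check the dataset card for the exact configurations and the accompanying image archives.

```python
# Minimal sketch for browsing PubMedVision annotations (requires the `datasets` library).
from datasets import load_dataset

# NOTE: the configuration name below is an assumption; see the dataset card for the actual options.
data = load_dataset("FreedomIntelligence/PubMedVision", "PubMedVision_Alignment_VQA", split="train")

print(len(data))  # Number of VQA samples in this configuration
print(data[0])    # One sample: image reference(s) plus the question/answer conversation
```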
Adding PubMedVision to the training data substantially improves LLaVA-v1.5-LLaMA3-8B on medical VQA and multimodal benchmarks:

| Model | VQA-RAD | SLAKE | PathVQA | PMC-VQA |
| --- | --- | --- | --- | --- |
| LLaVA-v1.6-34B | 58.6 | 67.3 | 59.1 | 44.4 |
| LLaVA-v1.5-LLaMA3-8B | 54.2 | 59.4 | 54.1 | 36.4 |
| LLaVA-v1.5-LLaMA3-8B + PubMedVision | 63.8 | 74.5 | 59.9 | 52.7 |

| Model | OmniMedVQA | MMMU Health & Medicine (Test Set) |
| --- | --- | --- |
| LLaVA-v1.6-34B | 61.4 | 48.8 |
| LLaVA-v1.5-LLaMA3-8B | 48.8 | 38.2 |
| LLaVA-v1.5-LLaMA3-8B + PubMedVision | 75.1 | 49.1 |

## 👨‍⚕️ HuatuoGPT-Vision

HuatuoGPT-Vision is our medical multimodal LLM, built with PubMedVision.

### Model Access

Our models are available on Hugging Face in two versions:

| Model | Backbone | Checkpoint |
| --- | --- | --- |
| HuatuoGPT-Vision-7B | Qwen2-7B | HF Link |
| HuatuoGPT-Vision-34B | Yi-1.5-34B | HF Link |
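
If you prefer to fetch a checkpoint programmatically rather than cloning it, one option is `huggingface_hub` as sketched below; the local directory is only an illustrative placeholder.

```python
# Sketch: download a HuatuoGPT-Vision checkpoint from Hugging Face.
from huggingface_hub import snapshot_download

model_dir = snapshot_download(
    repo_id="FreedomIntelligence/HuatuoGPT-Vision-7B",
    local_dir="./HuatuoGPT-Vision-7B",  # illustrative placeholder; choose your own path
)
print(model_dir)  # Use this path as --model_dir for cli.py or when constructing HuatuoChatbot
```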

### Model Usage

Chat via the command line:

```bash
python cli.py --model_dir path-to-huatuogpt-vision-model
```

Inference using our ChatBot:

```python
from cli import HuatuoChatbot

bot = HuatuoChatbot('path-to-huatuogpt-vision-model')  # Load the model

query = 'What does the picture show?'
image_paths = ['image_path1']

output = bot.inference(query, image_paths)
print(output)  # Prints the output of the model
```

### Performance on Medical Multimodal Benchmarks

| Model | VQA-RAD | SLAKE | PathVQA | PMC-VQA |
| --- | --- | --- | --- | --- |
| LLaVA-Med-7B | 51.4 | 48.6 | 56.8 | 24.7 |
| LLaVA-v1.6-34B | 58.6 | 67.3 | 59.1 | 44.4 |
| HuatuoGPT-Vision-7B | 63.7 | 76.2 | 57.9 | 54.3 |
| HuatuoGPT-Vision-34B | 68.1 | 76.9 | 63.5 | 58.2 |

| Model | OmniMedVQA | MMMU Health & Medicine (Test Set) |
| --- | --- | --- |
| LLaVA-Med-7B | 44.5 | 36.9 |
| LLaVA-v1.6-34B | 61.4 | 48.8 |
| HuatuoGPT-Vision-7B | 74.0 | 50.6 |
| HuatuoGPT-Vision-34B | 76.9 | 54.4 |

## 🩺 HuatuoGPT Series

Explore our HuatuoGPT series:

## Citation

```bibtex
@misc{chen2024huatuogptvisioninjectingmedicalvisual,
      title={HuatuoGPT-Vision, Towards Injecting Medical Visual Knowledge into Multimodal LLMs at Scale},
      author={Junying Chen and Ruyi Ouyang and Anningzhe Gao and Shunian Chen and Guiming Hardy Chen and Xidong Wang and Ruifei Zhang and Zhenyang Cai and Ke Ji and Guangjun Yu and Xiang Wan and Benyou Wang},
      year={2024},
      eprint={2406.19280},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2406.19280},
}
```