
<!--- Copyright 2022 The OFA-Sys Team. Copyright 2023 Kai Zhang @ Lehigh. All rights reserved. This source code is licensed under the Apache 2.0 license found in the LICENSE file in the root directory. -->

[Nature Medicine'24] <img src="examples/logo.jpg" alt="logo" width="35"> BiomedGPT

A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks. arXiv


BiomedGPT is pre-trained and fine-tuned with multi-modal & multi-task biomedical datasets. Details of the datasets used are given in datasets.md. If you have any questions, feel free to contact us or open an issue.

Installation (Linux)

  1. Clone this repository and navigate to the BiomedGPT folder:
<pre>
git clone https://github.com/taokz/BiomedGPT
cd BiomedGPT/
</pre>
  2. Create a conda environment and install the required packages:
<pre>
conda create --name biomedgpt python=3.7.4
conda activate biomedgpt  # activate the environment before installing packages
python -m pip install pip==21.2.4
pip install -r requirements.txt
</pre>

Quick Start with Hugging Face's transformers

Please check out this Colab notebook for Fairseq-free inference.

Warning: Extensive experiments using transformers have not been conducted, so we cannot confirm whether the results from transformers and fairseq are fully aligned.
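
For orientation, below is a minimal sketch of what Fairseq-free inference can look like with transformers. The checkpoint path is a placeholder, and the exact processor/model classes and keyword arguments may differ for the released weights; the Colab notebook above remains the authoritative reference.

<pre>
# Illustrative sketch only: the checkpoint path is a placeholder and the exact
# auto classes may differ for the released transformers-compatible weights.
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "path-or-repo-of-a-transformers-compatible-BiomedGPT-checkpoint"  # placeholder

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("example.jpg")  # any biomedical image
question = "what modality is used to take this image?"

inputs = processor(images=image, text=question, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=16, num_beams=5)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
</pre>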

Checkpoints

We provide pretrained checkpoints of BiomedGPT (<a href="https://www.dropbox.com/sh/cu2r5zkj2r0e6zu/AADZ-KHn-emsICawm9CM4MqVa?dl=0">Dropbox</a>), which can be put in the scripts/ folder for further development. For fine-tuned checkpoints, please refer to checkpoints.md.

The transformers-compatible weights are accessible through the collection.

Note:

We emphasize that BiomedGPT, including its files, code, and checkpoints, is strictly for academic research purposes. Commercial and clinical uses are strictly prohibited for three key reasons: First, BiomedGPT is based on the OFA framework, which carries a non-commercial license that we have inherited. Second, our model is not licensed for use in healthcare settings. Finally, we have not implemented sufficient security measures, and the current model cannot guarantee the accuracy required for medical diagnoses.

Implementation

We provide the preprocessing, pretraining, fine-tuning, and inference scripts in the scripts/ folder. You can follow the directory layout below:

<pre>
BiomedGPT/
├── checkpoints/
├── datasets/
│   ├── pretraining/
│   ├── finetuning/
│   └── ...
├── scripts/
│   ├── preprocess/
│   │   ├── pretraining/
│   │   └── finetuning/
│   ├── pretrain/
│   ├── vqa/
│   └── ...
└── ...
</pre>
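
If you are starting from a fresh clone, the data and checkpoint directories can be created with a short helper run from the repository root. This is optional and not part of the repository scripts.

<pre>
# Optional helper (not part of the repository): create the data/checkpoint
# skeleton shown above. Run from the repository root.
from pathlib import Path

for d in ["checkpoints", "datasets/pretraining", "datasets/finetuning"]:
    Path(d).mkdir(parents=True, exist_ok=True)
</pre>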

Pretraining

Please follow datasets.md to prepare the pretraining datasets, which consist of four TSV files: <code>vision_language.tsv</code>, <code>text.tsv</code>, <code>image.tsv</code>, and <code>detection.tsv</code>, placed in the ./datasets/pretraining/ directory.

<pre>
cd scripts/pretrain
bash pretrain_tiny.sh
</pre>

Feel free to modify the hyperparameters in the bash script to suit your requirements or ablation studies.
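
Before launching a run, you can also quickly confirm that the four TSV files are in place. This check is not part of the official scripts and assumes it is run from the repository root.

<pre>
# Sanity check (not part of the official scripts): verify the four pretraining
# TSVs exist under ./datasets/pretraining/ before running pretrain_tiny.sh.
from pathlib import Path

pretrain_dir = Path("datasets/pretraining")
required = ["vision_language.tsv", "text.tsv", "image.tsv", "detection.tsv"]
missing = [name for name in required if not (pretrain_dir / name).is_file()]
if missing:
    raise FileNotFoundError(f"Missing TSV files in {pretrain_dir}: {missing}")
print("All pretraining TSV files found.")
</pre>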

Zero-shot VQA inference using pre-trained checkpoints

Add the --zero-shot argument to the script. Example script: scripts/vqa/evaluate_vqa_rad_zero_shot.sh.

Warning: The current implementation is not yet designed for chatbot or copilot applications, as its primary focus is on learning general representations in medicine that can be transferred to downstream tasks, as outlined in our paper. Large-scale training and instruction tuning for improving robust conversational abilities are still in progress.

Downstreams

We provide run scripts for fine-tuning and inference; log files are written during execution. Before fine-tuning or inference, please refer to checkpoints.md for the corresponding checkpoints and to datasets.md for data preparation.

<details>
<summary><b>Visual Question Answering</b></summary>
<pre>
cd scripts/vqa
# for fine-tuning
bash train_vqa_rad_beam.sh
# for inference using fine-tuned weights
bash evaluate_vqa_rad_beam.sh
# for zero-shot inference using instruction-tuned weights
bash evaluate_vqa_rad_unconstrained.sh
</pre>
</details>

<details>
<summary><b>Image Captioning</b></summary>
<pre>
cd scripts/caption
# for fine-tuning
bash train_peir_gross.sh
# for inference
bash evaluate_peir_gross.sh
</pre>
</details>

<details>
<summary><b>Text Summarization</b></summary>
<pre>
cd scripts/text_sum
# for fine-tuning
bash train_meqsum.sh
# for inference
bash evaluate_meqsum.sh
</pre>
</details>

<details>
<summary><b>Natural Language Inference</b></summary>
<pre>
cd scripts/mednli
# for fine-tuning
bash train_mednli.sh
# for inference
bash evaluate_mednli.sh
</pre>
</details>

<details>
<summary><b>Image Classification</b></summary>
<pre>
cd scripts/image_cls
# for fine-tuning: we provide a template; please set different hyperparameters for each MedMNIST dataset if required.
bash train_medmnist.sh
# for inference: a template
bash evaluate_medmnist.sh
</pre>
</details>

<br></br>

Related Codebase

Citation

If you use BiomedGPT model or our code for publications, please cite 🤗:

<pre>
@article{zhang2024generalist,
  title={A generalist vision--language foundation model for diverse biomedical tasks},
  author={Zhang, Kai and Zhou, Rong and Adhikarla, Eashan and Yan, Zhiling and Liu, Yixin and Yu, Jun and Liu, Zhengliang and Chen, Xun and Davison, Brian D and Ren, Hui and others},
  journal={Nature Medicine},
  pages={1--13},
  year={2024},
  publisher={Nature Publishing Group US New York}
}
</pre>

<br></br>