# GPN (Genomic Pre-trained Network)

Code and resources from the GPN and GPN-MSA papers.
## Installation

```bash
pip install git+https://github.com/songlab-cal/gpn.git
```
## Minimal usage

```python
import gpn.model  # registers the GPN architectures with transformers' Auto classes
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("songlab/gpn-brassicales")
# or
model = AutoModelForMaskedLM.from_pretrained("songlab/gpn-msa-sapiens")
```
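As a rough sketch of what you can do with a loaded model, the snippet below scores the nucleotides at a masked position. The tokenizer name is the one used in the training command further down; the sequence, the masked position, and the assumption that the tokenizer emits one token per base with no special tokens are illustrative, not taken from the papers.

```python
import torch
import gpn.model  # registers the GPN architectures
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gonzalobenegas/tokenizer-dna-mlm")
model = AutoModelForMaskedLM.from_pretrained("songlab/gpn-brassicales")
model.eval()

seq = "ACGTACGTACGTACGTACGTACGTACGTACGT"  # placeholder sequence
input_ids = tokenizer(seq, return_tensors="pt")["input_ids"]
pos = 16  # position to mask (assumes one token per base, no special tokens)
input_ids[0, pos] = tokenizer.mask_token_id  # assumes the tokenizer defines a mask token

with torch.no_grad():
    logits = model(input_ids=input_ids).logits
probs = torch.softmax(logits[0, pos], dim=-1)  # distribution over the vocabulary at the masked position
```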
## GPN

Can also be called GPN-SS (single sequence).
### Examples
### Code and resources from specific papers
### Training on your own data
- Snakemake workflow to create a dataset
  - Can automatically download data from NCBI given a list of accessions, or use your own FASTA files.
- Training
  - Will automatically detect all available GPUs.
  - Track metrics on Weights & Biases.
  - Implemented models: `ConvNet`, `GPNRoFormer` (Transformer)
    - Specify config overrides, e.g. `--config_overrides n_layers=30`
  - Example:
```bash
WANDB_PROJECT=your_project torchrun --nproc_per_node=$(echo $CUDA_VISIBLE_DEVICES | awk -F',' '{print NF}') -m gpn.ss.run_mlm --do_train --do_eval \
    --fp16 --report_to wandb --prediction_loss_only True --remove_unused_columns False \
    --dataset_name results/dataset --tokenizer_name gonzalobenegas/tokenizer-dna-mlm \
    --soft_masked_loss_weight_train 0.1 --soft_masked_loss_weight_evaluation 0.0 \
    --weight_decay 0.01 --optim adamw_torch \
    --dataloader_num_workers 16 --seed 42 \
    --save_strategy steps --save_steps 10000 --evaluation_strategy steps \
    --eval_steps 10000 --logging_steps 10000 --max_steps 120000 --warmup_steps 1000 \
    --learning_rate 1e-3 --lr_scheduler_type constant_with_warmup \
    --run_name your_run --output_dir your_output_dir --model_type ConvNet \
    --per_device_train_batch_size 512 --per_device_eval_batch_size 512 --gradient_accumulation_steps 1 \
    --torch_compile
```
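Note: `$(echo $CUDA_VISIBLE_DEVICES | awk -F',' '{print NF}')` counts the comma-separated entries in `CUDA_VISIBLE_DEVICES`, so `--nproc_per_node` launches one process per visible GPU; it assumes `CUDA_VISIBLE_DEVICES` is set in your environment.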
- Extract embeddings
  - Input file requires columns `chrom`, `start`, `end`.
  - Example:
```bash
torchrun --nproc_per_node=$(echo $CUDA_VISIBLE_DEVICES | awk -F',' '{print NF}') -m gpn.ss.get_embeddings windows.parquet genome.fa.gz 100 your_output_dir \
    results.parquet --per-device-batch-size 4000 --is-file --dataloader-num-workers 16
```
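If you need to build the windows file yourself, here is a minimal sketch with pandas. The column names come from the requirement above; the chromosome names and coordinates are placeholders, and the coordinate convention (0- vs. 1-based) is an assumption worth verifying against the repo docs.

```python
import pandas as pd

# Minimal sketch: windows file with the required columns.
# chrom values must match the sequence names in genome.fa.gz;
# coordinates here are assumed BED-style (0-based, half-open).
windows = pd.DataFrame({
    "chrom": ["1", "1", "2"],
    "start": [0, 100, 0],
    "end": [100, 200, 100],
})
windows.to_parquet("windows.parquet", index=False)
```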
- Variant effect prediction
  - Input file requires columns `chrom`, `pos`, `ref`, `alt`.
  - Example:
```bash
torchrun --nproc_per_node=$(echo $CUDA_VISIBLE_DEVICES | awk -F',' '{print NF}') -m gpn.ss.run_vep variants.parquet genome.fa.gz 512 your_output_dir results.parquet \
    --per-device-batch-size 4000 --is-file --dataloader-num-workers 16
```
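Similarly, a minimal sketch for building the variants file. The column names come from the requirement above; treating variants as SNVs and `pos` as 1-based (VCF-style) are assumptions worth verifying against the repo docs.

```python
import pandas as pd

# Minimal sketch: variants file with the required columns.
# pos is assumed 1-based (VCF-style); ref/alt are single nucleotides.
variants = pd.DataFrame({
    "chrom": ["1", "2"],
    "pos": [1000, 2000],
    "ref": ["A", "C"],
    "alt": ["G", "T"],
})
variants.to_parquet("variants.parquet", index=False)
```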
## GPN-MSA
### Examples

- Play with the model: `examples/msa/basic_example.ipynb`
- Variant effect prediction: `examples/msa/vep.ipynb`
- Training (human): `examples/msa/training.ipynb`
### Code and resources from specific papers
### Training on other species (e.g., plants)

Under construction.
## Citation

GPN:

```bibtex
@article{benegas2023dna,
  author = {Gonzalo Benegas and Sanjit Singh Batra and Yun S. Song},
  title = {DNA language models are powerful predictors of genome-wide variant effects},
  journal = {Proceedings of the National Academy of Sciences},
  volume = {120},
  number = {44},
  pages = {e2311219120},
  year = {2023},
  doi = {10.1073/pnas.2311219120},
  url = {https://www.pnas.org/doi/abs/10.1073/pnas.2311219120},
  eprint = {https://www.pnas.org/doi/pdf/10.1073/pnas.2311219120}
}
```
GPN-MSA:

```bibtex
@article{benegas2023gpnmsa,
  author = {Gonzalo Benegas and Carlos Albors and Alan J. Aw and Chengzhong Ye and Yun S. Song},
  title = {GPN-MSA: an alignment-based DNA language model for genome-wide variant effect prediction},
  journal = {bioRxiv},
  elocation-id = {2023.10.10.561776},
  year = {2023},
  doi = {10.1101/2023.10.10.561776},
  publisher = {Cold Spring Harbor Laboratory},
  url = {https://www.biorxiv.org/content/early/2023/10/11/2023.10.10.561776},
  eprint = {https://www.biorxiv.org/content/early/2023/10/11/2023.10.10.561776.full.pdf}
}
```