Genomic ULMFiT

This is an implementation of ULMFiT for genomics classification using PyTorch and fastai. The model architecture is based on the AWD-LSTM model, consisting of an embedding layer, three LSTM layers, and a final set of linear layers.
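As a rough illustration, here is a minimal PyTorch sketch of that architecture. The class name and dimensions are hypothetical, and the real model adds the AWD-LSTM regularization (weight-dropped LSTMs, embedding dropout) and the pooling classifier head described in the Methods section:

```python
import torch.nn as nn

class GenomicLSTMClassifier(nn.Module):
    """Simplified sketch: embedding -> 3 stacked LSTM layers -> linear head.
    The actual AWD-LSTM also applies weight dropout, embedding dropout, and
    a concat-pooling head; sizes here are illustrative only."""
    def __init__(self, vocab_sz, emb_sz=400, hidden_sz=1150, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_sz, emb_sz)
        self.lstm = nn.LSTM(emb_sz, hidden_sz, num_layers=3, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden_sz, 50), nn.ReLU(), nn.Linear(50, n_classes))

    def forward(self, x):
        out, _ = self.lstm(self.emb(x))  # out: (batch, seq_len, hidden_sz)
        return self.head(out[:, -1])     # classify from the last timestep
```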

The ULMFiT approach uses three training phases to produce a classification model:

  1. Train a language model on a large, unlabeled corpus
  2. Fine tune the language model on the classification corpus
  3. Use the fine tuned language model to initialize a classification model
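In fastai v1 terms, the three phases look roughly like the sketch below. The `DataBunch` variable names (`data_lm`, `data_clas_lm`, `data_clas`) and the encoder name are hypothetical placeholders; the repo's notebooks build these objects from tokenized genomic sequences:

```python
from fastai.text import language_model_learner, text_classifier_learner, AWD_LSTM

# Phase 1: train a language model on the large unlabeled corpus
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3, pretrained=False)
learn.fit_one_cycle(10, 1e-2)

# Phase 2: fine-tune the language model on the classification corpus
learn.data = data_clas_lm            # LM-style DataBunch over the labeled sequences
learn.fit_one_cycle(5, 1e-3)
learn.save_encoder('genomic_enc')    # keep the fine-tuned encoder weights

# Phase 3: initialize a classification model from the fine-tuned language model
clas = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5, pretrained=False)
clas.load_encoder('genomic_enc')
clas.fit_one_cycle(5, 1e-3)
```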

This method is particularly advantageous for genomics, where unlabeled data is abundant and labeled data is scarce. The ULMFiT approach allows us to train a model on a large, unlabeled genomic corpus in an unsupervised fashion. The pre-trained language model then serves as a feature extractor for parsing genomic data.

Typical deep learning approaches to genomics classification are restricted to whatever labeled data is available. Models are usually trained from scratch on small datasets, leading to problems with overfitting. When unsupervised pre-training is used, it is typically done only on the classification dataset or on synthetically generated data. The Genomic-ULMFiT approach uses genome-scale corpora for pre-training, producing better feature extractors than training only on the classification corpus would yield.

For a deep dive into the ULMFiT approach, model architectures, regularization and training strategies, see the Methods Long Form document in the Methods section.

Results

Performance of Genomic-ULMFiT relative to other methods

Promoter Classification

E. coli promoters

The Genomic-ULMFiT method performs well at the task of classifying promoter sequences from random sections of the genome. The process of unsupervised pre-training and fine-tuning has a clear impact on the performance of the classification model.

| Model | Accuracy | Precision | Recall | Correlation Coefficient |
|---|---|---|---|---|
| Naive | 0.834 | 0.847 | 0.816 | 0.670 |
| E. coli Genome Pre-Training | 0.919 | 0.941 | 0.893 | 0.839 |
| Genomic Ensemble Pre-Training | 0.973 | 0.980 | 0.966 | 0.947 |

Data generation described in notebook

Notebook Directory

Human Promoters (short)

Classification performance on human promoters is competitive with published results. For the short promoter sequences, using data from Recognition of Prokaryotic and Eukaryotic Promoters using Convolutional Deep Learning Neural Networks:

| Model | DNA Size | kmer/stride | Accuracy | Precision | Recall | Correlation Coefficient | Specificity |
|---|---|---|---|---|---|---|---|
| Kh et al. | -200/50 | - | - | - | 0.90 | 0.89 | 0.98 |
| Naive Model | -200/50 | 5/2 | 0.80 | 0.74 | 0.80 | 0.59 | 0.80 |
| With Pre-Training | -200/50 | 5/2 | 0.922 | 0.963 | 0.849 | 0.844 | 0.976 |
| With Pre-Training and Fine Tuning | -200/50 | 5/2 | 0.977 | 0.959 | 0.989 | 0.955 | 0.969 |
| With Pre-Training and Fine Tuning | -200/50 | 5/1 | 0.990 | 0.983 | 0.995 | 0.981 | 0.987 |
| With Pre-Training and Fine Tuning | -200/50 | 3/1 | 0.995 | 0.992 | 0.996 | 0.991 | 0.994 |

Data Source

Notebook Directory
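The kmer/stride column describes how sequences are tokenized before training: each token is a k-mer of the given length, and the stride sets how far the window advances between tokens. A minimal sketch (function name hypothetical):

```python
def kmer_tokenize(seq, k=5, stride=2):
    """Break a nucleotide string into overlapping k-mer tokens; a smaller
    stride yields more tokens with more overlap between neighbors."""
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

print(kmer_tokenize("ATGCGATCGA", k=5, stride=2))  # ['ATGCG', 'GCGAT', 'GATCG']
```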

Human Promoters (long)

For the long promoter sequences, using data from PromID: Human Promoter Prediction by Deep Learning:

| Model | DNA Size | Models | Accuracy | Precision | Recall | Correlation Coefficient |
|---|---|---|---|---|---|---|
| Umarov et al. | -1000/500 | 2 Model Ensemble | - | 0.636 | 0.802 | 0.714 |
| Umarov et al. | -200/400 | 2 Model Ensemble | - | 0.769 | 0.755 | 0.762 |
| Naive Model | -500/500 | Single Model | 0.858 | 0.877 | 0.772 | 0.708 |
| With Pre-Training | -500/500 | Single Model | 0.888 | 0.900 | 0.824 | 0.770 |
| With Pre-Training and Fine Tuning | -500/500 | Single Model | 0.892 | 0.877 | 0.865 | 0.778 |

Data generation described in notebook

Notebook Directory

Other Bacterial Promoters

This table shows results on data from Recognition of Prokaryotic and Eukaryotic Promoters using Convolutional Deep Learning Neural Networks. These results show that CNN-based methods can sometimes perform better when training on small datasets.

| Method | Organism | Training Examples | Accuracy | Precision | Recall | Correlation Coefficient | Specificity |
|---|---|---|---|---|---|---|---|
| Kh et al. | E. coli | 2936 | - | - | 0.90 | 0.84 | 0.96 |
| Genomic-ULMFiT | E. coli | 2936 | 0.956 | 0.917 | 0.880 | 0.871 | 0.977 |
| Kh et al. | B. subtilis | 1050 | - | - | 0.91 | 0.86 | 0.95 |
| Genomic-ULMFiT | B. subtilis | 1050 | 0.905 | 0.857 | 0.789 | 0.759 | 0.95 |

Data Source

Notebook Directory

Metagenomics Classification

Genomic-ULMFiT shows improved performance on the metagenomics taxonomic dataset from Deep learning models for bacteria taxonomic classification of metagenomic data.

| Method | Data Source | Accuracy | Precision | Recall | F1 |
|---|---|---|---|---|---|
| Fiannaca et al. | Amplicon | 0.9137 | 0.9162 | 0.9137 | 0.9126 |
| Genomic-ULMFiT | Amplicon | 0.9239 | 0.9402 | 0.9332 | 0.9306 |
| Fiannaca et al. | Shotgun | 0.8550 | 0.8570 | 0.8520 | 0.8511 |
| Genomic-ULMFiT | Shotgun | 0.8797 | 0.8824 | 0.8769 | 0.8758 |

Data Source

Notebook Directory

Enhancer Classification

When trained on a dataset of mammalian enhancer sequences from Enhancer Identification using Transfer and Adversarial Deep Learning of DNA Sequences, Genomic-ULMFiT improves on results from Cohn et al.

| Model/ROC-AUC | Human | Mouse | Dog | Opossum |
|---|---|---|---|---|
| Cohn et al. | 0.80 | 0.78 | 0.77 | 0.72 |
| Genomic-ULMFiT 5-mer Stride 2 | 0.812 | 0.871 | 0.773 | 0.787 |
| Genomic-ULMFiT 4-mer Stride 2 | 0.804 | 0.876 | 0.771 | 0.786 |
| Genomic-ULMFiT 3-mer Stride 1 | 0.819 | 0.875 | 0.788 | 0.798 |

Data Source

Notebook Directory

mRNA/lncRNA Classification

This table shows results for training a classification model on a dataset of coding mRNA sequences and long noncoding RNA (lncRNA) sequences. The dataset comes from A deep recurrent neural network discovers complex biological rules to decipher RNA protein-coding potential by Hill et al. The dataset contains two test sets: a standard test set and a challenge test set.

| Model | Test Set | Accuracy | Specificity | Sensitivity | Precision | MCC |
|---|---|---|---|---|---|---|
| GRU Ensemble (Hill et al.)* | Standard Test Set | 0.96 | 0.97 | 0.95 | 0.97 | 0.92 |
| Genomic ULMFiT (3mer stride 1) | Standard Test Set | 0.963 | 0.952 | 0.974 | 0.953 | 0.926 |
| GRU Ensemble (Hill et al.)* | Challenge Test Set | 0.875 | 0.95 | 0.80 | 0.95 | 0.75 |
| Genomic ULMFiT (3mer stride 1) | Challenge Test Set | 0.90 | 0.944 | 0.871 | 0.939 | 0.817 |

(*) Hill et al. presented their results as a plot rather than as a data table. Values in the table above are estimated from reading the plot.

Data Source

Notebook Directory

Interpreting Results

One way to gain insight into how the classification model makes decisions is to perturb regions of a given input sequence and observe how changes to different regions affect the classification result. This allows us to create plots like the one below, highlighting important sequence regions for classification. In the plot below, the red line corresponds to a true transcription start site. The plot shows how prediction results are sensitive to changes around that location. More detail on interpretations can be found in the Model Interpretations directory.
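A minimal sketch of that perturbation procedure, assuming a hypothetical `predict_fn` that maps a sequence string to the model's positive-class probability:

```python
import numpy as np

def perturbation_importance(seq, predict_fn, window=10, stride=5):
    """Slide a window along the sequence, replace it with random bases,
    and record how far the prediction drops; a large drop marks a region
    the classifier depends on."""
    base = predict_fn(seq)
    rng = np.random.default_rng(0)
    scores = []
    for i in range(0, len(seq) - window + 1, stride):
        noise = ''.join(rng.choice(list('ACGT'), size=window))
        perturbed = seq[:i] + noise + seq[i + window:]
        scores.append((i, base - predict_fn(perturbed)))
    return scores
```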

Long Sequence Inference

Inference on long, unlabeled sequences can be done by breaking the input sequence into chunks and plotting prediction results as a function of position. The image below shows a sample prediction of promoter locations on a 40,000 bp region of the E. coli genome. True promoter locations are shown in red. More detail can be found in this notebook.
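A minimal sketch of that chunked inference, again assuming a hypothetical `predict_fn` that scores a single sequence:

```python
def sliding_predictions(genome, predict_fn, chunk=500, stride=100):
    """Score overlapping windows of a long sequence so the promoter
    probability can be plotted against genomic position."""
    positions, probs = [], []
    for i in range(0, len(genome) - chunk + 1, stride):
        positions.append(i)
        probs.append(predict_fn(genome[i:i + chunk]))
    return positions, probs
```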

Relevant Literature

For a comparison to other published methods, see Section 6 of the Methods notebook. Here are some relevant papers in the deep genomics classification space.

- DeepCRISPR: optimized CRISPR guide RNA design by deep learning
- Recognition of prokaryotic and eukaryotic promoters using convolutional deep learning neural networks
- PromID: human promoter prediction by deep learning
- Deep Learning for Genomics: A Concise Overview
- Prediction of deleterious mutations in coding regions of mammals with transfer learning
- Enhancer Identification using Transfer and Adversarial Deep Learning of DNA Sequences
- PEDLA: predicting enhancers with a deep learning-based algorithmic framework
- Predicting enhancers with deep convolutional neural networks
- BiRen: predicting enhancers with a deep-learning-based model using the DNA sequence alone
- Deep learning models for bacteria taxonomic classification of metagenomic data
- Prediction of enhancer-promoter interactions via natural language processing
- A deep recurrent neural network discovers complex biological rules to decipher RNA protein-coding potential
- Recurrent Neural Network for Predicting Transcription Factor Binding Sites
- Learning the Language of the Genome using RNNs