DeepSignal-plant


A deep-learning method for detecting methylation state from Oxford Nanopore sequencing reads of plants.

deepsignal-plant applies a bidirectional LSTM (BiLSTM) to detect methylation from Nanopore reads. It is built with Python 3 and PyTorch.

Known issues

# 1. install hdf5/hdf5-tools (may not be necessary)
# ubuntu
sudo apt-get install libhdf5-serial-dev hdf5-tools
# centos
sudo yum install hdf5-devel

# 2. download ont-vbz-hdf-plugin-1.0.1-Linux-x86_64.tar.gz (or newer version) and set HDF5_PLUGIN_PATH
# https://github.com/nanoporetech/vbz_compression/releases
wget https://github.com/nanoporetech/vbz_compression/releases/download/v1.0.1/ont-vbz-hdf-plugin-1.0.1-Linux-x86_64.tar.gz
tar zxvf ont-vbz-hdf-plugin-1.0.1-Linux-x86_64.tar.gz
export HDF5_PLUGIN_PATH=/absolute/path/to/ont-vbz-hdf-plugin-1.0.1-Linux/usr/local/hdf5/lib/plugin

References: issue #8, tombo issue #254, and vbz_compression issue #5.
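
To avoid re-setting the variable in every new shell, the export can be appended to your shell profile (the path below is a placeholder for wherever the plugin was unpacked):

# optional: make HDF5_PLUGIN_PATH persistent across shells
echo 'export HDF5_PLUGIN_PATH=/absolute/path/to/ont-vbz-hdf-plugin-1.0.1-Linux/usr/local/hdf5/lib/plugin' >> ~/.bashrc
source ~/.bashrc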

Installation

deepsignal-plant is built with Python 3 and PyTorch. Guppy and tombo are required to basecall and re-squiggle the raw signals from nanopore reads before running deepsignal-plant.

Option 1. One-step installation

Install deepsignal-plant, its dependencies, and other required packages in one step using conda and environment.yml:

# download deepsignal-plant
git clone https://github.com/PengNi/deepsignal-plant.git

# install tools in environment.yml
conda env create --name deepsignalpenv -f /path/to/deepsignal-plant/environment.yml

# then the environment can be activated to use
conda activate deepsignalpenv

Option 2. Step-by-step installation

(1) create an environment

We highly recommend using a virtual environment for the installation of deepsignal-plant and its dependencies. A virtual environment can be created and (de)activated as follows using conda:

# create
conda create -n deepsignalpenv python=3.8

# activate
conda activate deepsignalpenv

# deactivate
conda deactivate

The virtual environment can also be created using virtualenv.
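
For example (a minimal sketch using virtualenv; the environment path is a placeholder):

# create
virtualenv -p python3 /path/to/deepsignalpenv

# activate
source /path/to/deepsignalpenv/bin/activate

# deactivate
deactivate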

(2) Install deepsignal-plant

After the environment has been created and activated, deepsignal-plant can be installed using conda or pip, or directly from GitHub:

# install using conda
conda install -c bioconda deepsignal-plant

# or install using pip
pip install deepsignal-plant

# or install from github (latest version)
git clone https://github.com/PengNi/deepsignal-plant.git
cd deepsignal-plant
python setup.py install

(3) Re-install PyTorch if needed

PyTorch can be installed automatically during the installation of deepsignal-plant. However, if the installed PyTorch version is not appropriate for your OS, re-install an appropriate version in the same environment:

# install using conda
conda install pytorch==1.11.0 cudatoolkit=10.2 -c pytorch

# or install using pip
pip install torch==1.11.0

(4) Install tombo

tombo (version 1.5.1) must be installed:

# install using pip
pip install ont-tombo

# or install using conda
conda install -c bioconda ont-tombo

Note:

Guppy (version >= 3.6.1) is also required; it can be downloaded from the Nanopore Community (login required).
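
After downloading, Guppy can be unpacked and put on PATH as follows (the archive name and install path are examples; use the version you downloaded):

# unpack the downloaded archive and expose the guppy binaries
tar zxvf ont-guppy_*.tar.gz
export PATH=/absolute/path/to/ont-guppy/bin:$PATH
guppy_basecaller --version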

Trained models

Currently, we have trained the following models:

Example data

Quick start

To call modifications, the raw fast5 files should be basecalled by Guppy (version >= 3.6.1) and then re-squiggled by tombo (version 1.5.1). Finally, modifications of specified motifs can be called by deepsignal-plant. Below are the commands to call 5mC in CG, CHG, and CHH contexts:

# Download and unzip the example data and pre-trained models.
# 1. guppy basecall using GPU
guppy_basecaller -i fast5s/ -r -s fast5s_guppy \
  --config dna_r9.4.1_450bps_hac_prom.cfg \
  --device CUDA:0

# 2. tombo resquiggle
cat fast5s_guppy/*.fastq > fast5s_guppy.fastq
tombo preprocess annotate_raw_with_fastqs --fast5-basedir fast5s/ \
  --fastq-filenames fast5s_guppy.fastq \
  --sequencing-summary-filenames fast5s_guppy/sequencing_summary.txt \
  --basecall-group Basecall_1D_000 --basecall-subgroup BaseCalled_template \
  --overwrite --processes 10
tombo resquiggle fast5s/ GCF_000001735.4_TAIR10.1_genomic.fna \
  --processes 10 --corrected-group RawGenomeCorrected_000 \
  --basecall-group Basecall_1D_000 --overwrite

# 3. deepsignal-plant call_mods
# 5mCs in all contexts (CG, CHG, and CHH) can be called at one time
CUDA_VISIBLE_DEVICES=0 deepsignal_plant call_mods --input_path fast5s/ \
  --model_path model.dp2.CNN.arabnrice2-1_120m_R9.4plus_tem.bn13_sn16.both_bilstm.epoch6.ckpt \
  --result_file fast5s.C.call_mods.tsv \
  --corrected_group RawGenomeCorrected_000 \
  --motifs C --nproc 30 --nproc_gpu 6
deepsignal_plant call_freq --input_path fast5s.C.call_mods.tsv \
  --result_file fast5s.C.call_mods.frequency.tsv
# split 5mC call_freq file into CG/CHG/CHH call_freq files
python /path/to/deepsignal_plant/scripts/split_freq_file_by_5mC_motif.py \
  --freqfile fast5s.C.call_mods.frequency.tsv

Usage

1. Basecall and re-squiggle

Before running deepsignal-plant, the raw reads should be basecalled by Guppy (version >= 3.6.1) and then processed by the re-squiggle module of tombo (version 1.5.1).

Note:

If the fast5 files are in multi-read format, convert them to single-read format first using multi_to_single_fast5 from the ont_fast5_api package:

multi_to_single_fast5 -i $multi_read_fast5_dir -s $single_read_fast5_dir -t 30 --recursive

For the example data:

# 1. run multi_to_single_fast5 if needed
multi_to_single_fast5 -i $multi_read_fast5_dir -s $single_read_fast5_dir -t 30 --recursive

# 2. basecall using GPU, fast5s/ is the $single_read_fast5_dir
guppy_basecaller -i fast5s/ -r -s fast5s_guppy \
  --config dna_r9.4.1_450bps_hac_prom.cfg \
  --device CUDA:0
# or using CPU
guppy_basecaller -i fast5s/ -r -s fast5s_guppy \
  --config dna_r9.4.1_450bps_hac_prom.cfg

# 3. preprocess fast5s if basecall results are saved in fastq format
cat fast5s_guppy/*.fastq > fast5s_guppy.fastq
tombo preprocess annotate_raw_with_fastqs --fast5-basedir fast5s/ \
  --fastq-filenames fast5s_guppy.fastq \
  --sequencing-summary-filenames fast5s_guppy/sequencing_summary.txt \
  --basecall-group Basecall_1D_000 --basecall-subgroup BaseCalled_template \
  --overwrite --processes 10

# 4. resquiggle, cmd: tombo resquiggle $fast5_dir $reference_fa
tombo resquiggle fast5s/ GCF_000001735.4_TAIR10.1_genomic.fna \
  --processes 10 --corrected-group RawGenomeCorrected_000 \
  --basecall-group Basecall_1D_000 --overwrite

2. extract features

Features of targeted sites can be extracted for training or testing.

By default, deepsignal-plant extracts 13-mer-seq and 13*16-signal features of each CpG motif in reads. Note that the value of --corrected_group must be the same as that of --corrected-group used in tombo. For the example data:

# extract features of all Cs
deepsignal_plant extract -i fast5s \
  -o fast5s.C.features.tsv --corrected_group RawGenomeCorrected_000 \
  --nproc 30 --motifs C

The extracted-feature file is a tab-delimited text file, with one record per targeted site per read.
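
A quick way to inspect the layout is to print the first record with each field on its own line (using the feature file from the example above; the signal columns can be long, so the output is truncated):

# show the first extracted record, one field per line (first 20 fields)
head -n 1 fast5s.C.features.tsv | tr '\t' '\n' | head -n 20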

3. call modifications

To call modifications, either the extracted-feature file or the raw fast5 files (recommended) can be used as input.

GPU/Multi-GPU support: Use CUDA_VISIBLE_DEVICES=${cuda_number} deepsignal_plant call_mods [options] to call modifications with specified GPUs (e.g., CUDA_VISIBLE_DEVICES=0 or CUDA_VISIBLE_DEVICES=0,1).

For the example data:

# call 5mCs for instance

# extracted-feature file as input, use CPU
CUDA_VISIBLE_DEVICES=-1 deepsignal_plant call_mods --input_path fast5s.C.features.tsv \
  --model_path model.dp2.CNN.arabnrice2-1_120m_R9.4plus_tem.bn13_sn16.both_bilstm.epoch6.ckpt \
  --result_file fast5s.C.call_mods.tsv \
  --nproc 30
# extracted-feature file as input, use GPU
CUDA_VISIBLE_DEVICES=0 deepsignal_plant call_mods --input_path fast5s.C.features.tsv \
  --model_path model.dp2.CNN.arabnrice2-1_120m_R9.4plus_tem.bn13_sn16.both_bilstm.epoch6.ckpt \
  --result_file fast5s.C.call_mods.tsv \
  --nproc 30 --nproc_gpu 6

# fast5 files as input, use CPU
CUDA_VISIBLE_DEVICES=-1 deepsignal_plant call_mods --input_path fast5s/ \
  --model_path model.dp2.CNN.arabnrice2-1_120m_R9.4plus_tem.bn13_sn16.both_bilstm.epoch6.ckpt \
  --result_file fast5s.C.call_mods.tsv \
  --corrected_group RawGenomeCorrected_000 \
  --motifs C --nproc 30
# fast5 files as input, use GPU
CUDA_VISIBLE_DEVICES=0 deepsignal_plant call_mods --input_path fast5s/ \
  --model_path model.dp2.CNN.arabnrice2-1_120m_R9.4plus_tem.bn13_sn16.both_bilstm.epoch6.ckpt \
  --result_file fast5s.C.call_mods.tsv \
  --corrected_group RawGenomeCorrected_000 \
  --motifs C --nproc 30 --nproc_gpu 6

The modification_call file is a tab-delimited text file, with one called site per read per line.

4. call frequency of modifications

A modification-frequency file can be generated by call_freq function with the call_mods file as input:

# call 5mCs for instance

# output in tsv format
deepsignal_plant call_freq --input_path fast5s.C.call_mods.tsv \
  --result_file fast5s.C.call_mods.frequency.tsv
# output in bedMethyl format
deepsignal_plant call_freq --input_path fast5s.C.call_mods.tsv \
  --result_file fast5s.C.call_mods.frequency.bed --bed
# use --sort to sort the results
deepsignal_plant call_freq --input_path fast5s.C.call_mods.tsv \
  --result_file fast5s.C.call_mods.frequency.bed --bed --sort

The modification-frequency file can be saved in bedMethyl format (by setting --bed as above) or, by default, as a tab-delimited text file.
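
As a usage sketch, assuming the standard bedMethyl column layout (column 10 = read coverage, column 11 = percentage of reads showing methylation), the mean methylation level per chromosome can be summarized with awk:

# mean methylation percentage per chromosome from a bedMethyl file
awk '{sum[$1] += $11; n[$1]++} END {for (c in n) printf "%s\t%.2f\n", c, sum[c]/n[c]}' \
  fast5s.C.call_mods.frequency.bed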

5. denoise training samples

The denoise module can be used to clean a training set (filtering potentially mislabeled samples) before training a new model:

# please use deepsignal_plant denoise -h/--help for instructions
deepsignal_plant denoise --train_file /path/to/train/file

6. train new models

A new model can be trained as follows:

# split the training samples into two independent datasets for training and validation (see the example split below)
# please use deepsignal_plant train -h/--help for instructions
deepsignal_plant train --train_file /path/to/train/file \
  --valid_file /path/to/valid/file \
  --model_dir /dir/to/save/the/new/model
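
For example, a random split into training and validation sets can be done with standard GNU tools (file names and the 10000-line validation size are placeholders):

# shuffle the samples, then hold out 10000 lines for validation
shuf samples.tsv > samples.shuf.tsv
head -n -10000 samples.shuf.tsv > train.tsv   # all but the last 10000 lines
tail -n 10000 samples.shuf.tsv > valid.tsv    # the last 10000 lines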

Extra

We are testing deepsignal-plant on a zebrafish sample...

License

Copyright (C) 2020 Jianxin Wang, Feng Luo, Peng Ni

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/.

Jianxin Wang, Peng Ni, School of Computer Science and Engineering, Central South University, Changsha 410083, China

Feng Luo, School of Computing, Clemson University, Clemson, SC 29634, USA