Kallisto-NF

A Nextflow implementation of Kallisto & Sleuth RNA-Seq Tools

Quick start

Make sure you have all the required dependencies listed in the last section.

Install the Nextflow runtime by running the following command:

$ curl -fsSL get.nextflow.io | bash

When done, you can launch the pipeline by entering the following command:

$ nextflow run cbcrg/kallisto-nf

By default the pipeline is executed against the provided example dataset. Check the Pipeline parameters section below to see how to enter your own data on the program command line.

Pipeline parameters

--reads

Specifies the location of the input FASTQ read files. Note the single quotes around the glob pattern: they prevent the shell from expanding it before Nextflow sees it.

Example:

$ nextflow run cbcrg/kallisto-nf --reads '/home/dataset/*.fastq'

This will handle each FASTQ file as a separate sample.

Read pairs can be specified using a glob file pattern. Consider a more complex situation with three samples (A, B and C), where A and B are paired-end and C is single-end. The read files could be:

sample_A_1.fastq
sample_A_2.fastq
sample_B_1.fastq
sample_B_2.fastq 
sample_C_1.fastq

The reads may be specified as below:

$ nextflow run cbcrg/kallisto-nf --reads '/home/dataset/sample_*_{1,2}.fastq'    
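To illustrate how such a pattern groups files into samples, here is a small Python sketch. It is not part of the pipeline; the regular expression simply mirrors the glob above, applied to the example file names:

```python
import re
from collections import defaultdict

# Example file names from the scenario above.
files = [
    "sample_A_1.fastq", "sample_A_2.fastq",
    "sample_B_1.fastq", "sample_B_2.fastq",
    "sample_C_1.fastq",
]

# 'sample_*_{1,2}.fastq' captures a sample ID and a mate number (1 or 2);
# files sharing the same ID are grouped into one sample.
pairs = defaultdict(list)
for f in files:
    m = re.match(r"sample_(.+)_([12])\.fastq$", f)
    if m:
        pairs[m.group(1)].append(f)

for sample, reads in sorted(pairs.items()):
    kind = "paired" if len(reads) == 2 else "single"
    print(sample, kind, reads)
```

Samples A and B end up with two files each (paired-end), while C keeps a single file (single-end).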

--transcriptome

Specifies the path of the transcriptome multi-FASTA file against which the reads are quantified.

Example:

$ nextflow run cbcrg/kallisto-nf --transcriptome /home/user/my_transcriptome/example.fa

--experiment

Specifies the path of the experiment design file used by Sleuth for the differential analysis.

Example:

$ nextflow run cbcrg/kallisto-nf --experiment '/home/experiment/exp_design.txt'

The experiment file should be a space-delimited text file in the format shown below:

run_accession condition sample
SRR493366 control A
SRR493367 control B
SRR493368 control C
SRR493369 HOXA1KD A
SRR493370 HOXA1KD B
SRR493371 HOXA1KD C
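As a quick sanity check of your own design file, a space-delimited file with a header row can be parsed with a few lines of Python. This is illustrative only, not part of the pipeline:

```python
import csv
import io

# Inline copy of the example design file shown above.
design_text = """\
run_accession condition sample
SRR493366 control A
SRR493367 control B
SRR493368 control C
SRR493369 HOXA1KD A
SRR493370 HOXA1KD B
SRR493371 HOXA1KD C
"""

# Space-delimited with a header row, so DictReader with delimiter=' ' works.
rows = list(csv.DictReader(io.StringIO(design_text), delimiter=" "))
conditions = sorted({r["condition"] for r in rows})

print(len(rows))        # number of runs
print(conditions)       # the experimental groups
```

Each row maps a sequencing run accession to its condition and sample; the two conditions here are control and HOXA1KD.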

--fragment_len

Specifies the estimated average fragment length, required by Kallisto when processing single-end reads.

Example:

$ nextflow run cbcrg/kallisto-nf --fragment_len 180

--fragment_sd

Specifies the estimated standard deviation of the fragment length, required by Kallisto when processing single-end reads.

Example:

$ nextflow run cbcrg/kallisto-nf --fragment_sd 180

--bootstrap

Specifies the number of bootstrap samples drawn by Kallisto to estimate the technical variance.

Example:

$ nextflow run cbcrg/kallisto-nf --bootstrap 100

--output

Specifies the directory where the pipeline results are stored.

Example:

$ nextflow run cbcrg/kallisto-nf --output /home/user/my_results 

Cluster support

Kallisto-NF execution relies on the Nextflow framework, which provides an abstraction between the pipeline's functional logic and the underlying processing system.

This makes it possible to execute the pipeline on your computer or on any cluster resource manager without modifying it.

By default the pipeline is parallelized by spawning multiple threads on the machine where the script is launched. It can also be submitted to any of the cluster platforms supported by Nextflow, such as SGE, SLURM, LSF and PBS/Torque.

To submit the execution to an SGE cluster, create a file named nextflow.config in the directory where the pipeline is going to be launched, with the following content:

process {
  executor='sge'
  queue='<your queue name>'
}

With this configuration, tasks are submitted through the SGE qsub command, so your pipeline behaves like any other SGE job, with the benefit that Nextflow automatically and transparently manages task synchronisation, file staging/un-staging, etc.

Alternatively the same declaration can be defined in the file $HOME/.nextflow/config.
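The same mechanism applies to the other supported resource managers; only the executor name changes. For example, a sketch of the equivalent configuration for a SLURM cluster (replace the queue placeholder with your partition name):

```
process {
  executor='slurm'
  queue='<your queue name>'
}
```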

To learn more about the available settings and the configuration file, read the Nextflow documentation.

Dependencies