




# [flowr](http://flow-r.github.io/flowr): Streamlining Computing Workflows

Latest documentation: [flow-r.github.io/flowr](http://flow-r.github.io/flowr)

The flowr framework allows you to design and implement complex pipelines and deploy them on your institution's high-performance computing cluster. It was built with the needs of bioinformatics workflows in mind, but it is easily extended to any field where a series of steps (shell commands) is to be executed as a (work)flow.
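The core idea can be sketched in a few lines of R. This is a minimal, hypothetical sketch using flowr's `to_flowmat()`, `to_flowdef()` and `to_flow()` helpers; the job names and commands below are made up for illustration, and the exact column requirements are described in the flowr documentation:

```r
library(flowr)

## a "flow mat": one row per command, tied to a sample and a job name
## (hypothetical commands; flowr expects samplename, jobname and cmd columns)
cmds <- list(sleep = "sleep 5",
             tmp   = "head -c 100 /dev/urandom > tmp.txt")
flowmat <- to_flowmat(cmds, samplename = "sampleA")

## a "flow def" (skeleton) describes resources and dependencies per job
flowdef <- to_flowdef(flowmat)

## stitch them into a flow object; submit_flow() would deploy it
fobj <- to_flow(flowmat, def = flowdef, flowname = "sketch_pipe")
# submit_flow(fobj, execute = TRUE)  ## uncomment to actually run
```

The separation between the commands (flowmat) and the resource/dependency description (flowdef) is what lets the same pipeline move between a laptop and a cluster by changing only the definition.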

## Highlights

## Example

*(figure: an example fastq-to-bam pipeline)*

A typical case in next-generation sequencing involves processing tens of [fastqs](http://en.wikipedia.org/wiki/FASTQ_format) for a sample and [mapping](http://en.wikipedia.org/wiki/Sequence_alignment) them to a reference genome:

- Each step requires a different amount of resources (CPU, RAM, etc.).
- Say step 1 uses 10 cores for each file; with 50 files it would use 500 cores in total.
- The next step uses one core per file, 50 cores in total.
- A final step merges the results, using only 1 core.
- Some pipelines reserve the maximum, say 500 cores, throughout steps 1 to 3; flowr handles the **surge**, reserving 500, 50 or 1 cores as needed.
- Now consider a run with 10 samples: all of them would be processed in parallel, spawning **thousands of cores**.

A few lines to get you started:

```r
## Official stable release from CRAN (updated every other month)
## visit flow-r.github.io/flowr/install for more details
install.packages("flowr", repos = "http://cran.rstudio.com")

## or the latest version from DRAT (CRAN supplies the dependencies)
install.packages("flowr", repos = c(CRAN = "http://cran.rstudio.com",
                                    DRAT = "http://sahilseth.github.io/drat"))

library(flowr) ## load the library
setup()        ## copy the flowr bash script; creates a flowr folder under home
```

Run an example pipeline:

```sh
# style 1: the sleep_pipe() function creates the system cmds
flowr run x=sleep_pipe platform=local execute=TRUE

# style 2: start from a tsv of system cmds
# get the example files
wget --no-check-certificate http://raw.githubusercontent.com/sahilseth/flowr/master/inst/pipelines/sleep_pipe.tsv
wget --no-check-certificate http://raw.githubusercontent.com/sahilseth/flowr/master/inst/pipelines/sleep_pipe.def

# submit to the local machine
flowr to_flow x=sleep_pipe.tsv def=sleep_pipe.def platform=local execute=TRUE
# or submit to an LSF cluster
flowr to_flow x=sleep_pipe.tsv def=sleep_pipe.def platform=lsf execute=TRUE
```
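For reference, the tsv (flow mat) and def (flow def) files are plain tab-separated tables. The sketch below shows *hypothetical* contents using the column names from the flowr documentation; the actual `sleep_pipe.tsv`/`sleep_pipe.def` you download may differ, so inspect those files directly:

```
# sleep_pipe.tsv (flowmat): one row per command
samplename  jobname  cmd
sample1     sleep    sleep 5
sample1     tmp      head -c 100 /dev/urandom > tmp.txt

# sleep_pipe.def (flowdef): one row per jobname, resources + dependencies
jobname  sub_type  prev_jobs  dep_type  queue  memory_reserved  walltime  cpu_reserved  platform
sleep    scatter   none       none      short  2000             1:00      1             local
tmp      scatter   sleep      serial    short  2000             1:00      1             local
```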

Example pipelines: [inst/pipelines](https://github.com/sahilseth/flowr/tree/master/inst/pipelines)

## Resources

## Updates

This package is under active development; you may watch the repository for changes using the watch link above.

## Feedback

Please feel free to raise a GitHub issue with questions and comments.

## Acknowledgements
