A DSL for parallel and scalable computational pipelines | Nextflow
Fast prototyping
Nextflow makes it simple to write a computational pipeline by putting together many different tasks.
You can reuse your existing scripts and tools, and you don't need to learn a new language or API to start using it.
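As a minimal sketch of that idea (the `gzip` call, the file pattern and the process name are illustrative assumptions, not taken from this page), an existing command-line tool can be wrapped in a process with a handful of lines:

```nextflow
#!/usr/bin/env nextflow
nextflow.enable.dsl=2

// Wrap an existing command-line tool (here plain `gzip`) in a process.
// Any script or binary available on the PATH can be called the same way.
process compress {
    input:
    path infile

    output:
    path "${infile}.gz"

    script:
    """
    gzip -c ${infile} > ${infile}.gz
    """
}

workflow {
    // The file pattern is an illustrative assumption.
    Channel.fromPath('data/*.txt') | compress | view
}
```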
Reproducibility
Nextflow supports Docker and Singularity container technology.
This, along with integration with the GitHub code-sharing platform, allows you to write self-contained pipelines, manage versions, and rapidly reproduce any former configuration.
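For example, a container image can be pinned in the pipeline configuration; the settings below are a hedged sketch and the image name is a placeholder:

```nextflow
// nextflow.config -- sketch only; the image name is a placeholder
process.container = 'your-registry/your-image:1.0'
docker.enabled    = true

// On systems where Docker is unavailable, Singularity can be used instead:
// singularity.enabled = true
```

Pipelines hosted on GitHub can also be launched directly by repository name, e.g. `nextflow run nextflow-io/hello`, with the `-r` option to select a specific tag or commit, so a given revision of the code can be reproduced later.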
Continuous checkpoints
All the intermediate results produced during the pipeline execution are automatically tracked.
This allows you to resume execution from the last successfully completed step, no matter why it stopped.
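As a hedged sketch (the process, tool and file names are assumptions), task caching is on by default and can be made explicit per process:

```nextflow
process align {
    cache true            // default behaviour: inputs and script are hashed to decide whether to re-run

    input:
    path reads

    output:
    path 'aligned.bam'

    script:
    """
    my_aligner ${reads} > aligned.bam    # hypothetical tool
    """
}
```

After an interruption, launching the same pipeline again with the `-resume` option (e.g. `nextflow run main.nf -resume`, where `main.nf` is an assumed script name) restores the cached results instead of recomputing them.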
Portable
Nextflow provides an abstraction layer between your pipeline's logic and the execution layer, so the same pipeline can be executed on multiple platforms without modification.
It provides out-of-the-box executors for the GridEngine, SLURM, LSF, PBS, Moab and HTCondor batch schedulers and for the Kubernetes, Amazon AWS, Google Cloud and Microsoft Azure platforms.
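The platform choice lives in the configuration rather than in the pipeline script; the profile, queue and bucket names below are placeholders in this sketch:

```nextflow
// nextflow.config -- a hedged sketch; profile, queue and bucket names are placeholders
profiles {
    standard {
        process.executor = 'local'
    }
    cluster {
        process.executor = 'slurm'
        process.queue    = 'long'                  // placeholder queue name
    }
    cloud {
        process.executor = 'awsbatch'
        process.queue    = 'my-batch-queue'        // placeholder AWS Batch queue
        workDir          = 's3://my-bucket/work'   // placeholder S3 bucket
    }
}
```

The same pipeline script then runs unchanged with `-profile cluster` or `-profile cloud`.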
Stream oriented
Nextflow extends the Unix pipes model with a fluent DSL, allowing you to handle complex stream interactions easily.
It promotes a programming approach based on functional composition that results in resilient and easily reproducible pipelines.
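A small illustration of that style (the values and closures are arbitrary, chosen only for the example):

```nextflow
nextflow.enable.dsl=2

workflow {
    // Operators are chained with the pipe operator, mirroring Unix pipes.
    Channel.of(1, 2, 3, 4, 5)
        | filter { it % 2 == 1 }          // keep odd numbers
        | map    { it * it }              // square each value
        | view   { "odd square: $it" }    // print each result
}
```

Each operator consumes one asynchronous stream and produces another, so stages compose the same way Unix commands compose through pipes.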
Unified parallelism
Nextflow is based on the dataflow programming model which greatly simplifies writing complex distributed pipelines.
Parallelisation is implicitly defined by the processes' input and output declarations. The resulting applications are inherently parallel and can scale up or scale out, transparently, without having to adapt to a specific platform architecture.
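To make the implicit parallelism concrete (the file pattern and command are illustrative assumptions), the process below is executed once per file emitted by its input channel, and those tasks run in parallel up to whatever the configured executor allows:

```nextflow
nextflow.enable.dsl=2

// One task is spawned per input file; Nextflow schedules the tasks
// concurrently without any explicit parallel code in the pipeline.
process countLines {
    input:
    path sample

    output:
    stdout

    script:
    """
    wc -l < ${sample}
    """
}

workflow {
    Channel.fromPath('samples/*.fastq') | countLines | view
}
```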