
Snippy

Rapid haploid variant calling and core genome alignment

Author

Torsten Seemann

Synopsis

Snippy finds SNPs between a haploid reference genome and your NGS sequence reads. It will find both substitutions (snps) and insertions/deletions (indels). It will use as many CPUs as you can give it on a single computer (tested to 64 cores). It is designed with speed in mind, and produces a consistent set of output files in a single folder. It can then take a set of Snippy results using the same reference and generate a core SNP alignment (and ultimately a phylogenomic tree).

Quick Start

% snippy --cpus 16 --outdir mysnps --ref Listeria.gbk --R1 FDA_R1.fastq.gz --R2 FDA_R2.fastq.gz
<cut>
Walltime used: 3 min, 42 sec
Results folder: mysnps
Done.

% ls mysnps
snps.vcf snps.bed snps.gff snps.csv snps.tab snps.html 
snps.bam snps.txt reference/ ...

% head -5 mysnps/snps.tab
CHROM  POS     TYPE    REF   ALT    EVIDENCE        FTYPE STRAND NT_POS AA_POS LOCUS_TAG GENE PRODUCT EFFECT
chr      5958  snp     A     G      G:44 A:0        CDS   +      41/600 13/200 ECO_0001  dnaA replication protein DnaA missense_variant c.548A>C p.Lys183Thr
chr     35524  snp     G     T      T:73 G:1 C:1    tRNA  -   
chr     45722  ins     ATT   ATTT   ATTT:43 ATT:1   CDS   -                    ECO_0045  gyrA DNA gyrase
chr    100541  del     CAAA  CAA    CAA:38 CAAA:1   CDS   +                    ECO_0179      hypothetical protein
plas      619  complex GATC  AATA   GATC:28 AATA:0  
plas     3221  mnp     GA    CT     CT:39 CT:0      CDS   +                    ECO_p012  rep  hypothetical protein

% snippy-core --prefix core mysnps1 mysnps2 mysnps3 mysnps4 
Loaded 4 SNP tables.
Found 2814 core SNPs from 96615 SNPs.

% ls core.*
core.aln core.full.aln core.tab core.txt core.vcf

Installation

Conda

Install Bioconda then:

conda install -c conda-forge -c bioconda -c defaults snippy
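
If you want to keep Snippy isolated, installing it into its own environment also works (a standard Conda pattern, not a Snippy requirement):

# create and activate a dedicated environment for snippy
conda create -n snippy -c conda-forge -c bioconda -c defaults snippy
conda activate snippy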

Homebrew

Install Homebrew (macOS) or Linuxbrew (Linux), then:

brew install brewsci/bio/snippy

Source

This will install the latest version directly from GitHub. You'll need to add Snippy's bin directory to your $PATH.

cd $HOME
git clone https://github.com/tseemann/snippy.git
$HOME/snippy/bin/snippy --help
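
For example, to add it to your $PATH permanently under Bash (adjust for your shell of choice):

# append the export to your shell startup file and reload it
echo 'export PATH=$HOME/snippy/bin:$PATH' >> ~/.bashrc
source ~/.bashrc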

Check installation

Ensure you have the desired version:

snippy --version

Check that all dependencies are installed and working:

snippy --check

Calling SNPs

Input Requirements

Output Files

Extension           Description
.tab                A simple tab-separated summary of all the variants
.csv                A comma-separated version of the .tab file
.html               A HTML version of the .tab file
.vcf                The final annotated variants in VCF format
.bed                The variants in BED format
.gff                The variants in GFF3 format
.bam                The alignments in BAM format. Includes unmapped, multimapping reads. Excludes duplicates.
.bam.bai            Index for the .bam file
.log                A log file with the commands run and their outputs
.aligned.fa         A version of the reference with - at positions of depth=0 and N where 0 < depth < --mincov (does not have variants)
.consensus.fa       A version of the reference genome with all variants instantiated
.consensus.subs.fa  A version of the reference genome with only substitution variants instantiated
.raw.vcf            The unfiltered variant calls from Freebayes
.filt.vcf           The filtered variant calls from Freebayes
.vcf.gz             Compressed .vcf file via BGZIP
.vcf.gz.csi         Index for the .vcf.gz via bcftools index
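
Because snps.vcf.gz is BGZF-compressed and indexed, standard region queries work on it. For example, to inspect the call at the first variant position from the Quick Start (assuming bcftools is on your PATH):

% bcftools view -H mysnps/snps.vcf.gz chr:5958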

Note: Snippy 4.x does NOT produce the following files that Snippy 3.x did:

Extension      Description
.vcf.gz.tbi    Index for the .vcf.gz via TABIX
.depth.gz      Output of samtools depth -aa for the .bam file
.depth.gz.tbi  Index for the .depth.gz file
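
If you still need the 3.x depth files, they can be regenerated from the BAM with the same samtools depth -aa command mentioned in the table (a sketch; the tabix flags select the sequence and position columns):

% samtools depth -aa snps.bam | bgzip > snps.depth.gz
% tabix -s 1 -b 2 -e 2 snps.depth.gz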

Columns in the TAB/CSV/HTML formats

Name      Description
CHROM     The sequence the variant was found in, e.g. the name after the > in the FASTA reference
POS       Position in the sequence, counting from 1
TYPE      The variant type: snp mnp ins del complex
REF       The nucleotide(s) in the reference
ALT       The alternate nucleotide(s) supported by the reads
EVIDENCE  Frequency counts for REF and ALT

If you supply a GenBank file as the --reference rather than a FASTA file, Snippy will fill in these extra columns by using the genome annotation to tell you which feature was affected by the variant:

Name       Description
FTYPE      Class of feature affected: CDS tRNA rRNA ...
STRAND     Strand the feature was on: + - .
NT_POS     Nucleotide position of the variant within the feature / Length in nt
AA_POS     Residue position / Length in aa (only if FTYPE is CDS)
LOCUS_TAG  The /locus_tag of the feature (if it existed)
GENE       The /gene tag of the feature (if it existed)
PRODUCT    The /product tag of the feature (if it existed)
EFFECT     The snpEff annotated consequence of this variant (ANN tag in the .vcf)
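
Because the .tab file is plain tab-separated text with these columns, quick shell filters work well. For example, to list only snp-type variants inside CDS features (TYPE is column 3, FTYPE is column 7):

% awk -F'\t' '$3 == "snp" && $7 == "CDS"' mysnps/snps.tab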

Columns in TXT format

Name       Description
ID         Reference + Sample
LENGTH     Length of the reference
ALIGNED    Number of sites aligned to
UNALIGNED  Number of sites unaligned
VARIANT    Number of sites different from the reference
HET        Number of heterozygous or poor-quality genotype sites, represented with an n (--minqual)
MASKED     Number of sites masked in the reference, represented with an X (--mask)
LOWCOV     Number of low-coverage sites in this sample, represented with an N (--mincov)

Variant Types

Type     Name                              Example
snp      Single Nucleotide Polymorphism    A => T
mnp      Multiple Nucleotide Polymorphism  GC => AT
ins      Insertion                         ATT => AGTT
del      Deletion                          ACGG => ACG
complex  Combination of snp/mnp            ATTC => GTTA

The variant caller

The variant calling is done by Freebayes. The key parameters under user control are --mincov, --minfrac and --minqual (see Options below).
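
For orientation only, a simplified sketch of the style of Freebayes call involved; this is NOT Snippy's exact command line, which adds further options and filtering:

# Illustrative flags only: --min-coverage roughly corresponds to --mincov,
# -F to --minfrac; diploid calling (-p 2) lets heterozygous-looking sites
# be flagged downstream (the 'n' genotype described elsewhere on this page).
freebayes -p 2 -f reference/ref.fa --min-coverage 10 -F 0.1 snps.bam > snps.raw.vcf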

Looking at variants in detail with snippy-vcf_report

If you run Snippy with the --report option it will automatically run snippy-vcf_report and generate a snps.report.txt which has a section like this for each SNP in snps.vcf:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>LBB_contig000001:10332 snp A=>T DP=7 Q=66.3052 [7]

         10301     10311     10321     10331     10341     10351     10361
tcttctccgagaagggaatataatttaaaaaaattcttaaataattcccttccctcccgttataaaaattcttcgcttat
........................................T.......................................
,,,,,,  ,,,,,,,,,,,,,,,,,,,,,t,,,,,,,,,,t,,t,,,,,,,,,,,,,,,,g,,,,,,,g,,,,,,,,,t,
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, .......T..................A............A.......
.........................A........A.....T...........    .........C..............
.....A.....................C..C........CT.................TA.............
,a,,,,,a,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,t,t,,,g,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
,,,,,ga,,,,,,,c,,,,,,,t,,,,,,,,,,g,,,,,,t,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
                            ............T.C..............G...............G......
                                                    ,,,,,,,g,,,,,,,,g,,,,,,,,,,,
                                                           g,,,,,,,,,,,,,,,,,,,,

If you wish to generate this report after you have run Snippy, you can run it directly:

cd snippydir
snippy-vcf_report --cpus 8 --auto > snps.report.txt

If you want a HTML version for viewing in a web browser, use the --html option:

cd snippydir
snippy-vcf_report --html --cpus 16 --auto > snps.report.html

It works by running samtools tview for each variant, which can be very slow if you have thousands of variants, so setting --cpus as high as possible is recommended.

Options

Core SNP phylogeny

If you call SNPs for multiple isolates from the same reference, you can produce an alignment of "core SNPs" which can be used to build a high-resolution phylogeny (ignoring possible recombination). A "core site" is a genomic position that is present in all the samples. A core site can have the same nucleotide in every sample ("monomorphic") or differ in some samples ("polymorphic" or "variant"). If we ignore the complications of the "ins" and "del" variant types and keep only the variant sites, we get the "core SNP genome".
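
For example, once snippy-core (described below) has produced core.aln, a quick maximum-likelihood tree can be built the same way as in the cleaning pipeline shown later in this section:

% FastTree -gtr -nt core.aln > core.tree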

Input Requirements

Using snippy-multi

To simplify running a set of isolate sequences (reads or contigs) against the same reference, you can use the snippy-multi script. This script requires a tab-separated input file as follows, and can handle paired-end reads, single-end reads, and assembled contigs.

# input.tab = ID R1 [R2]
Isolate1	/path/to/R1.fq.gz	/path/to/R2.fq.gz
Isolate1b	/path/to/R1.fastq.gz	/path/to/R2.fastq.gz
Isolate1c	/path/to/R1.fa		/path/to/R2.fa
# single end reads supported too
Isolate2	/path/to/SE.fq.gz
Isolate2b	/path/to/iontorrent.fastq
# or already assembled contigs if you don't have reads
Isolate3	/path/to/contigs.fa
Isolate3b	/path/to/reference.fna.gz

Then run this to generate the output script. The first parameter should be the input.tab file; the remaining parameters should be any shared Snippy options. The ID will be used for each isolate's --outdir.

% snippy-multi input.tab --ref Reference.gbk --cpus 16 > runme.sh

% less runme.sh   # check the script makes sense

% sh ./runme.sh   # leave it running over lunch

It will also run snippy-core at the end to generate the core genome SNP alignment files core.*.

Output Files

Extension       Description
.aln            A core SNP alignment in the --aformat format (default FASTA)
.full.aln       A whole genome SNP alignment (includes invariant sites)
.tab            Tab-separated columnar list of core SNP sites with alleles but NO annotations
.vcf            Multi-sample VCF file with genotype GT tags for all discovered alleles
.txt            Tab-separated columnar list of alignment/core-size statistics
.ref.fa         FASTA version/copy of the --ref
.self_mask.bed  BED file generated if --mask auto is used

Why is core.full.aln an alphabet soup?

The core.full.aln file is a FASTA formatted multiple sequence alignment file. It has one sequence for the reference, and one for each sample participating in the core genome calculation. Each sequence has the same length as the reference sequence.

Character  Meaning
ATGC       Same as the reference
atgc       Different from the reference
-          Zero coverage in this sample, or a deletion relative to the reference
N          Low coverage in this sample (based on --mincov)
X          Masked region of the reference (from --mask)
n          Heterozygous or poor quality genotype (has GT=0/1 or QUAL < --minqual in snps.raw.vcf)

You can remove all the "weird" characters and replace them with N using the included snippy-clean_full_aln. This is useful when you need to pass it to a tree-building or recombination-removal tool:

% snippy-clean_full_aln core.full.aln > clean.full.aln
% run_gubbins.py -p gubbins clean.full.aln
% snp-sites -c gubbins.filtered_polymorphic_sites.fasta > clean.core.aln
% FastTree -gtr -nt clean.core.aln > clean.core.tree

Options

Advanced usage

Increasing speed when you have too many reads

Sometimes you will have far more sequencing depth than you need to call SNPs. A common case is a whole MiSeq flowcell for a single bacterial isolate, where 25 million reads result in genome depth as high as 2000x. This makes Snippy far slower than it needs to be, as most SNPs will be recovered with 50-100x depth. If you know you have 10 times as much data as you need, Snippy can randomly sub-sample your FASTQ data:

# have 1000x depth, only need 100x so sample at 10%
snippy --subsample 0.1  ...
<snip>
Sub-sampling reads at rate 0.1
<snip>
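
To choose a rate, a rough depth estimate is reads × read length / genome size. For example, assuming 25 million 300 bp reads and a 4 Mbp genome (illustrative numbers only):

% echo '25000000 * 300 / 4000000' | bc
1875

At roughly 1875x, a --subsample of 0.05 would leave about 90x depth.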

Only calling SNPs in particular regions

If you are looking for specific SNPs, say AMR-related ones in particular genes of your reference genome, you can save much time by only calling variants there. Just put the regions of interest into a BED file:

snippy --targets sites.bed ...
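
BED is zero-based, half-open and tab-separated: sequence name, start, end and an optional feature name. A minimal example matching the Quick Start reference (the coordinates and genes here are hypothetical):

# sites.bed
chr	5900	6100	dnaA
chr	45600	45900	gyrA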

Finding SNPs between contigs

Sometimes one of your samples is only available as contigs, without corresponding FASTQ reads. You can still use these contigs with Snippy to find variants against a reference. It does this by shredding the contigs into 250 bp single-end reads at 2 × --mincov uniform coverage.

To use this feature, instead of providing --R1 and --R2 you use the --ctgs option with the contigs file:

% ls
ref.gbk mut1.fasta mut2.fasta

% snippy --outdir mut1 --ref ref.gbk --ctgs mut1.fasta
Shredding mut1.fasta into pseudo-reads.
Identified 257 variants.

% snippy --outdir mut2 --ref ref.gbk --ctgs mut2.fasta
Shredding mut2.fasta into pseudo-reads.
Identified 413 variants.

% snippy-core mut1 mut2 
Found 129 core SNPs from 541 variant sites.

% ls
core.aln core.full.aln ...

This output folder is completely compatible with snippy-core, so you can mix FASTQ- and contig-based Snippy output folders to produce alignments.

Correcting assembly errors

The de novo assembly process attempts to reconstruct the reads into the original DNA sequences they were derived from. These reconstructed sequences are called contigs or scaffolds. For various reasons, small errors can be introduced into the assembled contigs which are not supported by the original reads used in the assembly process.

A common strategy is to align the reads back to the contigs to check for discrepancies. These errors appear as variants (SNPs and indels). If we can reverse these variants then we can "correct" the contigs to match the evidence provided by the original reads. Obviously this strategy can go wrong if one is not careful about how the read alignment is performed and which variants are accepted.

Snippy is able to help with this contig correction process. In fact, it produces a snps.consensus.fa FASTA file which is the ref.fa input file provided but with the discovered variants in snps.vcf applied!

However, Snippy is not perfect and sometimes finds questionable variants. Typically you would make a copy of snps.vcf (let's call it corrections.vcf) and remove the lines corresponding to variants you don't trust. For example, when correcting Roche 454 and PacBio SMRT contigs, we primarily expect to find homopolymer errors, and hence expect to see ins more than snp type variants.

In this case you need to run the correcting process manually using these steps:

% cd snippy-outdir
% cp snps.vcf corrections.vcf
% $EDITOR corrections.vcf              # remove the variants you don't trust
% bgzip -c corrections.vcf > corrections.vcf.gz
% tabix -p vcf corrections.vcf.gz
% vcf-consensus corrections.vcf.gz < reference/ref.fa > corrected.fa
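
If you prefer bcftools to the older VCFtools vcf-consensus, an equivalent final step (using the index created above) is:

% bcftools consensus -f reference/ref.fa corrections.vcf.gz > corrected.fa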

You may wish to iterate this process by using corrected.fa as a new --ref for a repeated run of Snippy. Sometimes correcting one error allows BWA to align things it couldn't before, and new errors are uncovered.

Snippy may not be the best way to correct assemblies - you should consider dedicated tools such as PILON or iCorn2, or adjust the Quiver parameters (for PacBio data).

Unmapped Reads

Sometimes you are interested in the reads which did not align to the reference genome. These reads represent DNA that was novel to your sample which is potentially interesting. A standard strategy is to de novo assemble the unmapped reads to discover these novel DNA elements, which often comprise mobile genetic elements such as plasmids.

By default, Snippy does not keep the unmapped reads, not even in the BAM file. If you wish to keep them, use the --unmapped option and the unaligned reads will be saved to a compressed FASTQ file:

% snippy --outdir out --unmapped ....

% ls out/
snps.unmapped.fastq.gz ....
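
One possible way to do the follow-up assembly, assuming SPAdes is installed and treating the unmapped reads as unpaired:

# -s = unpaired reads; assembled contigs end up in unmapped_asm/contigs.fasta
% spades.py -s out/snps.unmapped.fastq.gz -o unmapped_asm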

Information

Etymology

The name Snippy is a combination of SNP (pronounced "snip"), snappy (meaning "quick") and Skippy the Bush Kangaroo (to represent its Australian origin).

License

Snippy is free software, released under the GPL (version 2).

Issues

Please submit suggestions and bug reports to the Issue Tracker.

Requirements

Bundled binaries

For Linux (compiled on Ubuntu 16.04 LTS) and macOS (compiled on High Sierra with Homebrew), some of the binaries, JARs and scripts are included.