High-Performance Computing at the NIH
Sickle on Biowulf & Helix

Sickle is a windowed adaptive trimming tool that trims FASTQ files based on quality. Most modern sequencing technologies produce reads with deteriorating quality towards the 3'-end, and some towards the 5'-end as well. Incorrectly called bases in both regions negatively impact assemblies, mapping, and other downstream bioinformatics analyses.

Sickle uses sliding windows along with quality and length thresholds to determine when quality is sufficiently low to trim the 3'-end of a read, and when quality is sufficiently high to trim the 5'-end. It also discards reads that fall below the length threshold. The window length is 0.1 times the length of the read; if that value is less than 1, the window is instead set to the full length of the read. The window slides along the quality values until the average quality in the window rises above the quality threshold, at which point the algorithm determines where within the window the rise occurs and cuts both the read and quality strings there for the 5'-end cut. When the average quality in the window later drops below the threshold, the algorithm determines where within the window the drop occurs and cuts both strings there for the 3'-end cut. If the remaining sequence is shorter than the minimum length threshold, the read is discarded entirely. 5'-end trimming can be disabled.
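
The windowed logic can be sketched in a few lines of Python (a minimal illustration of the behavior described above, not sickle's actual C implementation; the function name, default thresholds, and edge-case handling are assumptions):

#!/usr/bin/env python3
# Illustrative sketch of windowed adaptive trimming for a single read.
# Assumes Sanger quality encoding (ASCII offset 33); defaults are assumed.

def window_trim(seq, qual, q_thresh=20, l_thresh=20, trim_5prime=True):
    """Return trimmed (seq, qual), or None if the read is discarded."""
    scores = [ord(c) - 33 for c in qual]        # decode phred+33 qualities
    n = len(scores)
    if n == 0:
        return None
    win = n if 0.1 * n < 1 else int(0.1 * n)    # window = 0.1 x read length

    def avg(i):                                 # mean quality of window at i
        return sum(scores[i:i + win]) / win

    start = 0
    if trim_5prime:
        # Slide until the window average rises above the threshold ...
        while start + win <= n and avg(start) < q_thresh:
            start += 1
        # ... then cut at the first base in that window at/above the threshold.
        while start < n and scores[start] < q_thresh:
            start += 1

    end = n
    for i in range(start, n - win + 1):
        if avg(i) < q_thresh:                   # average dropped below threshold
            end = i                             # cut where the drop occurs
            while end < n and scores[end] >= q_thresh:
                end += 1
            break

    if end - start < l_thresh:
        return None                             # discard: below length threshold
    return seq[start:end], qual[start:end]

# Example: a 40 bp read with a low-quality 3' tail is trimmed back to 30 bp.
seq  = "ACGT" * 10
qual = "I" * 30 + "#" * 10                      # Q40 bases, then Q2 bases
print(window_trim(seq, qual))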

Running on Helix
$ module load sickle
$ sickle pe -f input1.fastq -r input2.fastq -t sanger \
-o output1.fastq -p output2.fastq \
-s trimmed_singles_file.fastq

Running a single batch job on Biowulf

1. Create a batch script similar to the one below:

#!/bin/bash

module load sickle
cd /data/$USER/
sickle pe -f input1.fastq -r input2.fastq -t sanger \
-o output1.fastq -p output2.fastq \
-s trimmed_singles_file.fastq

2. Submit the script on Biowulf:

$ sbatch jobscript

If more memory is required (the default is 4 GB), specify the amount with --mem, for example --mem=10g:

$ sbatch --mem=10g jobscript

Running a swarm of jobs on Biowulf

Set up a swarm command file, one command line per job (a short script to generate such a file is sketched after the example):

  cd /data/$USER/dir1; sickle pe -f input1.fastq -r input2.fastq -t sanger -o output1.fastq -p output2.fastq -s trimmed_singles_file.fastq
  cd /data/$USER/dir2; sickle pe -f input1.fastq -r input2.fastq -t sanger -o output1.fastq -p output2.fastq -s trimmed_singles_file.fastq
  cd /data/$USER/dir3; sickle pe -f input1.fastq -r input2.fastq -t sanger -o output1.fastq -p output2.fastq -s trimmed_singles_file.fastq
  [......]
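If every directory has the same layout, the swarm file can be generated rather than written by hand. A minimal Python sketch, assuming directories named dir1, dir2, ... under /data/$USER and the input file names from the example above:

#!/usr/bin/env python3
# Hypothetical swarm-file generator; directory and file names are
# assumptions carried over from the example above.
import os
from pathlib import Path

cmd = ("cd {d}; sickle pe -f input1.fastq -r input2.fastq -t sanger "
       "-o output1.fastq -p output2.fastq -s trimmed_singles_file.fastq")

base = Path("/data") / os.environ["USER"]
with open("swarmfile", "w") as out:
    for d in sorted(base.glob("dir*")):         # dir1, dir2, dir3, ...
        out.write(cmd.format(d=d) + "\n")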

Submit the swarm file, where -f specifies the swarm file name and --module loads the required module for each command line in the file:

  $ swarm -f swarmfile --module sickle

If more memory is needed for each command, use -g to specify it in gigabytes; the example below allocates 10 GB per command:

  $ swarm -f swarmfile -g 10 --module sickle

For more information on running swarm, see swarm.html.

Running an interactive job on Biowulf

It may be useful to run jobs interactively for debugging purposes. Such jobs should not be run on the Biowulf login node. Instead, allocate an interactive node as described below and run the job there.

biowulf$ sinteractive 
salloc.exe: Granted job allocation 16535

cn999$ module load sickle
cn999$ cd /data/$USER/dir
cn999$ sickle pe -f input1.fastq -r input2.fastq -t sanger -o output1.fastq -p output2.fastq -s trimmed_singles_file.fastq
[...etc...]

cn999$ exit
exit

biowulf$

Make sure to exit the job once finished.

If more memory is needed, use --mem. For example:

biowulf$ sinteractive --mem=8g

Documentation

https://github.com/najoshi/sickle/