High-Performance Computing at the NIH
Fastq-tools on Biowulf and Helix

A collection of small and efficient programs for performing some common and uncommon tasks with FASTQ files.

Running on Helix

Sample session:

helix$ module load fastqtools
helix$ fastq-uniq -h
fastq-uniq [OPTION] [FILE]...
Output a non-redundant FASTQ file, in which there are no duplicate reads.
(Warning: this program can be somewhat memory intensive.)

  -v, --verbose           print status along the way
  -h, --help              print this message
  -V, --version           output version information and exit
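To illustrate what fastq-uniq does, and why it can be memory intensive, here is a rough awk sketch that collapses duplicate records by sequence. This is only an approximation: fastq-uniq's exact duplicate criterion may differ, and the toy file name is made up.

```shell
# toy FASTQ: r1 and r2 share the same sequence, r3 is distinct
printf '@r1\nACGT\n+\nIIII\n@r2\nACGT\n+\nIIII\n@r3\nTTTT\n+\nIIII\n' > toy.fq

# join each 4-line record onto one tab-separated line, keep only the
# first record seen for each sequence (field 2), then split back out
paste - - - - < toy.fq | awk -F'\t' '!seen[$2]++' | tr '\t' '\n'
```

The `seen` array keeps one entry per distinct read, which is why this approach (and fastq-uniq itself) needs memory proportional to the number of unique reads in the input.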

Submitting a single batch job

1. Create a batch script file containing lines similar to the ones below. Edit the directory path to point to the location of your data before running.


#!/bin/bash
module load fastqtools
cd /data/$USER/somewhere
fastq-uniq file.fq

2. Submit the script on Biowulf.

$ sbatch myscript

See the Biowulf user guide for more options, such as allocating more memory or a longer walltime.

Submitting a swarm of jobs

Using the 'swarm' utility, one can submit many jobs to the cluster to run concurrently.

Set up a swarm command file (eg /data/$USER/cmdfile). Here is a sample file:

cd /data/user/run1/; fastq-uniq file.fq
cd /data/user/run2/; fastq-uniq file.fq
cd /data/user/run3/; fastq-uniq file.fq
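For many input directories, a command file like the one above can be generated in a loop rather than typed by hand (a sketch; adjust the directory layout and file names to match your own data):

```shell
# write one fastq-uniq command per run directory into the swarm command file
for i in 1 2 3; do
    echo "cd /data/user/run$i/; fastq-uniq file.fq"
done > cmdfile
cat cmdfile
```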

The -f flag is required to specify the swarm command file name.

Submit the swarm job:

$ swarm -f cmdfile --module fastqtools

- Use the -g flag to request more memory (default: 1.5 GB per line in the swarm command file)

- Use the --time flag to request a longer walltime (default: 4 hours)

For more information regarding running swarm, see swarm.html


Running an interactive job

Users may sometimes need to run jobs interactively. Such jobs should not be run on the Biowulf login node. Instead, allocate an interactive node as described below, and run the interactive job there.

[user@biowulf]$ sinteractive 

[user@pXXXX]$ cd /data/$USER/myruns

[user@pXXXX]$ module load fastqtools

[user@pXXXX]$ fastq-uniq file.fq
[user@pXXXX]$ exit