Drop-seq on Biowulf and Helix

Drop-seq is a technology that allows biologists to analyze genome-wide gene expression in thousands of individual cells in a single experiment.  


Running on Helix

Sample session:

helix$ module load dropseq
helix$ cd /data/$USER/dir
helix$ BAMTagHistogram -- -h
USAGE: BAMTagHistogram [options]

Create a histogram of values for the given tag
Version: 1.0(a568873_1439010606)


Options:

--help
-h                            Displays options specific to this tool.

--stdhelp
-H                            Displays options specific to this tool AND options common to all Picard command line 
                              tools.

--version                     Displays program version.

INPUT=File
I=File                        The input SAM or BAM file to analyze.  Must be coordinate sorted. (???)  Required. 

OUTPUT=File
O=File                        Output file of histogram of tag value frequencies. This supports zipped formats like gz 
                              and bz2.  Required. 

TAG=String                    Tag to extract  Required. 

FILTER_PCR_DUPLICATES=Boolean Filter PCR Duplicates.  Default value: false. This option can be set to 'null' to clear 
                              the default value. Possible values: {true, false} 

READ_QUALITY=Integer          Read quality filter.  Filters all reads lower than this mapping quality.  Defaults to 10.  
                              Set to 0 to not filter reads by map quality.  Default value: 10. This option can be set 
                              to 'null' to clear the default value. 
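
Based on the options above, a complete BAMTagHistogram command might look like the following. The file names and the XC tag (a cell-barcode tag commonly used for Drop-seq data) are placeholders only; substitute values appropriate to your data.

helix$ BAMTagHistogram I=mydata.bam O=cell_readcounts.txt.gz TAG=XC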

Submitting a single batch job

1. Create a script file (e.g. myscript). The file should contain lines similar to those below.

#! /bin/bash 
#SBATCH --mail-type=BEGIN,END,FAIL 

module load dropseq 
cd /data/$USER/dir 
BAMTagHistogram I=file1 O=file2 TAG=tag ....
....
....

2. Submit the script on Biowulf.

$ sbatch myscript

See the Biowulf user guide for more options, such as allocating more memory or requesting a longer walltime if needed.
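
For example, more memory and a longer walltime can be requested at submission time; the values below are only illustrative:

$ sbatch --mem=8g --time=24:00:00 myscript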

Submit a swarm of jobs

Using the 'swarm' utility, one can submit many jobs to the cluster to run concurrently.

Set up a swarm command file (e.g. /data/$USER/cmdfile). Here is a sample file:

cd /data/user/run1/; BAMTagHistogram I=file1 O=file2 TAG=tag
cd /data/user/run2/; BAMTagHistogram I=file1 O=file2 TAG=tag
cd /data/user/run3/; BAMTagHistogram I=file1 O=file2 TAG=tag
........

The -f flag is required to specify the name of the swarm command file.

Submit the swarm job:

$ swarm -f cmdfile --module dropseq

For more information regarding running swarm, see swarm.html
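
If each command in the swarm file needs more than the default memory per process, it can be requested with swarm's -g flag (GB per process); the value below is only illustrative:

$ swarm -f cmdfile -g 8 --module dropseq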

 

Running an interactive job

Users may sometimes need to run jobs interactively. Such jobs should not be run on the Biowulf login node. Instead, allocate an interactive node as described below and run the interactive job there.

[user@biowulf]$ sinteractive 

[user@pXXXX]$ cd /data/$USER/myruns

[user@pXXXX]$ module load dropseq

[user@pXXXX]$ BAMTagHistogram I=file1 O=file2 TAG=tag
[user@pXXXX]$ exit
[user@biowulf]$ 
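
If the interactive job needs more than the default resources, request them when allocating the session; the values below are only illustrative:

[user@biowulf]$ sinteractive --mem=8g --cpus-per-task=4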

Documentation

http://mccarrolllab.com/dropseq/