High-Performance Computing at the NIH
Csvkit on Biowulf and Helix

Csvkit is a suite of command-line tools for converting to and working with CSV, the king of tabular file formats.

Example data can be copied from /usr/local/apps/csvkit/csvkit_tutorial/ne_1033_data.xlsx

Running on Helix

Sample session:

helix$ module load csvkit
helix$ cd /data/$USER/dir
helix$ curl -L -O https://github.com/onyxfish/csvkit/raw/master/examples/realdata/ne_1033_data.xlsx
helix$ in2csv /data/$USER/dir/ne_1033_data.xlsx
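
in2csv writes the converted CSV to standard output, so in practice you will usually redirect it to a file and then examine it with other csvkit tools. A sketch of a typical follow-on session (the column numbers are illustrative; use csvcut -n to see the actual column names):

helix$ in2csv ne_1033_data.xlsx > ne_1033_data.csv
helix$ csvcut -n ne_1033_data.csv                  # list the column names
helix$ csvcut -c 2,5,6 ne_1033_data.csv | csvlook  # view selected columns as a table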

Submitting a single batch job

1. Create a batch script file (e.g. myscript) containing lines similar to the following:

#!/bin/bash

module load csvkit 
cd /data/$USER/dir 
in2csv /data/$USER/dir/ne_1033_data.xlsx

2. Submit the script on Biowulf.

$ sbatch myscript

See the Biowulf user guide for more sbatch options, such as allocating more memory or a longer walltime if needed.
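
For example, to request more memory and a longer walltime (the values here are illustrative):

$ sbatch --mem=4g --time=8:00:00 myscript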

Submit a swarm of jobs

Using the 'swarm' utility, one can submit many jobs to the cluster to run concurrently.

Set up a swarm command file (e.g. /data/$USER/cmdfile). Here is a sample file:

cd /data/$USER/dir1/run1/; in2csv /data/$USER/dir1/ne_1033_data.xlsx 
cd /data/$USER/dir2/run1/; in2csv /data/$USER/dir2/ne_1033_data.xlsx 
cd /data/$USER/dir3/run1/; in2csv /data/$USER/dir3/ne_1033_data.xlsx  
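
When the runs follow a regular directory layout like the sample above, the command file can be generated with a shell loop rather than written by hand. A minimal sketch (writes cmdfile to the current directory; adjust the dirN/run1 paths to your own layout):

```shell
# Generate one swarm command line per run directory.
# The dirN/run1 layout matches the sample command file above.
for i in 1 2 3; do
    echo "cd /data/$USER/dir${i}/run1/; in2csv /data/$USER/dir${i}/ne_1033_data.xlsx"
done > cmdfile
```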

The -f flag is required to specify the name of the swarm command file.

Submit the swarm job:

$ swarm -f cmdfile --module csvkit

For more information on running swarm, see swarm.html.


Running an interactive job

Users may occasionally need to run jobs interactively. Such jobs should not be run on the Biowulf login node. Instead, allocate an interactive node as described below, and run the interactive job there.

[user@biowulf]$ sinteractive 

[user@pXXXX]$ cd /data/$USER/myruns

[user@pXXXX]$ module load csvkit

[user@pXXXX]$ in2csv /data/$USER/myruns/ne_1033_data.xlsx 
[user@pXXXX]$ exit