Biowulf High Performance Computing at the NIH
m2clust on Biowulf

m2clust detects clusters of features in omics data and scores metadata variables (a resolution score) based on their influence on the clustering. The similarity of features within each cluster can differ (i.e., clusters can have different resolutions). The resolution score takes into account not only the similarity between measurements but also the hierarchical structure of the data and the number of features that group together.
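As a rough illustration of the kind of computation involved, the sketch below runs single-linkage agglomerative clustering on a small pairwise distance matrix. This is not m2clust's actual algorithm (m2clust additionally scores metadata and chooses per-cluster resolutions); it only shows how clusters emerge from a distance matrix like the adist.txt input used below.

```python
# Minimal single-linkage agglomerative clustering on a pairwise
# distance matrix -- an illustration only, NOT m2clust's algorithm.

def single_linkage(dist, n_clusters):
    """Repeatedly merge the two closest clusters until n_clusters remain."""
    clusters = [{i} for i in range(len(dist))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: distance between clusters = closest pair
                d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] |= clusters[b]
        del clusters[b]
    return clusters

# Four items: {0,1} are close, {2,3} are close, the two groups are far apart.
dist = [
    [0.0, 0.1, 0.9, 1.0],
    [0.1, 0.0, 0.8, 0.9],
    [0.9, 0.8, 0.0, 0.2],
    [1.0, 0.9, 0.2, 0.0],
]
print(single_linkage(dist, 2))  # -> [{0, 1}, {2, 3}]
```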

Documentation
Important Notes

Interactive job
Interactive jobs should be used for debugging, graphics, or applications that cannot be run as batch jobs.

Allocate an interactive session and run the program.
Sample session (user input in bold):

[user@biowulf ~]$ sinteractive --cpus-per-task=2 --mem=4g --gres=lscratch:10
salloc.exe: Pending job allocation 51601756
salloc.exe: job 51601756 queued and waiting for resources
salloc.exe: job 51601756 has been allocated resources
salloc.exe: Granted job allocation 51601756
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn0854 are ready for job
srun: error: x11: no local DISPLAY defined, skipping

[user@cn0854 ~]$ cd /lscratch/$SLURM_JOB_ID

[user@cn0854 51601756]$ git clone https://github.com/omicsEye/m2clust.git
Cloning into 'm2clust'...
remote: Enumerating objects: 73, done.
remote: Counting objects: 100% (73/73), done.
remote: Compressing objects: 100% (53/53), done.
remote: Total 73 (delta 31), reused 60 (delta 18), pack-reused 0
Unpacking objects: 100% (73/73), done.

[user@cn0854 51601756]$ module load m2clust
[+] Loading m2clust  0.0.7  on cn0854
[+] Loading singularity  3.5.3  on cn0854

[user@cn0854 51601756]$ cd m2clust/

[user@cn0854 m2clust]$ m2clust \
    -i m2clust_demo/synthetic_data/adist.txt \
    -o demo_output \
    --metadata m2clust_demo/synthetic_data/metadata.txt \
    --plot
[snip...]

[user@cn0854 m2clust]$ ls demo_output/
adist.txt  m2clust.txt  MDS_plot.pdf  PCoA_plot.pdf  t-SNE_plot.pdf

[user@cn0854 m2clust]$ exit
exit
salloc.exe: Relinquishing job allocation 51601756

[user@biowulf ~]$
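The demo inputs used above are tab-delimited text files: adist.txt appears to be a symmetric sample-by-sample distance matrix with row and column IDs, and metadata.txt maps the same sample IDs to metadata variables. This layout is inferred from the demo data and should be verified against the m2clust documentation. A minimal sketch of writing such files for your own data:

```python
# Sketch: write a toy distance matrix and metadata table in the
# tab-delimited layout the m2clust demo data appears to use
# (sample IDs as row/column labels; verify against the m2clust docs).
import csv

samples = ["S1", "S2", "S3"]
dist = [
    [0.0, 0.2, 0.8],
    [0.2, 0.0, 0.7],
    [0.8, 0.7, 0.0],
]

with open("adist.txt", "w", newline="") as f:
    w = csv.writer(f, delimiter="\t")
    w.writerow([""] + samples)        # header row: column IDs
    for s, row in zip(samples, dist):
        w.writerow([s] + row)         # row ID followed by distances

with open("metadata.txt", "w", newline="") as f:
    w = csv.writer(f, delimiter="\t")
    w.writerow(["", "group"])         # one hypothetical metadata variable
    for s, g in zip(samples, ["A", "A", "B"]):
        w.writerow([s, g])
```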

Batch job
Most jobs should be run as batch jobs.

Create a batch input file (e.g. m2clust.sh). For example:

#!/bin/bash
set -e
module load m2clust
# run from a directory containing the m2clust_demo data (e.g. a clone of the m2clust repository)
m2clust \
    -i m2clust_demo/synthetic_data/adist.txt \
    -o demo_output \
    --metadata m2clust_demo/synthetic_data/metadata.txt \
    --plot

Submit this job using the Slurm sbatch command.

sbatch [--cpus-per-task=#] [--mem=#] m2clust.sh
Swarm of Jobs
A swarm of jobs is an easy way to submit a set of independent commands requiring identical resources.

Create a swarmfile (e.g. m2clust.swarm). For example:

m2clust -i adist1.txt -o output1 --metadata metadata1.txt --plot
m2clust -i adist2.txt -o output2 --metadata metadata2.txt --plot
m2clust -i adist3.txt -o output3 --metadata metadata3.txt --plot
m2clust -i adist4.txt -o output4 --metadata metadata4.txt --plot

Submit this job using the swarm command.

swarm -f m2clust.swarm [-g #] [-t #] --module m2clust
where
-g # Number of gigabytes of memory required for each process (1 line in the swarm command file).
-t # Number of threads/CPUs required for each process (1 line in the swarm command file).
--module m2clust Loads the m2clust module for each subjob in the swarm.