LAST finds regions of similarity between sequences, and is designed for moderately large data (e.g. genomes, DNA reads, proteomes).
Allocate an interactive session and run the program.
Sample session:

[user@biowulf]$ sinteractive
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job

[user@cn3144 ~]$ module load last
[user@cn3144 ~]$ lastdb humdb $LAST_HOME/examples/humanMito.fa
[user@cn3144 ~]$ lastal humdb $LAST_HOME/examples/fuguMito.fa > myalns.maf

[user@cn3144 ~]$ exit
salloc.exe: Relinquishing job allocation 46116226
[user@biowulf ~]$
Create a batch input file (e.g. last.sh). For example:
#!/bin/bash
set -e
module load last
lastdb humdb $LAST_HOME/examples/humanMito.fa
lastal humdb $LAST_HOME/examples/fuguMito.fa > myalns.maf
Submit this job using the Slurm sbatch command.
sbatch [--cpus-per-task=#] [--mem=#] last.sh
Create a swarmfile (e.g. last.swarm). For example:
lastal -P $SLURM_CPUS_PER_TASK humdb sample1.fa > s1.maf
lastal -P $SLURM_CPUS_PER_TASK humdb sample2.fa > s2.maf
lastal -P $SLURM_CPUS_PER_TASK humdb sample3.fa > s3.maf
lastal -P $SLURM_CPUS_PER_TASK humdb sample4.fa > s4.maf
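For many query files, the swarmfile can be generated with a short loop rather than written by hand. A sketch assuming queries named sample*.fa in the current directory and a database called humdb:

```shell
# Write one lastal command per query FASTA into last.swarm.
# The backslash keeps $SLURM_CPUS_PER_TASK literal so it is
# expanded later, inside each swarm subjob.
for f in sample*.fa; do
  echo "lastal -P \$SLURM_CPUS_PER_TASK humdb $f > ${f%.fa}.maf"
done > last.swarm
```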
Submit this job using the swarm command.
swarm -f last.swarm [-g #] [-t #] --module last
where
-g #          | Number of Gigabytes of memory required for each process (1 line in the swarm command file)
-t #          | Number of threads/CPUs required for each process (1 line in the swarm command file)
--module last | Loads the last module for each subjob in the swarm