From the 10x xeniumranger documentation:
Xenium Ranger provides flexible off-instrument reanalysis of Xenium In Situ data. Relabel transcripts, resegment cells with the latest 10x segmentation algorithms, or import your own segmentation data to assign transcripts to cells.
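The three reanalysis workflows correspond to the xeniumranger subcommands relabel, resegment, and import-segmentation. A minimal sketch of how each is invoked (the bundle path and --id values are placeholders; each workflow also takes workflow-specific required inputs not shown here):

#!/bin/bash
module load xeniumranger

# Relabel transcripts (requires an updated panel input, not shown)
xeniumranger relabel --xenium-bundle /data/$USER/xenium-bundle --id relabel-run

# Resegment cells with the latest 10x segmentation algorithms
xeniumranger resegment --xenium-bundle /data/$USER/xenium-bundle --id reseg-run

# Assign transcripts to cells using your own segmentation (requires
# segmentation inputs, not shown)
xeniumranger import-segmentation --xenium-bundle /data/$USER/xenium-bundle --id import-run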
Please try local mode first and only use slurm mode if local mode does not produce results in a reasonable time frame. In slurm mode, xeniumranger may generate a large number of short jobs even for moderately sized inputs.
Local Mode: xeniumranger can be configured to run all processes on the same compute node by setting --jobmode=local. It is necessary to also specify --localcores and --localmem.
To test xeniumranger interactively and use its web monitoring UI, run it in local mode in an interactive session with tunneling:
[user@biowulf]$ sinteractive --cpus-per-task=16 --mem=64g --tunnel
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job

[user@cn3144]$ module load xeniumranger
[user@cn3144]$ xeniumranger resegment \
    --jobmode=local \
    --localcores=12 \
    --localmem=60 \
    --xenium-bundle /data/$USER/xenium-bundle \
    --uiport $PORT1 \
    --id test

[user@cn3144]$ exit
salloc.exe: Relinquishing job allocation 46116226
[user@biowulf]$
xeniumranger batch jobs can be configured to run either in local mode or in cluster mode (job submission mode). Please disable the web monitoring UI for batch jobs with the --disable-ui option.
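For a local-mode batch job, all processes run inside a single allocation. A minimal sketch of such a batch script (the bundle path, --id, and script name are placeholders; --localcores and --localmem should match the sbatch allocation):

#!/bin/bash
# Local-mode batch script (sketch): the whole pipeline runs on one node.
module load xeniumranger
xeniumranger resegment \
    --jobmode=local \
    --localcores=16 \
    --localmem=60 \
    --xenium-bundle /data/$USER/xenium-bundle \
    --id reseg-local \
    --disable-ui

Submit it with, for example, sbatch --cpus-per-task=16 --mem=64g xeniumranger-local.sh.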
Cluster Mode: To run xeniumranger in batch mode, where different processes are run as independent jobs, set --jobmode=slurm.
Once submitted, the controlling or orchestrator job will automatically submit
additional jobs. This controlling job doesn't require a large amount of
resources; you only need to request a long enough wall time.
Read the full documentation for cluster mode here
You should specify the following options:
--mempercore, which sets how much memory each CPU has access to; for jobs that need to scale, the number of CPUs requested is derived from this value. More information here.
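To make the scaling concrete, here is a small sketch of the arithmetic (an assumption based on how Martian-based 10x pipelines behave: the CPU count is the stage's memory requirement divided by --mempercore, rounded up; the 30 GB stage requirement is hypothetical):

```shell
mem_gb=30        # hypothetical memory requirement of one pipeline stage
mempercore=8     # value passed via --mempercore
# Ceiling division: CPUs requested so that cpus * mempercore >= mem_gb
cpus=$(( (mem_gb + mempercore - 1) / mempercore ))
echo "$cpus"     # 30 GB at 8 GB per core -> 4 CPUs requested
```

With --mempercore 8, a stage needing 30 GB would therefore be submitted as a 4-CPU job.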
--maxjobs which controls how many jobs are submitted
simultaneously. Keep this to a minimum when testing your pipeline.
--jobinterval, which controls the delay between job submissions, in milliseconds. We recommend at least 30 seconds (30000 ms).
Here's an example. Create a batch input file (e.g. xeniumranger.sh):
#!/bin/bash
module load xeniumranger
xeniumranger resegment \
    --xenium-bundle /data/$USER/xenium-bundle \
    --id id \
    --localcores=8 \
    --localmem=15 \
    --jobmode slurm \
    --mempercore 8 \
    --maxjobs 20 \
    --jobinterval 30000 \
    --disable-ui
Submit this job using the Slurm sbatch command. For example:
sbatch --cpus-per-task=8 --mem=16g xeniumranger.sh
Note that in the above example, the 8 CPUs and 16 GB of memory are allocated to the controlling xeniumranger job; the jobs it submits request their own resources.
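While the pipeline runs, you can watch the controlling job and the short-lived stage jobs it submits with standard Slurm tools:

[user@biowulf]$ squeue -u $USER

On success, results are written under the directory named by --id, following the usual 10x pipeline output layout.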