High-Performance Computing at the NIH
Singularity

Extreme Mobility of Compute

Singularity containers let users run applications in a Linux environment of their choosing.

Possible uses for Singularity on Biowulf:

  • Run an application built for a different Linux distribution than the host operating system.
  • Reproduce an environment created by someone else in order to re-run their workflow.
  • Package a pipeline of applications into a single, portable, reproducible image.

Singularity is being actively developed at Lawrence Berkeley National Laboratory (LBL).

Web sites
Definition files written by the NIH HPC staff

Please Note:
Singularity gives you the ability to install and run applications in your own Linux environment with your own customized software stack. With this ability comes the added responsibility of managing your own Linux environment. While the NIH HPC staff can provide guidance on how to create and use singularity containers, we do not have the resources to manage containers for individual users. If you decide to use Singularity, it is your responsibility to build and manage your own containers.

Creating Singularity containers

To use Singularity on Biowulf, you either need to create your own Singularity container, or use one created by someone else. To build a Singularity container, you need root access to the build system. Thus, you cannot build a Singularity container on Helix or Biowulf. Possible options are:

  • a Linux workstation or laptop on which you have root privileges
  • a Linux virtual machine running on your local system
  • a cloud-based Linux instance (such as the Google Cloud VM used in the demos below)

Depending on your environment and the type of Singularity container you want to build, you may need to install some dependencies before installing and/or using Singularity. For instance, the following may need to be installed on Ubuntu for Singularity to build and run properly. (user input in bold)

[user@someUbuntu ~]$ sudo apt-get install build-essential debootstrap yum dh-autoreconf
On CentOS, these commands will provide some needed dependencies for Singularity:
[user@someCentos ~]$ sudo yum groupinstall 'Development Tools'
[user@someCentos ~]$ sudo yum install wget epel-release
[user@someCentos ~]$ sudo yum install debootstrap.noarch

You can find more information about installing Singularity on your Linux build system at the LBL Singularity website. Because Singularity is being rapidly developed, we recommend downloading and installing the latest release from GitHub.
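
For reference, a typical from-source installation of Singularity 2.x on a build system looks something like the sketch below. This is only a sketch: the repository URL and configure options should be checked against the Singularity documentation for the version you are installing.

[user@someBuildSystem ~]$ git clone https://github.com/singularityware/singularity.git
[user@someBuildSystem ~]$ cd singularity
[user@someBuildSystem singularity]$ ./autogen.sh
[user@someBuildSystem singularity]$ ./configure --prefix=/usr/local
[user@someBuildSystem singularity]$ make
[user@someBuildSystem singularity]$ sudo make install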

You also need a definition file to build a Singularity container from scratch. You can find some simple definition files for a variety of Linux distributions in the /examples directory of the source code, or on Biowulf at /usr/local/src/singularity/2.2/examples. A small list of definition files containing popular applications is also linked at the top of this page. Detailed documentation about building Singularity container images is available at the LBL Singularity website.
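
Once you have a definition file and a build system with root access, the basic Singularity 2.x workflow is to create an empty image and then bootstrap it from the definition file. A minimal sketch (the image size and file names are illustrative):

[user@someBuildSystem ~]$ sudo singularity create -s 1024 my_container.img

[user@someBuildSystem ~]$ sudo singularity bootstrap my_container.img my_container.def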

Expand the tab below to watch a quick demo showing how to install Singularity on a build system (here it's Google cloud), create a container, and run your container on Biowulf. Use space to pause, f to go fullscreen, and the directional keys to fast-forward or rewind. You can also copy and paste text directly from this demo into a terminal.

Singularity installation and use overview demo

Binding external directories

Binding a directory to your Singularity container allows you to access files in a host system directory from within your container. By default, Singularity will bind your /home/$USER directory and your current working directory (along with a few other directories such as /tmp and /dev). You can also bind other directories into your Singularity container yourself. To do this, you must do two things: create the corresponding bind point (directory) inside your container, and tell Singularity to bind the host directory when you run the container (for example, with the --bind option).

Let's say you want to bind /lscratch into a singularity container. You can either create bind points in your singularity container during the bootstrap procedure within your .def file, or you can edit your container as root on your build system after you have completed the bootstrap process.

For instance, if you decided to create the bind point in your .def file it would look something like this:

BootStrap: debootstrap
OSVersion: xenial
MirrorURL: http://us.archive.ubuntu.com/ubuntu/

%post
    # create bind points for NIH HPC environment
    mkdir /lscratch

Or, if you have already bootstrapped your Singularity container, you can create the /lscratch bind point in a root owned Singularity shell on your build system like so. (user input in bold)

[user@someBuildSystem ~]$ singularity shell -w my_container.img

[user@my_container ~]$ mkdir /lscratch

Then, when you run your container on Biowulf, use the --bind option like so:

[user@cn1234 ~]$ singularity shell --bind /lscratch my_container.img

The --bind option also works with the run and exec Singularity commands. You can also bind a directory on the host system to a directory with a different name in your Singularity container using the following syntax:

[user@cn1234 ~]$ singularity shell --bind /foo:/bar my_container.img
And you can bind multiple directories in a single command with this syntax:
[user@cn1234 ~]$ singularity shell --bind /us:/them,/me:/you,/black:/blue my_container.img 
Finally, you can use the environment variable $SINGULARITY_BINDPATH to bind host directories to container directories:
[user@cn1234 ~]$ export SINGULARITY_BINDPATH="/lscratch,/fdb:/myfdb"
This means "bind the host /lscratch to /lscratch in my container, and bind the host /fdb to the directory called /myfdb in my containter". Using environmental variables, you can bind directories even when you are running your container as an executable file. If you bind a lot of directories into your singularity container and they don't change, you could even put this variable in your .bashrc file.

This process is further documented at the LBL Singularity website.

Symbolic link directories (e.g. /data and /scratch)

Binding some directories can be more complicated because we use symbolic links to refer to network storage systems on Biowulf. Let's say you want to bind your /data directory. First, you must know which volume contains your data directory. You can get this information with the following command:

[godlovedc@helix ~]$ ls -l /data/$USER
lrwxrwxrwx 1 root root 22 Jan 15  2016 /data/godlovedc -> /spin1/users/godlovedc

My data directory is actually a symbolic link to /spin1, so I need to bind both /data and /spin1.

I am also a member of a group with a shared directory.

[godlovedc@helix ~]$ ls -l /data/NIF
lrwxrwxrwx 1 root root 14 Aug 12  2014 /data/NIF -> /gs3/users/NIF

It is hosted on the gs3 volume of the GPFS filesystem. But where is gs3?

[godlovedc@helix ~]$ ls -l /gs3
lrwxrwxrwx. 1 root root 11 Oct  7  2014 /gs3 -> /gpfs/gsfs3

gs3 is actually a symbolic link to /gpfs/gsfs3. So for this data directory to bind properly within a Singularity container, I must bind /gpfs, /gs3, and /data. If you are unsure which directories you need to bind for your /data and /scratch directories to appear properly, you can cover the most commonly used directories by creating the following mount points in your Singularity container:

[user@someBuildSystem ~]$ mkdir /gpfs /gs2 /gs3 /gs4 /gs5 /gs6 /spin1 /data /scratch /fdb /lscratch

You can then set your $SINGULARITY_BINDPATH variable like so (either in the terminal or in your .bashrc file for permanence):

[user@cn0123 ~]$ export SINGULARITY_BINDPATH="/gpfs,/gs2,/gs3,/gs4,/gs5,/gs6,/spin1,/data,/scratch,/fdb,/lscratch"
This approach guarantees that your /data directories and other commonly used directories will all be bound within your container. Binding every link in the chain is also robust: your /data directory will remain accessible even if it is moved to a different volume.
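
If you would rather track down exactly which volumes a particular directory needs, readlink will resolve the entire chain of symbolic links in one step. A sketch using the shared directory from the example above (your output will differ):

[godlovedc@helix ~]$ readlink -f /data/NIF
/gpfs/gsfs3/users/NIF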

Expand the tab below to watch a quick demo on binding host system directories within a Singularity container.

Singularity binding directories demo

Interactive Singularity containers

Singularity cannot be run on Helix or the Biowulf login node.

To run a Singularity container image on Biowulf interactively, you need to allocate an interactive session, and load the Singularity module. In this sample session (user input in bold), an Ubuntu 16.04 Singularity container is started, and python is run. Note that in this example, you would be running the version of python that is installed within the Singularity container, not the version on Helix/Biowulf.

[susanc@biowulf ~]$ sinteractive --cpus-per-task=4 --mem=10g
salloc.exe: Pending job allocation 23562157
salloc.exe: job 23562157 queued and waiting for resources
salloc.exe: job 23562157 has been allocated resources
salloc.exe: Granted job allocation 23562157
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn2723 are ready for job

[susanc@cn2723 ~]$ module load singularity
[+] Loading singularity 2.1.2 on cn2723

[susanc@cn2723 ~]$ singularity shell /data/susanc/singularity/Ubuntu-16.04.img

Singularity.Ubuntu-16.04.img> $ pwd
/home/susanc

Singularity.Ubuntu-16.04.img> $ which python
/usr/bin/python

Singularity.Ubuntu-16.04.img> $ python --version
Python 2.7.6

Singularity.Ubuntu-16.04.img> $ exit

[susanc@cn2723 ~]$ exit
exit
salloc.exe: Relinquishing job allocation 23562157
[susanc@biowulf ~]$
Note that you need to exit your Singularity container as well as your allocated interactive Slurm session when you are done.
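
If you only need to run a single command rather than an interactive shell, singularity exec (used again in the batch example below) will run it and return in one step. A sketch using the same container:

[susanc@cn2723 ~]$ singularity exec /data/susanc/singularity/Ubuntu-16.04.img python --version
Python 2.7.6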

Expand the tab below to view a demo of interactive Singularity usage.

Singularity interactive container demo

Singularity containers in batch

In this example, singularity will be used to run a TensorFlow example in an Ubuntu 16.04 container. (User input in bold).

First, create a container image on a machine where you have root privileges. These commands were run on a Google Cloud VM instance running an Ubuntu 16.04 image, and the Singularity container was created using this definition file that includes a TensorFlow installation.

[user@someCloud ~]$ sudo singularity create -s 1500 ubuntu_w_TFlow.img

[user@someCloud ~]$ sudo singularity bootstrap ubuntu_w_TFlow.img ubuntu_w_TFlow.def

Next, copy the TensorFlow script that you want to run into the container, and move the container to Biowulf. In this case, this example script from the TensorFlow website was copied to /usr/bin inside of the container, and the container was moved to the user's data directory.

[user@someCloud ~]$ sudo singularity copy ubuntu_w_TFlow.img TFlow_example.py /usr/bin/ 

[user@someCloud ~]$ scp ubuntu_w_Tflow.img user@biowulf.nih.gov:/data/user 

Then ssh to Biowulf and write a batch script similar to this one to run the singularity command:

#!/bin/bash
# file called myjob.batch

module load singularity
cd /data/user
singularity exec ubuntu_w_TFlow.img python /usr/bin/TFlow_example.py

Submit the job like so:

[user@biowulf ~]$ sbatch myjob.batch

After the job finishes executing you should see the following output in the slurm*.out file.

[+] Loading singularity 2.1.2 on cn2725
(0, array([-0.39398459], dtype=float32), array([ 0.78525567], dtype=float32))
(20, array([-0.05549375], dtype=float32), array([ 0.38339305], dtype=float32))
(40, array([ 0.05872268], dtype=float32), array([ 0.3221375], dtype=float32))
(60, array([ 0.08904253], dtype=float32), array([ 0.30587664], dtype=float32))
(80, array([ 0.09709124], dtype=float32), array([ 0.30156001], dtype=float32))
(100, array([ 0.09922785], dtype=float32), array([ 0.30041414], dtype=float32))
(120, array([ 0.09979502], dtype=float32), array([ 0.30010995], dtype=float32))
(140, array([ 0.09994559], dtype=float32), array([ 0.30002919], dtype=float32))
(160, array([ 0.09998555], dtype=float32), array([ 0.30000776], dtype=float32))
(180, array([ 0.09999616], dtype=float32), array([ 0.30000207], dtype=float32))
(200, array([ 0.09999899], dtype=float32), array([ 0.30000055], dtype=float32))
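
If your containerized job needs specific resources, you can request them with #SBATCH directives in the same batch script. A sketch, reusing the resource values from the interactive example above (adjust them to your job's needs):

#!/bin/bash
# file called myjob.batch
#SBATCH --cpus-per-task=4
#SBATCH --mem=10g

module load singularity
cd /data/user
singularity exec ubuntu_w_TFlow.img python /usr/bin/TFlow_example.py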

Expand the tab below to watch a quick demo of Singularity in batch mode.

Singularity containers in batch demo

Singularity containers on GPU nodes

If you want to build a Singularity container image that can run applications on Biowulf GPU nodes, you must first install the NVIDIA driver binaries and libraries (and any other libraries your application needs, such as CUDA and cuDNN) within the container.

For your convenience, the NIH HPC staff maintains an installation script (called gpu4singularity) that automates this process. It has been tested with Ubuntu 16.04 and CentOS 7. You can either copy or download gpu4singularity into an existing container and execute it with root privileges, or you can add the following lines of code to your .def file and install the NVIDIA driver binaries/libraries during the bootstrap procedure.
wget ftp://helix.nih.gov/CUDA/gpu4singularity
chmod 755 gpu4singularity
./gpu4singularity
rm gpu4singularity   
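
The first option (running the script inside an existing container) might look like the sketch below, using a writable root shell as in the binding section above. The image name is illustrative, and wget is assumed to be installed inside the container.

[user@someBuildSystem ~]$ sudo singularity shell -w my_container.img

Singularity.my_container.img> wget ftp://helix.nih.gov/CUDA/gpu4singularity
Singularity.my_container.img> chmod 755 gpu4singularity
Singularity.my_container.img> ./gpu4singularity
Singularity.my_container.img> rm gpu4singularity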

Please Note:
gpu4singularity was previously called cuda4singularity because it downloaded and installed CUDA and cuDNN in addition to the NVIDIA driver. You can still access and use cuda4singularity if you are on the NIH LAN or VPN, but the NVIDIA license agreement does not allow us to distribute cuDNN outside of the NIH HPC community. Instead, we recommend that users bootstrap images from the official NVIDIA/CUDA Docker Hub registry as in the example below.

Note that it may violate the NVIDIA license agreement to distribute Singularity images containing cuDNN outside of your organization. As a rule, users must be aware of and adhere to any regulations imposed on redistributing software as these may also apply to software redistributed within containers.

The following .def file could be used to create an Ubuntu 16.04 image with the NVIDIA driver, CUDA, and cuDNN libraries suitable for running TensorFlow on a GPU node. (Note the section that installs the NVIDIA driver via gpu4singularity in bold).
BootStrap: docker
From: nvidia/cuda:8.0-cudnn5-devel

%setup
    # commands to be executed on host outside container during bootstrap

%post
    # commands to be executed inside container during bootstrap

    # add universe repo and install some packages
    sed -i '/xenial.*universe/s/^#//g' /etc/apt/sources.list
    locale-gen en_US.UTF-8
    apt-get -y update
    apt-get -y install vim wget perl python python-pip python-dev

    # create bind points for NIH HPC environment
    mkdir /gpfs /spin1 /gs2 /gs3 /gs4 /gs5 /gs6 /data /scratch /fdb /lscratch

    # download and run NIH HPC NVIDIA driver installer
    wget ftp://helix.nih.gov/CUDA/gpu4singularity
    chmod 755 gpu4singularity
    ./gpu4singularity --verbose
    rm gpu4singularity
 
    # install tensorflow
    pip install --upgrade pip
    pip install tensorflow-gpu
 
%runscript
    # commands to be executed when the container runs
 
%test
    # commands to be executed within container at close of bootstrap process

If you prefer CentOS you could use this .def file instead.
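
For reference, the top of a CentOS-based definition file might look something like the sketch below. The Docker Hub tag is an assumption and should be checked against the tags NVIDIA actually publishes, and the package installation switches from apt-get to yum.

BootStrap: docker
From: nvidia/cuda:8.0-cudnn5-devel-centos7

%post
    # assumed CentOS variant of the Ubuntu recipe above
    yum -y install epel-release
    yum -y install vim wget perl python python-pip python-devel

    # create bind points and install the NVIDIA driver exactly as in the Ubuntu example
    mkdir /gpfs /spin1 /gs2 /gs3 /gs4 /gs5 /gs6 /data /scratch /fdb /lscratch
    wget ftp://helix.nih.gov/CUDA/gpu4singularity
    chmod 755 gpu4singularity
    ./gpu4singularity --verbose
    rm gpu4singularity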

After creating a container with one of these files, you can copy it to Biowulf and test it like so. (User input in bold.)

[user@biowulf ~]$ sinteractive --constraint=gpuk20x --gres=gpu:k20x:1
salloc.exe: Pending job allocation 24315111
salloc.exe: job 24315111 queued and waiting for resources
salloc.exe: job 24315111 has been allocated resources
salloc.exe: Granted job allocation 24315111
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn0619 are ready for job
srun: error: x11: no local DISPLAY defined, skipping

[user@cn0619 ~]$ module load singularity
[+] Loading singularity 2.2.1 on cn0619

[user@cn0619 ~]$ singularity shell gpu.img

Singularity.gpu.img> nvidia-smi
Mon Sep 26 20:15:05 2016       
+------------------------------------------------------+                       
| NVIDIA-SMI 352.39     Driver Version: 352.39         |                       
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K20Xm         Off  | 0000:08:00.0     Off |                    0 |
| N/A   33C    P8    30W / 235W |     12MiB /  5759MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K20Xm         Off  | 0000:27:00.0     Off |                    0 |
| N/A   40C    P0    65W / 235W |    100MiB /  5759MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    1     36760    C   python                                          84MiB |
+-----------------------------------------------------------------------------+

Singularity.gpu.img> python -m tensorflow.models.image.mnist.convolutional
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties: 
name: Tesla K20Xm
major: 3 minor: 5 memoryClockRate (GHz) 0.732
pciBusID 0000:08:00.0
Total memory: 5.62GiB
Free memory: 5.54GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K20Xm, pci bus id: 0000:08:00.0)
Initialized!
Step 0 (epoch 0.00), 49.4 ms
Minibatch loss: 12.054, learning rate: 0.010000
Minibatch error: 90.6%
Validation error: 84.6%
Step 100 (epoch 0.12), 14.7 ms
Minibatch loss: 3.278, learning rate: 0.010000
Minibatch error: 6.2%
Validation error: 6.9%
[...snip...]
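
A GPU container can also be run non-interactively by combining the batch approach above with a GPU allocation. A hypothetical sketch: the batch script below reuses the container from this section, and the partition name and gres specification are assumptions modeled on the interactive allocation above, so check them against the current Biowulf GPU documentation.

#!/bin/bash
# file called gpujob.batch (hypothetical)
module load singularity
cd /data/user
singularity exec gpu.img python -m tensorflow.models.image.mnist.convolutional

[user@biowulf ~]$ sbatch --partition=gpu --gres=gpu:k20x:1 gpujob.batch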

Expand the tab below to see a demo of installing and using GPU support in a Singularity container.

Using the GPU demo

Using Docker containers with Singularity

Singularity can import, bootstrap, and even run Docker images directly from Docker Hub. For instance, the following commands will start an Ubuntu container running on a compute node with no need for a definition file or container image!

[user@cn0123 ~]$ module load singularity

[user@cn0123 ~]$ singularity shell docker://ubuntu:latest
library/ubuntu:latest
Downloading layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
Downloading layer: sha256:9332eaf1a55b72fb779d2f249b65855c623c8ce7be83c822b7d80115ef5a3af3
Downloading layer: sha256:47b5e16c0811b08c1cf3198fa5ac0b920946ac538a0a0030627d19763e2fa212
Downloading layer: sha256:e931b117db38a05b9d0bbd28ca99a0abe5236a0026d88b3db804f520e59977ec
Downloading layer: sha256:8f9757b472e7962a4304d4af61630e2cde66129218135b4093a43b9db8942c34
Downloading layer: sha256:af49a5ceb2a56a8232402f5868cdb13dfdae5d66a62955a73e647e16e9f30a63
Singularity: Invoking an interactive shell within container...

Singularity.ubuntu:latest> cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.1 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.1 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
This feature gives you instant access to 100,000+ pre-built container images. You can run one of these containers without modification, or use it as the starting point for your own container in a definition file. When using container images from Docker Hub, you needn't worry about which bootstrap module to use. See the LBL Singularity webpage for detailed information.
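
For example, a single command can be run against a Docker Hub image with singularity exec, without building anything first; a sketch (the image is pulled and converted on the fly):

[user@cn0123 ~]$ singularity exec docker://ubuntu:latest cat /etc/os-release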

Note that Docker integration in Singularity is under active development. As of 2.2 it is not fully functional, so users are strongly encouraged to install the latest Singularity code from GitHub on their build system.

Docker containers can also be exported directly to a Singularity container image. This may be useful for users who already have an existing Docker container or who find it convenient to build containers in Docker. Please refer to the LBL Singularity webpage for more details on converting an existing Docker image to a Singularity container using the docker2singularity.sh tool.

In this example, we will create a Singularity container image wrapping a number of RNASeq tools. This would allow us to write a pipeline with, for example, Snakemake and distribute it along with the image to create an easily shared, reproducible workflow. Rather than bootstrapping our own Linux image from a distribution repository, we will use a miniconda3 image directly from Docker Hub as our base container and install the RNASeq tools directly into it. Finally, we'll write a runscript enabling us to treat our container like an executable. The following .def file takes care of this for us.

BootStrap: docker
From: continuumio/miniconda3:latest
IncludeCmd: yes

%post
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# this will install all necessary packages and prepare the container
    apt-get -y update
    apt-get -y install make gcc zlib1g-dev libncurses5-dev
    wget https://github.com/samtools/samtools/releases/download/1.3.1/samtools-1.3.1.tar.bz2 \
        && tar -xjf samtools-1.3.1.tar.bz2 \
        && cd samtools-1.3.1 \
        && make \
        && make prefix=/usr/local install
    export PATH=/opt/conda/bin:$PATH
    conda install --yes -c bioconda \
        star=2.5.2b \
        sailfish=0.10.1 \
        fastqc=0.11.5 \
        kallisto=0.43.0 \
        subread=1.5.0.post3
    conda clean --index-cache --tarballs --packages --yes
    mkdir /gpfs /spin1 /gs2 /gs3 /gs4 /gs5 /gs6 /data /scratch /fdb /lscratch

%runscript
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# this text will get copied to /singularity and will run whenever the container
# is called as an executable
function usage() {
    cat <<EOF
NAME
    rnaseq - rnaseq pipeline tools 0.1
SYNOPSIS
    rnaseq tool [tool options]
    rnaseq list
    rnaseq help
DESCRIPTION
    Singularity container with tools to build rnaseq pipeline. 
EOF
}

function tools() {
    echo "conda: $(which conda)"
    echo "---------------------------------------------------------------"
    conda list
    echo "---------------------------------------------------------------"
    echo "samtools: $(samtools --version | head -n1)"
}

export PATH="/opt/conda/bin:/usr/local/bin:/usr/bin:/bin"
unset CONDA_DEFAULT_ENV
export ANACONDA_HOME=/opt/conda

arg="${1:-none}"

case "$arg" in
    none) usage; exit 1;;
    help) usage; exit 0;;
    list) tools; exit 0;;
    # just try to execute it then
    *)    "$@";;
esac

Assuming this file is called rnaseq.def, we can create a Singularity container called rnaseq on our build system with the following commands:

[user@some_build_system ~]$ sudo singularity create -s 1600 -F rnaseq

[user@some_build_system ~]$ sudo singularity bootstrap rnaseq rnaseq.def

This image contains miniconda3 and our rnaseq tools and can be called directly as an executable like so:

[user@some_build_system ~]$ ./rnaseq help
NAME
    rnaseq - rnaseq pipeline tools 0.1
SYNOPSIS
    rnaseq snakemake [snakemake options]
    rnaseq list
    rnaseq help
DESCRIPTION
    Singularity container with tools to build rnaseq pipeline. 

[user@some_build_system ~]$ ./rnaseq list
conda: /opt/conda/bin/conda
---------------------------------------------------------------
# packages in environment at /opt/conda:
#
fastqc                    0.11.5                        1    bioconda
java-jdk                  8.0.92                        1    bioconda
kallisto                  0.43.0                        1    bioconda
sailfish                  0.10.1              boost1.60_1    bioconda
[...snip...]

[user@some_build_system ~]$ ./rnaseq samtools --version
samtools 1.3.1
Using htslib 1.3.1
Copyright (C) 2016 Genome Research Ltd.

After copying the image to the NIH HPC systems, allocate an sinteractive session and test it there.
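
The copy and allocation steps might look something like this (a sketch; the target directory is illustrative):

[user@some_build_system ~]$ scp rnaseq user@biowulf.nih.gov:/data/user

[user@biowulf ~]$ sinteractive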

[user@cn1234 ~]$ module load singularity
[user@cn1234 ~]$ ./rnaseq list
conda: /opt/conda/bin/conda
---------------------------------------------------------------
# packages in environment at /opt/conda:
#
fastqc                    0.11.5                        1    bioconda
java-jdk                  8.0.92                        1    bioconda
kallisto                  0.43.0                        1    bioconda
sailfish                  0.10.1              boost1.60_1    bioconda
[...snip...]

This could be used with a Snakemake file like this:

rule fastqc:
    input: "{sample}.fq.gz"
    output: "{sample}.fastqc.html"
    shell: 
        """
        module load singularity
        ./rnaseq fastqc ... {input}
        """

rule align:
    input: "{sample}.fq.gz"
    output: "{sample}.bam"
    shell: 
        """
        module load singularity
        ./rnaseq STAR ....
        """

Expand the tab below to see an example of creating a Singularity container to be used as an executable from a Docker image on DockerHub.

Singularity with Docker demo
  • space - play / pause
  • f - toggle fullscreen mode
  • arrow keys(←/→) - rewind 5 seconds / fast-forward 5 seconds
  • 0, 1, 2 ... 9 - jump to 0%, 10%, 20% ... 90%
  • copy and paste text from movie

Documentation