Using the Biowulf visual partition to visualize remote data

In some instances, it may be useful or necessary to visualize data directly on Biowulf. Perhaps there is too much data to reasonably copy to a local resource, or the data must be visualized as part of the analysis. The NIH HPC has set aside a small number of GPU nodes as a dedicated visual partition. These nodes can be accessed via the svis command.


TurboVNC local client installation

You will need to use TurboVNC to take advantage of graphics hardware acceleration on the visualization partition.

Note: Installation of TurboVNC may require administrator privileges. If you do not have administrator privileges, you may need to ask your desktop support team to install TurboVNC for you.


Windows

  1. Download the latest TurboVNC here.
  2. Click on the downloaded .exe file and then click "Run".
  3. Enter your administrator credentials when prompted to do so.
  4. Follow the on-screen prompts to complete the installation.


Mac

  1. Download the latest TurboVNC here.
  2. Click on the .dmg file to install. You may see a warning like 'Package cannot be opened because Apple cannot check it for malicious software' -- click OK and continue the installation. If the installer still will not open, go to System Preferences -> Security & Privacy -> Open Anyway.
  3. When you start TurboVNC, you might see a 'JRE Load Error'. This means that you need to install Java before TurboVNC will run.

Allocating and connecting to a visualization session

Connecting to a visualization session is a three-step process.

  1. Connect to the Biowulf login node with ssh and execute the svis command.
  2. Create an ssh tunnel from your local workstation to the Biowulf login node.
  3. Open the TurboVNC viewer on your local workstation (installed via the instructions above), point it at your tunnel, and authenticate.

Troubleshooting: "Unable to contact settings server" error with black screen:
If you see a black screen and an error referencing a failure to connect to a /tmp/dbus-***** socket, you probably had a conda environment activated when you executed the svis command. Exit the session and double-check that no conda environment is active and that nothing in your ~/.bashrc automatically activates one.
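A quick way to check before launching a session is something like the following (a minimal sketch; CONDA_DEFAULT_ENV is the environment variable conda conventionally sets for the active environment):

[user@biowulf ~]$ echo $CONDA_DEFAULT_ENV    # prints nothing if no environment is active
[user@biowulf ~]$ conda deactivate           # repeat until the (env) prefix disappears from your prompt
[user@biowulf ~]$ grep -n conda ~/.bashrc    # look for lines that auto-activate an environment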

Detailed instructions for each step follow:


Step 1. Connect to the Biowulf login node (using one of the methods detailed here) and execute the svis command.

Note that svis will accept and pass some options to Slurm, but the vast majority of users will want to execute this command with no options. This command will allocate you an entire visualization node, so there is no reason to pass options specifying CPUs, memory, lscratch, etc.
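If you did need to pass an option through to Slurm, the invocation would look something like this (a hypothetical illustration only; --time is assumed here to be among the pass-through options):

[user@biowulf ~]$ svis --time=08:00:00   # hypothetical example; most users should run svis with no options

A typical session with no options looks like the following: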

[user@biowulf ~]$ svis
salloc.exe: Pending job allocation 7130309
salloc.exe: job 7130309 queued and waiting for resources
salloc.exe: job 7130309 has been allocated resources
salloc.exe: Granted job allocation 7130309
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn0655 are ready for job
srun: error: x11: no local DISPLAY defined, skipping
[+] Loading TurboVNC
Starting VNC server ... please be patient...
VNC server started on display 2 port 5902
VNC configured with SSH forwarding.

After creating a tunnel from your workstation to biowulf.nih.gov
port 42303, connect your VNC client to localhost port
42303. See https://hpc.nih.gov/nih/vnc for details.

The VNC connection will terminate when this shell exits.


Please create a SSH tunnel from your workstation to these ports on biowulf.
On Linux/MacOS, open a terminal and run:

    ssh  -L 42303:localhost:42303 user@biowulf.nih.gov

For Windows instructions, see https://hpc.nih.gov/docs/tunneling


[user@cn0655 ~]$

Take note of the instructions from the previous command; they will be used in Step 2. In this example we were allocated port 42303, but the port you receive will likely be different.


Step 2. In a new terminal window on your desktop workstation, follow the instructions from the previous command to create your ssh tunnel. Use the port number you were assigned rather than the one shown here.

[user@my_workstation.nih.gov ~]$ ssh -L 42303:localhost:42303 user@biowulf.nih.gov
                           ***WARNING***

You are accessing a U.S. Government information system, which includes
(1) this computer, (2) this computer network, (3) all computers
connected to this network, and (4) all devices and storage media
attached to this network or to a computer on this network. This
information system is provided for U.S.  Government-authorized use only.

Unauthorized or improper use of this system may result in disciplinary
action, as well as civil and criminal penalties.

By using this information system, you understand and consent to the
following:

* You have no reasonable expectation of privacy regarding any
communications or data transiting or stored on this information system.
At any time, and for any lawful Government purpose, the government may
monitor, intercept, record, and search and seize any communication or
data transiting or stored on this information system.

* Any communication or data transiting or stored on this information
system may be disclosed or used for any lawful Government purpose.

--
Notice to users:  This system is rebooted for patches and maintenance on
the first Monday of every month at 7:15AM unless Monday is a holiday, in
which case it is rebooted the following Tuesday.  Running cluster jobs
are not affected by the monthly reboot.

user@biowulf.nih.gov's password:
Last login: Thu Jan 28 09:53:58 2021 from my_workstation.nih.gov

[user@biowulf ~]$

Leave both of these terminal windows open for the duration of your session.
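On Windows, one option for creating the tunnel (assuming the PuTTY suite is installed; see the tunneling documentation linked above for full, GUI-based instructions) is to run plink from a Command Prompt:

C:\Users\user> plink.exe -L 42303:localhost:42303 user@biowulf.nih.gov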


Step 3. Open a TurboVNC viewer on your local workstation and direct it to localhost:<port number>, where <port number> is the port you received in the instructions in Step 1. Click 'Connect'.

[Screenshot: VNC port input prompt]

At the next prompt, enter your NIH login username and password.

[Screenshot: VNC authentication prompt]

And you will see a new desktop session.

[Screenshot: VNC desktop]

This desktop session is properly configured to render graphics using the remote GPU hardware and ship the rendered graphics to your local workstation with high efficiency. But additional steps are necessary to make sure that your applications take advantage of this configuration.

Running apps that utilize the GPU

Once you have established a desktop session on a visual partition node, you must take the following steps to ensure your applications use the GPU hardware:

  1. Open a terminal within the desktop session and load the virtualgl module.
  2. Launch your application through the vglrun wrapper.
  3. Use the nvidia-smi command to verify that your app is running on the GPU hardware.

In the following example we run a graphical benchmark program that is installed on Biowulf.

First we open a terminal in the new desktop. You will see that the terminal prompt shows your session on the allocated compute node (e.g. cn0655), rather than on the Biowulf login node.

[Screenshot: VNC terminal]

Then in the new window we enter the following:

[user@cn0655 ~]$ module load virtualgl
[+] Loading VirtualGL 

[user@cn0655 ~]$ module load graphics-benchmarks
[+] Loading graphics-benchmarks  0.0.1  on cn0655 
[+] Loading singularity  3.7.1  on cn0655 

[user@cn0655 ~]$ vglrun valley

You will see a GUI like the following open.

[Screenshot: benchmark GUI]

You can press the "run" button and enjoy the demo. A quick check of nvidia-smi will show that the program is running on the GPU hardware along with the X server.

[user@cn0655 ~]$ nvidia-smi
Thu Jan 28 20:46:12 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla K20Xm         On   | 00000000:00:07.0 Off |                  Off |
| N/A   38C    P0    76W / 235W |    351MiB /  6083MiB |     25%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1166      G   X                                  29MiB |
|    0   N/A  N/A      7315      G   ./valley_x64                      318MiB |
+-----------------------------------------------------------------------------+

Running MATLAB on a visual partition node

Running MATLAB on a visual node is much the same as the example above, with the one difference that MATLAB must be started with the -nosoftwareopengl flag.

First we open a terminal in the new desktop.

[Screenshot: VNC terminal]

Then in the new window we enter the following:

[user@cn0655 ~]$ module load matlab virtualgl
[+] Loading Matlab  2020b  on cn0655 
[+] Loading VirtualGL 

[user@cn0655 ~]$ vglrun matlab -nosoftwareopengl

You will see the MATLAB IDE.

[Screenshot: MATLAB IDE]

And you can once again verify that it is running on the GPU hardware with nvidia-smi.

[user@cn0655 ~]$ nvidia-smi
Thu Jan 28 21:01:52 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla K20Xm         On   | 00000000:00:07.0 Off |                  Off |
| N/A   32C    P8    19W / 235W |     18MiB /  6083MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1166      G   X                                  14MiB |
|    0   N/A  N/A      8073      G   ...R2020b/bin/glnxa64/MATLAB        1MiB |
+-----------------------------------------------------------------------------+
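As an additional check from within MATLAB itself, the opengl info command should report a hardware renderer (a quick sketch; the exact output depends on the node's GPU and driver):

>> opengl info    % 'Software: false' in the output indicates hardware-accelerated OpenGL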

Other Examples

We've selected visualizations in several different programs that benefit from hardware acceleration. These demonstrations are all intended to be carried out within a VNC session as detailed above. Standard output is omitted for clarity.


AFNI / SUMA

Plot a 3D brain rendering with connectome data superimposed.

[user@cn0655 ~]$ mkdir -pv /data/${USER}/test/afni

[user@cn0655 ~]$ cd !$   # !$ expands to the last argument of the previous command

[user@cn0655 afni]$ module load afni virtualgl

[user@cn0655 afni]$ @Install_FATCAT_DEMO

[user@cn0655 afni]$ cd FATCAT_DEMO/

[user@cn0655 FATCAT_DEMO]$ tcsh Do_00_PRESTO_ALL_RUNS.tcsh # this will take a while

[user@cn0655 FATCAT_DEMO]$ vglrun tcsh Do_09_VISdti_SUMA_visual_ex3.tcsh


VMD

Render the crystal structure of five 70S ribosomes from E. coli in complex with protein Y (717k atoms).

[user@cn0656 ~]$ cd /lscratch/$SLURM_JOB_ID

[user@cn0656 9233588]$ wget https://files.rcsb.org/download/4V4G.cif.gz

[user@cn0656 9233588]$ gunzip 4V4G.cif.gz

[user@cn0656 9233588]$ module load vmd virtualgl

[user@cn0656 9233588]$ vglrun vmd 4V4G.cif

ChimeraX

Render the crystal structure of five 70S ribosomes from E. coli in complex with protein Y (717k atoms).

[user@cn0656 ~]$ cd /lscratch/$SLURM_JOB_ID

[user@cn0656 9233588]$ wget https://files.rcsb.org/download/4V4G.cif.gz

[user@cn0656 9233588]$ gunzip 4V4G.cif.gz

[user@cn0656 9233588]$ module load ChimeraX virtualgl

[user@cn0656 9233588]$ vglrun ChimeraX 4V4G.cif

FSLeyes

View a NIfTI dataset containing an example structural (T1) brain image.

[user@cn0655 ~]$ cd /lscratch/$SLURM_JOB_ID

[user@cn0655 9349741]$ wget https://www.fmrib.ox.ac.uk/primers/intro_primer/ExBox13/ExBox13.zip

[user@cn0655 9349741]$ unzip ExBox13.zip

[user@cn0655 9349741]$ module load fsl virtualgl

[user@cn0655 9349741]$ vglrun fsleyes ExBox13/T1_brain.nii.gz

Schrödinger

View a basic phospholipase A2 (2,108 atoms).

[user@cn0655 ~]$ cd /lscratch/$SLURM_JOB_ID

[user@cn0655 9400228]$ wget https://files.rcsb.org/download/1JIA.pdb.gz

[user@cn0655 9400228]$ gunzip 1JIA.pdb.gz

[user@cn0655 9400228]$ module load schrodinger virtualgl

[user@cn0655 9400228]$ vglrun maestro -NOSGL 1JIA.pdb


Please send questions and comments to staff@hpc.nih.gov