NIH HPC News & Announcements
REMINDER: Extended Biowulf Downtime Mon-Wed
Date: 03 March 2017 08:03:15
From: Steven Fellini
HPC/Biowulf Users,
As previously announced, we will have an extended downtime starting
Monday morning, March 6, at 7am. At that time all batch jobs will be deleted
from the system. We expect to be back in service by early Wednesday evening.
The following HPC services will be unavailable at that time:
-- Biowulf login node and cluster
-- HelixDrive
-- Helix: All /data directories residing on GPFS filesystems
(to check if your /data directory is on a GPFS filesystem, use "checkquota --gpfs")
Thanks for your understanding as we begin the phase 3 implementation of the
NIH HPC expansion.
Extended Downtime for NIH Biowulf Cluster
Date: 15 February 2017 14:02:14
Biowulf & Helix Users,
There will be an extended downtime of the Biowulf Cluster Monday
through Wednesday March 6-8, 2017. All running and queued batch
jobs will be deleted on the morning of March 6.
During this time, /data directories residing on GPFS filesystems will be
unavailable on Helix, the Biowulf login node, and HelixDrive (to check
if your /data directory is on a GPFS filesystem, use "checkquota --gpfs").
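If "checkquota" is not available in your session, a rough alternative sketch is to read the filesystem type directly with GNU coreutils "stat" (GPFS mounts report the type "gpfs"); the default directory below is an assumption, not part of the announcement:

```shell
#!/bin/bash
# Hedged sketch: check whether a directory sits on a GPFS filesystem.
# Assumes GNU coreutils stat; the official tool is "checkquota --gpfs".
dir="${1:-$HOME}"                    # directory to check (assumed default: $HOME)
fstype=$(stat -f -c %T "$dir")       # filesystem type name, e.g. "gpfs", "nfs", "ext2/ext3"
if [ "$fstype" = "gpfs" ]; then
    echo "$dir is on GPFS and will be unavailable during the downtime"
else
    echo "$dir is on $fstype, not GPFS"
fi
```

Note that this checks only the filesystem type of the mount, not your quota; "checkquota --gpfs" remains the supported way to confirm which of your /data directories are affected.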
Helix will remain available throughout, and you will be able to read
Helix mail.
As the first step in the implementation of Phase 3 of the NIH HPC
expansion, this extended downtime will allow the recabling of the storage
system network fabric to improve the I/O performance of the Biowulf HPC
cluster. This operation requires the relocation of fiber cables totalling 2.7
kilometers in length.
As always, please contact staff@helix.nih.gov with questions or concerns.
Thank you for your patience as we continue to grow our computing
resources.
########################################################################
To unsubscribe from the HPC-USERS list, click the following link:
http://list.nih.gov/cgi-bin/wa.exe?SUBED1=HPC-USERS&A=1