The cluster can run all workload types (single core, SMP, MPI, HTC) and evolves through a continuous upgrade model.
Hydra is continuously upgraded. Traditional clusters are purchased in one go and remain unchanged for their lifetime (typically 4-5 years) until they are replaced. Research, however, changes and evolves all the time. We therefore invest in Hydra on a yearly basis to extend or adapt the cluster so that it better meets users' needs.
The cluster comprises about 160 compute nodes spanning several CPU generations. Part of the cluster is connected with an InfiniBand network for MPI jobs. A GPFS storage system of ~800 TB provides fast, high-capacity file systems for I/O-demanding jobs. Nodes have 16, 20 or 24 cores and between 64 GB and 1.5 TB of RAM. Several master nodes run the core services, such as Moab, Torque and the LDAP servers.
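As a rough illustration of how this heterogeneous hardware is targeted through Torque, the commands below show a resource request and a node inspection; the core count, memory, walltime and script name are placeholder values, not Hydra-specific defaults.

```bash
# Minimal sketch: request resources matching one of the node types described
# above (20 cores, 64 GB RAM). Walltime and script name are placeholders.
qsub -l nodes=1:ppn=20 -l mem=64gb -l walltime=24:00:00 myjob.sh

# Inspect the properties and state of the compute nodes (standard Torque tool).
pbsnodes -a | less
```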
Hydra is freely accessible to all VUB and ULB users. You need to activate your NetID for Hydra before you can access it; see our documentation for more details. Since Hydra is a shared environment, we impose limits on the number and type of jobs that can be executed, so that everyone gets a fair chance to run their jobs.
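For illustration, once your NetID is activated, access typically happens over SSH. The hostname below is a placeholder, not the actual login address; please check the documentation for the correct one.

```bash
# Connect with your NetID; replace the placeholder with the login address
# given in the documentation.
ssh <your_netid>@<hydra-login-hostname>
```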
The HPC offering is not limited to simple access to the Hydra cluster. The HPC team provides several services to help researchers develop and run scientific computations as efficiently as possible. With the team fully staffed in 2017, we can officially present the following services.
The most critical one! The whole team is involved in support, from answering simple practical questions to working with you on complex requests.
Most of the software is available as modules. If you need additional freely available software, let us know.
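As an example of working with the module system, the standard commands below list, load and inspect modules; the module name and version shown are placeholders and do not reflect what is actually installed.

```bash
# List all software modules available on the cluster.
module avail

# Load a module into your environment (name/version are examples only).
module load Python/3.6.4-intel-2018a

# Show what is currently loaded.
module list
```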
For new research projects requiring HPC, new funding requests, or simply to improve your research efficiency, please contact us for advice or to work together with you.
The cluster is continuously maintained: security patches, OS upgrades, hardware replacements, etc. The HPC team works on this on a daily basis.
We also offer extras at a price, such as additional computing power, storage, or hosting of your own compute nodes.
The following limits apply to jobs and storage on Hydra:

| Name | Limit |
|---|---|
| Number of running jobs | Maximum 500 jobs |
| Number of concurrent usable cores | From 1000 to 2500 |
| Number of jobs in the queue per user | 2500 jobs |
| Job array size | 2500 jobs per array |
| Memory per job | 1 GB by default, 1500 GB maximum |
| Home space | Quota (soft and hard) of 100 GB |
| Work space | Quota of 400 GB (soft) and 1 TB (hard), with a 14-day grace period |
| Walltime per job | Maximum 120 hours |
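As a sketch of how these limits translate into a job submission, the Torque script below requests resources within the allowed ranges; the job name, core count, memory value and executable are assumptions for illustration only.

```bash
#!/bin/bash
# Minimal Torque job script sketch staying within the limits above:
# walltime at most 120 hours, memory requested explicitly (default is only 1 GB).
#PBS -N example_job
#PBS -l walltime=120:00:00
#PBS -l nodes=1:ppn=16
#PBS -l mem=32gb

cd $PBS_O_WORKDIR
./my_program            # placeholder for your actual executable

# Job arrays (at most 2500 jobs per array) can be submitted with, e.g.:
#   qsub -t 0-99 jobscript.sh
```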