

Standard node configurations (2024)

Warning

These specifications are not finalized!

We prefer to offer the following classes of compute nodes:

  • Thin CPU: the typical HPC workhorse. The large number of CPU cores makes it suitable for parallel computing.
  • Fat CPU: as thin CPU, but with more memory. Good for jobs that need large amounts of RAM.
  • Fast CPU: as thin CPU, but with fewer, faster CPU cores. Good for serial (single-core) jobs.
  • GPU: good for GPU-accelerated jobs.

We prefer not to offer high-speed fabrics such as InfiniBand, due to their cost. Compute jobs that require such a fabric should be run on other platforms, such as Snellius.

| Class | CPU | Memory | Accelerators | Price (incl. VAT) | Performance¹ raw [pp] | Perf. /core [pp] | Perf. /€ [pp/€] | Power usage² raw [W] | Power /perf. [μW/pp] | Power /year [kWh] |
|---|---|---|---|---|---|---|---|---|---|---|
| Thin | 1x AMD EPYC 9654P (2.40 GHz, 96C, 384 MB L3) | 384 GB (12x 32 GB, 4 GB/core) | | €20,400 | 118,641 | 1236 | 5.82 | 475.2 | 4005 | 4163 |
| Fast | 1x AMD EPYC 9474F (3.60 GHz, 48C, 256 MB L3) | 384 GB (12x 32 GB, 8 GB/core) | | €18,000 | 104,894 | 2185 | 5.83 | 475.2 | 4530 | 4163 |
| GPU | 2x AMD EPYC 7313 (3.0 GHz, 16C, 128 MB L3) | 256 GB (16x 16 GB, 8 GB/core) | 2x NVIDIA A30 | €23,100 | | | | 581.0 | | |
| Thin (~2023) | 1x AMD EPYC 7713P (2.00 GHz, 64C, 256 MB L3) | 256 GB (8x 32 GB, 4 GB/core) | | €11,700 (2023 quoted price) | 80,373 | 1255 | 6.87 | 321.0 | 3994 | 2812 |
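The derived columns can be reproduced from the raw figures; a minimal sketch for the Thin row, using the values from the table above:

```python
# Thin node, values from the table above
price_eur = 20_400   # incl. VAT
perf_pp = 118_641    # PassMark total, all CPUs (footnote 1)
cores = 96
power_w = 475.2      # estimated power draw (footnote 2)

print(round(perf_pp / cores))            # 1236 pp/core
print(round(perf_pp / price_eur, 2))     # 5.82 pp/€
print(round(power_w / perf_pp * 1e6))    # 4005 μW/pp
print(round(power_w * 24 * 365 / 1000))  # 4163 kWh/year
```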

Rationales

Thin

This configuration is inspired by Snellius. We use a single socket to stay under 25 k€. We increase the memory from 2 GB/core to 4 GB/core to compensate for the lack of a high-speed fabric.

Due to the CPU's cooling requirements, these machines are 2U tall, which is unfortunate for HPC purposes. Ideally they would be 1U, which would allow us to pack the compute nodes more densely in the data center.

GPU

This is the configuration that we bought in 2023; for the sake of homogeneity, we stick with it. However, a thorough analysis of customer needs should still be carried out.

The GPUs can be split into virtual GPUs (NVIDIA Multi-Instance GPU, MIG) to enable resource sharing.

Fast CPU

This configuration is intended for serial (single core) jobs. It differs from the thin CPU configuration in the following ways:

  • Each individual CPU core is faster, leading to shorter run times for serial jobs.
  • Single-core jobs tend to need more memory per core than parallel jobs do, hence the increased amount of RAM in this configuration.
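The per-core speed difference can be read off the table; a quick check using the PassMark totals (a sketch based on the table's figures, not a benchmark):

```python
# Per-core PassMark scores (pp), from the table above
fast_per_core = 104_894 / 48   # EPYC 9474F (Fast)
thin_per_core = 118_641 / 96   # EPYC 9654P (Thin)

print(round(fast_per_core / thin_per_core, 2))  # 1.77, i.e. ~1.77x faster per core
```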

Other specifications

The following specifications are used by system administrators when ordering compute nodes:

  • Ethernet: 25 Gbit/s SFP28 with RoCEv2 support
    • e.g. Broadcom 57414
    • incl. SFP28 DAC cable, Dell-switch compatible
  • Storage:
    • Boot storage: RAID-1.
      • For Dell: BOSS, cheapest size available.
    • Local storage: on request. Can be put on boot storage as well.
  • Power supply: 1+1 redundant.
  • Power cables: included. Grid-side connector: Schuko.
  • Rack mount kit: yes
  • Bezel: yes
  • Remote mgmt: required.
    • For Dell: iDRAC with Enterprise and OpenManage license. Factory-generated password.
    • For other vendors: similar to above.
  • Support: 5 years, next business day.

Other constraints:

  • Nodes should be ≤ 25 k€ (incl. VAT) to avoid financial issues.

Rationale

Ethernet: 25 Gbit/s hardware is only slightly more expensive than 10 Gbit/s hardware, but offers a substantial performance increase and is more future-proof.


  1. Total for all CPUs in the system. Per-CPU value is obtained from the PassMark database. The unit "pp" stands for "performance point". 

  2. Power usage is estimated as: CPU TDP + GPU TDP + RAM power. RAM power is: 0.3 W/GB RAM for DDR5; 0.375 W/GB for DDR4.
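The estimate in footnote 2 can be checked against the table's raw power figures. A sketch for the two Thin rows; the CPU TDPs used here (360 W for the EPYC 9654P, 225 W for the EPYC 7713P) are taken from AMD's spec sheets and are an assumption that should be verified against the vendor quote:

```python
def node_power_w(cpu_tdp_w, gpu_tdp_w, ram_gb, ddr5):
    """Estimated node power: CPU TDP + GPU TDP + RAM power (footnote 2)."""
    ram_w_per_gb = 0.3 if ddr5 else 0.375
    return cpu_tdp_w + gpu_tdp_w + ram_gb * ram_w_per_gb

# Thin (2024): 1x EPYC 9654P (360 W TDP, assumed), 384 GB DDR5
print(round(node_power_w(360, 0, 384, ddr5=True), 1))   # 475.2
# Thin (~2023): 1x EPYC 7713P (225 W TDP, assumed), 256 GB DDR4
print(round(node_power_w(225, 0, 256, ddr5=False), 1))  # 321.0
```

Both values match the table's raw power column, which suggests the table was produced with this model.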