HPC infrastructure
Parameters at a glance
Total cores: 1200
Total memory: 8 TB
Temporary storage volume: 238 TB
GPU accelerators: 22
Performance: 139 TFLOPS
HPC system “Rudens”
Cluster Vasara
484 CPU cores, Intel Xeon Gold 6154 (3.00 GHz, 36 cores per node)
13 computing nodes
up to 384 GB DDR4, 2666 MHz RAM per node
High-memory node: 1.5 TB DDR4, 2666 MHz RAM
8 Nvidia Tesla V100 GPUs (40,960 CUDA cores), 4 GPUs per node
100 Gb/s InfiniBand EDR interconnection network
Manufacturer: Dell
Theoretical performance: 44 TFLOPS (x86) + 62 TFLOPS (GPU) = 106 TFLOPS
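The quoted theoretical figures can be roughly reproduced from the node counts above. A minimal sketch in Python, assuming the usual Skylake-SP rate of 32 double-precision FLOPs per core per cycle (two AVX-512 FMA units) and a ~7.8 TFLOPS double-precision peak per Tesla V100 (both figures are assumptions about the hardware, not taken from this page):

```python
# Back-of-the-envelope peak-performance estimate for the Vasara cluster.
NODES = 13
CORES_PER_NODE = 36
CLOCK_GHZ = 3.0
DP_FLOPS_PER_CYCLE = 32   # assumed: 2 AVX-512 FMA units x 8 doubles x 2 ops

cpu_tflops = NODES * CORES_PER_NODE * CLOCK_GHZ * DP_FLOPS_PER_CYCLE / 1000
gpu_tflops = 8 * 7.8      # 8 Tesla V100, assumed ~7.8 TFLOPS FP64 each

print(f"CPU peak: {cpu_tflops:.1f} TFLOPS")   # close to the quoted 44 TFLOPS
print(f"GPU peak: {gpu_tflops:.1f} TFLOPS")   # close to the quoted 62 TFLOPS
```

The same formula (cores × clock × FLOPs per cycle) with the Haswell rate of 16 DP FLOPs per cycle reproduces the 15.4 TFLOPS figure quoted for the Rudens cluster below.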
Cluster Rudens
384 CPU cores, Intel Xeon E5-2680 v3 (2.50 GHz, 24 cores per node)
16 computing nodes
2 TB RAM (128 GB DDR4, 2133 MHz per node)
8 Nvidia Tesla K40 GPUs (23,040 CUDA cores)
56 Gb/s InfiniBand FDR interconnection network
Manufacturer: Dell
Theoretical performance: 15.4 TFLOPS (x86) + 11.4 TFLOPS (GPU) = 26.8 TFLOPS
Cluster T-Blade
288 CPU cores, Intel Xeon X5670 (2.93 GHz, 12 cores per node)
24 CPU cores, Intel Xeon E5630 (2.53 GHz, 8 cores per node)
27 computing nodes
6 Nvidia Tesla M2070 GPUs
384 GB RAM (12–24 GB per node)
40 Gb/s InfiniBand QDR network
Manufacturer: T-Platforms
Model: T-Blade2
Theoretical x86 performance: ~3 TFLOPS
Cluster management software
OS: CentOS 6.7 (RHEL 6) or CentOS 7.2 (RHEL 7)
xCAT OS image management
Moab HPC Suite Enterprise Edition cluster management
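With Moab as the workload manager, jobs are typically submitted as Torque/PBS-style batch scripts. The following is an illustrative sketch only; the job name, resource limits, module name, and application are assumptions, not taken from this page:

```shell
#!/bin/bash
#PBS -N example_job           # job name (illustrative)
#PBS -l nodes=1:ppn=24        # one 24-core node (e.g. a Rudens node)
#PBS -l walltime=01:00:00     # assumed time limit; actual policy is site-specific

cd $PBS_O_WORKDIR             # run from the submission directory
module load openmpi           # module name is an assumption
mpirun -np 24 ./my_app        # hypothetical MPI application
```

Such a script would be submitted with `msub job.sh` (Moab) or `qsub job.sh` (Torque); the available queues, modules, and limits depend on the site configuration.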
High-performance virtual desktop infrastructure
48 CPU cores, Intel Xeon E5-2680 v3
256 GB RAM (128 GB per node)
10 GbE network
Server manufacturer: Fujitsu
Virtualisation platform: KVM
Data storage
8-node EMC Isilon X200
Parallel architecture
40 Gb/s InfiniBand connection to the clusters
238 TB usable capacity