Introduction to HPC

High-performance computing (HPC) makes it possible to solve complex computational problems in a short time. Computing is carried out on high-performance computers (shared-memory mainframes, computing clusters) made up of many parallel processors. A modern HPC system is an essential element of any contemporary university and is used for purposes ranging from solving engineering problems in the MATLAB environment to big data analysis and machine learning.

The majority of HPC systems, including the one available at RTU, are designed as computing clusters, which consist of numerous separate servers interconnected by a fast computer network (e.g., InfiniBand). An HPC cluster can be used both for parallel computing, where a single large job is executed in parallel, and for distributed computing, where several independent tasks run on separate servers or processors. In addition, the RTU HPC cluster is equipped with powerful NVIDIA Tesla GPU accelerators.

Why should you use HPC?

  • The HPC cluster is equipped with high-performance processors and graphics accelerators.
  • If a job is split between several processors, the simulation can be accelerated, enabling more detailed research.
  • Transferring jobs to the HPC cluster frees your personal computer for other work.
  • The HPC cluster is located in a data centre with uninterruptible power supply, cooling, and administrator supervision, resulting in fewer interruptions during simulations.
  • Certain scientific software is already installed on the HPC cluster; see the list of installed software.
  • Many applications have built-in HPC support, which hides the cluster environment from the user.

Service portfolio

  • Compute time in HPC cluster
  • Software development and code optimisation
  • Installation and administration of a client’s HPC system
  • Consultations

HPC services are available to all RTU employees and students, and we are also open to cooperation with other academic or commercial organisations.

Technical parameters

HPC system “Rudens”

Architecture: InfiniBand cluster based on x86 processors and graphics processors (GPUs)
Number of cores: 744 (CPU), 25728 (CUDA)
Performance: 35 TFLOPS
Data storage: 238 TB, parallel architecture
Workload management: Moab/Torque
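
Since workload management is handled by Moab/Torque, jobs are typically described in a PBS batch script and submitted to the scheduler. The following is a minimal sketch; the queue name, module name, resource limits, and script contents are illustrative assumptions and will differ on the actual cluster.

```shell
#!/bin/bash
# Minimal Torque/PBS job script (sketch; queue, module, and program names are assumptions)
#PBS -N example_job          # job name shown in the queue
#PBS -l nodes=1:ppn=8        # request 1 node with 8 processor cores
#PBS -l walltime=01:00:00    # maximum run time (hh:mm:ss)
#PBS -q batch                # queue name (assumed)

cd "$PBS_O_WORKDIR"          # start in the directory the job was submitted from
module load matlab           # load the required software module (assumed name)
matlab -nodisplay -r "my_simulation; exit"   # run a hypothetical MATLAB script
```

Such a script would be submitted with `qsub job.sh`; Torque then returns a job ID that can be used to monitor or cancel the job.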

Software

Scientific software has already been installed on the HPC cluster. If the software you need is not on the list, contact us and we will consider installing it.

How to begin?

To compute on the HPC cluster (supercomputer), you must first fill in an application. After receiving the application, we will contact you about the next steps. As a rule, new users are invited to a meeting to discuss their needs and receive a short briefing on working with the cluster.
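
Once access is granted, a typical Torque session looks like the sketch below; the script name and job ID shown are purely illustrative.

```shell
# Submit a job script to the scheduler; qsub prints the assigned job ID
qsub job.sh

# List the status of your own queued and running jobs
qstat -u "$USER"

# Cancel a job by its ID if needed (ID here is hypothetical)
qdel 12345
```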