Hardware Infrastructure
This is a short overview of the architectures and machines available at the Research Group Scientific Computing, used for both education and research purposes.
The ALMA compute cluster comprises a frontend and six computational nodes. The nodes are SuperMicro 2027GR-TRF systems with two Intel Xeon E5-2650 2.0 GHz (Sandy Bridge) processors and 128 GB RAM. Both the frontend and the computational nodes are additionally equipped with two Nvidia Tesla K20 M-class cards. The network interconnect is Gigabit Ethernet. The cluster runs Linux (AlmaLinux), and SLURM is used for resource management. ALMA was put into service in September 2020 and is used for teaching purposes; it reuses hardware from the former PHIA cluster (2013-2020).
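Since resource management on ALMA is handled by SLURM, a job is submitted as a batch script with `#SBATCH` directives. The following is a minimal sketch of such a script requesting one of the Tesla K20 cards; the resource values are plausible for this hardware, but the GRES name and any partition or module setup are assumptions, not documented ALMA settings:

```bash
#!/bin/bash
#SBATCH --job-name=gpu-test        # job name shown in the queue
#SBATCH --nodes=1                  # run on a single node
#SBATCH --ntasks=1                 # one task
#SBATCH --cpus-per-task=4          # a few of the node's 16 Sandy Bridge cores
#SBATCH --gres=gpu:1               # request one GPU (assumed GRES name "gpu")
#SBATCH --mem=16G                  # a slice of the 128 GB node memory
#SBATCH --time=00:30:00            # wall-clock limit

# Hypothetical payload: report which GPU SLURM assigned to the job.
nvidia-smi
```

Such a script would be submitted with `sbatch job.sh` and monitored with `squeue -u $USER`, the standard SLURM tools.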
The NOVA mini-cluster comprises four computational nodes, each with two 12-core AMD Opteron™ 6172 2.1 GHz processors and 48 GB of 1333 MHz DDR3 system memory. Networking interconnects include QDR InfiniBand and Gigabit Ethernet. In total, the system offers 96 CPU cores, 192 GB of DDR3 memory, and 8 TB of HDD storage. The system runs Red Hat Enterprise Linux 6 (64-bit) and was put into service in March 2011.