Abel is the high-performance computing facility at UiO, hosted at USIT by the Research Infrastructure Services group.
Abel is a powerful computing cluster with over 650 nodes and more than 10,000 cores (CPUs). Compute nodes typically have 64 GiB of memory and are all connected to a large common scratch disk space. All nodes in the cluster are connected with FDR InfiniBand, providing low-latency, high-bandwidth communication between all nodes, and all run the Linux operating system (64-bit CentOS 6).
| Number of cores | 10,000+ |
| Number of nodes | 650+ |
| Max floating-point performance (double precision) | 258 teraflops |
| Total memory | 40 TiB |
| Total local storage | 400 TiB (BeeGFS) |
Abel consists of an array of compute nodes plus the systems that support them: login, admin, and storage nodes.
The 650+ Supermicro X9DRT compute nodes are all based on dual Intel E5-2670 (Sandy Bridge) processors running at 2.6 GHz, yielding 16 physical compute cores per node. Each node has 64 GiB of Samsung DDR3 memory operating at 1600 MHz, giving 4 GiB of memory per physical core and about 58 GiB/s of aggregated memory bandwidth when all physical cores are in use.
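As a rough consistency check, the headline figures in the table above follow from this per-node specification; the short sketch below (plain Python, with values taken from this page) reproduces them.

```python
# Per-node figures from the description above (lower bounds).
nodes = 650             # 650+ compute nodes
cores_per_node = 16     # dual Intel E5-2670, 8 physical cores per socket
mem_per_node_gib = 64   # DDR3 memory per node

total_cores = nodes * cores_per_node              # 10 400 -> "10,000+ cores"
total_mem_tib = nodes * mem_per_node_gib / 1024   # ~40.6 -> "40 TiB total memory"

print(f"{total_cores} cores, {total_mem_tib:.1f} TiB memory")
```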
The storage is provided as two equal-size partitions, /cluster and /work, each capable of about 6-8 GiB/s when doing sequential I/O. The file system is the Fraunhofer Global Parallel File System (FhGFS, now BeeGFS). Data is stored on 2 TB SAS disk RAID sets at RAID level 6 (8+2 configuration, LSI 9265-8i RAID controllers) formatted with XFS; metadata is stored on SAS/SSD RAID sets at RAID level 10 formatted with ext4. All I/O is transported over the InfiniBand fabric using RDMA, with failover to IPoIB and IP over Gigabit Ethernet. There are 10 I/O servers and 2 redundant (active/passive) metadata servers.
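For the data RAID sets, the 8+2 RAID 6 layout means two of the ten 2 TB disks in each set hold parity; a minimal sketch of the resulting per-set usable capacity, using only the figures from the paragraph above:

```python
# RAID 6 in an 8+2 configuration: 8 data disks + 2 parity disks per set.
disk_tb = 2
data_disks, parity_disks = 8, 2

usable_tb = data_disks * disk_tb                       # 16 TB usable per RAID set
overhead = parity_disks / (data_disks + parity_disks)  # fraction of raw space used for parity

print(f"{usable_tb} TB usable per set, {overhead:.0%} parity overhead")
```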
Hugemem compute nodes
There are also a few nodes with more memory (1 TiB) and more cores (32). They are based on Intel E5-4620 processors running at 2.2 GHz.
Accelerated compute nodes
There is also a set of accelerated nodes with NVIDIA Kepler II cards installed.
Abel network equipment:
- FDR InfiniBand (56 Gbit/s signalling, about 6.8 GB/s of usable bandwidth; see the conversion sketch after this list) between all nodes
- IP over IB on all nodes (enabling fast TCP communication)
- Gigabit Ethernet on all nodes
- Abel is connected to the other compute facilities in Norway by 10 GbE links
- Abel has a dedicated 10 GbE link to CERN in connection with its Tier-1 responsibilities
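The byte figure quoted for FDR above comes from correcting the 56 Gbit/s signalling rate for InfiniBand's 64b/66b link encoding; a minimal sketch of that conversion, assuming 4 lanes at 14 Gbit/s each:

```python
# FDR InfiniBand: 4 lanes x 14 Gbit/s signalling, 64b/66b encoding on the wire.
signalling_gbit = 4 * 14               # 56 Gbit/s raw signalling rate
data_gbit = signalling_gbit * 64 / 66  # usable bits after encoding overhead
data_gbyte = data_gbit / 8             # convert bits to bytes

print(f"{data_gbyte:.2f} GB/s")        # ~6.79 GB/s, the figure quoted above
```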
Nodes in Abel run Linux (64-bit CentOS 6).
Please see Abel software for installed software.
A queue system ensures effective utilization of the Abel infrastructure. Abel uses the Slurm queue system.
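As a minimal illustration of how work is handed to Slurm, the sketch below writes a small batch script and submits it with sbatch from Python; the account name, time limit, and memory values are placeholders rather than Abel-specific defaults, and the Slurm client tools are assumed to be on PATH.

```python
import subprocess
from pathlib import Path

# Minimal Slurm batch script; project/account, time and memory values are placeholders.
job_script = """#!/bin/bash
#SBATCH --job-name=example
#SBATCH --account=YOUR_PROJECT     # placeholder project/account name
#SBATCH --time=00:10:00            # wall-clock limit (hh:mm:ss)
#SBATCH --ntasks=16                # one full compute node: 16 physical cores
#SBATCH --mem-per-cpu=3800M        # stay below the ~4 GiB available per core

srun hostname
"""

script = Path("example_job.sh")
script.write_text(job_script)

# Submit the job to the queue.
subprocess.run(["sbatch", str(script)], check=True)
```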