About Abel

Abel is the high-performance computing facility at UiO, hosted at USIT by the Research Infrastructure Services group.

Abel is a powerful computing cluster boasting over 650 computers and more than 10000 cores (CPUs). Abel compute nodes typically have 64 GiB of memory and are all connected to a large common scratch disk space. All nodes in the Abel cluster have FDR InfiniBand, providing a low-latency, high-bandwidth connection between all nodes. All nodes run the Linux operating system (64-bit CentOS 6).

To get access to Abel, see getting access to Abel. We also maintain a detailed user guide and a FAQ.

Key numbers

Number of cores: 10000+
Number of nodes: 650+
Max floating point performance (double precision): 258 teraflop/s
Total memory: 40 TiB
Total local storage: 400 TiB, using FhGFS

Hardware

Abel consists of an array of compute nodes plus the systems that support them: login, admin, and storage nodes.

Compute nodes

The 650+ Supermicro X9DRT compute nodes are all based on dual Intel Xeon E5-2670 (Sandy Bridge) processors running at 2.6 GHz, giving 16 physical compute cores per node. Each node has 64 GiB of Samsung DDR3 memory operating at 1600 MHz, i.e. 4 GiB of memory per physical core, with an aggregated bandwidth of about 58 GiB/s when using all physical cores.
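
As a rough illustration of where these per-node figures come from, the Python sketch below recomputes the memory per core and a theoretical double-precision peak, assuming 8 double-precision FLOPs per cycle per core (AVX on Sandy Bridge). Sustained application performance will be lower, and the 58 GiB/s memory bandwidth quoted above is a measured aggregate rather than something derived here.

    # Back-of-the-envelope check of the per-node figures quoted above.
    # Assumes 8 double-precision FLOPs per cycle per core (AVX on Sandy Bridge);
    # illustrative only, not an official benchmark.

    cores_per_node = 16    # 2 sockets x 8-core E5-2670
    clock_ghz = 2.6        # base clock frequency
    memory_gib = 64        # DDR3 memory per node

    mem_per_core = memory_gib / cores_per_node
    peak_dp_gflops = cores_per_node * clock_ghz * 8   # 8 DP FLOPs/cycle/core

    print(f"Memory per physical core: {mem_per_core:.0f} GiB")         # 4 GiB
    print(f"Theoretical peak (double): {peak_dp_gflops:.1f} GFLOP/s")  # 332.8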

Storage

The storage is provided as two equal-size partitions, /cluster and /work, each capable of about 6-8 GiB/s when doing sequential I/O. The file system is the Fraunhofer Global Parallel File System (FhGFS). The storage elements are 2 TB SAS disk RAIDs at RAID level 6 (8+2 configuration, LSI 9265-8i RAID controllers) with XFS for data, and SAS/SSD RAIDs at RAID level 10 with ext4 for the metadata. All I/O is transported over the InfiniBand fabric using RDMA, with failover to IPoIB and IP over Gigabit Ethernet. There are 10 I/O servers and 2 redundant (active/passive) servers for the metadata.
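
To get a feel for what the sequential I/O figure means in practice, the hedged Python sketch below writes a few GiB sequentially and reports the throughput it achieved. This is a single-client test, so it will not come near the 6-8 GiB/s aggregate, which assumes many clients driving the file system in parallel; the target path is a hypothetical placeholder.

    # Rough single-client sequential-write test against the shared scratch space.
    # The target path is an illustrative placeholder - point it at a directory
    # you are allowed to write to.

    import os
    import time

    target = "/work/users/example/io_test.bin"   # hypothetical path on /work
    block = b"\0" * (64 * 1024 * 1024)           # one 64 MiB block of zeros
    total_gib = 4                                # write 4 GiB in total
    n_blocks = total_gib * 1024 // 64            # 64 blocks of 64 MiB

    start = time.time()
    with open(target, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())                     # make sure data reaches storage
    elapsed = time.time() - start

    print(f"Sequential write: {total_gib / elapsed:.2f} GiB/s")
    os.remove(target)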

Hugemem compute nodes

There are also a few nodes with more memory (1 TiB) and more cores (32). They are based on Intel Xeon E5-4620 processors running at 2.2 GHz.

Accelerated compute nodes

There is also a set of accelerated nodes with NVIDIA Kepler II cards installed.

Interconnects

Abel network equipment:

  • FDR InfiniBand (56 Gbit/s, about 6.78 GB/s of usable bandwidth; see the conversion sketch after this list) between all nodes
  • IP over InfiniBand (IPoIB) on all nodes, enabling fast TCP communication
  • Gigabit Ethernet on all nodes
  • Abel is connected to the other compute facilities in Norway by 10 GbE links
  • Abel has a dedicated 10 GbE link to CERN in connection with its Tier-1 responsibilities
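
The relationship between the 56 Gbit/s signalling rate and the quoted 6.78 GB/s can be reproduced with the small Python calculation below; it assumes FDR InfiniBand's 64b/66b line encoding and ignores protocol overhead above the link layer, arriving at essentially the same figure.

    # How the usable FDR bandwidth follows from the 56 Gbit/s signalling rate:
    # FDR uses 64b/66b encoding, so 64 of every 66 bits on the wire carry data.

    signalling_gbit = 56           # 4 lanes x 14 Gbit/s
    encoding = 64 / 66             # 64b/66b line code
    data_gbyte = signalling_gbit * encoding / 8

    print(f"Usable bandwidth: {data_gbyte:.2f} GB/s")   # prints about 6.79 GB/s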

Software

Nodes in Abel run Linux, 64-bit CentOS 6.

Please see Abel software for installed software.

Queue system

To ensure effective utilization of the Abel infrastructure, all jobs are run through a queue system; Abel uses Slurm.
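
As a minimal, hedged example of interacting with Slurm, the Python sketch below builds a small batch script and submits it with sbatch, which accepts a script on standard input. The job name, resource requests and the srun command are illustrative placeholders rather than Abel-specific recommendations; a project account (--account) and other options may also be required, so consult the user guide before submitting real jobs.

    # Minimal sketch: build a small Slurm batch script and submit it via sbatch.
    # All values below are illustrative placeholders, not recommended settings.

    import subprocess

    job_script = """#!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --ntasks=16
    #SBATCH --mem-per-cpu=3800M
    #SBATCH --time=01:00:00

    srun hostname
    """

    # sbatch reads the job script from standard input when no file is given
    result = subprocess.run(
        ["sbatch"],
        input=job_script,
        text=True,
        capture_output=True,
        check=True,
    )
    print(result.stdout.strip())   # e.g. "Submitted batch job <jobid>"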
