Abel Newsletter #2, 2018

Fall course week, application deadline for CPU time through Notur, interesting conferences and external courses, along with the usual list of updated tools and applications now available on Abel or in the Lifeportal.

  • USIT's Underavdeling for IT i forskning (ITF), in English the Division for Research Computing (RC), is responsible for delivering IT support for research at the University of Oslo.
  • The division's groups operate infrastructure for research and support researchers with computational resources, data storage, application portals, parallelization and optimization of code, and advanced user support.
  • The Abel High Performance Computing (HPC) cluster is a central component of the USIT IT support for researchers.
  • This newsletter is announced on the abel-users mailing list. All users with an account on Abel are automatically added to the abel-users list; this is mandatory. The newsletter is issued at least twice a year.


 

News and announcements

Strategic focus on AI for data-driven science at UiO and beyond

Exploiting artificial intelligence in data-driven science is becoming a major interest of many research groups. Recently, UiO approved a project to build up resource capacity and IT-staff competence to help users successfully apply machine learning methods in their research. A first step for that project is to engage with researchers and develop a more detailed understanding of their needs concerning resources and support for AI. USIT and the university library are jointly organising a workshop titled "AI-based data-driven science" on September 7th, 2018. During the workshop we'll present the status of AI activities at USIT, and ask researchers to present their current and future use of AI. If you're interested in attending the workshop, please send an email to itf-ai-support@usit.uio.no.

During the workshop, we will also present an initiative which aims at supporting machine learning at the national level. The proposal for such a project is currently being developed. If you're interested in working together with such a project (e.g., by supplying use cases, by collaborating with IT staff, by using newly developed/deployed resources/services), please get in contact with us via itf-ai-support@usit.uio.no.

If you want to be kept up-to-date about new developments in our AI efforts, please subscribe to the email list itf-ai-announcements@usit.uio.no.

HPC (High Performance Computing) training - November

This November we are arranging a two-day course on HPC. The course is designed for Notur as well as local users, and is especially suitable for scientists who wish to learn more about how they can use the Abel computer cluster for their research.

The course will have two tracks, one focusing on beginners and the other on advanced users. All participants are expected to have a basic working knowledge of Unix. If you are new to Unix, please attend one of the Software Carpentry seminars (http://www.uio.no/english/services/it/research/events/) before the start of the course.

Date/time/place: 14 and 15 November 2018, 09:00-16:00. Room: Ole-Johan Dahls hus, Seminarrom Python.

Registration (we have space for a maximum of 30 participants): Registration is now open, and participants can register for the "Basic" or "Advanced" track according to their current competence. Registration page: https://www.uio.no/english/services/it/research/events/hpc_for_research_november_2018.html

Schedule: https://www.uio.no/english/services/it/research/events/hpc_for_research_november_2018.html

Questions? Ideas? Contact hpc-drift@usit.uio.no.

NeIC training calendar

Looking for more training events? NeIC is maintaining a shared calendar for training events in the Nordics, see https://neic.no/training/ for more information.

New Notur allocation period 2018.2, application deadline 24 August 2018

The Notur period 2018.2 (01.10.2018 - 31.03.2019) is getting nearer, and the deadline for applications for CPU hours, storage and TSD is 24 August.
 

A kind reminder: If you have many CPU hours remaining in the current period, you should of course try to utilize them as soon as possible, but since many users will be doing the same, there is likely to be a resource squeeze and potentially long queue times. The quotas are allocated according to several criteria, of which publications registered in Cristin is an important one (in addition to historical usage), and they assume even use throughout the allocation period. If you think you will not be able to spend all your allocated CPU hours, we highly appreciate a notification to sigma@uninett.no so that the CPU hours may be released; you may get extra hours later if you need more. If you have already run out of hours, or are about to, you may contact sigma@uninett.no and ask for a little more. No guarantees, of course.

Run

projects

to list project accounts you are able to use.

Run

cost -p

or

cost -p nn0815k

to check your allocation (replace nn0815k with your project's account name).

Run

cost -p nn0815k --detail

to check your allocation and print consumption for all users of that allocation.
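
Once you know which project account to use, you typically specify it in your Slurm job script. Below is a minimal sketch of such a script, assuming the usual Slurm directives; the account name nn0815k, the resource requests and the Python module are illustrative placeholders, so adjust them to your own project and needs.

#!/bin/bash
# Minimal example job script (illustrative values only)
#SBATCH --job-name=example
#SBATCH --account=nn0815k          # project account, as reported by "projects"
#SBATCH --time=01:00:00            # wall-clock time limit
#SBATCH --mem-per-cpu=2G           # memory per CPU core
#SBATCH --ntasks=1                 # number of tasks (cores)

module purge                       # start from a clean module environment
module load Python/3.5.2-foss-2016b

python my_analysis.py

Submit the script with "sbatch myjob.slurm" and check its status with "squeue -u $USER".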

Procurement of the system that comes after Abel

Sigma2's procurement process to buy two new HPC clusters, code-named B1 and C1, is moving forward. You can read about the plans here: https://www.sigma2.no/procurements . While B1 will be a massively parallel cluster, C1 will handle large amounts of smaller jobs, especially ones that are metadata intensive (requiring many small files, for example). UiO staff have advised on the technical specifications, to ensure that lessons learnt from Abel are carried forward. The plan is that as soon as C1 is up and running, Abel's workload will be migrated to C1 and Abel will no longer be part of the national HPC infrastructure. When B1 and C1 are in place, Sigma2's transition from one cluster per site to common systems operated cooperatively between the sites will be complete. The current timeframe is that C1 will be installed early in 2019, and B1 around summer 2019.

Supercomputing 2018 is in Dallas, USA 11-16 November

Read more at https://sc18.supercomputing.org.

A small contingent from USIT is attending. Contact us if you have anything you would like us to convey to a vendor, or if you want lecture notes from any of the tutorials.

Availability of other computing resources

freebio - first ARM-based server

Freebio (freebio.hpc.uio.no) introduces a new class of HPC-capable ARM-based systems. The well-known ARM processor architecture from mobile phones, tablets, TVs, cars, Raspberry Pis, etc. has now been scaled up to a high-performance version. Being cost effective with a high core count, these processors represent an alternative approach to many tasks. So far we have targeted bioinformatics workloads for this type of processor, but the system is not limited to such tasks and can tackle all kinds of workloads. The system is installed with Ubuntu 16.04, and all software in the Ubuntu distribution is available for ARM. The processor is a HiSilicon Hi1616. The system has 64 cores, 256 GiB of memory and 36 TiB of local disk. While each core performs slightly worse than an x86-64 core, the higher core count compensates for this, so multithreaded programs are as important as ever. All Abel users can access freebio.hpc.uio.no and will find their Abel $HOME directory when logging in.
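
If you would like to try it out, you can log in directly and inspect what you are running on. The commands below are standard Linux tools and serve only as a quick sanity check; the exact output depends on the configuration.

ssh freebio.hpc.uio.no     # log in with your usual Abel/UiO credentials
uname -m                   # prints aarch64 on this ARM-based system
nproc                      # number of available cores (64 on freebio)
lscpu | head               # a few more details about the processor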

Virtual machines in OpenStack - Ubuntu bio image available

We have an OpenStack installation, known as UH-IaaS. Several images are available, including an Ubuntu image with preinstalled Debian Med packages and a large selection of common bioinformatics and life-science applications. Contact us (hpc-drift@usit.uio.no) if you would like to have your own personal or research-group server. Small clusters with BeeGFS can easily be set up. Hardware is currently limited, but for smaller jobs it is a good match.
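
For those who get access and prefer the command line over the dashboard, virtual machines can be managed with the standard OpenStack client. The sketch below is purely illustrative; the image, flavor and key names are placeholders, not the actual names used in UH-IaaS.

openstack image list                       # list available images (e.g. the Ubuntu bio image)
openstack flavor list                      # list available VM sizes
openstack server create \
    --image ubuntu-bio-image \
    --flavor m1.medium \
    --key-name my-ssh-key \
    my-research-vm
openstack server show my-research-vm       # check status and assigned IP address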

Other hardware needs

If you are in need of particular types of hardware (fancy GPUs, kunluns, dragons, Neural Transaction Processors, etc.) not provided through Abel, please do contact us (hpc-drift@usit.uio.no), and we'll try to help you as best we can.

Also, if you have a computational challenge where your laptop is too small but a full-blown HPC solution is a bit of overkill, it might be worth checking out UH-IaaS.

Abel operations

Follow operations

If you want to be informed about day-to-day operations, you can subscribe to the abel-operations list by emailing "subscribe abel-operations <Your Name>" to sympa@usit.uio.no. You can also follow us on Twitter as abelcluster: http://twitter.com/#!/abelcluster

 

New possibilities

Abel is coming of age, but we are not out of local infrastructure plans, which include:

  • Development plans for cost-effective ARM-based systems for bioscience
  • Object storage using Ceph
  • OpenStack for research group servers
  • Ubuntu for bio and life science

The deployment of ARM-based infrastructure for both storage (Ceph) and bioscience will continue, as ARM-based servers, currently available from two vendors, continue to represent a very cost-effective solution. This is especially true for codes that are not double-precision, vector-based codes, but rather high-level-language workloads in Perl, Python or Julia. This class of processors has a very high core count and handles data movement very efficiently.

Storage in the form of so-called software-defined storage, of which Ceph is one alternative, is being tested and will be deployed for scientific use in the near future. Ceph offers storage through files, blocks and objects; the latter will be the first service offered for general use. Object storage has many advantages over POSIX file systems or block-based storage and represents the future, and we expect a significant fraction of storage volumes to be provided as object storage in the near future. Many of you are already familiar with using object storage, as this is what underlies services such as Dropbox.
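
Object storage is accessed through an HTTP API rather than a mounted file system. As an illustration of what this looks like in practice, here is a minimal sketch using the s3cmd client against an S3-compatible endpoint (such as Ceph's RADOS Gateway); the endpoint and bucket names are placeholders, not an announced UiO service.

# Store and retrieve objects through an S3-compatible API (illustrative names;
# assumes access keys have already been set up with "s3cmd --configure")
s3cmd --host=objects.example.uio.no mb s3://myproject-data          # create a bucket
s3cmd --host=objects.example.uio.no put results.tar.gz s3://myproject-data/
s3cmd --host=objects.example.uio.no ls s3://myproject-data/
s3cmd --host=objects.example.uio.no get s3://myproject-data/results.tar.gz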

Personalised computation in the form of Infrastructure as a Service (IaaS) is a rapidly growing segment. OpenStack is currently available for scientists who want a personal or group server, or even a set of servers in a cluster. The migration from single servers owned by a single research group to sets of virtual servers in the local UiO cloud (IaaS) has already started. Plans call for a sizeable fraction of local computation (today run on Abel) to be run on the IaaS platform, while larger, high-volume jobs will still be run on national systems hosted by Sigma2.

As Ubuntu is better suited for life science and bioscience (mainly because of Debian Med), the support for Ubuntu will continue to improve. Ubuntu is currently offered on an ARM-based server, and as images for those who deploy servers in the UiO cloud (IaaS).
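
If you are unfamiliar with Debian Med: it is a collection of Debian/Ubuntu packages and metapackages for bioinformatics and the life sciences, so tools can be installed straight from the package manager. A minimal sketch on an Ubuntu VM, assuming the Debian Med metapackages are available in the configured repositories:

sudo apt update
apt search med-            # browse the Debian Med metapackages (med-bio, med-imaging, ...)
sudo apt install med-bio   # metapackage that pulls in a broad set of bioinformatics tools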

Seeing that not all computing needs at UiO can be covered by expensive HPC systems, we are always looking for and deploying cost-effective, state-of-the-art solutions like the ones mentioned above.

New and updated software packages

The following is a list of new or updated software packages available on Abel with the module command.

(for a complete list of modules type "module avail")
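
As a reminder of the typical workflow, here is a short example using one of the modules listed below; the module names are taken from the list, but run "module avail" to see the versions currently installed.

module avail Python                     # search for available Python modules
module load Python/3.5.2-foss-2016b     # load a specific version
module list                             # show currently loaded modules
module purge                            # unload everything before switching toolchains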


=== Anaconda3 5.1.0 ===
module load Anaconda3/5.1.0
 
=== Bison 3.0.4 ===
module load Bison/3.0.4

=== Miniconda3 4.4.10 ===
module load Miniconda3/4.4.10
 
=== Python 2.7.12-foss-2016b ===
module load Python/2.7.12-foss-2016b
 
=== Python 3.5.2-foss-2016b ===
module load Python/3.5.2-foss-2016b
 
=== QIIME2 2018.2 ===
module load QIIME2/2018.2
 
=== Qt 4.8.7-foss-2016b ===
module load Qt/4.8.7-foss-2016b

=== SQLite 3.17.0-GCCcore-6.3.0 ===
module load SQLite/3.17.0-GCCcore-6.3.0
 
=== ScaLAPACK 2.0.2-gompi-2018a-OpenBLAS-0.2.20 ===
module load ScaLAPACK/2.0.2-gompi-2018a-OpenBLAS-0.2.20
 
=== asciinema 1.2.0 ===
module load asciinema/1.2.0
 
=== bedtools 2.27.1 ===
module load bedtools/2.27.1
 
=== binutils 2.28-GCCcore-6.4.0 ===
module load binutils/2.28-GCCcore-6.4.0

=== bzip2 1.0.6-GCCcore-6.3.0 ===
module load bzip2/1.0.6-GCCcore-6.3.0
 
=== bzip2 1.0.6-foss-2016b ===
module load bzip2/1.0.6-foss-2016b
 
=== cURL 7.49.1-foss-2016b ===
module load cURL/7.49.1-foss-2016b
 
=== cuda 9.1 ===
module load cuda/9.1
 
=== dfnworks 2.0 ===
module load dfnworks/2.0
 
=== esysparticle 2.3.5 ===
module load esysparticle/2.3.5
 
=== expat 2.2.0-foss-2016b ===
module load expat/2.2.0-foss-2016b
 
=== flex 2.6.4-GCCcore-6.4.0 ===
module load flex/2.6.4-GCCcore-6.4.0
 
=== fontconfig 2.12.1-foss-2016b ===
module load fontconfig/2.12.1-foss-2016b
 
=== foss 2018a ===
module load foss/2018a
 
=== freesurfer dev ===
module load freesurfer/dev
 
=== freetype 2.6.5-foss-2016b ===
module load freetype/2.6.5-foss-2016b
 
=== fsl 5.0.11 ===
module load fsl/5.0.11
 
=== gatk 4.0 ===
module load gatk/4.0
 
=== gcc 7.2.0 ===
module load gcc/7.2.0
 
=== gettext 0.19.8 ===
module load gettext/0.19.8
 
=== gettext 0.19.8-foss-2016b ===
module load gettext/0.19.8-foss-2016b
 
=== git 2.16.2 ===
module load git/2.16.2
 
=== gompi 2018a ===
module load gompi/2018a
 
=== hdf5 1.8.19_intel ===
module load hdf5/1.8.19_intel
 
=== help2man 1.47.4 ===
module load help2man/1.47.4
 
=== help2man 1.47.4-GCCcore-6.4.0 ===
module load help2man/1.47.4-GCCcore-6.4.0
 
=== hwloc 1.11.8-GCCcore-6.4.0 ===
module load hwloc/1.11.8-GCCcore-6.4.0
 
=== ifort 2017.1.132-GCC-5.4.0-2.26 ===
module load ifort/2017.1.132-GCC-5.4.0-2.26
 
=== intel 2018.3 ===
module load intel/2018.3
 
=== libGLU 9.0.0-foss-2016b ===
module load libGLU/9.0.0-foss-2016b
 
=== libdrm 2.4.70-foss-2016b ===
module load libdrm/2.4.70-foss-2016b
 
=== libffi 3.2.1-GCCcore-6.3.0 ===
module load libffi/3.2.1-GCCcore-6.3.0
 
=== libffi 3.2.1-foss-2016b ===
module load libffi/3.2.1-foss-2016b
 
=== libjpeg-turbo 1.5.0-foss-2016b ===
module load libjpeg-turbo/1.5.0-foss-2016b
 
=== libpng 1.6.24-foss-2016b ===
module load libpng/1.6.24-foss-2016b
 
=== libreadline 7.0-GCCcore-6.3.0 ===
module load libreadline/7.0-GCCcore-6.3.0
 
=== libtool 2.4.6-GCCcore-6.4.0 ===
module load libtool/2.4.6-GCCcore-6.4.0
 
=== libtool 2.4.6-foss-2016b ===
module load libtool/2.4.6-foss-2016b
 
=== libxml2 2.9.4-foss-2016b ===
module load libxml2/2.9.4-foss-2016b
 
=== libxml2 2.9.4-foss-2016b-Python-2.7.12 ===
module load libxml2/2.9.4-foss-2016b-Python-2.7.12
 
=== matlab R2018a ===
module load matlab/R2018a
 
=== ncl 6.5.0 ===
module load ncl/6.5.0
 
=== ncurses 6.0 ===
module load ncurses/6.0
 
=== ncurses 6.0-GCCcore-6.3.0 ===
module load ncurses/6.0-GCCcore-6.3.0
 
=== ncurses 6.0-foss-2016b ===
module load ncurses/6.0-foss-2016b
 
=== nettle 3.2-foss-2016b ===
module load nettle/3.2-foss-2016b
 
=== numactl 2.0.11-GCCcore-6.4.0 ===
module load numactl/2.0.11-GCCcore-6.4.0
 
=== pandaseq 2.11 ===
module load pandaseq/2.11

=== picard-tools 2.17.6 ===
module load picard-tools/2.17.6
 
=== pkg-config 0.29.1-foss-2016b ===
module load pkg-config/0.29.1-foss-2016b
 
=== plink 1.90b5.2 ===
module load plink/1.90b5.2
 
=== plink2 2.00a2LM ===
module load plink2/2.00a2LM
 
=== plink2 2.00a2LM.AVX2 ===
module load plink2/2.00a2LM.AVX2

=== protobuf-python 3.2.0-foss-2016b-Python-2.7.12 ===
module load protobuf-python/3.2.0-foss-2016b-Python-2.7.12
 
=== protobuf-python 3.2.0-foss-2016b-Python-3.5.2 ===
module load protobuf-python/3.2.0-foss-2016b-Python-3.5.2
 
=== protobuf 3.2.0-foss-2016b ===
module load protobuf/3.2.0-foss-2016b
 
=== singularity 2.5.0 ===
module load singularity/2.5.0
 
=== spark 2.3.0-bin-hadoop2.7 ===
module load spark/2.3.0-bin-hadoop2.7
 
=== stata 15 ===
module load stata/15
 
=== subread 1.6.1 ===
module load subread/1.6.1
 
=== tbl2asn 2018.02.27 ===
module load tbl2asn/2018.02.27
 
=== trinityrnaseq 2.5.1 ===
module load trinityrnaseq/2.5.1
 
=== usearch 10.0.240 ===
module load usearch/10.0.240
 
=== vsearch 2.7.1 ===
module load vsearch/2.7.1
 
=== wrf 3.9.1.1 ===
module load wrf/3.9.1.1
 
=== yade 2018-04-09 ===
module load yade/2018-04-09
 
=== zlib 1.2.11 ===
module load zlib/1.2.11
 
=== zlib 1.2.11-GCCcore-6.4.0 ===
module load zlib/1.2.11-GCCcore-6.4.0

 

Phew! Questions? Contact hpc-drift@usit.uio.no.

Publication tracker

The USIT Division for Research Computing (RC) is interested in keeping track of publications that involve computation on Abel (or Titan) or the use of any other RC services. We greatly appreciate an email to:

hpc-publications@usit.uio.no

about any such publications (including those in the general media). If you would like to cite the use of Abel or our other services, please follow this information.

Abel Operations mailing list

To receive extensive system messages and information please subscribe to the "Abel Operations" mailing-list. This can be done by emailing "subscribe abel-operations <Your Name>" to sympa@usit.uio.no.

Follow us on Twitter

Follow us on Twitter as abelcluster. Twitter is the place for short notices about Abel operations.

http://twitter.com/#!/abelcluster


 

Published 17 Aug. 2018 12:42 - Last modified 17 Aug. 2018 12:42