Job Scripts on Fox

This page documents how to specify the queue system parameters for the different job types on Fox. See Fox job types for information about the different job types on Fox, and Job scripts for general information about job scripts.

Normal

The basic type of job on Fox is the normal job.

Normal jobs must specify an account (--account; an account corresponds to an Educloud project), a walltime limit (--time) and how much memory is needed. Usually, they will also specify the number of tasks (i.e., processes) to run (--ntasks; the default is 1), and they can also specify how many cpus each task should get (--cpus-per-task; the default is 1).

The jobs can also specify how many tasks should be run per node (--ntasks-per-node), or how many nodes the tasks should be distributed over (--nodes). Without either of these specifications, the tasks will be distributed across any available resources.

Memory usage is specified with --mem-per-cpu, in MiB (--mem-per-cpu=3600M) or GiB (--mem-per-cpu=4G).

If a job tries to use more (resident) memory on a compute node than it requested, it will be killed. Note that it is the total memory usage on each node that counts, not the usage per task or cpu. So, for instance, if your job has two single-cpu tasks on a node and asks for 2 GiB RAM per cpu, the total limit on that node will be 4 GiB. The queue system does not care if one of the tasks uses more than 2 GiB, as long as the total usage on the node is not more than 4 GiB.
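To illustrate the arithmetic, a minimal sketch of such a specification (ec999 is a placeholder project, as in the examples below):

```shell
#SBATCH --account=ec999        # placeholder Educloud project
#SBATCH --time=0:10:0
#SBATCH --ntasks=2 --nodes=1   # two single-cpu tasks on one node
#SBATCH --mem-per-cpu=2G       # node limit: 2 cpus x 2 GiB = 4 GiB total
```

With this request, one task may use 3 GiB and the other 1 GiB without the job being killed, because only the 4 GiB node total is enforced.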

A typical job specification for a normal job would be

#SBATCH --account=ec999
#SBATCH --job-name=MyJob
#SBATCH --time=1-0:0:0
#SBATCH --mem-per-cpu=3G
#SBATCH --ntasks=16

This will start 16 tasks (processes), each one getting one cpu and 3 GiB RAM. The tasks can be distributed on any number of nodes (up to 16, of course). In this case Educloud project ec999 will be billed for the job.
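Put together as a complete job script, this might look like the following sketch; ./myprog is a placeholder for your own application:

```shell
#!/bin/bash
# Sketch of a full normal-job script; "./myprog" is a placeholder.
#SBATCH --account=ec999
#SBATCH --job-name=MyJob
#SBATCH --time=1-0:0:0
#SBATCH --mem-per-cpu=3G
#SBATCH --ntasks=16

set -o errexit   # stop the script on the first error

# srun starts one copy of the program per task (16 here):
srun ./myprog
```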

To run multithreaded applications, use --cpus-per-task to allocate the right number of cpus to each task. --cpus-per-task sets the environment variable $OMP_NUM_THREADS so that OpenMP programs by default will use the right number of threads. (It is possible to override this number by setting $OMP_NUM_THREADS in the job script.) For instance:

#SBATCH --account=ec999
#SBATCH --job-name=MyJob
#SBATCH --time=1-0:0:0
#SBATCH --mem-per-cpu=4G
#SBATCH --ntasks=8 --cpus-per-task=10 --ntasks-per-node=4

This job will get 2 nodes, and run 4 processes on each of them, each process getting 10 cpus.
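In the body of such a job script, the hybrid program could be launched like this sketch (./hybridprog is a placeholder name):

```shell
# $OMP_NUM_THREADS has been set to 10 by --cpus-per-task=10;
# override it here only if the program should use a different thread count.
# srun starts one copy of the program per task (8 here, 4 per node):
srun ./hybridprog
```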

All jobs on Fox get exclusive access to the cpus and memory they request, but share nodes with other jobs. Jobs are also bound to the cpu cores they are allocated; however, their tasks and threads are free to use any of the cores the job has access to on a node.

Note that the more restrictive one is in specifying how tasks are placed on nodes, the longer the job might have to wait in the job queue: In this example, for instance, there might be eight nodes with 10 idle cpus, but not two nodes with 40 idle cpus. Without the --ntasks-per-node specification, the job could have started, but with the specification, it will have to wait.

The Fox Sample MPI Job page has an extended example of a normal MPI job.

Accel

Accel jobs are specified just like normal jobs, except that they must also specify --partition=accel. In addition, they must specify how many GPUs to use, and how they should be distributed across nodes and tasks. The simplest way to do that is with --gpus=N or --gpus-per-node=N, where N is the number of GPUs to use.

If you need to run GPU jobs longer than 24 hours, you can use the accel_long job type instead. Alternatively, you can set the environment variable FOX_ACCEL_AUTO_LONG to 1, for instance by adding the line export FOX_ACCEL_AUTO_LONG=1 to the file ~/.bash_profile (creating the file if needed). Accel jobs asking for more than 24 hours walltime will then automatically be changed into accel_long jobs. Please note that accel_long jobs only have access to a subset of the GPU nodes, so do not specify more than 24 hours unless really needed.
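For example, the variable can be set once and for all from the command line like this:

```shell
# Append the setting to ~/.bash_profile (the file is created if missing).
# After this, accel jobs requesting more than 24 hours of walltime are
# automatically turned into accel_long jobs.
echo 'export FOX_ACCEL_AUTO_LONG=1' >> ~/.bash_profile
```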

For a simple job running one process and using one GPU, the following example is enough:

#SBATCH --account=ec999 --job-name=MyJob
#SBATCH --partition=accel --gpus=1
#SBATCH --time=6:0:0
#SBATCH --mem-per-cpu=8G
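As a complete script, a sketch might look like this; ./gpuprog is a placeholder for your own GPU application:

```shell
#!/bin/bash
# Sketch of a single-GPU accel job script; "./gpuprog" is a placeholder.
#SBATCH --account=ec999
#SBATCH --job-name=MyJob
#SBATCH --partition=accel --gpus=1
#SBATCH --time=6:0:0
#SBATCH --mem-per-cpu=8G

set -o errexit

# Optional sanity check: show which GPU the job was allocated
nvidia-smi

./gpuprog
```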

Here is an example that asks for 2 tasks and 2 GPUs on one GPU node:

#SBATCH --account=ec999 --job-name=MyJob
#SBATCH --partition=accel --gpus-per-node=2
#SBATCH --time=12:0:0
#SBATCH --ntasks-per-node=2 --nodes=1
#SBATCH --mem-per-cpu=8G

There are other GPU-related specifications that parallel some of the cpu-related specifications. See sbatch or man sbatch for details about these and other GPU-related options.

Accel_long

Accel_long jobs are specified just like accel jobs except that they specify --partition=accel_long instead of accel.

Please note that accel_long jobs only have access to a subset of the GPU nodes, so do not use accel_long unless really needed.

For a simple job running one process and using one GPU, the following example is enough:

#SBATCH --account=ec999 --job-name=MyJob
#SBATCH --partition=accel_long --gpus=1
#SBATCH --time=2-0:0:0
#SBATCH --mem-per-cpu=8G

Here is an example that asks for 2 tasks and 2 GPUs on one GPU node:

#SBATCH --account=ec999 --job-name=MyJob
#SBATCH --partition=accel_long --gpus-per-node=2
#SBATCH --time=1-12:0:0
#SBATCH --ntasks-per-node=2 --nodes=1
#SBATCH --mem-per-cpu=8G

See the description of accel jobs above for more details.

Devel

A devel job is like a normal job, except that it gets higher priority but has restrictions on job length and size. Devel jobs must specify --qos=devel.

For instance:

#SBATCH --account=ec999
#SBATCH --job-name=MyJob
#SBATCH --qos=devel
#SBATCH --time=00:30:00
#SBATCH --ntasks=16
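Whatever the job type, the finished script is submitted with sbatch in the usual way; myjob.sh is a placeholder file name:

```shell
# Submit the script; sbatch prints the job id on success,
# e.g. "Submitted batch job 123456"
sbatch myjob.sh

# Check the status of your own queued and running jobs:
squeue -u $USER
```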

CC Attribution: This page is maintained by the University of Oslo IT FFU-BT group. It has either been modified from, or is a derivative of, "Job Scripts on Saga" by NRIS under CC-BY-4.0. Changes: Some job types not applicable to Fox.