Job submission

See our training slides: Cluster

The job scheduler is Open Grid Scheduler, version GE2011.11p1.

Full documentation is available as a PDF here.


qsub: submit a batch job to Sun Grid Engine (default queue: workq).
qarray: submit a batch job-array to Sun Grid Engine.


qlogin: submit an interactive X-windows session to Sun Grid Engine (automatically routed to the interq queue). For graphical jobs only.
qrsh: submit an interactive login session to Sun Grid Engine.

Without any options, on any queue, each job is limited to mem=1G and h_vmem=8G of memory, and 1 CPU.

1 - First, write a script containing the SGE directives and the command lines to run, for example:

#!/bin/bash
#$ -o /work/.../output.txt
#$ -e /work/.../error.txt
#$ -q workq
#$ -m bea
# My command lines to run on the cluster
blastall -d swissprot -p blastx -i /save/.../z72882.fa

2 - Then submit the job with the qsub command as follows:

qsub [options] script_file

To change the memory reservation, add these options to the submission command (qsub, qarray, qrsh or qlogin):

-l mem=XG -l h_vmem=YG                                 with X < Y (default values: X=1G, Y=8G)


  • mem is the amount of memory per slot (in megabytes, M, or gigabytes, G) that your job requires
  • h_vmem is the upper bound on the amount of memory per slot that your job is allowed to use



qsub -l mem=8G -l h_vmem=10G
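In practice these options are combined with the script to submit. A minimal sketch (the script name is illustrative):

```shell
# Reserve 8 GB per slot, with a 10 GB hard cap (the job is killed if it
# exceeds h_vmem). "my_script.sh" is a placeholder for your own script.
qsub -l mem=8G -l h_vmem=10G my_script.sh
```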

With default parameters, each job is limited to 1 core (slot).
To reserve more, use the following options:

# Reserve n slots on the same node (up to 40 on an Intel node, up to 48 on an AMD node)
qsub -pe parallel_smp n

# Reserve n slots on any nodes (possibly the same one), for MPI jobs
qsub -pe parallel_fill n

# Reserve n slots on strictly different nodes, for MPI jobs
qsub -pe parallel_rr n
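The same reservation can also be written inside the script itself. SGE exports the number of slots actually granted in the NSLOTS environment variable; a sketch for a multi-threaded job (the program name is illustrative):

```shell
#!/bin/bash
#$ -q workq
#$ -pe parallel_smp 8          # reserve 8 slots on a single node
#$ -l mem=2G -l h_vmem=4G      # memory is requested per slot: 8 x 2G here

# $NSLOTS is set by SGE to the number of slots granted; pass it to the
# program so the thread count matches the reservation.
my_threaded_program --threads "$NSLOTS"
```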

Each job is submitted to a specific queue (the default one is workq).
Each queue has a different priority depending on the maximum execution time allowed.


Queue      Access              Priority   Max time       Max slots
workq      everyone            100        4 days (96h)   3072
unlimitq   everyone            1          180 days       500
interq     on demand                      1 day (24h)    32
smpq       on demand                      180 days       96
wflowq     specific software              180 days       3072

To submit an array of jobs, use the qarray command (it accepts the same options as qsub):

qarray [qsub options] shell_command_file
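A common pattern is to generate the shell_command_file with one command per line, then submit it with qarray; each line becomes one task of the array. A sketch, with illustrative paths and file names:

```shell
# Write one blastall command per input file into a command file.
for f in /save/mydir/inputs/*.fa; do
    echo "blastall -d swissprot -p blastx -i $f"
done > blast_cmds.txt

# Submit the array: qarray accepts the same options as qsub.
qarray -q workq -l mem=2G -l h_vmem=4G blast_cmds.txt
```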

To check your computing-time quota, use the command:

qquota_cpu login

Academic account quota: 100,000 h per calendar year.
Beyond these 100,000 hours, you will need to submit a science project (via the resources request form) so that your actual bioinformatics computing needs can be estimated.

Depending on the results of this evaluation, as well as their geographical and institutional origin, users may then continue their computations, be invited to contribute financially to the infrastructure, or be redirected to regional or national computing centres (mésocentres).

Non-academic account quota: 500 h per calendar year, for testing the infrastructure.
Computing time beyond this quota will be charged (price on request).

To check your disk quota, use the following command (on the genotoul server):

mmlsquota -u login

To list the available parallel environments:

qconf -spl

To show the configuration of a specific environment, for example:

qconf -sp parallel_smp

1. First of all, the parallel environment has to be reserved on the cluster using the -pe option. For example, in a qsub script:

#$ -pe parallel_rr 100

2. Then load the environment variables and the desired MPI compiler as follows:

module load compiler/intel-2013.3.174
module load mpi/openmpi-1.8.1
mpirun cmd_name
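Putting the two steps together, a complete MPI submission script could look like the following sketch (the program name is illustrative; module versions are taken from the example above):

```shell
#!/bin/bash
#$ -q workq
#$ -pe parallel_rr 100         # reserve 100 slots spread over distinct nodes

# Load the compiler and MPI stack, then run the program; mpirun picks up
# the slot count granted by SGE through the parallel environment.
module load compiler/intel-2013.3.174
module load mpi/openmpi-1.8.1
mpirun my_mpi_program
```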

To monitor your jobs, use the qstat command. Some useful options:

qstat -u login : list only the specified user's jobs.
qstat -j job_id : show detailed information on the specified job.
qstat -s r : list only the running jobs.

You can also use a graphical user interface which provides the same information.
This interface is accessible with the qmon command.

To get information about a finished job, use the qacct command as follows:

qacct -j job_id

This command can also be used to compute SGE usage statistics.
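For instance, assuming the standard SGE qacct options -o (filter by owner) and -d (restrict to the last n days), a summary of one user's recent usage could be obtained like this:

```shell
# Summarize accounting data (CPU time, memory, I/O) for one user
# over the last 30 days.
qacct -o login -d 30
```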

To kill a job, use the qdel command. Some useful options:

# Kill the specified job
qdel job_id

# Kill all jobs launched by the specified user
qdel -u login