Default Resources

SLURM Cluster

47 Intel cluster nodes: node101 ... node147 (each 64 cores with hyperthreading, 4 GB of memory per core, 250 GB per node available to SLURM)

1 SMP node: genosmp02 (96 cores with hyperthreading & 1.5 TB of memory)

1 Visu node: genoview (64 cores with hyperthreading, 128 GB of memory, GPU: Nvidia K40)

/home/user: 1 GB available to store your configuration files.
/work/user: 1 TB available as working directory. You have read/write access from any cluster node. Files are automatically deleted if they have not been accessed within the last 120 days (to list them: find directory/ -atime +120; see also the example below).
/save/user: 250 GB available for data you want to keep, with 30-day retention. You have read-only access to this directory from any cluster node.
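For example, a quick check (a minimal sketch using standard du and find; $USER holds your login) of how much of the 1 TB /work quota you are using and which files are candidates for automatic deletion:

# Total size of your working directory (1 TB quota)
du -sh /work/$USER

# Files not accessed for more than 120 days (eligible for automatic deletion)
find /work/$USER -type f -atime +120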


If you need more space in /work or in /save, please fill in the resources request form.


/usr/local/bioinfo/src/: directory gathering all bioinformatics software (see the Software FAQ)
/bank: biological databanks in different formats (see the Databanks FAQ)

Academic account quota: 100,000 h per calendar year.
Beyond these 100,000 hours, you will need to submit a science project (via the resources request form) so that your actual bioinformatics computing needs can be evaluated.

Depending on the results of this evaluation, and on their geographical and institutional origin, users may either continue their computations, be invited to contribute financially to the infrastructure, or be redirected to regional or national computing centers (mésocentres).

Non-academic account quota: 500 h per calendar year, for testing the infrastructure.
Computation beyond this quota will be charged (price on request).

 

To know your quota, use the command:
squota_cpu

Without any parameters, on any queue, all jobs are limited to the following defaults (you can request more at submission time, as shown below):

  • 2 GB (memory)
  • 1 CPU (thread)
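To go beyond these defaults, request resources explicitly in your submission script. A minimal sketch using standard Slurm directives (script contents and command name are illustrative):

#!/bin/bash
# Request 8 GB of memory instead of the 2 GB default
#SBATCH --mem=8G
# Request 4 threads instead of the single default CPU
#SBATCH --cpus-per-task=4
./my_analysis.sh

Submit it with sbatch as usual; the scheduler then enforces the group and user maximums described below.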

Maximum limits depend on the status of your Linux group (contributors, INRA and/or REGION, others).

Max slots       workq (group)   workq (user)   unlimitq (all users sum)   unlimitq (user)
Contributors    5036            768            500                        125
INRA/Region     3780            576            500                        94
Others          1258            192            500                        31

To know the status and the limits of your account:

saccount_info login (see the "Status of your Linux primary group in Slurm" field)

Memory limits also depend on the status of your Linux group (contributors, INRA and/or REGION, others).

 

Max mem         workq (group)   workq (user)   unlimitq (all users sum)   unlimitq (user)
Contributors    27.5T           3T             2T                         500G
INRA/Region     20T             2T             2T                         376G
Others          7T              768G           2T                         124G


Max jobs per user         2500
Max jobs for all users    10000
Max task array per job    1001
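The task-array ceiling means a single job can contain at most 1001 array tasks. A minimal sketch of a maximal array submission (script and command names are illustrative):

#!/bin/bash
# Indices 0-1000: 1001 tasks, the per-job maximum
#SBATCH --array=0-1000
# Each task processes its own chunk, identified by the standard Slurm variable
./process_chunk.sh "$SLURM_ARRAY_TASK_ID"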

Useful account information

The saccount_info login command gives you useful information about your account (see the example after this list), such as:

  • account expiration date and last password change date (every year)
  • your primary Linux group
  • your secondary Linux groups if you have some
  • status of your Linux primary group in Slurm (contributors, inraregion or others)
  • your groups' members
  • some Slurm limitations of your account
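For example, a minimal invocation ($USER holds your login); the "Status of your Linux primary group in Slurm" field appears in its output:

saccount_info $USER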

squeue long format

sq_long: verbose squeue with details for JOBID, NAME, USER, QOS, PARTITION, NODES, CPUS, MIN_MEMORY, TIME_LIMIT, TIME_LEFT, STATE, NODELIST and REASON. Accepts all squeue options; see the --help option for help.

sq_debug: verbose squeue for debugging; shows the COMMAND and WORKDIR of each job. Accepts all squeue options; see the --help option for help.

sq_run: sq_debug restricted to running jobs. Accepts all squeue options; see the --help option for help.

sq_pend: sq_debug restricted to pending jobs. Accepts all squeue options; see the --help option for help.
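Since these wrappers accept any squeue option, the usual filters apply. A few illustrative invocations using standard squeue flags:

# Your own jobs, long format
sq_long -u $USER

# Pending jobs on the workq partition, with COMMAND and WORKDIR
sq_pend -p workq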

sacct long format

sa_debug: verbose sacct. Accepts all sacct options; see the --help option for help.
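For example (illustrative date; -S and -u are standard sacct options):

# Verbose accounting for your jobs since January 1st, 2024
sa_debug -S 2024-01-01 -u $USER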

SGE Cluster (SMP node only)

1 SMP node: 240 cores & 3 TB of memory

/home/user: 1 GB available to store your configuration files.
/work/user: 1 TB available as working directory. You have read/write access from any cluster node. Files are automatically deleted if they have not been accessed within the last 120 days (to list them: find directory/ -atime +120).
/save/user: 250 GB available for data you want to keep, with 30-day retention. You have read-only access to this directory from any cluster node.


If you need more space in /work or in /save, please fill in the resources request form.


/usr/local/bioinfo/src/: directory gathering all bioinformatics software (see the Software FAQ)
/bank: biological databanks in different formats (see the Databanks FAQ)

Academic account quota: 100,000 h per calendar year.
Beyond these 100,000 hours, you will need to submit a science project (via the resources request form) so that your actual bioinformatics computing needs can be evaluated.

Depending on the results of this evaluation, and on their geographical and institutional origin, users may either continue their computations, be invited to contribute financially to the infrastructure, or be redirected to regional or national computing centers (mésocentres).

Non-academic account quota: 500 h per calendar year, for testing the infrastructure.
Computation beyond this quota will be charged (price on request).


To know your quota, use the command:
qquota_cpu login

Without any parameters, on any queue, all jobs are limited to the following defaults (you can request more at submission time, as shown below):

  • mem=1GB (memory)
  • h_vmem=8GB (virtual memory)
  • 1 CPU (slot)
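To go beyond these defaults, request the resources at submission time. A minimal sketch using standard qsub syntax with the mem and h_vmem resources named above (script name is illustrative):

# Reserve 8 GB (mem) with a 10 GB hard virtual-memory limit (h_vmem)
qsub -l mem=8G -l h_vmem=10G my_analysis.sh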

These limits depend on the status of your genotoul Linux group (contributors, INRA and/or REGION, others).

Max slots       workq (group)   workq (user)   unlimitq (group)   unlimitq (user)
Contributors    2678            669            442                110
INRA/Region     2121            530            330                82
Others          707             176            20                 5

Memory limits also depend on the status of your genotoul Linux group (contributors, INRA and/or REGION, others).

 

Max mem in G    workq (group)   workq (user)   unlimitq (group)   unlimitq (user)
Contributors    2652            660            -                  -
INRA/Region     1980            492            -                  -
Others          120             30             -                  -

To show all rules:

qconf -srqs

To show a specific rule (example):

qconf -srqs max_slots_peruser_unlimitq
{
   name         max_slots_peruser_unlimitq
   description  Max slots per user for jobs submitted to unlimitq
   enabled      TRUE
   limit        users {*} queues unlimitq to slots=64
}