Default Resources

SLURM Cluster
  • Genobioinfo cluster:

38 AMD 7713 nodes: n001... n038 (each 128 threads, 2TB of memory)

1 SMP node: bigmem01 (128 threads, 4TB of memory)

1 GPU node: gpu01, INTEL Gold 6338 (128 threads, 1TB of memory)

1 VISU node: visu01 (128 threads, 512GB of memory)

 

  • Genobioinfo cluster:

/home/username: 10GB available to store your configuration files.


/work/user/username: 1TB available as working directory.
RW : Read/Write access from any cluster node.


/save/user/username: 250GB available for data you want to save (replicated, with a 30-day retention).
RO : Read Only access from any cluster node

 

If you need more space in /work or in /save, you are invited to fill in the resources request form.

 

/usr/local/bioinfo/src/: directory gathering all bioinformatics software (see Software FAQ)


/bank: biological banks in different formats (see Databanks FAQ)

To check your disk usage:

On /work:

mmlsquota -u username --block-size G

On /save and /home:

du -csh --apparent-size /save/user/username/ (use /save/user/username/* for details)
du -csh --apparent-size /home/username/ (use /home/username/* for details)
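The commands above can be wrapped in a small helper; a minimal sketch (the `report_space` function name is made up here, and mmlsquota only exists on the cluster nodes):

```shell
#!/bin/sh
# report_space DIR...: apparent disk usage per directory, with a grand total.
# Sketch around the du commands above; not an official cluster tool.
report_space() {
    for dir in "$@"; do
        [ -d "$dir" ] && du -csh --apparent-size "$dir"
    done
}

# /work sits on GPFS, so its quota is read with mmlsquota instead of du
# (the command is only available on cluster nodes, hence the guard):
if command -v mmlsquota >/dev/null 2>&1; then
    mmlsquota -u "$USER" --block-size G
fi

# On the cluster you would then run, e.g.:
#   report_space "/save/user/$USER" "/home/$USER"
```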

Academic account quota: 100,000 h per calendar year.
Beyond these 100,000 hours, you will need to submit a science project (via the resources request form) so that your real needs in the bioinformatics environment can be estimated.

According to the results of this evaluation, but also to their geographical and institutional origin, users can then either continue their computations, be invited to contribute financially to the infrastructure, or be redirected to regional or national computing mesocentres.

Non-academic account quota: 500 h per calendar year, for testing the infrastructure.
Computation beyond this quota will be charged (price on request).
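CPU hours accumulate as allocated CPUs times elapsed walltime, counted against the yearly quota. A sketch of the arithmetic (the job size below is invented for illustration; only the 100,000 h figure comes from the text):

```shell
#!/bin/sh
# CPU-hour accounting sketch: a job costs cpus * walltime_hours,
# deducted from the yearly quota (100000 h for academic accounts).
cpus=16                # hypothetical allocation
walltime_hours=48      # hypothetical 2-day walltime
quota=100000
used=$(( cpus * walltime_hours ))
remaining=$(( quota - used ))
echo "job cost: ${used} h, remaining quota: ${remaining} h"
```

Note that the cost is based on the allocated CPUs, so over-requesting threads burns quota even if the job does not use them.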

 

To know your quota, use the command:
squota_cpu

Without any parameters, on any queue, all jobs are limited to:

  • 2GB (memory)
  • 1 CPU (thread)
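To get more than these defaults, resources must be requested explicitly in the batch script. A hypothetical example (script name and resource values are illustrative, not recommendations):

```shell
#!/bin/sh
# Write a hypothetical batch script that overrides the 1-CPU / 2GB defaults.
job_script=my_job.sh
cat > "$job_script" <<'EOF'
#!/bin/bash
#SBATCH -J my_job         # job name (illustrative)
#SBATCH -p workq          # partition (queue)
#SBATCH -c 8              # 8 threads instead of the default 1
#SBATCH --mem=32G         # 32GB instead of the default 2GB
#SBATCH -t 24:00:00       # walltime, must fit the partition MaxTime

srun my_program           # my_program is a placeholder for your tool
EOF
echo "submit with: sbatch $job_script"
```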

Genobioinfo cluster

The maximum number of slots (threads) depends on the status of your Linux group (contributors, INRAe and/or REGION, others).

Max slots     | workq (group) | workq (user) | unlimitq (all users) | unlimitq (user)
Contributors  | 5184          | 2000         | 780                  | 500
INRAe/Region  | 3888          | 1024         | 780                  | 376
Others        | 1296          | 250          | 780                  | 100

To know the status and the limits of your account:

saccount_info login

Genobioinfo cluster

The maximum total memory depends on the status of your Linux group (contributors, INRAe and/or REGION, others).

 

Max mem       | workq (group) | workq (user) | unlimitq (all users) | unlimitq (user)
Contributors  | 82TB          | 32TB         | -                    | 8TB
INRAe/Region  | 62TB          | 16TB         | -                    | 6TB
Others        | 21TB          | 4TB          | -                    | 2TB

To know the status and the limits of your account:

saccount_info login

Max jobs per user       | 2500
Max jobs for all users  | 10000
Max task array per job  | 2501
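Since a task array is capped per job, a larger array has to be split into several submissions. A sketch of the split arithmetic (the total below is invented; `task.sh` is a placeholder):

```shell
#!/bin/sh
# With "Max task array per job" = 2501, an array of N tasks needs
# ceil(N / 2501) separate array jobs. Sketch of the split.
max=2501
total=10000                               # hypothetical number of tasks
jobs=$(( (total + max - 1) / max ))       # ceiling division
echo "need ${jobs} array jobs"
start=0
while [ "$start" -lt "$total" ]; do
    end=$(( start + max - 1 ))
    [ "$end" -ge "$total" ] && end=$(( total - 1 ))
    echo "sbatch --array=${start}-${end} task.sh   # (illustrative)"
    start=$(( end + 1 ))
done
```

Remember that the 2500 "Max jobs per user" limit also applies, so each array job (not each task) counts toward it.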

Genobioinfo cluster

 

Slurm Partition   | MaxTime
workq, gpuq       | 96H (4 days)
interq            | 12H
unlimitq, wflowq  | 90 days (3 months)
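The table above implies a simple rule of thumb for batch partition choice from the expected runtime; a hedged sketch (the `pick_partition` helper is made up here, and it ignores interq, which is meant for interactive sessions):

```shell
#!/bin/sh
# pick_partition HOURS: suggest a batch partition from the MaxTime table.
pick_partition() {
    hours=$1
    if [ "$hours" -le 96 ]; then
        echo "workq"                 # MaxTime 96H (4 days)
    elif [ "$hours" -le $(( 90 * 24 )) ]; then
        echo "unlimitq"              # MaxTime 90 days (3 months)
    else
        echo "none: split the job or add checkpointing"
    fi
}

pick_partition 48      # fits workq
pick_partition 200     # needs unlimitq
```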

Useful account information

The saccount_info login command will give you some useful information about your account, such as:

  • account expiration date and last password change date (every year)
  • your primary Linux group
  • your secondary Linux groups, if you have any
  • the status of your Linux primary group in Slurm (contributors, inraregion or others)
  • the members of your groups
  • some Slurm limitations of your account

squeue long format

sq_long : verbose squeue, with details for: JOBID, NAME, USER, QOS, PARTITION, NODES, CPUS, MIN_MEMORY, TIME_LIMIT, TIME_LEFT, STATE, NODELIST, REASON. Can take all squeue options. See --help option for help.

sq_debug : squeue verbose for debug. Show COMMAND and WORKDIR for a job. Can take all squeue options. See --help option for help.

sq_run : sq_debug for running jobs. Can take all squeue options. See --help option for help.

sq_pend : sq_debug for pending jobs. Can take all squeue options. See --help option for help.
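These wrappers are local helpers; on any Slurm system a similar column set can be requested from plain squeue with a format string. An approximation of sq_long's columns (not the wrapper's exact definition):

```shell
#!/bin/sh
# Approximate the sq_long column set with standard squeue format specifiers:
# %i jobid, %j name, %u user, %q qos, %P partition, %D nodes, %C cpus,
# %m min memory, %l time limit, %L time left, %T state, %N nodelist, %R reason
fmt="%i %j %u %q %P %D %C %m %l %L %T %N %R"
if command -v squeue >/dev/null 2>&1; then
    squeue -o "$fmt"
else
    echo "squeue not available here; on the cluster run: squeue -o \"$fmt\""
fi
```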

sacct long format

sa_debug : sacct verbose. Can take all sacct options. See --help option for help.