Welcome to the Bilbo cluster documentation homepage.

Bilbo is the UPPA teaching computer cluster.

[[_TOC_]]

## Resource reservation / Calendar (for teachers)

Bilbo must be booked for practical course sessions.

To view the cluster occupation, see the [calendar](https://partage.univ-pau.fr/home/%20Cluster_de_calcul_Enseignement@univ-pau.fr/Calendar.html?view=week&tz=Europe%2FBrussels).

To book the cluster:

1. Connect to https://partage.univ-pau.fr.
1. Go to "Calendrier" and click on the desired day.
1. Fill in the "Sujet" field.
1. Specify the start and end dates and times of the practical course.
1. Click on "Plus de détails...".
1. Click on "Afficher l'équipement".
1. In "Equipement", enter "Cluster_de_calcul_Enseignement".
1. Click on "Envoyer".

## Hardware characteristics

Bilbo is composed of one front-end machine and four compute nodes. It **does not have** a fast interconnect or a network-shared scratch space.

### Front-end *slurm-ens-frontal*

The Bilbo front-end is a virtual machine dedicated to user connections.

* OS: Debian 10 "Buster".
* Number of cores: 12.
* RAM: 16 GB.

### Compute nodes *slurm-ens[1-4]*

The compute nodes are dedicated to interactive and batch jobs.

* OS: Debian 10 "Buster".
* Node model: PowerEdge C6420.
* Number of cores/node: 28 (two 14-core Intel® Xeon® Gold 5120 processors @ 2.2 GHz).
* RAM/node: 96 GB.

### Disk space

The `quota -s` command provides:

* The user's current disk occupation, in the "space" column.
* The user's quota, in the "quota" column (5 GB per user).

```
username@slurm-ens-frontal:~$ quota -s
Disk quotas for user username (uid XXXX):
Système fichiers   space   quota  limite  sursis  fichiers  quota  limite  sursis
       /dev/vda1    281M   4883M   4893M             4110       0       0
```
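
To find out which directories use up the quota, the standard `du` command can help. A minimal sketch (the paths are examples, and `sort -h` assumes GNU coreutils, as shipped with Debian):

```shell
# Show the five largest entries in the home directory, human-readable,
# sorted by size (2>/dev/null hides permission warnings):
du -sh ~/* 2>/dev/null | sort -h | tail -n 5
```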

## Connection

Open a terminal and use one of the following commands:

```
# Without graphical redirection:
ssh username@bilbo.univ-pau.fr

# With graphical redirection:
ssh -Y username@bilbo.univ-pau.fr
```

where *username* is your UPPA login. Then enter your UPPA password.

It is also possible to open a graphical desktop on Bilbo with X2Go. To do so, use the [same procedure as on Pyrene](https://git.univ-pau.fr/num-as/pyrene-cluster/-/wikis/1-Environment/1.2-Connection-to-Pyrene#graphical-mode-windows-linux-macos-x2go), but replace the host name _pyrene.univ-pau.fr_ with **_bilbo.univ-pau.fr_**.

## Interactive jobs

Once connected to the cluster, use the following commands (example given for the GaussView software):

```
salloc                      # allocate resources on a compute node
module avail                # list the available modules
module purge                # clean the module environment
module load gaussview/6.0   # load the GaussView module
module list                 # check the loaded modules
gaussview &                 # launch GaussView in the background
exit                        # release the allocation when done
```

## Batch jobs

Slurm job scripts contain two parts:

1. Slurm directives: lines starting with `#SBATCH`, which specify Slurm options.
1. Unix directives: the commands your job executes, such as loading modules, launching an executable program, etc.
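
As a sketch of these two parts, a minimal job script could look as follows. The job name, task count, time limit, and final command are placeholder values; real scripts on Bilbo would load the relevant modules, as in the linked examples:

```shell
#!/bin/bash
# --- Slurm directives: read by sbatch, ignored by the shell ---
#SBATCH --job-name=example      # job name shown by squeue
#SBATCH --ntasks=1              # a single task (sequential job)
#SBATCH --time=00:10:00         # wall-clock time limit

# --- Unix directives: commands executed on the compute node ---
echo "Job running on $(hostname)"
```

Submitted with `sbatch`, the Slurm directives set the resource request, and the Unix part runs once the allocation starts.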

Here are some self-explanatory job script examples for several software applications on Bilbo:

* [Sequential job](https://git.univ-pau.fr/num-as/bilbo-cluster/blob/master/jobs/sequential/sseq.sh).
* [Parallel, distributed-memory (MPI with OpenMPI) job](https://git.univ-pau.fr/num-as/bilbo-cluster/blob/master/jobs/openmpi/spar_distrib_openmpi.sh).
* [Gaussian 16 job](https://git.univ-pau.fr/num-as/bilbo-cluster/blob/master/jobs/gaussian/sg16.sh).
* [VASP job](https://git.univ-pau.fr/num-as/bilbo-cluster/blob/master/jobs/vasp/svasp.sh).
* [CRYSTAL job](https://git.univ-pau.fr/num-as/bilbo-cluster/blob/master/jobs/crystal/scrystal.sh).

Here are the usual Slurm user commands:

Submit a job:

```
sbatch script.sh
```
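
On submission, `sbatch` replies with a line such as `Submitted batch job 12345`; the `--parsable` option makes it print only the job id, which is handy in scripts. A sketch of extracting the id from the default output (the id `12345` is an illustrative value):

```shell
# Example sbatch reply (illustrative value):
out="Submitted batch job 12345"
jobid=${out##* }      # keep everything after the last space
echo "$jobid"         # prints: 12345
```

The extracted id can then be passed to `scancel` or `scontrol show job`.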

Cancel the *job_id* job (*job_id* is the number assigned by Slurm to identify the job):

```
scancel job_id
```

Display the jobs in the waiting queue:

```
squeue
```

Display the jobs of user *username* in the waiting queue:

```
squeue -u username
```

Display detailed information on a running or recently completed job:

```
scontrol show job job_id
```

Display the current state of the compute nodes:

```
sinfo
```

## Kill X2Go sessions (for teachers)

Teachers who have asked for sudo rights can kill broken X2Go sessions using the following commands:

```
# List the names NNNN of all current X2Go sessions (sorted):
sudo /usr/sbin/x2golistsessions_root | cut -d '|' -f 2 | sort

# Kill the NNNN session:
sudo /usr/bin/x2goterminate-session NNNN
```