Display the real resources used by the terminated jobs of *username* starting and ending between February 1, 2021 and March 1, 2021 on the bigmem partition:

```
sacct -u username --partition=bigmem --starttime=2021-02-01 --endtime=2021-03-01 --format=JobID,State,TRESUsageInMax%100 | grep batch
```
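For a quicker overview of a single finished job, a shorter format string can also be used. The sketch below relies on standard `sacct` fields (Elapsed, MaxRSS, TotalCPU) and uses *job_id* as a placeholder for your own job identifier; adapt it as needed:

```
sacct -j job_id --format=JobID,State,Elapsed,MaxRSS,TotalCPU
```

MaxRSS reports the peak resident memory of each step and TotalCPU the cumulated CPU time, which gives a first estimate of whether the requested resources were actually used.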
## Connect to nodes during batch jobs

Direct connection to the compute nodes is forbidden on the Pyrene cluster. The only way to access the resources allocated to a job is to start a new job step inside it with the `srun` command, which connects you within the targeted job. Assuming you have submitted a batch job whose identifier is _job_id_:
```
$ sbatch script.sh
Submitted batch job job_id
```
Simply use the _job_id_ allocated by Slurm to connect to the corresponding node:
```
$ srun --pty --jobid job_id /bin/bash -i
```
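If the job runs on more than one node, a specific node of the allocation can also be targeted. This is only a sketch assuming a node named *node_name* belongs to the job (the allocated nodes can be listed with `squeue -j job_id`); the `-w` option restricts the interactive step to that node:

```
$ srun --pty --jobid job_id -w node_name /bin/bash -i
```

Exiting the shell ends the interactive step but does not cancel the running batch job.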