Display the real resources used by the terminated jobs of *username* that started and ended between 2021-02-01 and 2021-03-01 on the bigmem partition:

```
sacct -u username --partition=bigmem --starttime=2021-02-01 --endtime=2021-03-01 --format=JobID,State,TRESUsageInMax%100 | grep batch
```
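
If you are mostly interested in memory and run time, a shorter field list is easier to read. The following variant is only a sketch using standard `sacct` fields (`Elapsed`, `MaxRSS`) and `--units=G` to report sizes in gigabytes; adapt the field list to your needs:

```
sacct -u username --partition=bigmem --starttime=2021-02-01 --endtime=2021-03-01 --units=G --format=JobID,State,Elapsed,MaxRSS | grep batch
```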

## Connect to nodes during batch jobs

Direct connection to the compute nodes is forbidden on the Pyrene cluster. The only way to access the resources allocated to a job is to launch a job step inside it. To do this, use the `srun` command to connect to the targeted job. Assuming you have submitted a batch job and obtained its _job_id_:

```
$ sbatch script.sh
Submitted batch job job_id
```

Simply use the _job_id_ allocated by Slurm to connect to the corresponding node:

```
$ srun --pty --jobid job_id /bin/bash -i
```
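
Instead of an interactive shell, you can also run a single command as a new step inside the allocation, for example to take a quick look at your processes on the node. This is only a sketch: `username` and `job_id` are placeholders, and the `top` options shown are those of the usual procps `top`:

```
$ srun --jobid job_id top -b -n 1 -u username
```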