How can I request an interactive job on TinyGPU?
Interactive Slurm Shell (RTX2080Ti, RTX3080, V100 and A100 nodes only)
To start an interactive Slurm shell on one of the compute nodes, issue the following command on the woody frontend:
salloc.tinygpu --gres=gpu:1 --time=00:30:00
This will give you an interactive shell for 30 minutes on one of the nodes, allocating 1 GPU and the corresponding number of CPU cores. There you can, for example, compile your code or do test runs of your binary. For MPI-parallel binaries, use srun instead of mpirun.
Please note that salloc automatically exports the environment of your shell on the login node to your interactive job. This can cause problems if you have loaded any modules, due to version differences between the woody frontend and the TinyGPU compute nodes. To avoid this, purge all loaded modules via module purge before issuing the salloc command.
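For example, a clean session could be prepared as follows (the module name shown is only a placeholder; load whatever your application actually needs once you are on the compute node):

module purge
salloc.tinygpu --gres=gpu:1 --time=00:30:00
# on the compute node, reload only the modules you need, e.g.:
module load cuda    # placeholder module name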
This and more information can be found in our documentation about TinyGPU.