Q: 7
You are managing a Slurm cluster with multiple GPU nodes, each equipped with different types of
GPUs. Some jobs are being allocated GPUs that should be reserved for other purposes, such as
display rendering.
How would you ensure that only the intended GPUs are allocated to jobs?
Options
Discussion
Pretty sure it's A since Slurm relies on gres.conf to control which GPUs are visible for scheduling. Manual steps like nvidia-smi (B) or reinstalling drivers won't restrict allocation; you have to exclude the display GPUs via config. Agree?
Pretty sure A for this one. Official admin docs and practice questions always point to proper gres.conf and slurm.conf setup when you want precise GPU allocation: just list only the compute GPUs and exclude the display ones. Haven't seen nvidia-smi or job script tweaks used for this on recent exams. If anyone's prepping, going through lab configs really helps lock this down.
A tbh. Only config changes in gres.conf/slurm.conf will lock down allocation like this.
A. D is a trap; only gres.conf and slurm.conf config can enforce the right GPU allocation for jobs.
Not D, A. The only way to actually prevent Slurm from touching display GPUs is to exclude them in gres.conf, not job script tweaks or reinstalling drivers.
Probably A here. Similar questions on official practice tests refer to correctly setting up gres.conf and slurm.conf so only the right GPUs get scheduled for jobs. Manual tools like nvidia-smi don't control Slurm's GPU selection. The official admin guide explains this pretty well.
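For anyone prepping who wants to see it concretely, here's a minimal sketch of the config approach (the node name gpu01, the a100 type, the device paths, and the CPU/memory values are just assumptions, swap in your own hardware):

  # gres.conf on the GPU node: list only the compute GPUs
  Name=gpu Type=a100 File=/dev/nvidia0
  Name=gpu Type=a100 File=/dev/nvidia1
  # the display GPU (e.g. /dev/nvidia2) is simply not listed, so Slurm never allocates it

  # slurm.conf: advertise exactly those two GPUs for the node
  GresTypes=gpu
  NodeName=gpu01 Gres=gpu:a100:2 CPUs=32 RealMemory=128000

Jobs then request GPUs as usual, e.g. srun --gres=gpu:a100:1 nvidia-smi -L, and can only ever be handed the listed devices. Remember to restart the Slurm daemons after changing either file.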