As xterm and NoMachine are not aware of the cgroup, the processes running inside them are not subject to the resource restrictions.
This does not apply to VNC because vncserver is run from within the srun session.
For xterm, one workaround could be to enable X11 forwarding on the SSH connection from which we trigger srun, and to add --x11 to the srun command.
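A minimal sketch of that workaround, assuming a SLURM build with X11 support and using "frontal" as a placeholder host name:

```shell
# 1. Open the SSH connection to the front-end node with X11 forwarding.
ssh -X user@frontal

# 2. On the front-end node, add --x11 to the srun call so the xterm
#    runs inside the allocation (and thus inside its cgroup limits).
srun --exclusive --x11 xterm
```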
For NoMachine, I do not know of any workaround yet. At the very least, we should deactivate the NoMachine action when we do not run with an "exclusive" allocation.
One, probably better, way to solve this problem is to allocate with salloc rather than srun, and then launch commands within the allocation using srun:
we ssh -X to the front-end node, then allocate with salloc -N 1 --x11;
in another SSH session, we can trigger a command requiring X11 as: ssh frontal "SLURM_JOB_ID=4117 srun xeyes", where, in this example, 4117 is the job id returned by salloc.
Note that the allocation is probably released as soon as the SSH session holding the salloc is closed. Running the salloc inside a screen session might keep it alive.
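The steps above can be sketched as follows ("frontal" is a placeholder host name, and 4117 stands for whatever job id salloc actually prints):

```shell
# Session 1: keep the allocation alive inside screen, so closing
# the SSH connection does not release it.
ssh -X user@frontal
screen -S slurm-alloc
salloc -N 1 --x11          # note the job id it prints, e.g. 4117

# Session 2: run an X11 command inside the existing allocation by
# pointing SLURM_JOB_ID at it before calling srun.
ssh -X user@frontal 'SLURM_JOB_ID=4117 srun xeyes'
```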