Proposal: admin might reset the expiration time #23
You could use `ws_allocate` in the job script; it even prints out the full workspace path to stdout, so you can capture it right there.
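For reference, the pattern hinted at here might look like this in a batch script (a sketch only; the workspace name, duration, and SBATCH line are made-up examples, not part of the proposal):

```shell
#!/bin/bash
#SBATCH --time=20-00:00:00
# Hypothetical example: ws_allocate prints the workspace path on stdout,
# so the job script can capture it and work entirely inside the workspace.
WS=$(ws_allocate my_run 30)   # name "my_run", 30 days -- placeholder values
cd "$WS" || exit 1
# ...stage input files, run the application, collect output here...
```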
Sure, but copying in the job script would mean accessing /home from the nodes at the runtime of the job. I was told not to access /home from a job on the cluster in question. Hence I prepare all the stuff in a workspace per job beforehand: the job script, input files; then I submit the job from the workspace (essentially the cluster could work without a mounted /home on the nodes). And for such a case, resetting the expiration time might be handy. Nevertheless: even if short access to /home were allowed, I know other clusters where access to /home from the nodes is indeed discouraged. The reason is simply that the usual end users don't fiddle around with the different locations for input files, checkpoint files, scratch files and output files, or with sophisticated scripting. Hence this approach should guarantee staying in the workspace area, since everything has to be prepared from there, while proper scripting could indeed limit the access to /home to just the (often textual) input and output files.
I do not see how the prolog could know which workspace to touch.
If someone wants to use this feature, it would be necessary to record the workspace's name in a comment (SLURM) or job context (SGE) for that particular job, so that a job prolog can use it. A set environment variable would also work.

> each user can do it only once per workspace
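A prolog along those lines could be sketched as follows (purely illustrative: the `ws=<name>` comment convention, the `scontrol` parsing, and the `ws_extend` call are assumptions, not an implemented interface):

```shell
#!/bin/bash
# SLURM prolog sketch: look for a workspace name recorded in the job comment
# under an assumed convention "ws=<name>" and extend that workspace.
ws_name=$(scontrol show job "$SLURM_JOB_ID" \
          | sed -n 's/.*Comment=ws=\([^ ]*\).*/\1/p')
if [ -n "$ws_name" ]; then
    ws_extend "$ws_name" 30   # duration is a placeholder
fi
```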
This proposal is not yet settled. Imagine I prepare a workspace on a cluster. I expect a runtime of 20 days for the job, hence I give 30 days to the `ws_allocate` command. Unfortunately I have to wait 11 days until my job starts. Sure, unless I'm on vacation I could get an email about the expiration and extend it. With an additional option `-q` and some to-be-implemented hook, I could allow a prolog of the queuing system to extend the expiration when the job finally starts, without consuming one of the available extensions. This new expiration date could be actual_date + (expiration_date - creation_date), hence the original allocation time only starts counting down when the job actually starts. As this might be a feature running in the background, the option `-q` would give the user the feedback that there may be something changing the allocation time.
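The proposed recomputation of the expiration date can be illustrated with plain date arithmetic (a sketch assuming GNU `date`; the dates are made up to match the 30-day/11-day scenario above):

```shell
#!/bin/bash
# Workspace created 2024-01-01, expiring 2024-01-31 (a 30-day allocation);
# the job starts 11 days later, on 2024-01-12.
created="2024-01-01"; expires="2024-01-31"; job_start="2024-01-12"

# Original allocation span in days:
span=$(( ( $(date -d "$expires" +%s) - $(date -d "$created" +%s) ) / 86400 ))

# New expiration: job start + original span, so the countdown
# effectively begins only when the job starts.
new_expires=$(date -d "$job_start + $span days" +%F)
echo "$new_expires"    # 2024-02-11
```

With these numbers the full 30-day span is preserved from the job's start date rather than from the allocation date, which is exactly the effect the proposal describes.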