Slurm adaptor got invalid key/value pair in output #72
The GridEngine cluster at UMCU has recently been upgraded to Slurm (v19), which will replace GE soon-ish. So I tested the sv-callers workflow, but all Slurm jobs failed (I also tried without the --max-memory arg; see the release notes).

Comments
Perhaps it's a good time to update the Docker images.
@sverhoeven: could you give an estimate of how much time is required to fix this? Thanks.
The ScriptingParser used in the SlurmScheduler class does not know about sections. I've just added a single statement to ignore all lines that do not contain a key/value pair. I'll add some tests and release a version with this fix.
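For illustration, here is a minimal sketch in Java (Xenon's language) of that kind of guard. It is not the actual ScriptingParser code, and it simplifies the real scontrol output, which packs several key/value pairs per line, down to one pair per line:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class KeyValueParser {

    /**
     * Parses lines of the form "key = value" and silently skips any line
     * without a separator, instead of failing with an
     * "invalid key/value pair in output" error.
     */
    public static Map<String, String> parse(String output) {
        Map<String, String> result = new LinkedHashMap<>();
        for (String line : output.split("\\r?\\n")) {
            int sep = line.indexOf('=');
            if (sep < 0) {
                continue; // e.g. a section header: ignore it
            }
            String key = line.substring(0, sep).trim();
            String value = line.substring(sep + 1).trim();
            if (!key.isEmpty()) {
                result.put(key, value);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        String output = String.join("\n",
                "JobId = 42",
                "Some section header", // previously triggered the error
                "JobState = COMPLETED");
        System.out.println(parse(output)); // {JobId=42, JobState=COMPLETED}
    }
}
```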
It's fixed in the jobstatus-bug branch. We would need a Slurm 19 container to do proper testing?
Thanks.
Yes.
The conda xenon-cli 3.0.5beta1 package was just made for non-Linux users (#73). It does not include the fix in the https://github.com/xenon-middleware/xenon/tree/jobstatus-bug branch; it is a build of the Xenon v3.0.4 release.
Hmmm... my (new) unit test does parse the output correctly. I think there may be some version mixup with Xenon somewhere. I'll see if I can find the problem. Update: ah, it seems the fix may be in the jobstatus-bug branch ;-)
I'll clean up the branch and test it with the other (non-Slurm) scripting adaptors. I can then merge it into master and make a new release.
I created a draft PR xenon-middleware/xenon#670 for the jobstatus-bug branch, to see the test failures more easily. |
Hmmm... most of the tests pass, except for one integration test. Apparently the sbatch argument "--workdir" was changed to "--chdir" at some point. Will fix.
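As a hypothetical sketch (not the actual Xenon fix) of how an adaptor could cope with the rename, the flag could be selected from the reported Slurm version; the cutoff of 19 below is an assumption for illustration, since the thread does not say which Slurm release renamed the flag:

```java
import java.util.ArrayList;
import java.util.List;

public class SbatchArguments {

    /**
     * Picks the working-directory flag for the given Slurm major version.
     * The cutoff (19) is an assumption for illustration only.
     */
    static String workingDirectoryFlag(int slurmMajorVersion) {
        return slurmMajorVersion >= 19 ? "--chdir" : "--workdir";
    }

    /** Builds a minimal sbatch command line for a job script. */
    static List<String> buildCommand(int slurmMajorVersion, String workDir, String script) {
        List<String> cmd = new ArrayList<>();
        cmd.add("sbatch");
        cmd.add(workingDirectoryFlag(slurmMajorVersion) + "=" + workDir);
        cmd.add(script);
        return cmd;
    }

    public static void main(String[] args) {
        // prints: [sbatch, --chdir=/home/user/run, job.sh]
        System.out.println(buildCommand(19, "/home/user/run", "job.sh"));
    }
}
```

The short form -D, which predates the rename, may be another way to sidestep the version check entirely.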
Fixed in the 3.1.0 release.
CLI v3.0.5 released on conda with Xenon 3.1.0. Please test.
All works fine with the latest release on Slurm 19. Thanks! |