TESP-API configuration #230
group_vars/galaxyservers.yml (outdated)
```diff
   assign: ['db-skip-locked']
 execution:
-  default: tpv_dispatcher
+  default: tesp_env
```
This changes the executor for everything. We want to send only specific users (e.g., myself) to TESP, so we need to change the TPV configuration to achieve this.
Would you be able to configure that for testing on usegalaxy.cz?
Yes, the instances have independent tpv config files, look at https://github.com/CESNET/usegalaxy/blob/main/files/usegalaxy.cz/tpv_rules_local.yml
```yaml
require:
  - tes
destinations:
  - id: tesp_env
```
IMHO, this can't work. AFAIK, you cannot reference destinations defined outside TPV (now called environments) from inside TPV. You need to define an environment inside TPV that requires a specific tag (e.g., tesp), and then use that tag to limit which users, tools, etc. are routed to it.
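A minimal sketch of what that could look like in tpv_rules_local.yml, assuming the environment name tesp_env, a tag named tesp, and a placeholder user address (all illustrative, and TES-specific destination parameters are omitted):

```yaml
# Sketch only: define the environment inside TPV and gate it with a tag.
destinations:
  tesp_env:
    runner: tes            # assumes a 'tes' runner exists in the job configuration
    scheduling:
      require:
        - tesp             # only entities carrying this tag can schedule here

# Route a single (hypothetical) user to TESP by giving them the same tag.
users:
  someone@example.org:
    scheduling:
      require:
        - tesp
```

With the tag required on both sides, only the tagged users schedule to tesp_env, and everyone else keeps their current routing.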
Not quite sure about the TES-specific parameters, but the structure should be fine now.
```
32346539383333363333333661663232313663626433346232623837643231356234653834323261
30326639666163313530616363316135653565663639396631373863613532356234613237656537
62383831663335633865623933636230353966653266643939646164646538333664
35346536366566333666616433613161373130313134623465633832376438343965363264393036
```
What is being changed in the vault?
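A sketch of one way to inspect that locally; the vault file path and the vault password file are assumptions, not taken from this PR:

```bash
# Decrypt the old and new revisions of the vaulted file and diff the plaintext.
# group_vars/secrets.yml is a hypothetical path; use the file touched in this PR.
git show origin/main:group_vars/secrets.yml > /tmp/vault_old.yml
ansible-vault view --vault-password-file ~/.vault_pass /tmp/vault_old.yml > /tmp/plain_old.yml
ansible-vault view --vault-password-file ~/.vault_pass group_vars/secrets.yml > /tmp/plain_new.yml
diff /tmp/plain_old.yml /tmp/plain_new.yml
```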
```yaml
remote_metadata: false
rewrite_parameters: true
outputs_to_working_directory: false
submit_native_specification: "-l select=1:ncpus={int(cores)}:mem={int(mem)}gb:scratch_local={int(scratch)}gb -l walltime={int(walltime)}:00:00 -q {{ pulsar.pbs_queue }} -N {{ pulsar.nfs_prefix }}_j{job.id}__{tool.id if '/' not in tool.id else tool.id.split('/')[-2]+'_v'+tool.id.split('/')[-1]}__{user.username if user and hasattr(user, 'username') else 'anonymous'}"
```
In this line of code, it is very important to understand that the user and PBS queue used to submit jobs are set in the playbook individually for each Galaxy server, and that this is tied to the dual-Pulsar playbook configuration. In other words, if the TES Pulsar runs under a different user than the one configured here, it likely will not work; these two users must be the same. Therefore, we need to know which user is submitting jobs to PBS.
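For illustration only: the `{{ ... }}` parts are filled in by Ansible at deploy time, while the single-brace `{ ... }` expressions are evaluated by TPV per job. With assumed values (cores=4, mem=16, scratch=50, walltime=24, pbs_queue=galaxy, nfs_prefix=ug, job id 101, tool id toolshed.g2.bx.psu.edu/repos/devteam/bwa/bwa_mem/0.7.17, user jdoe), the native specification would render roughly as:

```
-l select=1:ncpus=4:mem=16gb:scratch_local=50gb -l walltime=24:00:00 -q galaxy -N ug_j101__bwa_mem_v0.7.17__jdoe
```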
Configuration to run jobs on the TESP-API server.