
Conversation

BorisYourich (Contributor)

Configuration to run jobs on the TESP-API server.

 assign: ['db-skip-locked']
 execution:
-  default: tpv_dispatcher
+  default: tesp_env

Member

This changes the executor for everything; we want to send only specific users to TESP (e.g., myself). We need to change the TPV configuration to achieve this.
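
For context, a minimal sketch of what keeping TPV as the entry point could look like in job_conf.yml; the tpv_dispatcher environment settings below follow the usual TPV setup and are assumptions, not taken from this PR:

execution:
  default: tpv_dispatcher              # keep TPV as the default entry point
  environments:
    tpv_dispatcher:
      runner: dynamic
      type: python
      function: map_tool_to_destination
      rules_module: tpv.rules
      tpv_config_files:
        - config/tpv_rules_local.yml   # per-instance rules decide who goes to TES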

BorisYourich (Contributor, Author)

Would you be able to configure that for the usegalaxy.cz testing?

Member

Yes, the instances have independent TPV config files; look at https://github.com/CESNET/usegalaxy/blob/main/files/usegalaxy.cz/tpv_rules_local.yml

require:
  - tes
destinations:
  - id: tesp_env

Collaborator

IMHO, this couldn't work. AFAIK, you cannot use outside-TPV destinations (now called environments) inside TPV. You need to specify an inside-TPV environment that requires a specific tag (e.g., tesp) and then use this tag to limit it to particular users, tools, etc.
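
A sketch of how that could look in tpv_rules_local.yml; the tag name tesp, the e-mail, and the runner id are placeholders, not values from this PR:

users:
  boris@example.org:        # placeholder e-mail; only listed users require the tag
    scheduling:
      require:
        - tesp

destinations:
  tesp_env:
    runner: tes             # assumed id of the TESP-API job runner
    scheduling:
      require:
        - tesp              # this destination only matches entities requiring 'tesp'

Since no other destination carries the tesp tag, the listed users can only schedule on tesp_env, while everyone else keeps the existing defaults.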

@martindemko (Collaborator)

Not quite sure about the TES-specific parameters, but the structure should be fine now.

32346539383333363333333661663232313663626433346232623837643231356234653834323261
30326639666163313530616363316135653565663639396631373863613532356234613237656537
62383831663335633865623933636230353966653266643939646164646538333664
35346536366566333666616433613161373130313134623465633832376438343965363264393036

Member

What is being changed in the vault?

remote_metadata: false
rewrite_parameters: true
outputs_to_working_directory: false
submit_native_specification: "-l select=1:ncpus={int(cores)}:mem={int(mem)}gb:scratch_local={int(scratch)}gb -l walltime={int(walltime)}:00:00 -q {{ pulsar.pbs_queue }} -N {{ pulsar.nfs_prefix }}_j{job.id}__{tool.id if '/' not in tool.id else tool.id.split('/')[-2]+'_v'+tool.id.split('/')[-1]}__{user.username if user and hasattr(user, 'username') else 'anonymous'}"
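
For illustration, with hypothetical values cores=4, mem=16, scratch=10, walltime=24, pbs_queue=galaxy_prod, nfs_prefix=cz, job id 12345, a tool id ending in .../hisat2/2.2.1, and username boris, the template would render roughly as:

-l select=1:ncpus=4:mem=16gb:scratch_local=10gb -l walltime=24:00:00 -q galaxy_prod -N cz_j12345__hisat2_v2.2.1__boris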

Collaborator

In this line of code, it is very important to understand that the user and the PBS queue used to submit jobs are set in the playbook individually for each Galaxy server, and they are tied to the dual-Pulsar playbook configuration. In other words, if the TES Pulsar runs under a different user than the one configured here, it will likely not work; these two users must be the same. Therefore we need to know which user is submitting jobs to PBS.
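
A sketch of the per-server playbook variables this refers to; only pulsar.pbs_queue and pulsar.nfs_prefix actually appear in the template above, and the user key is an assumed name for illustration:

pulsar:
  pbs_queue: galaxy_prod   # queue used in submit_native_specification above
  nfs_prefix: cz           # job-name prefix used above
  user: galaxy_pbs         # assumed: must equal the OS user the TES Pulsar runs under,
                           # since PBS sees this account as the job submitter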
