
Break up dynamic dataset collection into chunks #11697

Merged
merged 6 commits into galaxyproject:dev on Mar 23, 2021

Conversation

mvdbeek
Member

@mvdbeek mvdbeek commented Mar 22, 2021

One drawback here is that if multiple jobs are concurrently creating datasets in a history the HID is not guaranteed to be sequential between chunks. We can address this, but this is also a problem when breaking workflow executions at maximum_workflow_jobs_per_scheduling_iteration.
If we consider this to be a problem (personally I don't think it is, but I could be convinced otherwise) I would work on this in a different PR.

On the upside the history panel will display an increasing amount of datasets created during the dataset discovery.
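The chunking idea behind this PR can be sketched roughly as follows. This is a minimal illustration, not Galaxy's actual discovery code; `chunked`, `discover_in_chunks`, `persist`, and `chunk_size` are hypothetical names chosen for the sketch:

```python
from itertools import islice


def chunked(iterable, chunk_size):
    """Yield successive lists of at most chunk_size items."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, chunk_size))
        if not chunk:
            return
        yield chunk


def discover_in_chunks(discovered_files, persist, chunk_size=1000):
    """Persist discovered datasets chunk by chunk, so the history
    panel can show an increasing number of datasets instead of
    waiting for the whole discovery to finish."""
    for chunk in chunked(discovered_files, chunk_size):
        persist(chunk)  # e.g. flush one batch of datasets to the database
```

Because each chunk is flushed independently, another job writing to the same history between two flushes can claim HIDs in between, which is exactly the non-sequential-HID drawback described above.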

What did you do?

  • Break up dynamic dataset collection into chunks

Why did you make this change?

Addresses #11488.

How to test the changes?


@jmchilton
Member

The sequential HID thing is a problem for newer users and smaller workflows. At this scale I think it doesn't really matter, and even if it does, scaling up is more important.

@mvdbeek mvdbeek force-pushed the split_discovery_into_chunks branch from 2989ce3 to f652c4a Compare March 22, 2021 14:11
Which breaks down for nested collections ...
If we break between the forward and reverse read we would fail validation.
I guess if we want to do validation we should do it at a level before
building the collection?
@mvdbeek mvdbeek force-pushed the split_discovery_into_chunks branch from 968cba0 to 88074d5 Compare March 23, 2021 09:32
And remove the unused set_collection_elements method in DatasetCollectionManager
@mvdbeek mvdbeek force-pushed the split_discovery_into_chunks branch from f505e94 to 98f49a8 Compare March 23, 2021 09:58
```
In [1]: from galaxy.util.oset import OrderedSet

In [2]: d = {1: 2, 3:4, 5:6, 0:1}

In [3]: d.keys() - {3, 5}
Out[3]: {0, 1}

In [4]: OrderedSet(d.keys()) - {3, 5}
Out[4]: OrderedSet([1, 0])
```
The dictionary's dict_keys object does not behave like an ordered set.
@mvdbeek mvdbeek marked this pull request as ready for review March 23, 2021 12:00
@github-actions github-actions bot added this to the 21.05 milestone Mar 23, 2021
@jmchilton
Member

Thanks!

@jmchilton jmchilton merged commit ca0b9bf into galaxyproject:dev Mar 23, 2021
@github-actions

This PR was merged without a "kind/" label, please correct.
