
BalancingLearner: add a "cycle" strategy, sampling the learners one by one #188

Merged
basnijholt merged 3 commits into master from cycle_strategy on Jul 11, 2019

Conversation

basnijholt
Member

This is useful, for example, for the AverageLearner1D and 2D, where sampling based on npoints won't sample all learners equally, because that is not how npoints is defined there.
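For concreteness, a small usage sketch (the function, bounds, and goal below are made up for illustration; only the "cycle" strategy itself comes from this PR):

```python
from functools import partial
import adaptive

def f(x, offset=0):
    return (x - offset) ** 2

# Three toy learners over the same interval; the offsets are arbitrary.
learners = [
    adaptive.Learner1D(partial(f, offset=o), bounds=(-1, 1)) for o in (0.0, 0.1, 0.2)
]

# With strategy="cycle" the BalancingLearner hands each requested point to the
# next learner in round-robin order, regardless of their losses or npoints.
balancer = adaptive.BalancingLearner(learners, strategy="cycle")

# Run in-process until every learner has at least 50 points.
adaptive.runner.simple(balancer, goal=lambda bl: all(l.npoints >= 50 for l in bl.learners))
```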

@@ -176,6 +181,20 @@ def _ask_and_tell_based_on_npoints(self, n):
points, loss_improvements = map(list, zip(*selected))
return points, loss_improvements

def _ask_and_tell_based_on_cycle(self, n):
if not hasattr(self, '_cycle'):
Member Author
This could also just be set in the __init__.
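Since the hunk above is cut off, here is a rough sketch of what a cycle-based ask/tell could look like; only the hasattr guard is shown in this hunk (the itertools.cycle initialization appears in the later hunk), and the loop body is an assumption:

```python
def _ask_and_tell_based_on_cycle(self, n):
    # Lazily create the round-robin iterator over learner indices
    # (this is the line the comment above suggests moving to __init__).
    # Assumes `import itertools` at module level.
    if not hasattr(self, "_cycle"):
        self._cycle = itertools.cycle(range(len(self.learners)))

    points, loss_improvements = [], []
    for _ in range(n):
        index = next(self._cycle)  # next learner in round-robin order
        point, loss_improvement = self.learners[index].ask(n=1)
        # BalancingLearner points are (learner_index, point) pairs.
        points.append((index, point[0]))
        loss_improvements.append(loss_improvement[0])

    return points, loss_improvements
```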

@akhmerov
Contributor

I think this is rather an indication of npoints being inconsistently defined in AverageLearner. We already have a strategy that aims to make the amount of data per learner equal.

If this is the preferable implementation, we should rather remove the other strategy. However, it would make the most sense if npoints were the number of data points.

@basnijholt
Member Author

But cycle does something different from npoints: you might, for example, run a BalancingLearner with the loss_improvements strategy and then just keep increasing the number of points per learner with cycle, instead of making the number of points equal for all learners using npoints.
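For concreteness, a sketch of that two-phase workflow (learners and goals are hypothetical; it assumes strategy can be reassigned on a live BalancingLearner, which is what the setter change below enables):

```python
from functools import partial
import adaptive

def f(x, offset=0):
    return (x - offset) ** 2

learners = [
    adaptive.Learner1D(partial(f, offset=o), bounds=(-1, 1)) for o in (0.0, 0.1, 0.2)
]

# Phase 1: balance on loss improvements until the overall loss is small.
balancer = adaptive.BalancingLearner(learners, strategy="loss_improvements")
adaptive.runner.simple(balancer, goal=lambda bl: bl.loss() < 0.01)

# Phase 2: switch to "cycle" and simply add more points to each learner in
# turn, without forcing npoints to be equal across learners.
balancer.strategy = "cycle"
adaptive.runner.simple(
    balancer, goal=lambda bl: all(l.npoints >= 200 for l in bl.learners)
)
```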

@akhmerov
Contributor

I see, that's different indeed. Is that a behavior we would want to support?

It is also different from essentially everything else in adaptive: it tells the learner what to do, not what goal to reach.

@jbweston jbweston (Contributor) left a comment


This strategy is simple enough that I think we can just add it, subject to the minor modifications I requested.

adaptive/learner/balancing_learner.py: two review comments, resolved (one outdated)
@@ -173,6 +178,20 @@ def _ask_and_tell_based_on_npoints(self, n):
points, loss_improvements = map(list, zip(*selected))
return points, loss_improvements

def _ask_and_tell_based_on_cycle(self, n):
if not hasattr(self, "_cycle"):
self._cycle = itertools.cycle(range(len(self.learners)))
Contributor

do this in __init__ as I commented above.

Member Author

But you wrote to do it in the strategy property setter?

@basnijholt
Member Author

@jbweston I think I've done what you suggested (up to #188 (comment)).

Merge if you like.

@@ -107,10 +110,13 @@ def strategy(self, strategy):
self._ask_and_tell = self._ask_and_tell_based_on_loss
elif strategy == "npoints":
self._ask_and_tell = self._ask_and_tell_based_on_npoints
elif strategy == "cycle":
self._ask_and_tell = self._ask_and_tell_based_on_cycle
self._cycle = itertools.cycle(range(len(self.learners)))
Contributor

Now the cycle will be reset every time the strategy is set dynamically. I'm not sure what the best thing to do here is.

Member Author

That might be the intention? We could also just put it in __init__...
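A toy sketch (not the real BalancingLearner) of the __init__ alternative, showing that creating the cycle once preserves the round-robin position across strategy switches:

```python
import itertools

class CycleDemo:
    """Minimal stand-in to illustrate where the cycle is created."""

    def __init__(self, learners, strategy="loss_improvements"):
        self.learners = learners
        # Created once here: switching strategies back and forth later keeps
        # the round-robin position instead of restarting at index 0.
        self._cycle = itertools.cycle(range(len(self.learners)))
        self.strategy = strategy

    @property
    def strategy(self):
        return self._strategy

    @strategy.setter
    def strategy(self, strategy):
        # If `self._cycle = itertools.cycle(...)` lived here (as in the diff),
        # every dynamic strategy change would reset the cycle.
        self._strategy = strategy

    def next_index(self):
        return next(self._cycle)

demo = CycleDemo(["a", "b", "c"], strategy="cycle")
print([demo.next_index() for _ in range(4)])  # [0, 1, 2, 0]
demo.strategy = "npoints"
demo.strategy = "cycle"
print(demo.next_index())  # 1 -- the position survives the strategy switch
```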

@basnijholt basnijholt requested a review from jbweston July 10, 2019 16:33
@jbweston jbweston (Contributor) left a comment


LGTM

@basnijholt basnijholt merged commit 8b4b583 into master Jul 11, 2019
@basnijholt basnijholt deleted the cycle_strategy branch July 11, 2019 16:46
@basnijholt basnijholt mentioned this pull request Aug 31, 2019