
Fix bug in classification of transformers #972

Merged
merged 9 commits into from
Apr 17, 2017

Conversation

@drvinceknight (Member) commented Apr 14, 2017

Our transformers were not reclassifying the strategies (in most cases we weren't attempting to; in one case we were, but it was untested and not working). See #971

This implements the ability to pass a reclassifier function to the strategy transformer factory so that we can reclassify based on input parameters (for example if using the Noisy transformer but passing noise of 0 or 1 then the classifier does not change).

I've also written such reclassifiers for all relevant transformers. EDIT: Technically the Mixed strategy transformer isn't as smart as it could be, but I suggest this gets merged under the bug fix rule and an issue gets opened for that transformer. (I had set it to simply say that it makes strategies stochastic, which is what we want for any sensible use case.) I've done that one too now.

Closes #971
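In outline, the mechanism described above amounts to the following. This is a minimal sketch with illustrative names (`apply_transformer` is not the library's actual factory); the Noisy example matches the behaviour described, where noise of exactly 0 or 1 leaves the classifier untouched.

```python
def apply_transformer(PlayerClass, wrapper, reclassifier=None, **kwargs):
    """Build a transformed player class, optionally updating its classifier.

    The wrapper application itself is omitted here; the point is that the
    classifier is copied (so the original class is never mutated) and then
    passed through the reclassifier with the transformer's arguments.
    """
    classifier = dict(PlayerClass.classifier)
    if reclassifier is not None:
        classifier = reclassifier(classifier, **kwargs)

    class TransformedPlayer(PlayerClass):
        pass

    TransformedPlayer.classifier = classifier
    return TransformedPlayer


def noisy_reclassifier(original_classifier, noise):
    # Noise strictly between 0 and 1 introduces randomness;
    # noise of exactly 0 or 1 leaves the player deterministic.
    if noise not in (0, 1):
        original_classifier["stochastic"] = True
    return original_classifier
```

With `noise=0.05` the transformed class is flagged stochastic; with `noise=0` it keeps the original classification, and the untransformed class is unchanged either way.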

    if sum(probability) > 1:
        probability = tuple([i / sum(probability) for i in probability])

    if probability[0] in [0, 1]:
Contributor:

will this fail on probability=(0.0, 0.5) or the very useless (0.0, 0.0)?

Member Author:

I don't think so because of the normalisation step that happens before. I'm fixing some remaining failures and will take a look at the (0, 0) case! 👍

Member Author:

You were right, that did fall over on those cases :) In 149e2e6 I've added more tests and a fix.
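The actual fix in 149e2e6 isn't shown on this page; a hypothetical sketch of the kind of check that handles both of those inputs (the name and exact rules here are illustrative, not the merged code):

```python
def mixed_is_stochastic(probability):
    """A mixed choice is stochastic unless no weight is given at all,
    or all of the (normalised) weight lands on a single strategy."""
    total = sum(probability)
    if total == 0:     # e.g. (0.0, 0.0): no strategy is ever chosen
        return False
    if total > 1:      # over-specified weights: normalise first
        probability = tuple(p / total for p in probability)
    return 1 not in probability  # (0.0, 0.5) is stochastic; (0, 1) is not
```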

- Final transformer reclassification was not tested (or working).
- Random transformers did not update the stochastic classification.

Implement a reclassifier option that can be passed to the Transformer
factory. This allows for smart reclassification of strategies based on
the input parameters.

- Also added a test to ensure that the original class is not touched.
@marcharper (Member):
Do we have a test that catches the original issue with cloning these players?

@@ -349,6 +420,7 @@ def test_mixed(self):
        probability = (0, 0, 1)  # Note can also pass tuple
        strategies = [axelrod.TitForTat, axelrod.Grudger, axelrod.Defector]
        MD = MixedTransformer(probability, strategies)(axelrod.Cooperator)
        self.assertFalse(MD.classifier["stochastic"])

        p1 = MD()
        # Against a cooperator we see that we only cooperate
Contributor:

I've been reading through this to try to wrap my head around decorators, which drive me up the wall. It has helped, but it's also hard reading. This comment that sneaked in from a copy-paste made me laugh.

Member Author:

Removing that now :)

Member Author:

which drive me up the wall.

Tell me about it! There be dragons... :'(

@drvinceknight (Member Author) commented Apr 15, 2017

Do we have a test that catches the original issue with cloning these players?

I've incorporated a number of tests that check that players are now correctly classified.

This was the original issue (stochastic players were not being recognised as such, and they were throwing off the cache). EDIT: My first guess was cloning, but as I commented later in #971 it's nothing to do with cloning; it was just that we weren't classifying things properly.

I haven't specifically incorporated a test of the exact symptom, but the new tests for the fingerprint (showing that the previous values weren't right) demonstrate that this fixes #971.

@meatballs (Member) left a comment:

Very minor nitpicking stuff from me

@@ -37,6 +39,8 @@ def __call__(self, player, opponent, action)
            Any keyword arguments to pass to the wrapper
        name_prefix: string, "Transformed "
            A string to prepend to the strategy and class name
        reclassifier: function,
            A function to use to update the class's classifier.
Member:

Minor quibbles:

  1. Somebody new to this might reasonably ask "What class?"
  2. "to use" is redundant

How about: "A function which will update the classifier of the strategy being transformed"?

@@ -229,9 +242,14 @@ def noisy_wrapper(player, opponent, action, noise=0.05):
        return flip_action(action)
    return action

def noisy_reclassifier(original_classifier, noise):
    """Function to reclassify the strategy"""
    if noise not in [0, 1]:
Member:

Really nitpicky comment: I'd use a tuple here as there is no need for a list

@@ -241,9 +259,15 @@ def forgiver_wrapper(player, opponent, action, p):
        return random_choice(p)
    return C

def forgiver_reclassifier(original_classifier, p):
    """Function to reclassify the strategy"""
    if p not in [0, 1]:
Member:

same nitpicky tuple vs list comment as previous

- Better docstring description.
- Use tuples instead of lists.
@drvinceknight (Member Author):

Have addressed both of those, @meatballs. I still often slip into the bad habit of using lists when I shouldn't... 👍

@meatballs (Member) commented Apr 16, 2017 via email

@marcharper (Member) left a comment:

Just a question on memory_depth for the transformers

@@ -294,9 +318,15 @@ def final_sequence(player, opponent, action, seq):
        return seq[-index]
    return action

def final_reclassifier(original_classifier, seq):
Member:

We probably need to update the memory depth as well (also for initial transformer)

Member Author:

Sure doing that now.
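For example, a sketch of what that update might look like for the Final transformer; treating the memory depth as unbounded via `float("inf")` is an assumption here, not necessarily the merged code:

```python
def final_reclassifier(original_classifier, seq):
    """Playing a fixed sequence at the end of the game requires knowing
    the match length, so the effective memory depth becomes unbounded."""
    if seq:
        original_classifier["memory_depth"] = float("inf")
    return original_classifier
```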

Also added a small extension of a test that ensures strategy
transformers are completely covered by their unit tests (I assume this
was being covered by another test somewhere else).
@drvinceknight (Member Author):

I've added reclassification of the memory depth too.

I think this should go in now under the bug fix rule. If there's anything remaining we can pick it up as other issues (I don't think there is, but there might be other reclassifiers that need to be added; the main bug was caused by the stochastic ones).


Successfully merging this pull request may close these issues.

Bug with strategy transformers
4 participants