Fix capability checking, and refactor/add lots of comments #959

Merged: 7 commits merged into main from fix/issue-397 on Jan 5, 2023

Conversation

byorgey (Member) commented Jan 5, 2023

Fixes #397. The only way I could understand this in order to fix it was to totally refactor the code and add lots of comments as I went. I feel like this is some of the most difficult code to wrap one's head around in the codebase. Hopefully now it's a bit easier to understand (though still not easy, I imagine).

ignoreOK (ds, _miss) = (ds, [])

(deviceSets, missingDeviceSets) =
Lens.over both (nubOrd . map S.fromList) . unzip $
byorgey (Member, Author) commented on this hunk:

The bug was that we did a nubOrd here, which could change the length of the list...


formatDevices = T.intercalate " or " . map (^. entityName) . S.toList
-- capabilities not provided by any device in inventory
missingCaps = S.fromList . map fst . filter (null . snd) $ zip caps deviceSets
byorgey (Member, Author) commented on this hunk:

... but here we zipped that possibly-shorter list with caps, which started out having the same length. Once repeated device sets had been nubbed away, the two lists were no longer aligned, which sometimes caused the wrong devices to be suggested for an unrelated missing capability.
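
To make the misalignment concrete, here is a minimal, self-contained sketch (with made-up capability and device names, not the actual Swarm code) of how nubbing the device list before the zip pairs capabilities with the wrong device sets:

```haskell
import Data.Containers.ListUtils (nubOrd)

-- Hypothetical data: each capability is paired (by position) with the devices providing it.
caps :: [String]
caps = ["move", "grab", "scan"]

deviceSets :: [[String]]
deviceSets = [["treads"], ["treads"], []] -- "treads" provides both move and grab; nothing provides scan

-- Buggy order of operations: nubOrd shrinks the list, so the zip misaligns.
buggy :: [(String, [String])]
buggy = zip caps (nubOrd deviceSets)
-- => [("move",["treads"]),("grab",[])]  -- "grab" is wrongly reported missing and "scan" is dropped

-- Safe order: zip while both lists still have the same length.
fixed :: [(String, [String])]
fixed = zip caps deviceSets
-- => [("move",["treads"]),("grab",["treads"]),("scan",[])]
```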

@byorgey byorgey changed the title Fix capability checking, and add lots of comments Fix capability checking, and refactor/add lots of comments Jan 5, 2023
@byorgey byorgey marked this pull request as ready for review January 5, 2023 12:03
@byorgey byorgey requested review from xsebek and kostmo January 5, 2023 12:03
xsebek (Member) left a comment:

The detailed comments are great! 👍

We should really split the Step module, it just keeps growing. 😅

EDIT:

Honestly not sure how to write automated tests for this, any ideas?

I would add a test scenario with several entities providing the same capabilities that would trigger the wrong behaviour in the original version.

byorgey (Member, Author) commented Jan 5, 2023:

> We should really split the Step module, it just keeps growing. 😅

For sure!

EDIT: fortunately someone has already written a nice description of how we might do this! #707 😄

> Honestly not sure how to write automated tests for this, any ideas?

> I would add a test scenario with several entities providing the same capabilities that would trigger the wrong behaviour in the original version.

Sure, good idea.

byorgey (Member, Author) commented Jan 5, 2023:

> I would add a test scenario with several entities providing the same capabilities that would trigger the wrong behaviour in the original version.

Oh, now I remember why this is tricky. The problem is that the bug is a wrong error message. In other words this issue only triggers in a scenario where we should get an error message, but we just get the wrong one. There's no way to make a scenario that would fail without this fix but succeed with it. To see whether the fix worked you have to actually inspect the text of the error message.

@xsebek do you know how to inspect the text of generated exceptions as part of an integration test? I vaguely seem to recall that you have done that kind of thing before.

EDIT: oh, I think I figured it out: we can use testSolution' and then write an arbitrary predicate on the resulting GameState, so we can e.g. look in the messageLog.

EDIT 2: well, this is still tricky. In order to get a GameState from testSolution' at all, the scenario needs to complete successfully. So we somehow need to arrange for a scenario which generates an error message into a log somewhere and then completes successfully.

EDIT 3: arggh, I finally figured that out, but now the error we should get is one of the "bad errors" that we check for in the test suite, so I have to figure out how to selectively allow it in this case...

byorgey (Member, Author) commented Jan 5, 2023:

Added a test case and confirmed it fails on current main but passes with this fix! The test might need to be adjusted when #956 is merged, but that's OK.

@byorgey byorgey requested a review from xsebek January 5, 2023 15:33
Two review threads on data/scenarios/Testing/397-wrong-missing.yaml (outdated, resolved)
, testSolution' Default "Testing/397-wrong-missing" AllowBadErrors $ \g -> do
    let msgs =
          (g ^. messageQueue . to seqToTexts)
            <> (g ^.. robotMap . traverse . robotLog . to seqToTexts . traverse)
A reviewer (Member) commented:
Those are some impressive optics. 🤓

byorgey (Member, Author) replied:
Haha, thanks, though I don't think it's that bad: ^.. means to list all the targets of a Traversal instead of just projecting out a single value. traverse just gets all the sub-things. So this means "get the robot map, then get all the robots out of it, and for each one get its log, convert it to a list of texts..." The last traverse is just so we get a single list of texts instead of a list of lists.
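
As a toy illustration of what that trailing traverse does (using plain nested lists in place of the robot map and logs, not the actual Swarm types):

```haskell
import Control.Lens

-- Stand-in for the per-robot logs: a list of logs, each a list of messages.
robotLogs :: [[String]]
robotLogs = [["hello", "world"], ["crash!"]]

-- Without the final traverse, (^..) collects each log as-is: a list of lists.
perRobot :: [[String]]
perRobot = robotLogs ^.. traverse
-- => [["hello","world"],["crash!"]]

-- With one more traverse, every individual message is a target, giving a flat list.
allMessages :: [String]
allMessages = robotLogs ^.. traverse . traverse
-- => ["hello","world","crash!"]
```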

byorgey and others added 2 commits January 5, 2023 09:56
Co-authored-by: Ondřej Šebek <44544735+xsebek@users.noreply.github.com>
@byorgey byorgey added the merge me Trigger the merge process of the Pull request. label Jan 5, 2023
@mergify mergify bot merged commit 629cff3 into main Jan 5, 2023
@mergify mergify bot deleted the fix/issue-397 branch January 5, 2023 16:15
kostmo (Member) commented Jan 5, 2023:

> To see whether the fix worked you have to actually inspect the text of the error message.

I've found that an effective strategy for this kind of issue is to define a global sum type for all possible errors and adopt it as a standard across the codebase. Any relevant information at the error site is captured in the fields of that sum type's constructors, and a separate top-level error-message-rendering function turns those values into the human-readable text.

Then the test suite can pattern match on the error enum for circumstances where a certain error is expected to be produced.
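
A rough sketch of that pattern (the type, constructor, and field names here are invented for illustration; they are not Swarm's actual error types):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)
import qualified Data.Text as T

-- Hypothetical structured error type: one constructor per failure mode,
-- carrying whatever data was available at the error site.
data GameError
  = MissingDevice {neededCap :: Text, candidateDevices :: [Text]}
  | RobotOutOfFuel {robotName :: Text}
  deriving (Eq, Show)

-- A single top-level renderer produces the human-readable message.
renderError :: GameError -> Text
renderError (MissingDevice cap ds) =
  "You need a device providing " <> cap <> "; try one of: " <> T.intercalate " or " ds
renderError (RobotOutOfFuel r) =
  r <> " has run out of fuel"

-- A test can then match on the constructor instead of parsing message text,
-- e.g. (with tasty-hunit):
--   case err of
--     MissingDevice cap _ -> cap @?= "scan"
--     other               -> assertFailure ("unexpected error: " ++ show other)
```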

byorgey (Member, Author) commented Jan 5, 2023:

Yes, that's a nice strategy, and we already do that to some extent with our Exn type. It might be a nice idea to preserve structured data such as Exn as long as possible, for example, storing Exn values directly in robot logs rather than storing a serialized Text version, so we can retrieve those Exn values intact as part of a test suite or whatever. That would definitely make the test suite code I wrote a bit nicer. However, in this case the main thing I was complaining about wasn't so much that we have to look at the text of the error message but simply that we have to look at it at all.

@byorgey byorgey mentioned this pull request Jan 8, 2023
Successfully merging this pull request may close: Wrong device reported missing (#397)