Add high-score analysis #72 (Merged)

angelikatyborska merged 12 commits into main from high-score on Jul 8, 2021
Conversation

@angelikatyborska (Contributor) commented Jun 5, 2021

  • Adds analysis for high-score as specified in https://github.com/exercism/elixir/blob/efd211b80aa350e32cf76242f09d12ec4d49df14/exercises/concept/high-score/.meta/design.md#L38, plus a requirement for a default argument in `add_player` so that module attribute usage is possible
  • Adds support for `suppress_if` to the `assert_[no_]call` macros
  • Adds a new magic name `_shallow_ignore` that ignores only the current node in the AST but continues matching on the node's children. This was necessary to match a module attribute with any name but a specific value (see the sketch after this list), because the attribute's name sits between the `:@` node and the value in the AST:

    iex(2)> quote do
    ...(2)>   @attr_name 3
    ...(2)> end
    {:@, [context: Elixir, import: Kernel], [{:attr_name, [context: Elixir], [3]}]}

  • Adds tests for `feature` that show the difference between `_ignore` and `_shallow_ignore`
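As a sketch of the difference, assuming a constant-like attribute such as `@initial_score 0` from the exercise (these forms are illustrative, based on the semantics described above, and are not taken verbatim from this PR):

    # _shallow_ignore ignores only the attribute-name node, so the attribute
    # may have any name, but its child (the value) must still match:
    # this accepts @initial_score 0 or @score 0, but not @initial_score 1.
    form do
      @_shallow_ignore 0
    end

    # _ignore ignores the node together with all of its children, so the
    # value constraint is lost as well and any attribute matches.
    form do
      @_ignore 0
    end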

Website copy PR: exercism/website-copy#2013

Comment on lines 18 to 37
feature "uses the module attribute in add_player function head" do
find :any
type :essential
suppress_if "uses the module attribute in add_player function body", :pass
comment ElixirAnalyzer.Constants.high_score_use_module_attribute()

form do
def add_player(_ignore, _ignore, _ignore \\ @_ignore) do
_ignore
end
end
end

assert_call "uses the module attribute in add_player function body" do
type :essential
suppress_if "uses the module attribute in add_player function head", :pass
comment ElixirAnalyzer.Constants.high_score_use_module_attribute()
calling_fn module: HighScore, name: :add_player
called_fn name: :@
end
angelikatyborska (Author) commented:

@neenjaw I am stuck with this part. I want to allow the student to either use the module attribute in the function's head as a default argument, or in the function's body in any way they want. I don't want false positives when they use the attribute in the body in a roundabout way (e.g. `x = @initial_score; Map.put(scores, name, x)`), which is why I'm trying to use `assert_call` for this instead of `feature`.
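For instance, a solution along these lines uses the attribute only indirectly, so a literal `form` match would miss it while `assert_call` still detects the call to `@` inside the function body (a hypothetical student submission reconstructed from the snippet above; the exact function shape is assumed):

    defmodule HighScore do
      @initial_score 0

      # The attribute is read into a variable first, so Map.put/3 never
      # mentions @initial_score directly in its arguments.
      def add_player(scores, name) do
        x = @initial_score
        Map.put(scores, name, x)
      end
    end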

This means that I need a cyclic dependency with `suppress_if`, and that doesn't work. Do you have any thoughts on whether it's possible to modify the analyzer to allow cyclic dependencies with `suppress_if`?

Alternatively, I can give up on the idea of allowing usage of the module attribute in the function body and instead add a new check that will tell students to use a function head with default arguments.

neenjaw (Contributor) replied:

Hmm, haven't thought about that. Let me ponder this case

angelikatyborska (Author) replied:

@neenjaw if you have no ideas, then I'm just going to disallow the usage of the module attribute in the function body by adding some analysis that checks that a default argument was used.

neenjaw (Contributor) replied:

Just haven't had time to work on this, but that's a good stopgap for now

@angelikatyborska angelikatyborska marked this pull request as ready for review July 7, 2021 12:10
@neenjaw (Contributor) commented Jul 8, 2021

I've been playing around with the website's exercise flow over the last week, and it has made me a lot more hesitant about our application of `:essential` analyzer rules. I think if the analyzer ran with every test run I would be a lot more comfortable with it. But it feels anticlimactic to solve a problem, pass every test in the test runner, and then find out after submitting the exercise that you 'broke a rule' you sometimes had no way of knowing about. So you go back to the solution and try to apply the advice, but it won't re-analyze until you've passed all of the tests again.

This is probably a bit of a backtrack from a previous stance of mine, but like I said, I am no longer sure whether we should use `:essential` as liberally as we have (or at all, tbh).

Thoughts?


Like I've said before, I'm not keen on trying to stop "cheaters", since I don't think they are here for the right reasons anyway, and it seems futile to expend energy on those students when we could redirect that energy to helping the interested/engaged ones.

@angelikatyborska (Author) replied:

I agree that I have been too careless in using the essential type of comment. I didn't put much thought into alternatives.

> you find out after submitting the exercise that you 'broke a rule' you sometimes had no way of knowing about

To avoid this, I think we should only use essential comments for "rules" that should be obvious:

  • if this is a learning exercise for concept X and you didn't even use concept X (like, in this case, using module attributes as constants)
  • if this practice exercise has a rule "do not use standard library map/reduce/foldr" in its instructions, but you did use it

Otherwise, any comment that suggests an improvement should be actionable, like the comment in this PR about using a default argument (because that's not this exercise's learning concept), or the one in my other PR, for Lasagna, about reusing functions.

For Pacman Rules, I would rephrase the current comment to be an informative comment explaining the difference between `and` and `&&` if you used `&&`, because IMO `and` vs `&&` doesn't matter that much 🤷

> So you go back to the solution and try to apply the advice, but it won't re-analyze until you've passed all of the tests again.

If you already have the tests passing and you only need to change one detail, this feedback loop will hopefully be fast enough. But maybe in production the test runners will be much slower because of the load from all of the users 🙁.

As for the general attitude toward analysis: in my opinion, the main goal of analysis is to unburden mentors, and this is achieved in two ways: by pointing out obvious problems automatically so that the mentor doesn't have to (like "reuse functions" or "use default arguments"), and by ensuring that students learning the concepts actually "learned" them, so that the mentor doesn't later have to fill big gaps in their knowledge.

I also don't think the goal is to prevent cheaters in the sense of people with bad intentions. I think most people who "cheat" (in the sense of not following the rules) skimmed the instructions instead of reading them carefully, so they are not fully aware of the rules, and/or they operate on autopilot when approaching "easy" coding tasks, which leads them to do things the way they know from language X instead of the Elixir way.

I think it's worth having essential and actionable analyzer rules for those "cheaters" without bad intentions 🙂.

@neenjaw (Contributor) commented Jul 8, 2021

> If you already have the tests passing and you only need to change one detail, this feedback loop will hopefully be fast enough. But maybe in production the test runners will be much slower because of the load from all of the users.

I agree if this is a minor change, but a suggestion may lead to having to change your design/implementation. I think this is good for now, but maybe we can think of a way to measure and monitor this, and adjust our approach later when students start to interact with the system at a larger scale.

@angelikatyborska angelikatyborska merged commit dcdb975 into main Jul 8, 2021
@angelikatyborska angelikatyborska deleted the high-score branch July 8, 2021 17:09