
09-defensive.md: Incorrect Error blocks for some assert statements #765

Open
Iain-S opened this issue Nov 27, 2019 · 8 comments

Comments


@Iain-S Iain-S commented Nov 27, 2019

Problem description

In the episode on defensive programming, under the "Test-Driven Development" heading, we use assert like so:

assert range_overlap([ (0.0, 1.0) ]) == (0.0, 1.0)

The error block below this code shows an AssertionError. In fact, we get a NameError, because range_overlap is not yet defined. This happens again a little further down the page.
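To reproduce (a sketch; the message shown is what a fresh CPython session prints):

```python
# In a fresh session, running the first test before range_overlap exists
# raises NameError, not the AssertionError shown in the episode.
try:
    assert range_overlap([(0.0, 1.0)]) == (0.0, 1.0)
except NameError as err:
    print(err)  # name 'range_overlap' is not defined
```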

Proposed solution

Run the Python assert statements under the TDD heading. Copy the NameError outputs and use them to replace the AssertionErrors, where appropriate.


@ldko ldko commented Dec 2, 2019

@Iain-S you are correct, running that assert statement before range_overlap is defined does result in a NameError: name 'range_overlap' is not defined rather than an AssertionError. I think your solution makes sense. What do you think @maxim-belkin @annefou ?


@annefou annefou commented Dec 2, 2019

Yes I like your solution. Thanks.


@maxim-belkin maxim-belkin commented Dec 2, 2019

I concur. It looks like someone did def range_overlap(*args): pass (or something like that) before writing the episode.

Thanks for spotting this, @Iain-S!


@ldko ldko commented Dec 11, 2019

@Iain-S would you be willing to submit a PR with your proposed solution?


@leriomaggio leriomaggio commented Jul 13, 2021

I would be very willing to submit a PR to fix this issue, which I've also noticed while reviewing the material in this section.

It seems the issue is quite old (it still refers to "Episode 9"), so I am asking first whether (1) it is OK for me to go ahead with this, and (2) it would still be considered a useful improvement. (The NameError issue is still present.)

On this note, in the interest of the discussion, I believe that both options mentioned by @maxim-belkin and @Iain-S would be compliant with the initial TDD cycle, each with its own pros and cons.

I am going to try to explain my thoughts on this in more detail, so we can discuss which solution would be best for an update to the material.


A) Replacing the first AssertionError with the correct NameError, raised because the range_overlap function is not defined yet.

This solution would be great for many reasons: it links perfectly to the previous episode's material (a reference to the Variable Name Errors subsection could be included), and it would require minimal changes to the current content: just a short explanation of why a NameError is raised. It would also emphasise how extreme (in Kent Beck's terms) the TDD practice can be: test-first programming literally means writing the test code before writing the code under test.

That said, a question now arises: where should we go from here?

A.1) We could write the function stub def range_overlap(ranges): pass to work around the NameError and get back to the AssertionError we were expecting.

This again would mean minimal impact on the current content, but it would slightly violate the original TDD mantra: "Write just the code required to pass the test."
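Option A.1 could look like this (my own sketch, not the episode's code):

```python
# A.1: a stub makes the name exist, so the failure becomes the
# AssertionError the episode already shows.
def range_overlap(ranges):
    pass  # no implementation yet; implicitly returns None

try:
    assert range_overlap([(0.0, 1.0)]) == (0.0, 1.0)
except AssertionError:
    print("AssertionError, as the episode expects")
```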

A.2) Alternatively, we could start implementing just enough of the range_overlap function to pass the first test statement [*], and then proceed one step at a time: write the next test case, then the next update to the function code, and so on. This has the advantage of being a more realistic TDD development cycle (more on this later).

[*] Small side note: the episode mentions three test functions for range_overlap. I would rather say three test statements or three test cases, since those are not functions; that wording would also avoid confusion.
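A sketch of how option A.2 might start (the deliberately naive body is my own illustration, not the episode's code):

```python
# A.2: write only the code needed to pass the first test statement.
# With a single range, the overlap is the range itself.
def range_overlap(ranges):
    return ranges[0]  # deliberately minimal; the next test will force more

assert range_overlap([(0.0, 1.0)]) == (0.0, 1.0)  # passes
```

The next test case (e.g. two overlapping ranges) would then fail, forcing a real implementation in the following cycle.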


B) A completely different approach would be to keep everything as it is, with the AssertionError, but to define the range_overlap function stub explicitly beforehand.

As a matter of fact, this is what I would normally do in real coding; it is also the more Pythonic approach.

The downside would be having to explain the pass statement (still required by A.1 anyway), which is not a big deal per se but can be a bit awkward for those coming from other languages.

In fact, an initial TDD cycle in other languages (e.g. Java) would start similarly to A.2; in Python, an empty function stub is the standard choice.


On a similar note, another piece of feedback I'd like to give concerns the strategy the episode uses to introduce the TDD practice.

The current approach starts by writing a whole bunch of test cases first, followed by an implementation of the function that aims to pass all those tests at once.

To the best of my knowledge, though, TDD is meant to be much more incremental than that, developing one feature at a time (each usually covered by a single test at a time):

  • 1st test -> 1st feature (to pass the test)
  • 2nd test -> 2nd feature (to pass the test) + any refactoring (if any) and then retest
  • ...

On this note, the test_range_overlap function defined later in the episode is one of my bigger concerns.
A test function like that would encourage bad testing practice, and could also give a false impression of what TDD actually is.

A unit test function (especially when developed with TDD) should cover one feature at a time (with a single test, or a few related ones) and have a self-explanatory name. This also serves to document which features or cases the tests actually cover.
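For illustration (the test names and the reference implementation below are my own, not the episode's):

```python
def range_overlap(ranges):
    # Illustrative implementation, assumed here only so the tests run;
    # not the episode's final code.
    low = max(low for low, _ in ranges)
    high = min(high for _, high in ranges)
    return (low, high) if low < high else None

# One behaviour per test, with a self-explanatory name for each.
def test_single_range_overlaps_with_itself():
    assert range_overlap([(0.0, 1.0)]) == (0.0, 1.0)

def test_disjoint_ranges_have_no_overlap():
    assert range_overlap([(0.0, 1.0), (5.0, 6.0)]) is None

test_single_range_overlaps_with_itself()
test_disjoint_ranges_have_no_overlap()
```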

To conclude - apologies for the very long comment - I would really love to hear from you on the points I've made. Any feedback or suggestions will be very much appreciated.

Also, I would love to work on this, and submit my PR to contribute to this episode :)

Looking forward to hearing from you. Thanks very much!


@Iain-S Iain-S commented Jul 14, 2021

@leriomaggio I certainly don't mind if you want to fix this issue. Somehow, I never got around to it. You've clearly given the TDD section a lot of thought.

The simplest approach, as you've mentioned, would be to either stub the function before the call to assert range_overlap([ (0.0, 1.0) ]) == (0.0, 1.0) (you could use a hard-coded return None or return -1 if you didn't want to explain pass) or correct the error message to NameError and leave pretty much everything else as-is.

Personally, I got a bit of an "aha!" moment the first time I saw someone write the test before they even declared the function. I support being more incremental in how the tests are written. There is a slight complication, though, in how you test that the function exists. I would normally do something like this with unittest:

def test_my_func(self):
    result = my_func()
    self.assertTrue(result)

but the section begins by using assert in a normal notebook cell so we'd have to do

# Just try to call the function
range_overlap()

or

# I presume functions are truthy but this is confusing
assert range_overlap

or

# Check it returns some truthy value
assert range_overlap([ (0.0, 1.0) ])

Personally, I think the episode section would be better re-written so that it goes:

  1. intro and motivation (fine as they are)
  2. define and run
    def test_range_overlap():
        range_overlap()
    
  3. then define range_overlap to make test_range_overlap pass
  4. then replace the call to range_overlap() with assert range_overlap([ (0.0, 1.0) ]) == (0.0, 1.0)
  5. then make test_range_overlap pass again
  6. then add assert range_overlap([ (0.0, 1.0), (5.0, 6.0) ]) == None to test_range_overlap
  7. then make test_range_overlap pass again or leave it as an exercise (I think it's unfair that it ends on a cliffhanger)
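Steps 2-5 could be sketched like this (my own sketch, not the episode's code; the assertion in the last test is the one from the lesson):

```python
# Step 2: the test calls the function; running it now would be "red"
# (NameError, since range_overlap does not exist yet).
def test_range_overlap():
    range_overlap()

# Step 3: an empty definition is enough to go "green".
def range_overlap(ranges=None):
    pass

test_range_overlap()  # passes: the name now exists

# Steps 4-5: strengthen the test, then extend the code to pass it again.
def test_range_overlap():
    assert range_overlap([(0.0, 1.0)]) == (0.0, 1.0)

def range_overlap(ranges):
    low = max(low for low, _ in ranges)
    high = min(high for _, high in ranges)
    return (low, high)

test_range_overlap()  # passes again
```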

I would introduce "red, green, refactor" as an easy way to remember that you should always start with a failing test case. I think it would be fine to take out the extra assertions that are there at the moment; they are probably a distraction.


@ldko ldko commented Jul 14, 2021

If anyone plans to do a rewrite of the range_overlap area of the episode, I suggest replacing range_overlap altogether with an example that would be relevant to the inflammation story line for better flow with the lesson. This is a desire that the maintainers have previously mentioned. We would welcome a pull request from anyone who wants to put in the effort. Thanks very much!


@leriomaggio leriomaggio commented Jul 14, 2021

Thank you @ldko for your feedback.

Yes, I would be very happy to work on a revised version of the episode.

Re range_overlap not being relevant to the inflammation story line: I thought the same, and thanks very much for pointing out that issue - I had missed it!
I do believe that range_overlap per se is a brilliant example. We could probably keep it as the exercise at the end of the episode, replacing the testing-assertions part, which reiterates the same concerns @alexdesiqueira raised in his issue.
To be honest, I didn't dare mention that too in my comment, as I didn't want to press the point too much, nor give the impression that I was criticising the episode as a whole.
Quite the contrary! I think the episode is brilliant, and I'd love to contribute to improve it :)

Re @Iain-S: I agree with everything you said in your comment 💯 Thank you.
Also:

I would introduce "red, green, refactor" as an easy way to remember that you should always start with a failing test case.

Exactly! My idea was indeed aimed at focusing the episode more on the TDD practice, rather than on testing in general, as it seems to be at the moment.
