
Checking macro prints not-falsey-or-exception? when fails #17

Closed
firesofmay opened this issue Aug 8, 2015 · 26 comments · Fixed by #21 or #29

Comments

@firesofmay
Contributor

Hi,

I was trying out the checking macro and found the output a little weird.
Here's a sample test:

;; assumes the usual requires:
;;   [clojure.test :refer [deftest is]]
;;   [clojure.test.check.generators :as gen]
;;   [com.gfredericks.test.chuck.clojure-test :refer [checking]]
(deftest do-fail
  (checking "fail me" 1000
            [n gen/nat]
            (is (not= 10 n))))

And here's the output:

> (do-fail)

FAIL in (do-fail) (clojure_test.clj:17)
fail me
{:result false, :seed 1439064390947, :failing-size 48, :num-tests 49, :fail [10], :shrunk {:total-nodes-visited 4, :depth 0, :result false, :smallest [10]}}
expected: (not-falsey-or-exception? (:result result))
  actual: (not (not-falsey-or-exception? false))

FAIL in (do-fail) (demo.clj:32)
fail me
expected: (not= 10 n)
  actual: (not (not= 10 10))

Why does it print this?

expected: (not-falsey-or-exception? (:result result))
actual: (not (not-falsey-or-exception? false))
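For anyone hitting the same thing, here is a sketch of the shape that would produce that extra line. It is not the actual test.chuck source, and not-falsey-or-exception? below is an illustrative stand-in for the helper named in the output: if checking runs the property via quick-check and then makes one more assertion on the overall result map, a failing property yields two clojure.test reports, one from the body and one from that outer assertion.

;; Sketch only -- not the actual test.chuck source.  not-falsey-or-exception?
;; is an illustrative stand-in for the helper named in the output above.
(require '[clojure.test :refer [is]]
         '[clojure.test.check :as tc]
         '[clojure.test.check.generators :as gen]
         '[clojure.test.check.properties :as prop])

(defn not-falsey-or-exception? [x]
  (and x (not (instance? Throwable x))))

(let [result (tc/quick-check 1000
                             (prop/for-all [n gen/nat]
                               (is (not= 10 n))))]
  ;; the extra FAIL block comes from a second, outer assertion like this,
  ;; made on the whole quick-check result map after the run finishes:
  (is (not-falsey-or-exception? (:result result))))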
@lackita

lackita commented Aug 8, 2015

Looking into this now.

@gfredericks
Owner

Hey @lackita, would the implementation of checking be easier if test.check provided callbacks for all the different events of the test-running process?

@lackita

lackita commented Aug 10, 2015

@gfredericks
Maybe a little bit; the code currently assumes that the reports from the final failure are the smallest example, but I think I remember noticing minor discrepancies in practice. I might be able to eliminate those if I had callbacks around successes, failures, and the end of shrinking.

@gfredericks
Owner

Okay, cool; I was working on such a thing at one point, so I might be able to get it into the 0.9.0 release.

@lackita

lackita commented Aug 10, 2015

Alright, the specific trigger I'd find useful is one fired when a smaller example of a failure is identified.
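Something along these lines is what that hook might look like (purely illustrative; neither the callback name nor the event shape exists in test.check at this point):

;; Hypothetical hook -- nothing like this exists in test.check at this point;
;; it's just the kind of callback being asked for: fired each time shrinking
;; finds a smaller failing example, so checking can keep only the smallest
;; one for its clojure.test report.
(def smallest-failure (atom nil))

(defn on-smaller-failure          ; hypothetical callback name
  [{:keys [smallest]}]            ; assumed event shape: the shrunk arguments
  (reset! smallest-failure smallest))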

@gfredericks
Owner

I'm curious if that's because you would print something on each smaller failure?

@gfredericks
Owner

I'm wondering because I've done such a thing myself in a test.check fork, and it ends up being very verbose, which makes me want to have a way to control verbosity.

@lackita

lackita commented Aug 10, 2015

Actually, quite the opposite. I only want to report the smallest failure, and I need a way of determining whether a new failure is the smallest so far.

@gfredericks
Owner

Why can't you do that just by waiting for the whole quick-check call to return and examining the :shrunk key?
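Concretely, that would mean reading the minimal case out of the result map that quick-check returns, the same shape as the map in the original report (sketch only):

;; On failure the minimal case lives under :shrunk in the quick-check result.
(require '[clojure.test.check :as tc]
         '[clojure.test.check.generators :as gen]
         '[clojure.test.check.properties :as prop])

(let [result (tc/quick-check 1000
                             (prop/for-all [n gen/nat] (not= 10 n)))]
  (when-not (:result result)
    (get-in result [:shrunk :smallest])))    ;=> [10]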

@lackita

lackita commented Aug 10, 2015

I wanted to present the actual failure report produced from those values, not just the value leading to the failure.

@gfredericks
Owner

Oh right, and you need to distinguish it from the hundreds of subsequent test runs that passed before the test runner gave up. Got it.

@firesofmay
Contributor Author

@lackita Any updates on this one?

@lackita

lackita commented Aug 16, 2015

Sorry, I've been on vacation and haven't had much time to investigate. That's over tomorrow, so I'll be looking at it some more on the commute home.

@firesofmay
Contributor Author

Sure, no problem :)

@lackita

lackita commented Aug 17, 2015

It looks like what happened is that the spurious line was necessary when it was written, but I later started capturing and propagating reports in another way, which made it redundant.
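Roughly, "capturing and propagating reports" means something like this (sketch only, not the actual test.chuck code):

;; Capture the clojure.test report maps emitted while a property body runs,
;; so they can be re-emitted later for just the smallest failing case instead
;; of asserting on the overall quick-check result.
(require '[clojure.test :as t])

(defn capturing-reports
  "Runs f with clojure.test/report rebound and returns the captured reports."
  [f]
  (let [reports (atom [])]
    (binding [t/report #(swap! reports conj %)]
      (f))
    @reports))

;; e.g. (doseq [r (capturing-reports #(property-body smallest-args))]
;;        (t/report r))
;; where property-body and smallest-args are placeholders.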

@firesofmay
Contributor Author

@lackita Thanks, will check it out. :)
@gfredericks Any idea when we are releasing the next version?

@gfredericks
Owner

Apparently this created a new issue (#23), so I will probably release when that gets figured out.

@gfredericks
Owner

Reopening this since it caused test failures.

@gfredericks gfredericks reopened this Aug 23, 2015
@firesofmay
Contributor Author

@lackita Any updates on this?

@lackita

lackita commented Aug 28, 2015

Sorry, I'll try to take a look on the commute home today.


@firesofmay
Contributor Author

@lackita Hey any update on this?

@lackita

lackita commented Sep 2, 2015

@firesofmay Sorry, this got conflated with a conversation in #25; I believe the issue should be fixed now.

@firesofmay
Contributor Author

No problem. Thanks for all your help :)

@gfredericks
Owner

@firesofmay If you can confirm this fixes the problem, I can make a release for it.

@firesofmay
Contributor Author

@gfredericks LGTM! 👍

@gfredericks
Owner

Okay, this is released in 0.1.22.

nberger added a commit to nberger/test.chuck that referenced this issue Sep 25, 2015
The checking macro was not printing the test.check result in case of
failures. This was happening since we changed the assertion on the
result of `checking` to only check if there was an exception. That
change was introduced to fix gfredericks#17, which was about the output showing an
"expected: (not-falsey-or-exception? (:result result))" message. But
as a side-effect of that change, we lost the test.check result from the
output, which is very valuable.

The fix is to simply print the test.check result in case of failure.
This is part of what `do-report` would do if we called
`(is (:result result) result)` but now we avoid the other stuff that
call would do (incrementing test result counters, output the expected
vs actual result of the assertion, etc).

As a side-effect of this change, the `:pass` count in case of a failure was fixed. That can be seen in the change to the failure-detection-test.
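A minimal sketch of the approach that commit message describes (the helper name is illustrative, not the actual test.chuck code): on failure, print the result map to the test output rather than asserting on it, so no assertion counters are bumped and no extra expected/actual pair is printed.

;; Illustrative helper, not the actual test.chuck code: on failure, print the
;; test.check result map to the test output instead of calling
;; (is (:result result) result), which would also bump assertion counters and
;; print an expected/actual pair.
(require '[clojure.test :as t])

(defn print-result-on-failure [result]
  (when-not (:result result)
    (t/with-test-out
      (prn result))))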