
Complete a test with report on specific issues #5

Open
semcogli opened this issue Nov 9, 2016 · 5 comments


semcogli commented Nov 9, 2016

Right now any assertion error breaks the checking routine. I think it is preferable to have a complete test with a diagnosis report on all the errors, so we can focus on fixing the issues together before running another test.

Also, in the data development stage, we typically have some known issues in our data. For minor issues, we may choose to leave them as-is temporarily but still want alerts for other problems. A full test would help in that situation.

Thanks.

smmaurer (Member) commented:

Yes, I agree, this should be a high priority on our roadmap. And good point about sometimes having known issues.

The way I've been thinking about it is that there are a few different layers to Orca_test:

  1. The spec classes (OrcaSpec(), TableSpec(), etc.)
  2. The testing interface (assert_orca_spec(o_spec), etc.)
  3. The low-level functions that check individual spec characteristics (most of the actual code)

What you're suggesting would be a new mode in the middle layer, so that we run a diagnosis report for a spec instead of just asserting it. Hopefully this will be easy to implement through some simple changes to the existing middle-layer functions.

This is a bit speculative, but potentially we could have an Orca_test "mode" setting, and a user could choose whether to run a simulation in "strict" mode or to just log the Orca_test warnings. Or, a user could have multiple sets of specs with different strictnesses. This would be a stretch to implement right now, but it's something to think about as we make architecture decisions.
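The "mode" setting described above could be as simple as a module-level switch that the middle layer consults before raising. A minimal sketch, assuming hypothetical names (`set_mode` and `report_failure` are illustrative, not part of the actual orca_test API):

```python
import logging

# Hypothetical module-level mode switch; these names are illustrative,
# not part of orca_test.
_MODE = "strict"

def set_mode(mode):
    """Choose whether spec failures raise ("strict") or only log ("log")."""
    global _MODE
    if mode not in ("strict", "log"):
        raise ValueError(f"unknown mode: {mode}")
    _MODE = mode

def report_failure(message):
    """Called by the middle layer whenever a spec check fails."""
    if _MODE == "strict":
        raise AssertionError(message)
    logging.warning("orca_test: %s", message)
```

With this shape, a user running a simulation with known minor issues could call `set_mode("log")` up front and get warnings instead of a halted run.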

Eh2406 (Contributor) commented Nov 10, 2016

Yes, I think it can be a mid layer. It would call the same lower-level functions, but each in a try block that logs errors and continues. If any throw an error, then the mid layer will raise at the end.
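That pattern might look roughly like this (a sketch only; the check functions and their signatures are placeholders, not orca_test's actual low-level API):

```python
def assert_all_checks(spec, checks):
    """Run every check in its own try block, collecting failures
    instead of stopping at the first one; raise once at the end."""
    failures = []
    for check in checks:
        try:
            check(spec)
        except AssertionError as e:
            failures.append(f"{check.__name__}: {e}")
    if failures:
        raise AssertionError(
            f"{len(failures)} check(s) failed:\n" + "\n".join(failures))

# Toy low-level checks standing in for orca_test's real ones
def check_not_empty(spec):
    assert spec.get("rows", 0) > 0, "table has no rows"

def check_has_id(spec):
    assert "id_col" in spec, "missing primary key column"
```

The final exception message aggregates every failure, so one run produces the full diagnosis report rather than just the first problem.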

Eh2406 (Contributor) commented Nov 18, 2016

I've started experimenting on my fork: DIFF

Thoughts? Comments?

Catching each error loses a lot of the traceback info.

P.S. Sorry for the whitespace diff noise; PyCharm auto-fixed it.

smmaurer (Member) commented:

Excellent! I like this as a general behavior: functions like assert_column_spec() don't raise an exception until they've checked every component of the spec.

Some thoughts:

  1. We should probably either log the intermediate error messages (how-to) or pass them along with the final exception, instead of just printing them. Not sure which is better. I'm imagining a use case where higher-level code is compiling a report or providing info to a GUI.
  2. I agree that losing the traceback info is a shame. Not sure what to do about that.
  3. It probably still makes sense to have another mode where the tests don't raise an exception at all, just log the messages. The easiest implementation might be to have assert_orca_spec() vs test_orca_spec() where the latter just runs the former and catches the final exception.

Eh2406 (Contributor) commented Nov 21, 2016

If we're logging, we can use log.exception(e), which also captures the traceback. But I don't know how to give the details to the caller.
