
Clean up test_atfparser #85

Merged · 7 commits · Jul 12, 2019
Conversation

rillian
Contributor

@rillian rillian commented Jul 8, 2019

This is how I would address the remaining file for #82

I also added a few more asserts. According to @willismonroe, the ={ syntax marks interlinear glosses in the original text. See P334031, where ŠUG-su is annotated the same as the subsequent ku-⸢ru⸣-ma-at-su (kurummāssu) in the hover popup.

This set of tests was written to use the `unittest` framework
from the standard library, but we invoke them through pytest,
which provides better reporting.

Remove the unittest dependency here to align with changes made
to other tests.

As a minimal change, keep the TestParser class, but remove the
inheritance from `unittest.TestCase`; pytest instead finds tests
through general introspection. By marking the setUp method, which
has special meaning in the unittest framework, as a pytest fixture
with `autouse=True`, we ensure it is called before every test to
set up the member variable.
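The autouse pattern described above can be sketched like this (the class body and the `errors` attribute are illustrative, not the actual pyoracc test code):

```python
import pytest


class TestParser:
    # Sketch only: "errors" stands in for the real member variable.
    @pytest.fixture(autouse=True)
    def setup_errors(self):
        # autouse=True tells pytest to run this before every test in the
        # class, playing the role that unittest's setUp() used to.
        self.errors = []

    def test_starts_clean(self):
        # Under pytest this passes because setup_errors ran first.
        assert self.errors == []
```

No explicit request for the fixture is needed in each test; pytest wires it in automatically.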
Move the lexer fixture into the `try_parse` helper. None of the
tests need subsequent access to the lexer state, so there's no
reason for it to be shared.

Without shared state, the TestParser class can itself be removed.
It was necessary under the `unittest` framework, but `pytest` can
use module functions instead. This simplifies the code and saves
an indent level.
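A minimal sketch of the resulting class-free style, using a toy stand-in for the real AtfParser (whose API is not shown here):

```python
def try_parse(text):
    # Build the "lexer" fresh on every call: no state is shared between
    # tests, so no class or fixture is needed to hold it.
    tokens = text.split()
    return {"tokens": tokens}


def test_try_parse_code_line():
    # Plain module-level functions are collected by pytest as tests.
    result = try_parse("&X001001 = Text")
    assert result["tokens"][0] == "&X001001"
```

Because each call constructs its own parsing state, test ordering can never leak state between cases.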
The parser creates `Score` objects from atf records so marked.
Verify this in the related tests.
These tests include atf snippets for features which the lexer
accepts but the parser currently skips. They were marked with
a commented-out @Skip decorator. With the migration to pytest
these should become `@pytest.mark.skip` instead.

Instead I moved the annotation to a TODO comment, mostly because
I like the style better. I also added basic asserts on the parser
result to silence warnings.

For an example of the '={' atf syntax, see the sample corpus, and
e.g. http://oracc.iaas.upenn.edu/saao/saa10/P334031/html
@codecov-io

codecov-io commented Jul 8, 2019

Codecov Report

Merging #85 into master will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##           master      #85   +/-   ##
=======================================
  Coverage   87.22%   87.22%           
=======================================
  Files          27       27           
  Lines        1002     1002           
=======================================
  Hits          874      874           
  Misses        128      128

Continue to review full report at Codecov.

Powered by Codecov. Last update 645f1a7...5910213. Read the comment docs.

Contributor

@ageorgou ageorgou left a comment


Thanks @rillian!

Would it make sense to also check that the contents of the Score object are populated correctly, similar to what is done for e.g. the language and project codes? It seems that just checking the attributes would be enough.

I was looking at the empty tests recently; it seems they were un-skipped when the relevant parsing was added, but without adding any assertions. I would personally prefer to see them explicitly skipped rather than just have the TODO, although that is certainly no worse than the current situation. I'll open an issue about the missing assertions.

Edit: Opened in #86.

Check the attributes on the Score object returned from the
parser.
Parametrize the test with the different configurations described
in http://oracc.museum.upenn.edu/doc/help/editinginatf/scores/

This verifies more of the documented variants are accepted.
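The parametrized form might look like this (the variant strings are assumptions loosely based on the linked scores documentation, not the actual values from the test suite):

```python
import pytest

# Hypothetical score-header variants to exercise the parser with.
SCORE_HEADERS = [
    "@score matrix parsed",
    "@score synoptic",
    "@score transliterated",
]


@pytest.mark.parametrize("header", SCORE_HEADERS)
def test_score_header_variant(header):
    # Each variant becomes its own test case in the pytest report.
    assert header.startswith("@score")
```

Parametrizing keeps one test body while reporting each documented configuration as a separate pass or failure.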
Restore the skip annotation for tests referencing features
supported by the lexer but not the parser. While there's
some value in verifying the parser accepts the input strings,
it's more useful to mark the state of the actual implementation
in the pytest output.
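Restoring the skip annotation in pytest style could look like this (the test name and reason string are illustrative):

```python
import pytest


@pytest.mark.skip(reason="lexer accepts this syntax but the parser skips it")
def test_unimplemented_feature():
    # Body never runs; pytest reports the test as skipped with the reason.
    ...
```

Skipped tests show up in the summary (e.g. `1 skipped`), which documents the implementation gap more visibly than a comment.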
@rillian
Contributor Author

rillian commented Jul 11, 2019

Ok, I'll add some checks for the attributes on the Score object.

Marking the tests for unimplemented features as skipped is more honest about the state of the implementation, and hopefully more inspiring than TODO comments as far as addressing the omissions. :)

@rillian
Contributor Author

rillian commented Jul 11, 2019

@ageorgou Ready for follow-up review.

Contributor

@ageorgou ageorgou left a comment


Great, thank you!

@ageorgou ageorgou merged commit fc4e3dd into oracc:master Jul 12, 2019
@rillian rillian deleted the pytestbis branch July 12, 2019 15:50