
Add unit tests for completion models #1572

Merged

merged 20 commits into qutebrowser:master from rcorre:completion_tests on Jul 1, 2016

Conversation
rcorre (Collaborator) commented Jun 11, 2016

This establishes a pattern that can probably be used to test any of the
completion models.

See #999.

Let me know if this pattern looks good, and I'll start testing the other models.



rcorre (Collaborator) commented Jun 12, 2016

Just realized there is an easy(ish) way to stub cmdutils.cmd_dict. Gonna do that, ignore this for now.

On tests/unit/completion/test_models.py (outdated):
assert actual["Commands"] == [('drop', 'drop all user data', ''),
('rock', "Alias for 'roll'", ''),
('roll', 'never gonna give you up', 'rr'),
('stop', 'stop qutebrowser', 's')]

The-Compiler (Collaborator) commented Jun 13, 2016

😆 👍

On tests/unit/completion/test_models.py (outdated):
...
}
"""
completions = {}

The-Compiler (Collaborator) commented Jun 13, 2016

Maybe this should be a collections.OrderedDict so future tests could also check the order of completions?
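(A minimal sketch of that suggestion, assuming the helper walks a Qt item model with categories as top-level rows; the PR ultimately used a list of (category, items) tuples instead, see below:)

import collections

def _get_completions(model):
    """Collect the items of each completion category, preserving order."""
    completions = collections.OrderedDict()
    for i in range(model.rowCount()):
        cat_idx = model.index(i, 0)
        items = []
        for j in range(model.rowCount(cat_idx)):
            # each item row has (name, desc, misc) columns
            items.append((model.data(model.index(j, 0, cat_idx)),
                          model.data(model.index(j, 1, cat_idx)),
                          model.data(model.index(j, 2, cat_idx))))
        completions[model.data(cat_idx)] = items
    return completions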

On tests/unit/completion/test_models.py (outdated):
config_stub.data['aliases'] = {'rock': 'roll'}
key_config_stub.set_bindings_for('normal', {'s': 'stop', 'rr': 'roll'})
actual = _get_completions(miscmodels.CommandCompletionModel())
assert "Commands" in actual

The-Compiler (Collaborator) commented Jun 13, 2016

assert actual.keys() == ["Commands"] maybe to ensure there are no other categories?
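(One caveat on that suggestion: in Python 3, dict.keys() returns a view object, which never compares equal to a list, so the check would presumably need a conversion:)

assert list(actual.keys()) == ["Commands"]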

On tests/unit/completion/test_models.py (outdated):
"""Tests for completion models."""
import pytest

The-Compiler (Collaborator) commented Jun 13, 2016

unused import

On tests/unit/completion/test_models.py (outdated):
import pytest
from qutebrowser.completion.models import miscmodels
from qutebrowser.commands import cmdutils

The-Compiler (Collaborator) commented Jun 13, 2016

unused import

The-Compiler (Collaborator) commented Jun 13, 2016

Nice! I added a few comments, but it looks like a simple, good way to test completion models indeed.

On tests/unit/completion/test_models.py (outdated):

(CategoryName: [(name, desc, misc), ...]),
(CategoryName: [(name, desc, misc), ...]),
...
]

rcorre (Collaborator) commented Jun 13, 2016

Just went with a list for the whole thing here so you can check the order of the categories too.

codecov-io commented Jun 17, 2016

Current coverage is 82.21%

Merging #1572 into master will increase coverage by 0.74%

@@             master      #1572   diff @@
==========================================
  Files           106        106
  Lines         15371      15374     +3
  Methods           0          0
  Messages          0          0
  Branches       2462       2463     +1
==========================================
+ Hits          12523      12640   +117
+ Misses         2378       2260   -118
- Partials        470        474     +4

Powered by Codecov. Last updated by f6fbb09...501319c

The-Compiler (Collaborator) commented Jun 17, 2016

Is this ready to review/merge?

rcorre (Collaborator) commented Jun 17, 2016

At this point I might as well finish the rest of the models. I think I just have the config models left. I've been grouping the models into separate commits in case it's easier to review that way.

rcorre (Collaborator) commented Jun 27, 2016

@The-Compiler assuming the CI tests pass, I think this is ready to go.

rcorre (Collaborator) commented Jun 27, 2016

Looks like something is awry with the OS X env.

The-Compiler (Collaborator) commented Jun 30, 2016

Sorry for the delay - after failing to properly fix this, I now marked OS X tests as allowed to fail again for the time being... Reviewing this now!

On tests/helpers/stubs.py (outdated):
def __iter__(self):
"""Iterate over all set values."""
return self.values.__iter__()

The-Compiler (Collaborator) commented Jun 30, 2016

nitpick: This should be iter(self.values) (just like you wouldn't do self.values.__getitem__(key) below) 😉

rcorre (Collaborator) commented Jun 30, 2016

Copied from the real config section code :)
I can go ahead and change it there too

The-Compiler (Collaborator) commented Jun 30, 2016

Oh, my bad! 😆 Thanks for fixing it!
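(For illustration, a minimal sketch of the corrected stub; the class name and constructor here are assumptions:)

class ConfigSectionStub:

    """Stub for one config section, backed by a plain dict."""

    def __init__(self, values=None):
        self.values = values or {}

    def __iter__(self):
        """Iterate over all set values."""
        return iter(self.values)  # rather than self.values.__iter__()

    def __getitem__(self, key):
        return self.values[key]  # rather than self.values.__getitem__(key)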

On tests/helpers/stubs.py (outdated):
return self.values[key]
class FakeSettingValue:

The-Compiler (Collaborator) commented Jun 30, 2016

Not sure if it really pays off to have a fake for this - couldn't you easily use the real SettingValue object without pulling in too much unrelated code?

Though I guess this makes it easier to customize valid_values and default for tests?

rcorre (Collaborator) commented Jun 30, 2016

> Though I guess this makes it easier to customize valid_values and default for tests?

Exactly. If I use a real SettingValue, then I have to use a real ConfigType, and my test becomes brittle with respect to changes in configtypes (e.g. if someone added an extra option to IgnoreCase).
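(For illustration, a fake along these lines only needs the attributes the completion model reads; the attribute set shown is an assumption, and these fakes were later dropped in favor of the real classes, see the commits below:)

class FakeSettingValue:

    """Stub for a SettingValue, with valid_values/default set directly."""

    def __init__(self, valid_values=None, default=None):
        self.valid_values = valid_values or []
        self.default = default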

On tests/helpers/stubs.py (outdated):
@@ -400,9 +452,114 @@ def set_bindings_for(self, section, bindings):
self.bindings[section] = bindings
class FakeHistoryEntry:

The-Compiler (Collaborator) commented Jun 30, 2016

Same as above, I think you could just use webkit.history.Entry here?

On tests/unit/completion/test_models.py (outdated):
]
def _get_completions(model):

The-Compiler (Collaborator) commented Jun 30, 2016

I think all those utility functions should be at the top of the file rather than the bottom. Makes it easier to read the file from top to bottom 😉

The-Compiler (Collaborator) commented Jun 30, 2016

Nice work overall!

As mentioned in the line comments, I'm somewhat wary about all the mocking though. It's always a trade-off, as it increases test maintenance costs for future changes and also opens the possibility of tests passing but the real code failing because of a discrepancy there.

While it makes sense to mock bigger things which you can't easily instantiate (say, a WebView because of all its dependencies), for smaller things like the ones you're creating mocks for, I think it's perfectly fine to use the real thing if possible.

Could you take another look at what actually makes sense to mock out, and where it'd be simpler to use the real thing?

The-Compiler changed the title from "Add unit test for CommandCompletionModel." to "Add unit tests for completion models" on Jun 30, 2016

rcorre (Collaborator) commented Jun 30, 2016

> Could you take another look at what actually makes sense to mock out, and where it'd be simpler to use the real thing?

Definitely. It depends on whether you really think of these 'unit' tests as 'isolation' tests, or whether they could have some elements of an integration test. I was leaning towards the former, but it definitely feels like I mocked out too much.

In particular, it might make sense to use a real Command object. One refactoring I wanted to do was remove Command.completion and just rely directly on the completion of the ArgInfo. Using a FakeCommand, my tests wouldn't catch that. Actually, I can't even remember now why I didn't use a real Command... it seemed like there was a good reason, because I definitely thought of this at the time.

rcorre added some commits Jun 11, 2016

Add unit test for CommandCompletionModel.
This establishes a pattern that can probably be used to test any of the
completion models.
See #999.

Add unit tests for help completion.
Also adds a FakeConfigSection stub for stubbing out configdata.DATA.

Unit test url/quickmark/bookmark completion.
This also adds a few more stubs/fakes to mock out the
quickmark-manager, bookmark-manager, and web-history objects.

Unit test tab (buffer) completion.
This involved adding a few stubs as well as expanding the FakeWebView.

Clean up objreg after test_history.
test_init in test_history was leaving a web-history in the objreg,
which was interfering with the completion tests.

Unit test config option completion.
Had to add the `raw` parameter to ConfigStub.get as the setting option
completion model passes this argument explicitly (although the docs say
only raw=True is supported).

Test completion quickmark/bookmark deletion.
Validate that quickmarks and bookmarks can be deleted from the url
completion menu.

Test delete_cur_item more cleanly.
Reduce duplicate code by mocking out a QTreeView around the model and
setting its index.

Add completion/models/base to perfect_files.
It now has 100% test coverage.

rcorre added some commits Jun 30, 2016

Use iter instead of __iter__.
Correct two places this was used.

Don't bother mocking a history entry.
Use webkit.history.Entry in tests instead of FakeHistoryEntry.

Clean up Qt view used in completion tests.
Add the view created during the tests to qtbot so it gets cleaned up
after the test exits.
rcorre (Collaborator) commented Jun 30, 2016

Rebased on master to pick up the OS X test 'fix'. The last 4 commits are from this code review. I've still got some more un-mocking to do, I think.

The-Compiler (Collaborator) commented Jun 30, 2016

> Definitely. It depends on whether you really think of these 'unit' tests as 'isolation' tests, or whether they could have some elements of an integration test. I was leaning towards the former, but it definitely feels like I mocked out too much.

Yeah, I'm usually leaning a bit more towards the latter.

I think the benefit is clear: more straightforward and often more maintainable tests.

The drawbacks I can think of:

  • Slightly slower tests - I don't think it makes much of a difference, unless
    you have things like network access, which should definitely be mocked out.
  • Unrelated code changes cause unit tests to fail - I think here the benefits
    simply outweigh this drawback, and if you mocked things out, you'd need to
    take care of keeping the mocks up-to-date as well.
  • Coverage can be a bit skewed, i.e. a module can appear covered even though
    it's just from other tests "smearing" into that module, and it's not actually
    tested well. To take care of that, I added a mode to run test files
    one-by-one to check_coverage.py, though I haven't actually checked if that
    still works for a while 😆
On tests/unit/completion/test_models.py (outdated):
'web-history-max-items': 2}
model = urlmodel.UrlCompletionModel()
# delete item (1, 0) -> (bookmarks, 'https://github.com' )
view = _mock_view_index(model, 1, 0)

The-Compiler (Collaborator) commented Jun 30, 2016

I think you need to do qtbot.add_widget(view) here too.

On tests/unit/completion/test_models.py (outdated):
'web-history-max-items': 2}
model = urlmodel.UrlCompletionModel()
# delete item (0, 1) -> (quickmarks, 'ddg' )
view = _mock_view_index(model, 0, 1)

The-Compiler (Collaborator) commented Jun 30, 2016

I think you need to do qtbot.add_widget(view) here too.
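(Applying that suggestion to the excerpt above, as a sketch; _mock_view_index is the PR's helper that wraps the model in a QTreeView and sets its current index:)

model = urlmodel.UrlCompletionModel()
# delete item (0, 1) -> (quickmarks, 'ddg')
view = _mock_view_index(model, 0, 1)
qtbot.add_widget(view)  # qtbot destroys the view when the test finishes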

rcorre added some commits Jul 1, 2016

Eliminate FakeSettingSection/Value.
Don't really need to mock these out for tests as the real classes are
simple enough.
rcorre (Collaborator) commented Jul 1, 2016

I'm pretty much in complete agreement about mocking. I've seen tests that mock out so much they don't even seem to be testing anything...

I managed to eliminate the fake setting sections and values; I kept around FakeConfigType because it's an easy way to mock completions.

I remember now why I didn't use a real Command: the constructor takes the handler method and tears it apart to populate its fields, and that's way more than I wanted to test here. I can take a look at test_cmdutils/test_argparser and see how rigorous they are.

I've got a test for the Completer in the pipeline as well -- should I push that up here or would you rather get this through and review that in another PR?
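(For illustration, a hedged sketch of what such a FakeConfigType might look like; the exact shape is an assumption based on the discussion above:)

class FakeConfigType:

    """Stub config type whose valid values are injected directly."""

    def __init__(self, *valid_values):
        self.valid_values = valid_values

    def complete(self):
        # real configtypes return (value, description) tuples here
        return [(val, '') for val in self.valid_values]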

The-Compiler self-assigned this on Jul 1, 2016

The-Compiler merged commit d45acb0 into qutebrowser:master on Jul 1, 2016

1 of 2 checks passed:

continuous-integration/travis-ci/pr: The Travis CI build failed
continuous-integration/appveyor/pr: AppVeyor build succeeded
The-Compiler (Collaborator) commented Jul 1, 2016

Let's get this in first - thanks!

rcorre deleted the rcorre:completion_tests branch on Jul 1, 2016
