add integration test for LP: #1900837 #679
Conversation
I like the idea of the mark. It gives us an easy way to run (or exclude) tests for an SRU...though over time we could have an unruly number of SRU marks.
What if we had a generic sru mark that took (kw)args of the SRU release, an optional bug reference, and/or anything else we think is relevant? How much work would it be to make it easy to filter on those values?
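For illustration, a minimal pure-Python sketch of how a generic `sru` mark's args/kwargs could be matched (the `Mark` namedtuple here is only a stand-in for pytest's real Mark object, and the `@pytest.mark.sru("2020_11", bug="1900837")` shape is hypothetical, not the PR's actual code):

```python
from collections import namedtuple

# Illustrative stand-in for pytest's Mark object (name, args, kwargs).
Mark = namedtuple("Mark", ["name", "args", "kwargs"])

# What a hypothetical @pytest.mark.sru("2020_11", bug="1900837") would record:
mark = Mark(name="sru", args=("2020_11",), kwargs={"bug": "1900837"})

def sru_mark_matches(mark, release=None, bug=None):
    """Return True if the mark satisfies all of the given criteria."""
    if mark.name != "sru":
        return False
    if release is not None and release not in mark.args:
        return False
    if bug is not None and mark.kwargs.get("bug") != bug:
        return False
    return True
```

Filtering on these values at the command line would still need a small conftest hook, since pytest's `-m` expression only matches mark names, not mark arguments.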
@pytest.mark.sru_2020_11
class TestLp1900836:
I don't think we should name files/classes/tests after bugs. The docstring at the top has the bug number, and we could perhaps add a comment with the bug number somewhere else, but I'd like to be able to quickly glance a file/class/test name to know what the test is doing.
I'm wondering if we are abusing marks here as a generic tagging feature (which I guess it actually is :) ). If we adopt a class or test naming convention that includes SRU or SRU_20_4 (cloud-init major_minor version, or a date-based SRU start like 2020_11), wouldn't we be able to provide pytest keyword search expressions and avoid the bloat of marker definitions in tox? Then we could run things like pytest -k "SRU"
to run any SRU-related tests ever defined, or pytest -k "SRU_2020_11"
to match only specific tests related to SRU_2020_11?
I don't think we should name files/classes/tests after bugs. The docstring at the top has the bug number, and we could perhaps add a comment with the bug number somewhere else, but I'd like to be able to quickly glance a file/class/test name to know what the test is doing.
I agree I'd like to glance at test name and know what it's doing, but I also want a way to search and run specific known tests based on my intent:
- am I SRU testing, and can I easily select the subset of tests related to that capacity with a -k or -m param?
- am I trying to validate just one bug and want to provide a bug ID?
This might be solvable per @TheRealFalcon's generic kwargs handling suggestion above, or maybe we create a decorator that could optionally skip or exclude a test that didn't match a set of kwargs or environment-variable criteria.
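One possible shape for the environment-variable decorator idea, sketched with the stdlib's `unittest.skip` so it stays self-contained (the `CLOUD_INIT_SRU` variable name and the `requires_sru` helper are assumptions for illustration; a real pytest version would more likely use `pytest.mark.skipif`):

```python
import os
import unittest

def requires_sru(release):
    """Hypothetical decorator: skip the test unless the CLOUD_INIT_SRU
    environment variable names the given SRU release tag."""
    def decorator(test_obj):
        if os.environ.get("CLOUD_INIT_SRU") != release:
            # unittest.skip works on plain test functions and classes alike.
            return unittest.skip(
                "not running SRU %s verification" % release
            )(test_obj)
        return test_obj
    return decorator
```

Note the check runs at import time, so the environment variable must be set before collection begins; that is one reason a pytest mark plus a collection hook may be the better fit.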
I don't think we should name files/classes/tests after bugs. The docstring at the top has the bug number, and we could perhaps add a comment with the bug number somewhere else, but I'd like to be able to quickly glance a file/class/test name to know what the test is doing.
That's a good point; I think we probably do want the bug number in at least one part of the test ID (so -k 1900837
works, for example, but also so you can easily find the bug based on test output). I'll play around with some naming combinations and see what I like the look of.
I'm wondering if we are abusing marks here as a generic tagging feature (which I guess it actually is :) ).
We are using it that way, and I don't think it's an abuse. :p
If we adopt a class or test naming convention that includes SRU or SRU_20_4 (cloud-init Major_minor version or date-based SRU start 2020_11)
Using cloud-init versions will fall over as soon as we have a non-upstream-release SRU (e.g. 20.2-45-g5f7825e2-0ubuntu1
was our last SRU); that's hard to fit into a mark name, and even harder to fit into a test name. That's why I went with date-based names. (If we're ever unfortunate enough to start two SRUs in a month, we can do sru_2020_11a
or similar.)
wouldn't we be able to provide pytest keyword search expressions and avoid the bloat of marker definitions in tox? Then we could run things like
pytest -k "SRU"
to run any SRU-related tests ever defined, or pytest -k "SRU_2020_11"
to match only specific tests related to SRU_2020_11?
The advantage marks bring is that we gain validation of them: if we misname a test method/class "SRU_2200_11" and it makes it through code review, then we may never fix that (and so may not run the test as part of SRU validation). Using @pytest.mark.sru_2200_11
errors out without running any tests:
========================================== ERRORS ==========================================
_____________ ERROR collecting tests/integration_tests/bugs/test_lp1900837.py ______________
'sru_2200_11' not found in `markers` configuration option
================================= short test summary info ==================================
ERROR tests/integration_tests/bugs/test_lp1900837.py
!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!
===================================== 1 error in 0.24s =====================================
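For context (an assumption about this repo's configuration, which isn't fully shown here): pytest only hard-errors on unregistered marks like this when strict marker checking is enabled; without it, an unknown mark produces only a warning. A minimal configuration that yields the collection error above might look like:

```ini
# Hypothetical [pytest] section; the addopts line is an assumption --
# without --strict-markers, an unknown mark is only a warning.
[pytest]
addopts = --strict-markers
markers =
    sru_2020_11: test is part of the 2020/11 SRU verification
```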
I don't think we should name files/classes/tests after bugs. The docstring at the top has the bug number, and we could perhaps add a comment with the bug number somewhere else, but I'd like to be able to quickly glance a file/class/test name to know what the test is doing.
I agree I'd like to glance at test name and know what it's doing, but I also want a way to search and run specific known tests based on my intent:
* am I SRU testing and can I easily select the subset of tests related to that capacity with a -k or -m param?
* am I trying to validate just one bug and want to provide a bug ID?
This might be solvable per @TheRealFalcon's generic kwargs handling suggestion above, or maybe we create a decorator that could optionally skip or exclude a test that didn't match a set of kwargs or environment-variable criteria.
We shouldn't need to write any custom code for this: some combination of marking, test naming and -k
/-m
should be sufficient. I think, basically, we'll want to declare that bug number must be included in some part of the pytest nodeid, so that -k
selection can work, and use marks for tagging tests as SRU-relevant. For your first case, -m sru_2020_11
will select this SRU's tests, and for your second case -k 1900837
will select the bug's tests.
(I can't think of a particular reason we would want to run "all SRU tests" instead of just "all tests", but -k
does select based on substrings of mark names, so -k sru
would get you that.)
@pytest.mark.sru_2020_11
class TestLp1900836:
    def test_lp1900836(self, client):
I kind of like that the bug ID is encoded in the test name here, as I think that makes those tests searchable via -k bugid.
But I don't know if we need that at both class level and test level, or how we want to organize naming conventions vs. marks.
I kind of like that the bug ID is encoded in the test name here, as I think that makes those tests searchable via
-k bugid
But I don't know if we need that at both class level and test level, or how we want to organize naming conventions vs. marks.
+1 on this. I'm going to do some rejigging to capture the bug number in only one part of the test's "full" name, so we can still select on bug number but don't have "lp1900837" repeated ad nauseam.
Potentially. The only real maintenance "burden" is the list in
I like this idea, in terms of capturing extra metadata, but
I've applied this patch locally:
--- a/tests/integration_tests/bugs/test_lp1900837.py
+++ b/tests/integration_tests/bugs/test_lp1900837.py
@@ -11,7 +11,7 @@ def _get_log_perms(client):
return client.execute("stat -c %a /var/log/cloud-init.log")
-@pytest.mark.sru_2020_11
+@pytest.mark.sru("2020_11")
class TestLp1900836:
def test_lp1900836(self, client):
# Confirm that the current permissions aren't 600
diff --git a/tox.ini b/tox.ini
index 066f923a..4c3575da 100644
--- a/tox.ini
+++ b/tox.ini
@@ -169,4 +169,4 @@ markers =
lxd_container: test will only run in LXD container
user_data: the user data to be passed to the test instance
instance_name: the name to be used for the test instance
- sru_2020_11: test is part of the 2020/11 SRU verification
+ sru: test is part of SRU verification
Digging in further, I think this is a fundamental limitation. Using this addition to
@pytest.hookimpl(hookwrapper=True)
def pytest_collection_modifyitems(items, config):
    yield
    assert False, list(items[0].iter_markers())
we can see what the pytest Mark object for this looks like:
AssertionError: [Mark(name='sru', args=('2020_11',), kwargs={})]
and looking at the code which performs the mark selection in pytest, it's only the name which is matched on. So I think we need to encode all the information we need in mark or test names.
(In more detail:
So we could implement this ourselves somehow, I'm sure;
Hmmmm, I suppose I should have looked at the docs before asking the question. I remembered seeing some sort of custom CLI and mark handling code you could write, but didn't remember the specifics. I found it here, and it doesn't seem like it would be that difficult. I agree that doing it only to avoid having extra SRU marks in the tox.ini isn't worth it, but I'm also wondering if this could be leveraged for SRU test metadata as well, instead of having to put bug names in test names, for example.
Aha, yeah, I've seen this before but didn't connect the dots; good find.
I wouldn't be opposed to implementing something along these lines to make a particular use case easier, but I'm hesitant to implement it in a way that means people can't use pytest's regular lookup mechanisms as well. The more specialised our test codebase is, the harder it is for people to pick up on what's happening. Let me reshuffle this test so it's not repetitively named, and we can see how we feel about it.
As the first test of this SRU cycle, this also introduces the sru_2020_11 mark to allow us to easily identify the set of tests generated for this SRU.
So thinking about this: testing for a single bug could involve multiple test functions and even multiple test classes, so I think the unit which makes most sense to name after the bug is the test file. I've just pushed up a very simple change:
--- a/tests/integration_tests/bugs/test_lp1900837.py
+++ b/tests/integration_tests/bugs/test_lp1900837.py
@@ -12,8 +12,8 @@ def _get_log_perms(client):
@pytest.mark.sru_2020_11
-class TestLp1900836:
- def test_lp1900836(self, client):
+class TestLogPermissionsNotResetOnReboot:
+ def test_permissions_unchanged(self, client):
# Confirm that the current permissions aren't 600
        assert "644" == _get_log_perms(client)
This gives output that looks like this (just including the output lines which exhibit the test name):
I think this looks pretty reasonable: it's clear what the specific failure is (
What this doesn't really do a good job of is making it easy to find tests which (e.g.) already exercise log permissions based on examining filenames (which, of course, is a common way of looking things up from an editor). This is a gap, though passing
It also doesn't give us a way of adding SRU tests which can be selected in this way to existing test files (consider if we're fixing a bug in the
It certainly doesn't give us a way of indicating that an existing test (presumably with modifications) tests a particular bug: we would have to rename such a test to include the bug number.
It also doesn't give us a way of specifying that a particular test instance of a parameterised test is related to a particular bug: using a mark doesn't help us in this case, though, because our pytest support matrix means we aren't able to apply marks to test instances anyway (search for "pytest.param" in the HACKING doc).
I propose that we move forward with this pattern for this SRU: the hardcoded SRU mark name, and using filenames to indicate the relevant bug, with the expectation that test classes/methods are descriptive of the test being performed. When we retrospect on this SRU, we should spend some time thinking about what interfaces could have made it easier: once we have multiple tests, I think it'll be easier to determine what the right pattern to apply is (and the cost of rearranging marks is negligible, so it doesn't matter if we don't Get This Right up-front).
What do folks think? (I'm happy to be convinced otherwise!)
I think you did a good job explaining all the pros and cons of this proposal. In my head I was thinking a few "but what about..."s, and you mentioned them in the next paragraph. I agree that it's a workable solution for now, and that we should also come back to this and see if we can think of a better solution.
@blackboxsw learns to use cut-n-paste and make review comments.
...
Given that we are categorizing test modules in a bugs subdirectory anyway, organizing those modules by bug_id in the filename makes a lot of sense to me (and makes that module searchable via keywords). +2
TIL: thank you for that --collect-only reference. I kept fumbling around looking for a dry-run equivalent and not seeing that jump out in the docs. This meets my need for a bug-ID-searchable and human-readable description of a test at runtime.
I recognize that encoding the bug ID in class names/test names diminishes the character real estate we have available for naming tests in a constructive/readable manner and adds "cost" to the development of such tests, but I think that may be worth it for tests written outside of the "bugs" subdirectory, if we want to get searchable coverage in the future for either SRUing specific cherry-picks or pytest.mark(ing) the set of SRU bugs.
This makes perfect sense, go for it. We'll need more use cases, which we will get this SRU, to properly define the needs here.
Proposed Commit Message
Test Steps
I ran this test against every current Ubuntu release:
and it failed with:
in each case, as expected.
I then ran this test against every current Ubuntu release with cloud-init installed from the daily PPA (which includes the fix this is testing) and they all passed.
Checklist: