pytest: do not ignore server issues #19996
Conversation
When a test server is found or configured, do not silently ignore errors when starting it, and do not disable it when its version check fails. This forces pytest to fail when a server is not operating as it should.
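The behavior the PR describes could be sketched roughly like this — a hypothetical version check that raises instead of silently disabling the server (names and structure are illustrative, not curl's actual test code):

```python
# Hypothetical sketch: when a configured test server fails its version
# check, raise so pytest reports a hard failure instead of quietly
# disabling the server and skipping its tests.
import subprocess


class ServerError(Exception):
    """A configured test server is present but not operating correctly."""


def check_server_version(binary: str) -> str:
    """Run `binary --version`; surface failures instead of ignoring them."""
    proc = subprocess.run([binary, "--version"],
                         capture_output=True, text=True)
    if proc.returncode != 0:
        # Previously an error here might have disabled the server silently;
        # now the failure propagates up to pytest.
        raise ServerError(f"{binary} version check failed: "
                          f"{proc.stderr.strip()}")
    return proc.stdout.strip()
```

A session fixture would call such a check once and let the exception fail the run.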
@vszakats and @dfandrich : as to the passed vs skipped run of pytests. It prints a line like this at the end:

====================== 592 passed, 135 skipped in 53.76s =======================

There is also a pytest-reportlog plugin that can produce a JSON file with all test outcomes. Not sure what is more convenient or better to use in testclutch.
Test Clutch parses this line, but only to get the overall test success/fail
status. It gets the individual test results by parsing the -v output for each
test.
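Extracting the overall counts from that summary line could look something like this (the regex and function are illustrative, not Test Clutch's actual parser):

```python
# Sketch: parse a pytest summary line such as
# "====== 592 passed, 135 skipped in 53.76s ======"
# into a dict of outcome counts.
import re

SUMMARY_RE = re.compile(r"=+ (?P<body>.+?) in [\d.]+s =+")


def parse_summary(line: str) -> dict:
    """Return e.g. {'passed': 592, 'skipped': 135} for a summary line."""
    counts = {}
    m = SUMMARY_RE.search(line)
    if m:
        for part in m.group("body").split(", "):
            num, outcome = part.split(" ", 1)
            counts[outcome] = int(num)
    return counts
```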
> There is also a pytest-reportlog plugin that can produce a JSON file with all test outcomes at the end.
This might be useful. I've hesitated to support machine-readable test output
formats so far because that would double the size of the logs and just make it
harder for humans to navigate. But, perhaps a zstd-compressed base85-encoded
block of JSON after the text log wouldn't be too distracting. It might be worth
some experimentation.
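The compressed-block idea could be sketched as below. Note this uses zlib as a stdlib stand-in, since zstd needs a third-party binding in Python; that substitution is an assumption for illustration, not what the comment proposes verbatim:

```python
# Sketch: append an ASCII-armored, compressed JSON block after a text log,
# so humans still read the log while a machine can decode the trailer.
import base64
import json
import zlib


def encode_results(results: list) -> str:
    """Compress the JSON results and base85-encode them into ASCII."""
    raw = json.dumps(results).encode()
    return base64.b85encode(zlib.compress(raw, 9)).decode()


def decode_results(blob: str) -> list:
    """Reverse encode_results: base85-decode, decompress, parse JSON."""
    return json.loads(zlib.decompress(base64.b85decode(blob)))
```

The round trip is lossless, and the encoded blob is plain ASCII, so it could sit at the bottom of a text log without breaking tools that read the log line by line.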
> Would that be better for ingestion into testclutch than parsing the raw output? Not sure what is more convenient or better to use in testclutch.
A machine-readable format would always be better, but usability for humans needs to come first. Maybe we can do both.
The default would be to store the JSON in a file. Can you then access that? Otherwise we could just …
If the file were created as a build artifact it should be accessible, with the
right permissions. But, TC would need a new mechanism to search for such
artifacts, fetch them and a new parser to make sense of them. Additionally, it
would need logic to compare the traditionally-parsed log files with the
machine-readable versions to avoid duplicating results and a way to associate
them with the remaining information obtained from the right build in the log
files. All doable, but a project that only makes sense to tackle if the results
make the effort worthwhile.
I looked into the pytest-setupinfo plugin and didn't immediately see any way to get JSON out of it.
The reportlog JSON record for a single test report looks like this (one object per line):

{"nodeid": "tests/http/test_17_ssl_use.py::TestSSLUse::test_17_04_double_dot[http/1.1]", "location": ["tests/http/test_17_ssl_use.py", 131, "TestSSLUse.test_17_04_double_dot[http/1.1]"], "keywords": {"test_17_04_double_dot[http/1.1]": 1, "parametrize": 1, "pytestmark": 1, "http/1.1": 1, "TestSSLUse": 1, "test_17_ssl_use.py": 1, "http": 1, "tests": 1, "curl": 1, "": 1}, "outcome": "passed", "longrepr": null, "when": "setup", "user_properties": [], "sections": [["Captured log setup", "DEBUG filelock:_api.py:294 Attempting to acquire lock 4497406912 on /Users/sei/projects/curl/tests/http/gen/gw1/ca/ca.lock\nDEBUG filelock:_api.py:297 Lock 4497406912 acquired on /Users/sei/projects/curl/tests/http/gen/gw1/ca/ca.lock\nDEBUG filelock:_api.py:327 Attempting to release lock 4497406912 on /Users/sei/projects/curl/tests/http/gen/gw1/ca/ca.lock\nDEBUG filelock:_api.py:330 Lock 4497406912 released on /Users/sei/projects/curl/tests/http/gen/gw1/ca/ca.lock\nDEBUG filelock:_api.py:294 Attempting to acquire lock 4490692464 on /Users/sei/projects/curl/tests/http/gen/ports.lock\nDEBUG filelock:_api.py:297 Lock 4490692464 acquired on /Users/sei/projects/curl/tests/http/gen/ports.lock\nDEBUG filelock:_api.py:327 Attempting to release lock 4490692464 on /Users/sei/projects/curl/tests/http/gen/ports.lock\nDEBUG filelock:_api.py:330 Lock 4490692464 released on /Users/sei/projects/curl/tests/http/gen/ports.lock\nDEBUG filelock:_api.py:294 Attempting to acquire lock 4501963264 on /Users/sei/projects/curl/tests/http/gen/ports.lock\nDEBUG filelock:_api.py:297 Lock 4501963264 acquired on /Users/sei/projects/curl/tests/http/gen/ports.lock\nDEBUG filelock:_api.py:327 Attempting to release lock 4501963264 on /Users/sei/projects/curl/tests/http/gen/ports.lock\nDEBUG filelock:_api.py:330 Lock 4501963264 released on /Users/sei/projects/curl/tests/http/gen/ports.lock"]], "duration": 2.8718773560003683, "start": 1765879421.6158159, "stop": 1765879424.4875772, "$report_type": "TestReport", "item_index": 6, "worker_id": "gw1", "testrun_uid": "65231f3a01704750b2f8bafec7c26120", "node": "<WorkerController gw1>"}
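Ingesting such a reportlog file could be sketched like this. The plugin writes one JSON object per line, and each test yields a TestReport per phase ("setup", "call", "teardown"); the function name and file handling here are illustrative, not testclutch code:

```python
# Sketch: read a pytest-reportlog JSONL file and collect one outcome per
# test, taking the "call" phase as the per-test result.
import json


def collect_outcomes(path: str) -> dict:
    """Map each test nodeid to its outcome ('passed', 'failed', ...)."""
    outcomes = {}
    with open(path) as fh:
        for line in fh:
            rec = json.loads(line)
            if (rec.get("$report_type") == "TestReport"
                    and rec.get("when") == "call"):
                outcomes[rec["nodeid"]] = rec["outcome"]
    return outcomes
```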