18 changes: 18 additions & 0 deletions Lib/test/libregrtest/single.py
@@ -328,6 +328,9 @@ def _runtest(result: TestResult, runtests: RunTests) -> None:
support.junit_xml_list = None


import termios
import subprocess

def run_single_test(test_name: TestName, runtests: RunTests) -> TestResult:
"""Run a single test.

@@ -346,6 +349,21 @@ def run_single_test(test_name: TestName, runtests: RunTests) -> TestResult:
    pgo = runtests.pgo
    try:
        _runtest(result, runtests)
        try:
            attrs = termios.tcgetattr(sys.__stdin__.fileno())
            lflags = attrs[3]
Member:

I'm only abstractly aware of what you're checking here, but it makes me wonder if we can/should add this as an environment marker. Implementing this there might be better at pinpointing where things are going wrong (and also automatically restore things for you, if implemented correctly).

Member Author:

> I'm only abstractly aware of what you're checking here, but it makes me wonder if we can/should add this as an environment marker. Implementing this there might be better at pinpointing where things are going wrong (and also automatically restore things for you, if implemented correctly).

Yeah, that's an interesting idea I might explore when I get a little more time. I've also vaguely thought it would be kind of cool if you could set something to run some arbitrary code around each test, but that would take more design and discussion.
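The environment-marker idea above could be sketched as a wrapper that snapshots the tty state before a test and restores it afterwards. This is a hypothetical illustration, not code from the PR: the helper name `run_with_tty_guard` is invented, and it assumes a POSIX system where `termios` is available.

```python
import sys
import termios

def run_with_tty_guard(test_func):
    """Run test_func, restoring the tty's local modes afterwards."""
    fd = sys.__stdin__.fileno()
    try:
        saved = termios.tcgetattr(fd)
    except termios.error:
        # stdin is not a tty (redirected or piped): nothing to guard
        return test_func()
    try:
        return test_func()
    finally:
        # restore whatever the test may have changed (e.g. the ECHO flag)
        termios.tcsetattr(fd, termios.TCSANOW, saved)
```

Restoring the full `tcgetattr` snapshot (rather than just ECHO) would also cover tests that clobber other local modes.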

Member Author:

@zware I like this suggestion so much, I'm going to refocus this PR to do exactly that. My remote got messed up so either I'll resolve the conflicts or just start over with a new PR, but either way, local testing solves the problem nicely. I think my debugging efforts were unsuccessful because more than one test can actually leave the tty in a -echo state. I'm not going to spend the time to dig into those root causes, but the env save/restore wfm.

Along those lines, I wish regrtest gave me a little more information about which tests changed the env in its test summary. Scrolling up I can see things like test_foo failed (env changed) and then the test gets rerun and succeeds, and I see things like this, which is helpful:

Warning -- stty_echo was modified by test_difflib
Warning --   Before: True
Warning --   After:  False

but it would be nice if the summary, um, summarized these! Maybe that's implied in the 11 re-run tests section. That's a PR for another day.

== Tests result: ENV CHANGED then SUCCESS ==

10 slowest tests:
- test_signal: 44.2 sec
- test_xmlrpc: 27.3 sec
- test_logging: 23.2 sec
- test_remote_pdb: 21.6 sec
- test_ssl: 19.5 sec
- test_faulthandler: 18.3 sec
- test_selectors: 15.1 sec
- test_threading: 14.9 sec
- test.test_asyncio.test_tasks: 13.3 sec
- test_urllib2net: 13.0 sec

20 tests skipped:
    test.test_asyncio.test_windows_events
    test.test_asyncio.test_windows_utils test.test_gdb.test_backtrace
    test.test_gdb.test_cfunction test.test_gdb.test_cfunction_full
    test.test_gdb.test_misc test.test_gdb.test_pretty_print
    test.test_os.test_windows test_android test_devpoll test_epoll
    test_free_threading test_launcher test_msvcrt test_startfile
    test_winapi test_winconsoleio test_winreg test_winsound test_wmi

2 tests skipped (resource denied):
    test_peg_generator test_zipfile64

11 re-run tests:
    test.test_asyncio.test_protocols test_configparser test_ctypes
    test_difflib test_doctest test_locale test_math test_pickle
    test_struct test_ttk test_zipimport

433 tests OK.

Total duration: 48.2 sec
Total tests: run=47,122 skipped=1,955
Total test files: run=464/455 skipped=20 resource_denied=2 rerun=11
Result: ENV CHANGED then SUCCESS
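The wished-for summary could, hypothetically, be built by grouping regrtest's "was modified by" warnings per test; the line format follows the `stty_echo` warning shown above. The helper name `summarize_env_changes` is invented here, not a regrtest API.

```python
import re
from collections import defaultdict

def summarize_env_changes(output: str) -> dict:
    """Group 'Warning -- X was modified by Y' lines per test name."""
    changed = defaultdict(list)
    pattern = re.compile(r'^Warning -- (\S+) was modified by (\S+)',
                         re.MULTILINE)
    for attr, test in pattern.findall(output):
        changed[test].append(attr)
    return dict(changed)
```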

            echo = lflags & termios.ECHO
        except termios.error:
            state = subprocess.run(['stty', '-a'], capture_output=True, text=True)
            for word in state.stdout.split():
                if word in ('echo', '-echo'):
                    echo = (word == 'echo')
                    break
            else:
                print(f'{red}test {test_name} MAYBE - {state.stdout}')
                echo = False
        if not echo:
            print(f"{red}test {test_name} BROKE -ECHO")
    except:
        if not pgo:
            msg = traceback.format_exc()
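For reference, the `stty -a` fallback in the hunk above can be isolated as a small helper. This is a sketch with an invented name (`parse_echo_state`), not code from the PR; it relies on the same exact-token comparison as the diff, so modes like `echoe` or `echok` do not false-match.

```python
def parse_echo_state(stty_output: str):
    """Return True for an 'echo' token, False for '-echo', None if neither appears."""
    for word in stty_output.split():
        if word in ('echo', '-echo'):
            # exact comparison: 'echoe', 'echok', etc. are skipped
            return word == 'echo'
    return None
```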