Merge r227064 - WebDriver: add support for test expectations
https://bugs.webkit.org/show_bug.cgi?id=180420

Reviewed by Carlos Alberto Lopez Perez.

Tools:

Add support for parsing test expectations from a JSON file and marking tests on collection accordingly.

* Scripts/run-webdriver-tests: Get the retval from process_results().
* Scripts/webkitpy/thirdparty/__init__.py:
(AutoinstallImportHook._install_pytest): Also install py, since pytest needs it.
* Scripts/webkitpy/webdriver_tests/pytest_runner.py:
(TestExpectationsMarker): Plugin to mark tests based on given expectations.
(TestExpectationsMarker.__init__): Initialize expectations.
(TestExpectationsMarker.pytest_collection_modifyitems): Mark tests if needed.
(run): Create and use TestExpectationsMarker plugin.
* Scripts/webkitpy/webdriver_tests/webdriver_selenium_executor.py:
(WebDriverSeleniumExecutor.run): Pass expectations to pytest_runner.
* Scripts/webkitpy/webdriver_tests/webdriver_test_runner.py:
(WebDriverTestRunner.__init__): Create a TestExpectations and pass it to the runners.
(WebDriverTestRunner.run): Do not count results here.
(WebDriverTestRunner.process_results): Rename print_results() to process_results(), since it now returns the
number of failures. Printing the test summary while processing results will be made optional in a follow-up
patch.
(WebDriverTestRunner.process_results.report): Return the number of failures.
* Scripts/webkitpy/webdriver_tests/webdriver_test_runner_selenium.py:
(WebDriverTestRunnerSelenium.__init__): Initialize _expectations.
(WebDriverTestRunnerSelenium.collect_tests): Do not include skipped tests.
(WebDriverTestRunnerSelenium.run): Stop returning the tests count.
* Scripts/webkitpy/webdriver_tests/webdriver_test_runner_w3c.py:
(WebDriverTestRunnerW3C.__init__): Initialize _expectations.
(WebDriverTestRunnerW3C.collect_tests): Do not include skipped tests.
(WebDriverTestRunnerW3C._scan_directory): Ditto.
(WebDriverTestRunnerW3C.run): Stop returning the tests count.
* Scripts/webkitpy/webdriver_tests/webdriver_w3c_executor.py:
(WebDriverW3CExecutor.run): Pass expectations to pytest_runner.

WebDriverTests:

Add initial test expectations. For now I'm only adding the W3C test expectations; the Selenium ones will be
added in a follow-up patch.

* TestExpectations.json: Added.
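
Note: the content of the new TestExpectations.json is not shown in this page's diff. As a rough, hypothetical sketch only — the real schema is whatever webkitpy.common.test_expectations.TestExpectations parses, and the test name, nesting, and bug link here are illustrative — an entry marking one subtest as an expected failure might look like:

```json
{
    "imported/w3c/webdriver/tests/sessions/new_session/default_values.py": {
        "subtests": {
            "test_basic": {
                "expected": {"all": {"status": ["FAIL"], "bug": "webkit.org/b/180420"}}
            }
        }
    }
}
```

The statuses consumed by the new pytest plugin are FAIL, TIMEOUT and SKIP (see pytest_runner.py below).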
carlosgcampos committed Jan 24, 2018
1 parent cd77bd8 commit 8b2d6c4
Showing 11 changed files with 504 additions and 48 deletions.
38 changes: 38 additions & 0 deletions Tools/ChangeLog
@@ -1,3 +1,41 @@
+2018-01-17  Carlos Garcia Campos  <cgarcia@igalia.com>
+
+        WebDriver: add support for test expectations
+        https://bugs.webkit.org/show_bug.cgi?id=180420
+
+        Reviewed by Carlos Alberto Lopez Perez.
+
+        Add support for parsing test expectations from a JSON file and marking tests on collection accordingly.
+
+        * Scripts/run-webdriver-tests: Get the retval from process_results().
+        * Scripts/webkitpy/thirdparty/__init__.py:
+        (AutoinstallImportHook._install_pytest): Also install py, since pytest needs it.
+        * Scripts/webkitpy/webdriver_tests/pytest_runner.py:
+        (TestExpectationsMarker): Plugin to mark tests based on given expectations.
+        (TestExpectationsMarker.__init__): Initialize expectations.
+        (TestExpectationsMarker.pytest_collection_modifyitems): Mark tests if needed.
+        (run): Create and use TestExpectationsMarker plugin.
+        * Scripts/webkitpy/webdriver_tests/webdriver_selenium_executor.py:
+        (WebDriverSeleniumExecutor.run): Pass expectations to pytest_runner.
+        * Scripts/webkitpy/webdriver_tests/webdriver_test_runner.py:
+        (WebDriverTestRunner.__init__): Create a TestExpectations and pass it to the runners.
+        (WebDriverTestRunner.run): Do not count results here.
+        (WebDriverTestRunner.process_results): Rename print_results() to process_results(), since it now returns the
+        number of failures. Printing the test summary while processing results will be made optional in a follow-up
+        patch.
+        (WebDriverTestRunner.process_results.report): Return the number of failures.
+        * Scripts/webkitpy/webdriver_tests/webdriver_test_runner_selenium.py:
+        (WebDriverTestRunnerSelenium.__init__): Initialize _expectations.
+        (WebDriverTestRunnerSelenium.collect_tests): Do not include skipped tests.
+        (WebDriverTestRunnerSelenium.run): Stop returning the tests count.
+        * Scripts/webkitpy/webdriver_tests/webdriver_test_runner_w3c.py:
+        (WebDriverTestRunnerW3C.__init__): Initialize _expectations.
+        (WebDriverTestRunnerW3C.collect_tests): Do not include skipped tests.
+        (WebDriverTestRunnerW3C._scan_directory): Ditto.
+        (WebDriverTestRunnerW3C.run): Stop returning the tests count.
+        * Scripts/webkitpy/webdriver_tests/webdriver_w3c_executor.py:
+        (WebDriverW3CExecutor.run): Pass expectations to pytest_runner.
+
 2018-01-15  Carlos Garcia Campos  <cgarcia@igalia.com>
 
         [GTK][WPE] Add support for unit test expectations
4 changes: 2 additions & 2 deletions Tools/Scripts/run-webdriver-tests
@@ -65,8 +65,8 @@ except NotImplementedError, e:
 
 port._display_server = options.display_server
 runner = WebDriverTestRunner(port)
-retval = runner.run(args)
-runner.print_results()
+runner.run(args)
+retval = runner.process_results()
 
 if options.json_output is not None:
     runner.dump_results_to_json_file(options.json_output)
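
With this change the script's exit status is derived from the number of unexpected results rather than from the number of tests run. A minimal sketch of the resulting flow; the final sys.exit(retval) is an assumption about the truncated tail of the script, which is not visible in this hunk:

```python
runner = WebDriverTestRunner(port)
runner.run(args)                   # executes the tests; no longer returns a count
retval = runner.process_results()  # prints the summary, returns the unexpected-failure count

if options.json_output is not None:
    runner.dump_results_to_json_file(options.json_output)

sys.exit(retval)  # assumed wiring: exit non-zero when there were unexpected failures
```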
2 changes: 2 additions & 0 deletions Tools/Scripts/webkitpy/thirdparty/__init__.py
@@ -142,6 +142,8 @@ def _install_pytest_timeout(self):
                       "pytest-timeout-1.2.0/pytest_timeout.py")
 
     def _install_pytest(self):
+        self._install("https://pypi.python.org/packages/90/e3/e075127d39d35f09a500ebb4a90afd10f9ef0a1d28a6d09abeec0e444fdd/py-1.5.2.tar.gz#md5=279ca69c632069e1b71e11b14641ca28",
+                      "py-1.5.2/py")
         self._install("https://pypi.python.org/packages/1f/f8/8cd74c16952163ce0db0bd95fdd8810cbf093c08be00e6e665ebf0dc3138/pytest-3.2.5.tar.gz#md5=6dbe9bb093883f75394a689a1426ac6f",
                       "pytest-3.2.5/_pytest")
         self._install("https://pypi.python.org/packages/1f/f8/8cd74c16952163ce0db0bd95fdd8810cbf093c08be00e6e665ebf0dc3138/pytest-3.2.5.tar.gz#md5=6dbe9bb093883f75394a689a1426ac6f",
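
The new py install matters because pytest itself imports the py library; autoinstalling pytest alone would leave the pytest import broken. A small illustration of the import chain (the failure mode described in the comment is an assumption about pytest 3.2 on a machine without py, not captured output):

```python
# Importing the autoinstalled wrapper triggers AutoinstallImportHook._install_pytest(),
# which now fetches py-1.5.2 before pytest-3.2.5.
import webkitpy.thirdparty.autoinstalled.pytest
import pytest  # without the py install this would raise ImportError (pytest imports py internally)
```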
26 changes: 24 additions & 2 deletions Tools/Scripts/webkitpy/webdriver_tests/pytest_runner.py
@@ -27,6 +27,8 @@
 import sys
 import tempfile
 
+from webkitpy.common.system.filesystem import FileSystem
+from webkitpy.common.webkit_finder import WebKitFinder
 import webkitpy.thirdparty.autoinstalled.pytest
 import webkitpy.thirdparty.autoinstalled.pytest_timeout
 import pytest
@@ -126,6 +128,24 @@ def record(self, test, status, message=None, stack=None):
         self.results.append(new_result)
 
 
+class TestExpectationsMarker(object):
+
+    def __init__(self, expectations):
+        self._expectations = expectations
+        self._base_dir = WebKitFinder(FileSystem()).path_from_webkit_base('WebDriverTests')
+
+    def pytest_collection_modifyitems(self, session, config, items):
+        for item in items:
+            test = os.path.relpath(str(item.fspath), self._base_dir)
+            expected = self._expectations.get_expectation(test, item.name)[0]
+            if expected == 'FAIL':
+                item.add_marker(pytest.mark.xfail)
+            elif expected == 'TIMEOUT':
+                item.add_marker(pytest.mark.xfail(reason="Timeout"))
+            elif expected == 'SKIP':
+                item.add_marker(pytest.mark.skip)
+
+
 def collect(directory, args):
     collect_recorder = CollectRecorder()
     stdout = sys.stdout
@@ -141,9 +161,10 @@ def collect(directory, args):
     return collect_recorder.tests
 
 
-def run(path, args, timeout, env={}):
+def run(path, args, timeout, env, expectations):
     harness_recorder = HarnessResultRecorder()
     subtests_recorder = SubtestResultRecorder()
+    expectations_marker = TestExpectationsMarker(expectations)
     _environ = dict(os.environ)
     os.environ.clear()
     os.environ.update(env)
@@ -159,7 +180,8 @@ def run(path, args, timeout, env={}):
                '-p', 'pytest_timeout']
         cmd.extend(args)
         cmd.append(path)
-        result = pytest.main(cmd, plugins=[harness_recorder, subtests_recorder])
+        result = pytest.main(cmd, plugins=[harness_recorder, subtests_recorder, expectations_marker])
+
         if result == EXIT_INTERNALERROR:
             harness_recorder.outcome = ('ERROR', None)
     except Exception as e:
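
The plugin relies entirely on stock pytest markers, so the expectation statuses map onto standard pytest outcomes: FAIL and TIMEOUT become xfail (reported as an expected failure, or as an unexpected pass when the test suddenly succeeds), and SKIP keeps the test from running at all. A standalone sketch of those semantics, independent of the WebKit harness:

```python
import pytest

@pytest.mark.xfail  # what the plugin adds for an expected 'FAIL'
def test_known_failure():
    assert False  # reported as an expected failure, not a failure

@pytest.mark.xfail(reason="Timeout")  # what the plugin adds for 'TIMEOUT'
def test_known_timeout():
    assert False

@pytest.mark.skip  # what the plugin adds for 'SKIP'
def test_never_run():
    assert False  # never executed
```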
Tools/Scripts/webkitpy/webdriver_tests/webdriver_selenium_executor.py
@@ -58,5 +58,5 @@ def __init__(self, driver, display_driver):
     def collect(self, directory):
         return pytest_runner.collect(directory, self._args)
 
-    def run(self, test, timeout):
-        return pytest_runner.run(test, self._args, timeout, self._env)
+    def run(self, test, timeout, expectations):
+        return pytest_runner.run(test, self._args, timeout, self._env, expectations)
36 changes: 25 additions & 11 deletions Tools/Scripts/webkitpy/webdriver_tests/webdriver_test_runner.py
@@ -24,6 +24,8 @@
 import logging
 import os
 
+from webkitpy.common.webkit_finder import WebKitFinder
+from webkitpy.common.test_expectations import TestExpectations
 from webkitpy.webdriver_tests.webdriver_driver import create_driver
 from webkitpy.webdriver_tests.webdriver_test_runner_selenium import WebDriverTestRunnerSelenium
 from webkitpy.webdriver_tests.webdriver_test_runner_w3c import WebDriverTestRunnerW3C
@@ -49,7 +51,13 @@ def __init__(self, port):
         _log.info('Using driver at %s' % (driver.binary_path()))
         _log.info('Browser: %s' % (driver.browser_name()))
 
-        self._runners = [runner_cls(self._port, driver, self._display_driver) for runner_cls in self.RUNNER_CLASSES]
+        _log.info('Parsing expectations')
+        self._tests_dir = WebKitFinder(self._port.host.filesystem).path_from_webkit_base('WebDriverTests')
+        expectations_file = os.path.join(self._tests_dir, 'TestExpectations.json')
+        build_type = 'Debug' if self._port.get_option('debug') else 'Release'
+        self._expectations = TestExpectations(self._port.name(), expectations_file, build_type)
+
+        self._runners = [runner_cls(self._port, driver, self._display_driver, self._expectations) for runner_cls in self.RUNNER_CLASSES]
 
     def run(self, tests=[]):
         runner_tests = [runner.collect_tests(tests) for runner in self._runners]
@@ -59,13 +67,11 @@ def run(self, tests=[]):
             return 0
 
         _log.info('Collected %d test files' % collected_count)
-        results_count = 0
        for i in range(len(self._runners)):
             if runner_tests[i]:
-                results_count += self._runners[i].run(runner_tests[i])
-        return results_count
+                self._runners[i].run(runner_tests[i])
 
-    def print_results(self):
+    def process_results(self):
         results = {}
         expected_count = 0
         passed_count = 0
@@ -93,30 +99,38 @@ def print_results(self):
                     pass
 
         _log.info('')
+        retval = 0
 
         if not results:
             _log.info('All tests run as expected')
-            return
+            return retval
 
         _log.info('%d tests ran as expected, %d didn\'t\n' % (expected_count, failures_count + timeout_count + passed_count))
 
         def report(status, actual, expected=None):
+            retval = 0
             if status not in results:
-                return
+                return retval
 
             tests = results[status]
+            tests_count = len(tests)
             if expected is None:
-                _log.info('Unexpected %s (%d)' % (actual, len(tests)))
+                _log.info('Unexpected %s (%d)' % (actual, tests_count))
+                retval += tests_count
             else:
-                _log.info('Expected to %s, but %s (%d)' % (expected, actual, len(tests)))
+                _log.info('Expected to %s, but %s (%d)' % (expected, actual, tests_count))
             for test in tests:
                 _log.info(' %s' % test)
             _log.info('')
 
+            return retval
+
         report('XPASS', 'passed', 'fail')
         report('XPASS_TIMEOUT', 'passed', 'timeout')
-        report('FAIL', 'failures')
-        report('TIMEOUT', 'timeouts')
+        retval += report('FAIL', 'failures')
+        retval += report('TIMEOUT', 'timeouts')
+
+        return retval
 
     def dump_results_to_json_file(self, output_path):
         json_results = {}
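
Taken together, the diffs exercise a small TestExpectations surface: the three-argument constructor, skipped_tests(), and get_expectation(test, subtest). A sketch of that API in isolation — the port name, file path, test name, and the assumption that PASS is the default status are all illustrative:

```python
from webkitpy.common.test_expectations import TestExpectations

# Arguments mirror the constructor call above: port name, expectations file, build type.
expectations = TestExpectations('gtk', 'WebDriverTests/TestExpectations.json', 'Release')

# Tests marked SKIP; the runners use this to filter collection.
skipped = expectations.skipped_tests()

# The first element of the returned tuple is the status the pytest plugin turns into a marker.
expected = expectations.get_expectation('imported/w3c/webdriver/tests/some_test.py', 'test_name')[0]
assert expected in ('PASS', 'FAIL', 'TIMEOUT', 'SKIP')  # assumption: PASS is the default
```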
Tools/Scripts/webkitpy/webdriver_tests/webdriver_test_runner_selenium.py
@@ -34,46 +34,46 @@
 
 class WebDriverTestRunnerSelenium(object):
 
-    def __init__(self, port, driver, display_driver):
+    def __init__(self, port, driver, display_driver, expectations):
         self._port = port
         self._driver = driver
         self._display_driver = display_driver
+        self._expectations = expectations
         self._results = []
+        self._tests_dir = WebKitFinder(self._port.host.filesystem).path_from_webkit_base('WebDriverTests')
 
-    def _tests_dir(self):
-        return WebKitFinder(self._port.host.filesystem).path_from_webkit_base('WebDriverTests')
-
-    def collect_tests(self, tests=[]):
+    def collect_tests(self, tests):
         if self._driver.selenium_name() is None:
             return 0
 
+        skipped = [os.path.join(self._tests_dir, test) for test in self._expectations.skipped_tests()]
         relative_tests_dir = os.path.join('imported', 'selenium', 'py', 'test')
         executor = WebDriverSeleniumExecutor(self._driver, self._display_driver)
         # Collected tests are relative to test directory.
-        base_dir = os.path.join(self._tests_dir(), os.path.dirname(relative_tests_dir))
-        collected_tests = [os.path.join(base_dir, test) for test in executor.collect(os.path.join(self._tests_dir(), relative_tests_dir))]
+        base_dir = os.path.join(self._tests_dir, os.path.dirname(relative_tests_dir))
+        collected_tests = [os.path.join(base_dir, test) for test in executor.collect(os.path.join(self._tests_dir, relative_tests_dir))]
         selenium_tests = []
         if not tests:
             tests = [relative_tests_dir]
         for test in tests:
             if not test.startswith(relative_tests_dir):
                 continue
-            test_path = os.path.join(self._tests_dir(), test)
+            test_path = os.path.join(self._tests_dir, test)
             if os.path.isdir(test_path):
-                selenium_tests.extend([test for test in collected_tests if test.startswith(test_path)])
-            elif test_path in collected_tests:
+                selenium_tests.extend([test for test in collected_tests if test.startswith(test_path) and test not in skipped])
+            elif test_path in collected_tests and test_path not in skipped:
                 selenium_tests.append(test_path)
         return selenium_tests
 
     def run(self, tests=[]):
         if self._driver.selenium_name() is None:
-            return 0
+            return
 
         executor = WebDriverSeleniumExecutor(self._driver, self._display_driver)
         timeout = self._port.get_option('timeout')
         for test in tests:
-            test_name = os.path.relpath(test, self._tests_dir())
-            harness_result, test_results = executor.run(test, timeout)
+            test_name = os.path.relpath(test, self._tests_dir)
+            harness_result, test_results = executor.run(test, timeout, self._expectations)
             result = WebDriverTestResult(test_name, *harness_result)
             if harness_result[0] == 'OK':
                 for subtest, status, message, backtrace in test_results:
@@ -83,7 +83,5 @@ def run(self, tests=[]):
                     pass
             self._results.append(result)
 
-        return len(self._results)
-
     def results(self):
         return self._results
26 changes: 12 additions & 14 deletions Tools/Scripts/webkitpy/webdriver_tests/webdriver_test_runner_w3c.py
@@ -35,29 +35,29 @@
 
 class WebDriverTestRunnerW3C(object):
 
-    def __init__(self, port, driver, display_driver):
+    def __init__(self, port, driver, display_driver, expectations):
         self._port = port
         self._driver = driver
         self._display_driver = display_driver
+        self._expectations = expectations
         self._results = []
+        self._tests_dir = WebKitFinder(self._port.host.filesystem).path_from_webkit_base('WebDriverTests')
 
         self._server = WebDriverW3CWebServer(self._port)
 
-    def _tests_dir(self):
-        return WebKitFinder(self._port.host.filesystem).path_from_webkit_base('WebDriverTests')
-
-    def collect_tests(self, tests=[]):
+    def collect_tests(self, tests):
+        skipped = [os.path.join(self._tests_dir, test) for test in self._expectations.skipped_tests()]
         relative_tests_dir = os.path.join('imported', 'w3c', 'webdriver', 'tests')
         w3c_tests = []
         if not tests:
             tests = [relative_tests_dir]
         for test in tests:
             if not test.startswith(relative_tests_dir):
                 continue
-            test_path = os.path.join(self._tests_dir(), test)
+            test_path = os.path.join(self._tests_dir, test)
             if os.path.isdir(test_path):
-                w3c_tests.extend(self._scan_directory(test_path))
-            elif self._is_test(test_path):
+                w3c_tests.extend(self._scan_directory(test_path, skipped))
+            elif self._is_test(test_path) and test_path not in skipped:
                 w3c_tests.append(test_path)
         return w3c_tests
 
@@ -72,10 +72,10 @@ def _is_test(self, test):
             return False
         return True
 
-    def _scan_directory(self, directory):
+    def _scan_directory(self, directory, skipped):
         tests = []
         for path in self._port.host.filesystem.files_under(directory):
-            if self._is_test(path):
+            if self._is_test(path) and path not in skipped:
                 tests.append(path)
         return tests
 
@@ -91,8 +91,8 @@ def run(self, tests=[]):
         timeout = self._port.get_option('timeout')
         try:
             for test in tests:
-                test_name = os.path.relpath(test, self._tests_dir())
-                harness_result, test_results = executor.run(test, timeout)
+                test_name = os.path.relpath(test, self._tests_dir)
+                harness_result, test_results = executor.run(test, timeout, self._expectations)
                 result = WebDriverTestResult(test_name, *harness_result)
                 if harness_result[0] == 'OK':
                     for subtest, status, message, backtrace in test_results:
@@ -105,7 +105,5 @@ def run(self, tests=[]):
             executor.teardown()
             self._server.stop()
 
-        return len(self._results)
-
     def results(self):
         return self._results
Tools/Scripts/webkitpy/webdriver_tests/webdriver_w3c_executor.py
@@ -145,10 +145,10 @@ def setup(self):
     def teardown(self):
         self.protocol.teardown()
 
-    def run(self, test, timeout):
+    def run(self, test, timeout, expectations):
         env = {'WD_HOST': self.protocol.session_config['host'],
                'WD_PORT': str(self.protocol.session_config['port']),
                'WD_CAPABILITIES': json.dumps(self.protocol.session_config['capabilities']),
                'WD_SERVER_CONFIG': json.dumps(self.server_config)}
         args = ['--strict', '-p', 'no:mozlog']
-        return pytest_runner.run(test, args, timeout, env)
+        return pytest_runner.run(test, args, timeout, env, expectations)
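
The exported WD_* variables are how the imported W3C tests find the WebDriver server under test. A sketch of the consuming side — only the variable names come from the hunk above; the reading code itself is illustrative:

```python
import json
import os

host = os.environ['WD_HOST']
port = int(os.environ['WD_PORT'])
capabilities = json.loads(os.environ['WD_CAPABILITIES'])
server_config = json.loads(os.environ['WD_SERVER_CONFIG'])
```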
12 changes: 12 additions & 0 deletions WebDriverTests/ChangeLog
@@ -1,3 +1,15 @@
+2018-01-17  Carlos Garcia Campos  <cgarcia@igalia.com>
+
+        WebDriver: add support for test expectations
+        https://bugs.webkit.org/show_bug.cgi?id=180420
+
+        Reviewed by Carlos Alberto Lopez Perez.
+
+        Add initial test expectations. For now I'm only adding the W3C test expectations; the Selenium ones will be
+        added in a follow-up patch.
+
+        * TestExpectations.json: Added.
+
 2018-01-11  Carlos Garcia Campos  <cgarcia@igalia.com>
 
         Unreviewed. Update Selenium WebDriver imported tests.
