Merge r226944 - [GTK][WPE] Add support for unit test expectations
https://bugs.webkit.org/show_bug.cgi?id=181589

Reviewed by Michael Catanzaro.

We currently have a way to skip tests by annotating them in the API test runner script. The main problem with this
approach is that we skip tests when they fail on the bots and never notice if they stop failing, so the tests stay
skipped forever. This is indeed the case for several WebKit2 C API tests. Annotating skipped tests in the script
itself is not a good idea either.

This patch adds a generic TestExpectations class for simple test formats organized as tests with subtests, such as
our unit tests but also the WebDriver tests. It parses a JSON file with the test and subtest expectations and
provides convenient methods to query them.
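For illustration, here is a minimal sketch of the expectations format, reconstructed from the parsing code added
below; the test and subtest names (and the 'gtk'/'wpe' port strings) are hypothetical, and statuses other than
PASS and SKIP (such as FAIL) are assumptions:

{
    "TestWebKitWebView": {
        "expected": {"gtk@Debug": {"status": ["SKIP"]}}
    },
    "TestWebsiteData": {
        "expected": {"all": {"status": ["PASS"], "slow": true}},
        "subtests": {
            "/websitedata/databases": {
                "expected": {"wpe": {"status": ["FAIL"]}}
            }
        }
    }
}

Expectations can be keyed by port name, port name plus build type ("gtk@Debug"), "all", or "all" plus build type
("all@Release"); the most specific key wins.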

* Scripts/run-gtk-tests:
(GtkTestRunner): Remove all Skipped and Slow tests marked here.
* Scripts/run-wpe-tests:
(WPETestRunner): Ditto.
* Scripts/webkitpy/common/test_expectations.py: Added.
(TestExpectations):
(TestExpectations.__init__):
(TestExpectations._port_name_for_expected):
(TestExpectations._expected_value):
(TestExpectations.skipped_tests):
(TestExpectations.skipped_subtests):
(TestExpectations._expectation_value):
(TestExpectations.is_slow):
(TestExpectations.get_expectation):
* Scripts/webkitpy/common/test_expectations_unittest.py: Added.
(MockTestExpectations):
(MockTestExpectations.__init__):
(MockTestExpectations.is_skip):
(ExpectationsTest):
(assert_exp):
(assert_not_exp):
(assert_bad_exp):
(assert_skip):
(test_basic):
(test_skip):
(test_flaky):
(test_build_type):
* TestWebKitAPI/glib/TestExpectations.json: Added.
* glib/api_test_runner.py:
(TestRunner): Remove SkippedTest implementation.
(TestRunner.__init__): Create a TestExpectations.
(TestRunner._test_cases_to_skip): Use TestExpectations to check skipped tests.
(TestRunner._should_run_test_program): Ditto.
(TestRunner._run_test_glib): Use TestExpectations to check if the test suite is slow.
(TestRunner._run_test_glib.parse_line.set_test_result): Also register passing tests.
(TestRunner._run_google_test): Use TestExpectations to check if a test case is slow and register passing tests.
(TestRunner.run_tests): Check if the actual result is the expected one and also register unexpected passes (see the sketch after this list).
(TestRunner.run_tests.report): Helper to write report to stdout.
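The api_test_runner.py changes themselves are not part of this view, so only as a rough sketch (not the actual
implementation): the expected/unexpected check in run_tests boils down to something like the helper below, with
the expectations object coming from TestRunner.__init__ and the test name hypothetical:

def is_expected_result(expectations, test, actual_status, subtest=None):
    # A result is expected when the actual status appears in the expected
    # status list for this port and build type; the default is ['PASS'].
    return actual_status in expectations.get_expectation(test, subtest)

# A passing test that was expected to fail is then reported as an
# unexpected pass, e.g. is_expected_result(expectations, 'TestDownloads',
# 'PASS') is False when the expectation is ['FAIL'].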
carlosgcampos committed Jan 24, 2018
1 parent bd6a853 commit cd77bd8
Showing 3 changed files with 423 additions and 0 deletions.
55 changes: 55 additions & 0 deletions Tools/ChangeLog
@@ -1,3 +1,58 @@
2018-01-15 Carlos Garcia Campos <cgarcia@igalia.com>

[GTK][WPE] Add support for unit test expectations
https://bugs.webkit.org/show_bug.cgi?id=181589

Reviewed by Michael Catanzaro.

We currently have a way to skip tests by annotating them in the API test runner script. The main problem with this
approach is that we skip tests when they fail on the bots and never notice if they stop failing, so the tests stay
skipped forever. This is indeed the case for several WebKit2 C API tests. Annotating skipped tests in the script
itself is not a good idea either.

This patch adds a generic TestExpectations class for simple test formats organized as tests with subtests, such as
our unit tests but also the WebDriver tests. It parses a JSON file with the test and subtest expectations and
provides convenient methods to query them.

* Scripts/run-gtk-tests:
(GtkTestRunner): Remove all Skipped and Slow tests marked here.
* Scripts/run-wpe-tests:
(WPETestRunner): Ditto.
* Scripts/webkitpy/common/test_expectations.py: Added.
(TestExpectations):
(TestExpectations.__init__):
(TestExpectations._port_name_for_expected):
(TestExpectations._expected_value):
(TestExpectations.skipped_tests):
(TestExpectations.skipped_subtests):
(TestExpectations._expectation_value):
(TestExpectations.is_slow):
(TestExpectations.get_expectation):
* Scripts/webkitpy/common/test_expectations_unittest.py: Added.
(MockTestExpectations):
(MockTestExpectations.__init__):
(MockTestExpectations.is_skip):
(ExpectationsTest):
(assert_exp):
(assert_not_exp):
(assert_bad_exp):
(assert_skip):
(test_basic):
(test_skip):
(test_flaky):
(test_build_type):
* TestWebKitAPI/glib/TestExpectations.json: Added.
* glib/api_test_runner.py:
(TestRunner): Remove SkippedTest implementation.
(TestRunner.__init__): Create a TestExpectations.
(TestRunner._test_cases_to_skip): Use TestExpectations to check skipped tests.
(TestRunner._should_run_test_program): Ditto.
(TestRunner._run_test_glib): Use TestExpectations to check if the test suite is slow.
(TestRunner._run_test_glib.parse_line.set_test_result): Also register passing tests.
(TestRunner._run_google_test): Use TestExpectations to check if a test case is slow and register passing tests.
(TestRunner.run_tests): Check if the actual result is the expected one and also register unexpected passes.
(TestRunner.run_tests.report): Helper to write report to stdout.

2018-01-11 Carlos Garcia Campos <cgarcia@igalia.com>

Unreviewed. Update Selenium WebDriver imported tests.
114 changes: 114 additions & 0 deletions Tools/Scripts/webkitpy/common/test_expectations.py
@@ -0,0 +1,114 @@
# Copyright (C) 2018 Igalia S.L.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
# ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

import json
import os


class TestExpectations(object):
    """Parses a JSON expectations file and answers queries about test and subtest expectations."""

    def __init__(self, port_name, expectations_file, build_type='Release'):
        self._port_name = port_name
        self._build_type = build_type
        # A missing expectations file is not an error: it just means every
        # test is expected to pass.
        if os.path.isfile(expectations_file):
            with open(expectations_file, 'r') as fd:
                self._expectations = json.load(fd)
        else:
            self._expectations = {}

    def _port_name_for_expected(self, expected):
        # The most specific key wins: port name, then port name qualified
        # with the build type, then 'all', then 'all' with the build type.
        if self._port_name in expected:
            return self._port_name

name_with_build = self._port_name + '@' + self._build_type
if name_with_build in expected:
return name_with_build

if 'all' in expected:
return 'all'

name_with_build = 'all@' + self._build_type
if name_with_build in expected:
return name_with_build

return None

    def _expected_value(self, expected, value, default):
        # Return the requested expectation value (e.g. 'status' or 'slow')
        # for this port, falling back to the given default.
        port_name = self._port_name_for_expected(expected)
if port_name is None:
return default

port_expected = expected[port_name]
if value in port_expected:
return port_expected[value]

return default

    def skipped_tests(self):
        # A test is skipped when its expected status list contains 'SKIP'.
        skipped = []
for test in self._expectations:
if 'expected' not in self._expectations[test]:
continue

expected = self._expectations[test]['expected']
if 'SKIP' in self._expected_value(expected, 'status', []):
skipped.append(test)
return skipped

def skipped_subtests(self, test):
skipped = []
if test not in self._expectations:
return skipped

test_expectation = self._expectations[test]
if 'subtests' not in test_expectation:
return skipped

subtests = test_expectation['subtests']
for subtest in subtests:
if 'SKIP' in self._expected_value(subtests[subtest]['expected'], 'status', []):
skipped.append(subtest)
return skipped

    def _expectation_value(self, test, subtest, value, default):
        # A subtest expectation, when present, overrides its test's one.
        retval = default
if test not in self._expectations:
return retval

test_expectation = self._expectations[test]
if 'expected' in test_expectation:
retval = self._expected_value(test_expectation['expected'], value, retval)

if subtest is None or 'subtests' not in test_expectation:
return retval

subtests = test_expectation['subtests']
if subtest not in subtests:
return retval

return self._expected_value(subtests[subtest]['expected'], value, retval)

    def is_slow(self, test, subtest=None):
        return self._expectation_value(test, subtest, 'slow', False)

    def get_expectation(self, test, subtest=None):
        # Tests without an expectation are expected to pass.
        return self._expectation_value(test, subtest, 'status', ['PASS'])
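A minimal usage sketch, assuming webkitpy is on the Python path; the port name, build type, and test names are
hypothetical:

import json
import tempfile

from webkitpy.common.test_expectations import TestExpectations

# Write a tiny expectations file with two hypothetical tests.
expectations_json = {
    'TestWebKitWebView': {
        'expected': {'all': {'status': ['SKIP']}},
    },
    'TestDownloads': {
        'expected': {'all@Debug': {'status': ['PASS'], 'slow': True}},
    },
}
with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as fd:
    json.dump(expectations_json, fd)

expectations = TestExpectations('gtk', fd.name, build_type='Debug')
print(expectations.skipped_tests())                   # ['TestWebKitWebView']
print(expectations.is_slow('TestDownloads'))          # True
print(expectations.get_expectation('TestDownloads'))  # ['PASS']
print(expectations.get_expectation('TestUnknown'))    # default: ['PASS']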
