
flaky test: JsonReportGeneratorTest::test_generate_report #146

Closed
dvzrv opened this issue Jun 4, 2020 · 6 comments

@dvzrv
Contributor

dvzrv commented Jun 4, 2020

Hi! When building the 3.0.0 package for Arch Linux, I ran into this test failing intermittently:

=================================== FAILURES ===================================
_________________ JsonReportGeneratorTest.test_generate_report _________________

self = <diff_cover.tests.test_report_generator.JsonReportGeneratorTest testMethod=test_generate_report>

    def test_generate_report(self):

        # Generate a default report
        self.use_default_values()

        # Verify that we got the expected string
        expected = json.dumps({
            'report_name': ['reports/coverage.xml'],
            'diff_name': 'master',
            'src_stats': {
                'file1.py': {
                    'percent_covered': 66.66666666666667,
                    'violation_lines': [10, 11],
                    'violations': [[10, None], [11, None]]
                },
                'subdir/file2.py': {
                    'percent_covered': 66.66666666666667,
                    'violation_lines': [10, 11],
                    'violations': [[10, None], [11, None]]
                }
            },
            'total_num_lines': 12,
            'total_num_violations': 4,
            'total_percent_covered': 66
        })

>       self.assert_report(expected)

diff_cover/tests/test_report_generator.py:300:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
diff_cover/tests/test_report_generator.py:163: in assert_report
    assert_long_str_equal(expected, output_str, strip=True)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

expected = '{"report_name": ["reports/coverage.xml"], "diff_name": "master", "src_stats": {"file1.py": {"percent_covered": 66.666...iolations": [[10, null], [11, null]]}}, "total_num_lines": 12, "total_num_violations": 4, "total_percent_covered": 66}'
actual = '{"report_name": ["reports/coverage.xml"], "diff_name": "master", "src_stats": {"subdir/file2.py": {"percent_covered":...iolations": [[10, null], [11, null]]}}, "total_num_lines": 12, "total_num_violations": 4, "total_percent_covered": 66}'
strip = True

    def assert_long_str_equal(expected, actual, strip=False):
        """
        Assert that two strings are equal and
        print the diff if they are not.

        If `strip` is True, strip both strings before comparing.
        """
        # If we've been given a byte string, we need to convert
        # it back to unicode.  Otherwise, Python3 won't
        # let us use string methods!
        if isinstance(expected, six.binary_type):
            expected = expected.decode('utf-8')
        if isinstance(actual, six.binary_type):
            actual = actual.decode('utf-8')

        if strip:
            expected = expected.strip()
            actual = actual.strip()

        if expected != actual:

            # Print a human-readable diff
            diff = difflib.Differ().compare(
                expected.split('\n'), actual.split('\n')
            )

            # Fail the test
>           assert False, '\n\n' + '\n'.join(diff)
E           AssertionError:
E
E           - {"report_name": ["reports/coverage.xml"], "diff_name": "master", "src_stats": {"file1.py": {"percent_covered": 66.66666666666667, "violation_lines": [10, 11], "violations": [[10, null], [11, null]]}, "subdir/file2.py": {"percent_covered": 66.66666666666667, "violation_lines": [10, 11], "violations": [[10
, null], [11, null]]}}, "total_num_lines": 12, "total_num_violations": 4, "total_percent_covered": 66}
E           + {"report_name": ["reports/coverage.xml"], "diff_name": "master", "src_stats": {"subdir/file2.py": {"percent_covered": 66.66666666666667, "violation_lines": [10, 11], "violations": [[10, null], [11, null]]}, "file1.py": {"percent_covered": 66.66666666666667, "violation_lines": [10, 11], "violations": [[10
, null], [11, null]]}}, "total_num_lines": 12, "total_num_violations": 4, "total_percent_covered": 66}

diff_cover/tests/helpers.py:43: AssertionError
=========================== short test summary info ============================
FAILED diff_cover/tests/test_report_generator.py::JsonReportGeneratorTest::test_generate_report
======================== 1 failed, 236 passed in 11.25s ========================
@Bachmann1234
Owner

I think I see what's going on. src_stats is a dict, and if I had to guess, the underlying test is doing a string comparison. I'll take a look when I get a second and put in a fix for this.

A little surprised CI did not catch this, given that it runs the suite something like six times across all the versions.

If I had to guess, it's 2.7 that is failing (Python 3 has ordered dicts).
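The failure mode is easy to reproduce: serializing a dict to JSON and comparing the resulting strings is fragile whenever the dict's key order can vary, while comparing parsed objects (or serializing with `sort_keys=True`) is stable. A minimal illustration (this is not the actual fix in #149, just the general technique):

```python
import json

# Two dicts with identical contents but different insertion order,
# mimicking src_stats being built in a nondeterministic order.
stats_a = {"file1.py": {"percent_covered": 66.7},
           "subdir/file2.py": {"percent_covered": 66.7}}
stats_b = {"subdir/file2.py": {"percent_covered": 66.7},
           "file1.py": {"percent_covered": 66.7}}

# Fragile: json.dumps preserves insertion order, so the strings differ.
assert json.dumps(stats_a) != json.dumps(stats_b)

# Stable option 1: canonicalize key order before comparing strings.
assert json.dumps(stats_a, sort_keys=True) == json.dumps(stats_b, sort_keys=True)

# Stable option 2: compare parsed objects; dict equality ignores order.
assert json.loads(json.dumps(stats_a)) == json.loads(json.dumps(stats_b))
```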

@Bachmann1234
Owner

My assumption was wrong (it fails under Python 3 periodically too).

@Bachmann1234
Owner

#149 should fix it. It will be merged shortly.

@Bachmann1234
Owner

Looks like this came up as part of package testing; I'll do a release shortly as well.

@Bachmann1234
Owner

Check out https://pypi.org/project/diff-cover/3.0.1/ and let me know if that does not work for you.

@dvzrv
Contributor Author

dvzrv commented Jun 5, 2020

@Bachmann1234 thanks, this looks alright now! :)
