
Added generic test case handling #117

Merged: 17 commits into w8r:master, Feb 21, 2020
Conversation

bluenote10 (Collaborator)

This implements #116, and I'm actually quite happy with how it turned out. A few notes:

  • I started by migrating just one test case as an example. If everyone is happy, I can migrate more later.
  • Almost all tests that are currently in edge_cases.test.js can be handled by this system, so adding test cases doesn't require writing any code!
  • Note that the logic isn't just "the third feature is the expected result". Instead, all features past the first two can be used to specify expected results. This allows test cases without expected results (just checking for no crash / no infinite loop), or tests that verify multiple operations. Since diff is asymmetric, I've added a diff_ba mode which allows testing the reverse diff as well.
  • I'm using a separate folder for these generic test cases, because it probably won't be limited to just the GitHub issueXXX pattern, but will also contain basic cases like the hourglasses I've migrated.
  • For file discovery I'm using the (standard nodejs?) glob.
  • I've implemented the REGEN=true mode as suggested by @rowanwins. This works really nicely: just run REGEN=true npm run test and the test cases will be updated. As a bonus, this also provides "auto formatting" of the test cases. The default JSON.stringify is a bit too spacious when line wrapping is enabled (it spreads each [x, y] point array over several lines). I'm using json-stringify-pretty-compact, which seems to fit our use case very well and produces well formatted JSON -- see hourglasses.geojson, which is formatted by that logic. (A rough sketch of what such a runner could look like follows below.)
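To make the mechanics concrete, here is a minimal sketch of what such a runner could look like. This is illustrative only, not the code from this PR: the folder name, the require paths, the exact set of operation names and the stringify options are assumptions.

const fs = require('fs');
const path = require('path');
const assert = require('assert');
const glob = require('glob');
const stringify = require('json-stringify-pretty-compact');
const martinez = require('martinez-polygon-clipping'); // or the local src entry point

const REGEN = process.env.REGEN === 'true';

// Every feature past the first two selects an operation via properties.operation.
const operations = {
  union:        (a, b) => martinez.union(a, b),
  intersection: (a, b) => martinez.intersection(a, b),
  xor:          (a, b) => martinez.xor(a, b),
  diff:         (a, b) => martinez.diff(a, b),
  diff_ba:      (a, b) => martinez.diff(b, a), // reverse diff, since diff is asymmetric
};

glob.sync(path.join(__dirname, 'genericTestCases', '*.geojson')).forEach((file) => {
  const data = JSON.parse(fs.readFileSync(file, 'utf8'));
  const [subject, clipping, ...expectations] = data.features;
  const a = subject.geometry.coordinates;
  const b = clipping.geometry.coordinates;

  expectations.forEach((feature) => {
    const result = operations[feature.properties.operation](a, b);
    if (REGEN) {
      // Refresh the stored expectation from the freshly computed result.
      feature.geometry.coordinates = result;
    } else {
      assert.deepStrictEqual(result, feature.geometry.coordinates);
    }
  });

  if (REGEN) {
    // Rewrite the file in a compact but readable format.
    fs.writeFileSync(file, stringify(data, { maxLength: 80 }));
  }
});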

bluenote10 (Collaborator, Author)

To give you a better impression of the idea, I've added a few more test cases (all the fatal*.geojson files). I haven't removed the original files / tests -- we could do that at the end...

The nice thing about storing all the expected results in the GeoJSONs is that it is quite easy to come up with a visualization of what we are testing against. For instance, I'm using the attached small Python script to verify the contents of the expected test results: test_cases.pdf

There is probably a similarly easy way to visualize the same with plain JS as well; this is just meant as a starting point...

This makes it quite obvious that e.g. fatal3 and fatal4 currently have buggy results, but at least they test that nothing crashes.

Side notes:

  • fatal3 is actually identical to one of the test cases in edge_cases.test.js (the "infinite loop" crash).
  • fatal4 looks like it will be fixed by "Fix compare segments endpoint touching" #115, because it has the same pattern.
Python plot script
#!/usr/bin/env python
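# Usage: pass one or more test case GeoJSON files as command-line arguments.
# Writes test_cases.pdf with one three-panel page (inputs A/B, result, overlay)
# per expected-result feature found in the files.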

from __future__ import print_function

import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

import json
import os
import sys


def extract_multi_polygon(feature):
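    """Return the feature's coordinates normalized to MultiPolygon nesting."""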
    kind = feature["geometry"]["type"]
    if kind == "Polygon":
        return [feature["geometry"]["coordinates"]]
    elif kind == "MultiPolygon":
        return feature["geometry"]["coordinates"]
    else:
        raise ValueError("Feature has wrong type: {}".format(kind))


def plot(ax, multi_polygon, label):
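    """Plot every ring of every polygon, labelled with polygon and ring indices."""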
    for j, polygon in enumerate(multi_polygon):
        for k, ring in enumerate(polygon):
            xs = [p[0] for p in ring]
            ys = [p[1] for p in ring]
            ax.plot(xs, ys, "o-", label="{} (poly = {}, ring = {})".format(label, j + 1, k + 1), ms=2)


def main(interactive=False):
    if len(sys.argv) < 2:
        print("ERROR: No geojson files specified.")
        sys.exit(1)

    else:
        files = sys.argv[1:]

        with PdfPages("test_cases.pdf") as pp:

            for f in sorted(files):
                with open(f) as fh:
                    data = json.load(fh)
                assert data["type"] == "FeatureCollection"

                features = data["features"]
                assert len(features) >= 2

                p1 = extract_multi_polygon(features[0])
                p2 = extract_multi_polygon(features[1])

                for feature in features[2:]:
                    op = feature["properties"]["operation"]
                    p_res = extract_multi_polygon(feature)

                    fig, axes = plt.subplots(1, 3, figsize=(18, 10), sharex=True, sharey=True)

                    plot(axes[0], p1, "A")
                    plot(axes[0], p2, "B")

                    plot(axes[1], p_res, "Result")

                    plot(axes[2], p1, "A")
                    plot(axes[2], p2, "B")
                    plot(axes[2], p_res, "Result")

                    axes[0].legend(loc="best")
                    axes[1].legend(loc="best")
                    axes[2].legend(loc="best")

                    fig.suptitle("{} / {}".format(os.path.basename(f), op))

                    plt.tight_layout()
                    plt.subplots_adjust(top=0.93)

                    if interactive:
                        plt.show()

                    pp.savefig(fig)
                    plt.close(fig)


if __name__ == "__main__":
    main()

w8r (Owner) commented Jan 20, 2020

It's a very cool addition, because basically all the edge cases can go there instead of requiring dummy lines of code for each one. I thought about doing something like that for the demo app as well.

bluenote10 (Collaborator, Author)

👍 Okay, I spent the evening converting all manually written edge cases into test case GeoJSONs. This gives the best overview:

all_test_cases.pdf

A few notes:

  • I'm using the properties.comment field of the GeoJSON features to document additional notes on the test cases where needed (plotted as orange subtitles in the pdf overview). A sketch of the file layout follows at the end of this comment.
  • Currently I have not modified any test cases; the idea was to move everything from edge_cases.test.js 1-to-1 into the corresponding GeoJSONs. There are some dubious test cases (as observed in other threads already), but I'd say we can deal with fixing them in separate PRs.
  • It would be nice if we could move ahead with this PR soon, so that cleaning up broken tests becomes easier in the other PRs.
  • As a bonus I have added two new simple test cases that came to mind (tie, filling_rectangle).
  • I have removed all duplicated test cases, i.e., the files have moved from the "fixture" folder to the "genericTestCases" folder. The only thing I didn't touch is the huge switch in the demo that locates the fixtures. I hope we can clean up the demo in a separate PR (maybe instead of a huge switch we can auto-populate the data?).
  • Here is a quick proof that the generic test cases are active:

[screenshot: test run output]
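For reference, a generic test case file roughly follows this layout (a hand-written illustration, not one of the actual files; coordinates and property values are made up): the first two features are the input polygons, and every further feature carries a properties.operation (and optionally a properties.comment) together with its expected result geometry.

{
  "type": "FeatureCollection",
  "features": [
    { "type": "Feature", "properties": {},
      "geometry": { "type": "Polygon",
        "coordinates": [[[0, 0], [4, 0], [4, 4], [0, 4], [0, 0]]] } },
    { "type": "Feature", "properties": {},
      "geometry": { "type": "Polygon",
        "coordinates": [[[2, 2], [6, 2], [6, 6], [2, 6], [2, 2]]] } },
    { "type": "Feature",
      "properties": { "operation": "intersection", "comment": "overlapping squares" },
      "geometry": { "type": "MultiPolygon",
        "coordinates": [[[[2, 2], [4, 2], [4, 4], [2, 4], [2, 2]]]] } }
  ]
}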

Review comment by bluenote10 (Collaborator, Author) on a newly added file (diff hunk @@ -0,0 +1,34 @@):

This file is just intended to make it easier to add new test cases.

w8r merged commit 89338f6 into w8r:master on Feb 21, 2020