So @joka921 and I just figured out that his change to externalize the vocabulary during index building does not load the final vocabulary, which is required for building the text index. This means the change completely breaks text indices. Still, it passes all tests without even a hint of a problem. Similarly, the threading change subtly broke some benchmark queries a while back (though we only have those for a full Freebase index) and likewise did not trigger any test failures.
Therefore this issue tracks adding "end-to-end" tests that should fail when existing functionality breaks (at least in not-too-subtle ways). For this we probably need a combination of true end-to-end tests, which build an index and spawn an actual server that runs queries against it, and unit tests that cover these processes from an API point of view.
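To make the idea concrete, here is a minimal sketch of what such an end-to-end test driver could look like. The binary names, flags, HTTP endpoint, and JSON result key below are all assumptions for illustration; only the build-index / start-server / query / compare flow is taken from the description above.

```python
import json
import subprocess
import time
import urllib.parse
import urllib.request


def results_match(expected, actual):
    """Compare two result tables ignoring row order."""
    return sorted(map(tuple, expected)) == sorted(map(tuple, actual))


def run_end_to_end(index_builder, server, input_file, index_name, port, queries):
    """Build an index, start a server on it, and check each query's result.

    `index_builder`, `server`, and all flags are hypothetical placeholders;
    adapt them to the actual binaries of the project.
    """
    # 1. Build the index from the input collection.
    subprocess.run([index_builder, "-f", input_file, "-i", index_name],
                   check=True)
    # 2. Spawn a real server process on that index.
    proc = subprocess.Popen([server, "-i", index_name, "-p", str(port)])
    try:
        time.sleep(2)  # crude wait for server startup
        # 3. Run each query over HTTP and compare against the expected rows.
        for query, expected in queries:
            url = (f"http://localhost:{port}/?query="
                   + urllib.parse.quote(query))
            with urllib.request.urlopen(url) as resp:
                actual = json.load(resp)["res"]  # assumed result key
            if not results_match(expected, actual):
                raise AssertionError(f"Result mismatch for query: {query}")
    finally:
        proc.terminate()
        proc.wait()
```

A test like this would have caught the vocabulary regression, since building the text index happens as part of step 1 and any broken query result fails step 3.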
Ideally we also want to run the end-to-end tests on a realistic index, so not all of them can run on Travis. Still, even there we should at least test with the scientists collection.
I tagged this as "help wanted" because I'd be especially interested in a good variety of queries for the scientists collection (we do have more queries for the real data sets).
As @Buchhold said in a private mail, the scientists collection may not be ideal, as few complex SPARQL queries will work on it. So in addition to writing good queries for the scientists collection, we should think about creating a better test collection. We could then incorporate both into the end-to-end tests.