Invalid query results with regexp filter conditions #192
Comments
Thank you for finding the issue. That's a bug.
Test query: bad behavior is caused by this code:

```cpp
sort(removed_ids.begin(), removed_ids.end());
removed_ids.erase(unique(removed_ids.begin(), removed_ids.end()), removed_ids.end());
new_ids.erase(set_difference(new_ids.begin(), new_ids.end(),
                             removed_ids.begin(), removed_ids.end(), new_ids.begin()),
              new_ids.end());
```

(Screenshots: situation before / situation after.)

Although the object ids are all identical, the erase statement fails to delete all relevant ones - it just deletes a single entry. Effectively, way 1530957491 still survives in the result, although it has been identified for removal by the negative key/value regex. The following pull request fixes this by first removing duplicate entries in `new_ids`:

```cpp
sort(new_ids.begin(), new_ids.end());
new_ids.erase(unique(new_ids.begin(), new_ids.end()), new_ids.end());
```

This fix is also deployed for testing on the dev instance: http://overpass-turbo.eu/s/b03
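Why only a single entry gets deleted: `std::set_difference` has multiset semantics, so an id occurring three times in `new_ids` but only once in the deduplicated `removed_ids` survives twice. A minimal standalone sketch with made-up ids (writing into a separate output vector to keep the example well-defined):

```cpp
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

int main() {
  // Made-up ids: three duplicates of the "removed" way plus one survivor.
  std::vector<long long> new_ids{1530957491, 1530957491, 1530957491, 42};
  std::vector<long long> removed_ids{1530957491};

  std::sort(new_ids.begin(), new_ids.end());
  std::sort(removed_ids.begin(), removed_ids.end());

  // Multiset semantics: only one of the three copies of 1530957491
  // is subtracted, because removed_ids contains it only once.
  std::vector<long long> result;
  std::set_difference(new_ids.begin(), new_ids.end(),
                      removed_ids.begin(), removed_ids.end(),
                      std::back_inserter(result));

  for (long long id : result)
    std::cout << id << '\n';  // prints 42, then 1530957491 twice
}
```

Deduplicating `new_ids` first, as the fix above does, collapses the three copies into one, so the subtraction then removes the id entirely.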
Issue was fixed in 3935810.
I'm reopening this issue, as the patch in 3935810 causes performance regressions. The following very simple query used to take 50ms and now takes 500ms. Test case:

```
[out:json];
node["name"~"[33][22][22][55]"]["phone"](3.2299951,-76.6815258,3.6299951000000004,-76.2815258);
out body;
```

With additional instrumentation in the method `filter_ids_by_ntags`, it's pretty evident that we're sorting and removing the same elements (plus a few new ones) over and over again for the same coarse index. I'm proposing to change this in a way that sorting / removing duplicates is only done once per coarse index.
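A schematic sketch of that proposal, assuming a bucket-per-coarse-index layout (the names, types, and functions below are made up for illustration, not actual Overpass-API code): append candidate ids cheaply as they arrive, then sort and deduplicate each bucket exactly once.

```cpp
#include <algorithm>
#include <cstdint>
#include <map>
#include <vector>

using Id = std::uint64_t;

// Hypothetical bucket store: coarse index -> candidate ids.
std::map<std::uint32_t, std::vector<Id>> buckets;

void add_candidate(std::uint32_t coarse_idx, Id id) {
  buckets[coarse_idx].push_back(id);  // cheap append, no sorting yet
}

void finalize_buckets() {
  // Sort and deduplicate once per coarse index, not once per lookup.
  for (auto& [idx, ids] : buckets) {
    std::sort(ids.begin(), ids.end());
    ids.erase(std::unique(ids.begin(), ids.end()), ids.end());
  }
}

int main() {
  add_candidate(17, 42);
  add_candidate(17, 42);  // duplicate, removed once at finalize time
  add_candidate(18, 7);
  finalize_buckets();     // bucket 17 -> {42}, bucket 18 -> {7}
}
```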
It is now an enhancement because it is merely a question of performance.
I'm reopening this issue, as there's a strange effect with the Karlsruhe node. I'm testing with keyregexp and a Cyrillic character 'л':

```
node(49.01342968289611,8.403076529502867,49.015037577534585,8.406440019607544)[!name][~"."~"л"];
out;
/*
Test:
nodes: 21487097  -> ok     ( [!name] removes node )
       240120582 -> not ok ( [!name] has no impact )
*/
```

http://overpass-turbo.eu/s/pw1

My expectation is that node 240120582 (Karlsruhe) is not shown in the result, because it has a `name` tag and [!name] should exclude it.

This one is even worse, returning lots of nodes with a name tag:

```
node({{bbox}})[!name][~"."~"a"];
node._[name];
out;
```

http://overpass-turbo.eu/s/pwz

Also tested on //dev.overpass-api.de/api_new_feat/ with the same effect.
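For reference, here is my reading of the expected semantics of `node[!name][~"."~"л"]`, as a self-contained C++ sketch (the `passes` helper and the tag data are made up for illustration): a node should pass only if it has no `name` tag and at least one tag value matches the regex.

```cpp
#include <iostream>
#include <map>
#include <regex>
#include <string>

// Hypothetical tag set of a node: key -> value.
using Tags = std::map<std::string, std::string>;

bool passes(const Tags& tags) {
  if (tags.count("name")) return false;  // [!name]: a name tag disqualifies
  const std::regex value_re("л");        // [~"."~"л"]: any key, value matches "л"
  for (const auto& [key, value] : tags)
    if (std::regex_search(value, value_re)) return true;
  return false;
}

int main() {
  // A Karlsruhe-like node: it has a name tag, so it must be filtered out.
  Tags karlsruhe{{"name", "Karlsruhe"}, {"name:ru", "Карлсруэ"}};
  std::cout << std::boolalpha << passes(karlsruhe) << '\n';  // false
}
```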
I would like to obtain all nodes in a region that have no `name` tag.
This query returns invalid results depending on zoom level: http://overpass-turbo.eu/s/Dww
Always the same reason: `filter_ids_by_ntags` is getting called with duplicate ids in `new_ids`.
This issue was fixed in 2ca3f38.
According to the Wiki: […]

Let's take a look at the following examples:

1. `way(271221475); out geom;`
2. `way(271221475)["addr:city"]; out geom;`
3. `way(271221475)["addr:city"]["addr:hamlet"!~"."]; out geom;`
4. `way(271221475)["addr:city"]["addr:hamlet"!~"."][~"^addr:hamlet:.*$"~"."]; out geom;`

In example 3, the condition `["addr:hamlet"!~"."]` is not met. To my surprise, adding another key-regexp filter in example 4 overrides this. Is this a bug, undocumented behavior, or did I miss something here?

Reference data: […]