Wildcard search on not_analyzed field behaves inconsistently #9973
Sorry for the terrible formatting. The YAML snippets got butchered by markdown.

Interestingly, using an actual wildcard query here does the right thing (as well as using a

```
POST /9973/_search?pretty
{
  "query": {
    "wildcard": {
      "name": "T*"
    }
  }
}
```
@jdutton okay, @s1monw figured out what was going on here. This works as intended for me:

```
POST /9973/_search?pretty
{
  "query": {
    "query_string": {
      "query": "name:T*",
      "lowercase_expanded_terms": false
    }
  }
}
```
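The behavior the two queries above demonstrate can be illustrated with a minimal sketch (plain Python, not Elasticsearch code): with `lowercase_expanded_terms` enabled (the default), the parser lowercases the expanded wildcard term before it is matched against the verbatim, not_analyzed terms.

```python
import fnmatch

# Minimal simulation (assumption: this mirrors the parser's behavior, it is
# not Elasticsearch code). With lowercase_expanded_terms on, the pattern
# "T*" becomes "t*" before matching and so misses the verbatim term "T100".
def query_string_wildcard(pattern, terms, lowercase_expanded_terms=True):
    if lowercase_expanded_terms:
        pattern = pattern.lower()  # "T*" -> "t*"
    # fnmatchcase is case-sensitive, like matching against not_analyzed terms
    return [t for t in terms if fnmatch.fnmatchcase(t, pattern)]

terms = ["7000", "T100"]  # the two not_analyzed values from the issue

print(query_string_wildcard("T*", terms))                                  # [] - surprising miss
print(query_string_wildcard("T*", terms, lowercase_expanded_terms=False))  # ['T100']
print(query_string_wildcard("7*", terms))                                  # ['7000'] - digits unaffected
```

A dedicated `wildcard` query skips this term expansion entirely, which is why it matched all along.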
OK, wow, thank you all for the help and fast response. In my case I was really filtering, and this was surprising and undesirable behavior. But then again, in other cases (e.g. a search bar) lowercasing would be the desired behavior. I have to do some soul searching now on how to change my application :-) Thanks again for the help!
For the record, I think we should remove this lowercasing option completely in parsers, disable it, and let the analysis chain take care of it.

For multi-term queries it's a little tricky: which subset of the filters should be used? For example, lowercasing is reasonable, stemming is not. But Lucene already annotates each analyzer component deemed "reasonable" for wildcards with a marker interface (MultiTermAwareComponent). Things like LowerCaseFilter have it and things like stemmers don't. This is enough to build a "chain" automatically from the query analyzer that acts reasonably for multi-term queries.

I know we don't use the Lucene factories (ES has its own), but we have a table that maps between them; I know because it's in a test I wrote. So the information is there :)

All query parsers have hooks (e.g. factory methods for prefix/wildcard/range) that make it possible to use this. Solr does it by default, for example, and as soon as it did, people stopped complaining about confusing behavior, both for the unanalyzed and the analyzed case. It just works. Sorry for the long explanation.
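The marker-interface idea can be sketched roughly as follows (the names and structure here are illustrative assumptions, not Lucene's actual API): each analysis component carries a flag saying whether it is safe for multi-term queries, and the wildcard's analysis chain keeps only the flagged components.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TokenFilter:
    name: str
    apply: Callable[[str], str]
    multi_term_aware: bool  # analogous to Lucene's MultiTermAwareComponent marker

def naive_stem(term: str) -> str:
    # Toy "stemmer" for illustration only: strips a trailing 's'.
    return term[:-1] if term.endswith("s") else term

# Hypothetical query-time analyzer chain: lowercasing is safe for
# wildcards, stemming is not.
query_chain = [
    TokenFilter("lowercase", str.lower, multi_term_aware=True),
    TokenFilter("stemmer", naive_stem, multi_term_aware=False),
]

def analyze_multi_term(term: str, chain: List[TokenFilter]) -> str:
    # Apply only the filters deemed safe for wildcard/prefix/range terms.
    for f in chain:
        if f.multi_term_aware:
            term = f.apply(term)
    return term

print(analyze_multi_term("Trees", query_chain))  # 'trees' - lowercased, not stemmed
```

With this in place a wildcard term is normalized consistently with the field's analyzer, so the behavior matches what users see for ordinary analyzed queries, and not_analyzed fields (whose chain has no filters at all) get the term verbatim.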
@rmuir +1 to remove the option. Can you open an issue?
The analysis chain should be used instead of relying on this, as it is confusing when dealing with different per-field analysers. The `locale` option was only used for `lowercase_expanded_terms`; once that is removed, `locale` is no longer needed, so it was removed as well.

Fixes elastic#9978
Relates to elastic#9973
I am seeing inconsistent behavior with wildcard searches that makes no sense. I've created a Play to reproduce the issue I'm seeing here - https://www.found.no/play/gist/e452c1d68d6465540d85
For two simple documents:

```
name: "7000"
name: "T100"
```

With a simple not_analyzed mapping:

```
type:
  properties:
    name:
      type: string
      index: not_analyzed
```
The query for `name:7*` matches a single document (as it should), but a query for `name:T*` does not match any document. I'm seeing this bug in ES versions 1.3.2 and 1.4.4.
Trying various searches and documents, it appears that wildcarding starting with a numeric-looking string works, but starting with an alpha character (e.g. "T") fails to get any hits.
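The numeric-vs-alpha asymmetry described above falls out of the parser's lowercasing (as the comments later establish): lowercasing is a no-op on digits, so a numeric pattern survives intact while an uppercase pattern is rewritten and then misses the verbatim not_analyzed term. A one-line check:

```python
# Lowercasing leaves "7*" unchanged but rewrites "T*" to "t*",
# which no longer matches the verbatim term "T100".
for pattern in ["7*", "T*"]:
    print(pattern, "->", pattern.lower())
# 7* -> 7*   (still matches "7000")
# T* -> t*   (no longer matches "T100")
```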