`date_range` with `gte` lower than `lte` (<24h) but errors with min value greater than max value #108241
Comments
Pinging @elastic/es-search (Team:Search)
Did you check the local time of the ES nodes? My guess is that some of them have different timezones, contributing to this issue with <24h durations.
I did not, as I can reproduce it on a single node already. Even so, the local timezone should be ignored, since the timezone is required to be submitted in the documents' values. That's why I explicitly disabled
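The point about node timezones can be illustrated without Elasticsearch: a timestamp that carries an explicit UTC offset identifies a single instant, regardless of the machine's local timezone. A minimal Python sketch (the timestamp value is an illustration, not from the report):

```python
from datetime import datetime, timezone

# A timestamp with an explicit offset pins down one instant, independent
# of the server's local timezone, so node-local clocks should not affect
# parsing of such values.
ts = datetime.fromisoformat("2024-04-29T10:00:00+02:00")
print(ts.astimezone(timezone.utc).isoformat())  # 2024-04-29T08:00:00+00:00
```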
@boesing The timestamps in the exception are as expected. The
Note the seventh digit in
Maybe I don't understand the issue?
@benwtrent Could you check the data I am sending to Elasticsearch? These dates are not integers, though Elasticsearch parses them into integers, and I guess there is a bug. The difference between those dates is roughly 23 hours and a couple of minutes. Something within Elasticsearch does not convert those strings into the correct integers, and thus, of course, the values in the error reflect an issue, but one that is not caused on the client side. If this were a client issue, I do not understand why the same request works on Elastic Cloud, which is on 8.11.0 with Lucene 9.7.0.
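Elasticsearch stores date values internally as epoch milliseconds (long integers), which is the "parses these dates to integer" behavior described above. A sketch with hypothetical timestamps standing in for the ~23h-apart pair from the report (the originals were not preserved in this copy of the thread):

```python
from datetime import datetime

# Hypothetical stand-ins for the two timestamps from the report.
gte = datetime.fromisoformat("2024-04-29T10:00:00+00:00")
lte = datetime.fromisoformat("2024-04-30T09:30:00+00:00")
gte_ms = int(gte.timestamp() * 1000)
lte_ms = int(lte.timestamp() * 1000)
# Converted correctly, the lower bound is the smaller integer, so a
# "min value greater than max value" error points at the parsing step,
# not at the client payload.
print(lte_ms - gte_ms)  # 84600000 ms, i.e. 23.5 hours
```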
@boesing you are using
FYI, I did your test with
We parse all values and index them into
Ah, I wasn't aware of that. So it's the format which introduces the problem.
@benwtrent What I still do not understand is: if Elasticsearch uses
I've adapted our mapping, but I still don't get the underlying issue with my concrete example.
So I would expect issues at the end of December, where the week-based year could become 2025 on December 30th, 2024, but not at the end of April 🤔
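The December 30th, 2024 intuition can be checked directly: near a year boundary, the calendar year and the ISO week-based year (what a `YYYY` pattern letter means in date formats) disagree. A quick Python check:

```python
from datetime import date

# 2024-12-30 is a Monday and starts week 1 of ISO year 2025, so the
# calendar year (2024) and the week-based year (2025) differ.
d = date(2024, 12, 30)
iso = d.isocalendar()
print(d.year, iso[0])  # 2024 2025
```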
I don't think it is? I got the exact same error as you when testing in Cloud; however, it's obvious that the resulting min and max are incorrect for a range mapping.
I honestly don't know. But we are not using
Here is the parsing once we have the temporal accessor:
elasticsearch/server/src/main/java/org/elasticsearch/common/time/DateFormatters.java, line 2182 at commit e4bf51d
Here is the code once we have parsed out the TemporalAccessor for WeekOfYear:
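The Elasticsearch snippet referenced above was not captured in this copy of the thread. As a rough analogue (in Python, not the actual Elasticsearch code), a week-based year on its own does not determine a date; it only resolves together with a week number and a weekday:

```python
from datetime import datetime

# Rough analogue of week-based-year resolution: the ISO week-year (%G)
# must be combined with a week (%V) and a weekday (%u) to yield a date.
dt = datetime.strptime("2025-W01-1", "%G-W%V-%u")
print(dt.date())  # 2024-12-30: Monday of week 1 of ISO year 2025
```

This is why treating the week-based year like a calendar year in a date format can silently shift values across year boundaries.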
Elasticsearch Version
8.13.2
Installed Plugins
No response
Java Version
bundled
OS Version
Debian 11.9, Linux 5.10
Problem Description
Somehow, a `date_range` value of one of our indexed fields is not parsed properly. When I try to index a range whose bounds are fairly close to each other, I receive the following error:
I have tried a bunch of Elasticsearch versions, from 8.11.0 through 8.12.3 up to 8.13.2.
Somehow, the issue does not appear on 8.11.0 on Elastic Cloud (which might be because the Lucene version there is 9.7.0, the only difference I could spot between my local 8.11.0 and the one from elastic.co).

I have also tried multiple servers where we have Elasticsearch installed, fresh setups via Docker and Debian, etc., and I always ran into this specific problem. The date range we are trying to persist has a <24h difference, but it works perfectly if the difference is 24 hours and 1 second (though I only tested that by changing the time).
Steps to Reproduce
1. Use the Docker setup guide from elastic.co to install 8.13.2: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
2. Create a fresh index using the following mapping
3. Use the `_bulk` API to create a new document with the following request payload:

Logs (if relevant)
No response
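The original mapping and `_bulk` payload were not preserved in this copy of the issue, but the shape of the reproduction can be sketched. The `_bulk` body is newline-delimited JSON: one action line followed by one source line. The index name `repro`, the field name `period`, and the timestamps below are assumptions for illustration only:

```python
import json

# Hypothetical _bulk body for a document with a date_range field.
action = {"index": {"_index": "repro", "_id": "1"}}
doc = {"period": {"gte": "2024-04-29T10:00:00+00:00",
                  "lte": "2024-04-30T09:30:00+00:00"}}
# _bulk expects NDJSON with a trailing newline.
bulk_body = "\n".join(json.dumps(obj) for obj in (action, doc)) + "\n"
print(bulk_body)
```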