Documentation for admin API is no good for deleting metrics #5567
Unix timestamps are generally in seconds, as is everything else in the API as documented on the page. 1557756000000 is also a perfectly valid unix timestamp.
Yes, it is valid, but in my case it caused all my data to be wiped out: Prometheus probably expected the value in seconds, and since it was in milliseconds it was interpreted as a date quite far in the future. Together with the end parameter supplied, this deleted everything up to that date, meaning everything I had. This leaves the possibility of a very unpleasant situation, so I thought it would be a good idea to open an issue that could save other people from potential data loss.
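For reference, a minimal sketch of building such a delete request with seconds-format timestamps. The endpoint path and the `match[]`/`start`/`end` parameters are from the Prometheus admin API documentation; the `up` selector, the localhost address, and the specific times are illustrative assumptions:

```python
from datetime import datetime, timezone
from urllib.parse import urlencode

# Build a delete_series request for the Prometheus admin API
# (POST /api/v1/admin/tsdb/delete_series, only available when the
# server is started with --web.enable-admin-api).
# start/end are unix timestamps in SECONDS.
start = int(datetime(2019, 5, 13, 0, 0, tzinfo=timezone.utc).timestamp())
end = int(datetime(2019, 5, 13, 17, 0, tzinfo=timezone.utc).timestamp())

params = urlencode({"match[]": "up", "start": start, "end": end})
url = "http://localhost:9090/api/v1/admin/tsdb/delete_series?" + params
print(url)
# Passing end * 1000 (milliseconds by mistake) would place the end of
# the deletion window thousands of years in the future and match
# every newer sample.
```

Deriving the numbers from a `datetime` instead of typing raw epoch values makes a seconds/milliseconds mix-up much harder.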
1557756000000 (or 1557756000 in seconds format) is Monday, May 13, 2019 14:00:00 (GMT/UTC), not 17:00:00. Prometheus uses UTC times, so perhaps you had more data deleted due to not accounting for TZ differences? I always use milliseconds and have never had an issue.
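The conversion above can be checked with a few lines of Python (the values are the ones from this thread; the far-future arithmetic is an illustration, since the implied date overflows `datetime` entirely):

```python
from datetime import datetime, timezone

ts_seconds = 1557756000        # the intended instant, in seconds
ts_millis = 1557756000000      # the same instant, in milliseconds

dt = datetime.fromtimestamp(ts_seconds, tz=timezone.utc)
print(dt)  # 2019-05-13 14:00:00+00:00 (UTC, not 17:00 local time)

# If the millisecond value is fed to an API expecting seconds, the
# implied date is roughly 49,000 years past the epoch; datetime cannot
# even represent it (its maximum year is 9999).
years_ahead = ts_millis / (365.25 * 24 * 3600)
print(round(years_ahead))  # roughly 49362 years after 1970
```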
Looking at the docs, where we document the semantics of
I guess he means to say that all the data got deleted due to a large end value.
I do agree with what @brian-brazil said, but I think we can have
I don't see how that'd help here, the timestamp was valid.
On 01 Sep 04:11, Brian Brazil wrote:
> I think we can have add check in prometheus to validate unix timestamp as a sanitary check.
I don't see how that'd help here, the timestamp was valid.
We could refuse timestamps higher than 10000000000:

```
$ date -d @10000000000
Sat Nov 20 18:46:40 CET 2286
```

to prevent people who think it is in ms from doing so. And we can revisit that in the year 2250.
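The proposed cutoff could be sketched like this (the function name, constant, and error message are hypothetical illustrations, not Prometheus code, which is written in Go):

```python
# Sketch of the proposed sanity check: reject unix timestamps above
# 10000000000 (which is already in the year 2286), since values that
# large are almost certainly milliseconds passed where seconds were
# expected.
MAX_REASONABLE_UNIX_SECONDS = 10_000_000_000  # Sat Nov 20 2286 UTC

def validate_unix_seconds(ts: int) -> int:
    """Return ts unchanged, or raise if it looks like milliseconds."""
    if ts > MAX_REASONABLE_UNIX_SECONDS:
        raise ValueError(
            f"timestamp {ts} is past the year 2286; "
            "did you pass milliseconds instead of seconds?"
        )
    return ts

validate_unix_seconds(1557756000)       # fine: 2019-05-13 14:00 UTC
# validate_unix_seconds(1557756000000)  # would raise ValueError
```

The trade-off discussed above is explicit: such a check misclassifies nothing until 2286, at the cost of rejecting legitimate far-future timestamps.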
--
(o- Julien Pivotto
//\ Open-Source Consultant
V_/_ Inuits - https://www.inuits.eu
That's what I meant by a sanity check.
The documentation must specify which unix timestamp format is expected, to avoid total data loss (as happened in my case).
The documentation just says
In my case I specified the timestamp for 2019-05-13 17:00 as 1557756000000, but apparently it needs to be without the last 3 digits (so in seconds, not in milliseconds).
As a result, my data was totally wiped out by this query:
Suggestion: