[0.9.3] DB should not crash when using invalid expression "GROUP BY time" #3902
@jeremyVignelles thank you for reporting this. I'm getting the same error.
@jeremyVignelles in regards to
@mjdesa : I meant something like this:
The idea behind this is to provide a timeline for events that happen infrequently (once a day, for example), where the user can click to view the data (128 fields * 768 rows with the same timestamp).
@jeremyVignelles I don't understand what would be different about the output from
Can you maybe give some example output showing what you are describing?
In the first case, you would get only the distinct timestamps, whereas the second would send all the data with the request.
@jeremyVignelles if your data is sparse it's probably better to return all the data points instead of having most timestamps filled with null. |
@zimbatm This is what I want: select all the timestamps for all the data points, but I would like to remove duplicate timestamps, and I would expect the expression
Indeed, I do not want extra null values which would pollute the data when using
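To make the two options being discussed concrete, here is a sketch (the measurement name `events` and the time range are hypothetical, and the first query is illustrative only; selecting only distinct timestamps like this was not something InfluxQL directly supported):

```sql
-- What the reporter is after (illustrative only): one row per
-- unique timestamp, without the field payload
SELECT DISTINCT(time) FROM events

-- The alternative suggested above: return every point and
-- deduplicate the timestamps on the client side
SELECT * FROM events WHERE time > now() - 1d
```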
@jeremyVignelles It sounds like you want something more or less like Please open a feature request in a new issue for that, as this one is now about preventing the panic from the incorrect query. |
Hi,
From a fresh database, I created a point with the admin interface by writing:
Then, issuing the query
crashes the database.
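For reference, per the issue title the crash is triggered by a `GROUP BY time` clause that is missing its interval argument. A sketch of the two forms (the measurement name `cpu` and field `value` are hypothetical):

```sql
-- Valid: GROUP BY time takes an interval argument
SELECT mean(value) FROM cpu WHERE time > now() - 1h GROUP BY time(10m)

-- Invalid: "GROUP BY time" with no interval. In 0.9.3 this
-- panicked the server instead of returning a query error.
SELECT mean(value) FROM cpu WHERE time > now() - 1h GROUP BY time
```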
Here is the log : http://pastebin.com/g6DqnF43
By the way, is there any way I can select all the timestamps for a measurement (optionally filtered by a where clause) ?
Thanks for your work