Replies: 2 comments
-
I believe it's also relevant to have as a feature (if possible) support for tables that overwrite duplicate rows, where a duplicate is defined by some user-chosen set of columns to consider (and columns to ignore). I say it's relevant because if that existed (somewhat like how InfluxDB operates), the need to delete rows that were duplicated by mistake would not exist; it would be behavior the user can enforce (or not) by design, which I'd consider a cleaner implementation. Of course, we still need delete / update as well for all other cases : )
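The overwrite-on-duplicate semantics described above can be sketched as a small in-memory model. This is only an illustration of last-write-wins behavior keyed on user-chosen columns, not QuestDB code; the table, the `upsert` helper, and the column names are all hypothetical.

```python
# Sketch of "overwrite duplicate rows" semantics: a row whose chosen key
# columns match an existing row replaces it (last write wins), rather than
# accumulating as a duplicate. All names here are illustrative assumptions.

def upsert(table: dict, row: dict, key_columns: tuple) -> None:
    """Insert row, overwriting any existing row with the same key-column values."""
    key = tuple(row[c] for c in key_columns)
    table[key] = row  # a duplicate key overwrites instead of duplicating

table: dict = {}
key_columns = ("ts", "sensor_id")  # the user's definition of "duplicate"

upsert(table, {"ts": "2023-01-01T00:00:00Z", "sensor_id": 7, "value": 1.0}, key_columns)
upsert(table, {"ts": "2023-01-01T00:00:00Z", "sensor_id": 7, "value": 2.5}, key_columns)

print(len(table))  # 1 -- the second insert overwrote the first
```

With this design the "duplicate" definition lives with the table, so no after-the-fact DELETE is needed for rows duplicated by mistake.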
-
A standard use case for QuestDB is storing billions of rows, and currently, if you make a mistake (you accidentally store data in the wrong format, store duplicate data, or anything else), the only option you have to fix the table is to export the data, clean it, and create a new table. Such an operation is quite challenging and time-consuming on datasets that large. It is even trickier if you have a platform that runs 24x7 and stores data into QuestDB in real time, and you don't want to miss any updates; fixing such data without update/delete operations is currently almost impossible.
I believe this request applies to almost any use case, as it is only a matter of time before some unwanted data gets stored.
Also, now that QuestDB has O3 support, implementing update/delete on top of it shouldn't be that difficult and would make QuestDB an even more feature-complete product.
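The export / clean / re-import workaround mentioned above can be sketched with the standard library. This is a hypothetical example: the CSV layout, the key columns, and the last-write-wins cleanup rule are assumptions chosen for illustration.

```python
import csv
import io

# Illustrative sketch of the manual workaround: export the table to CSV,
# deduplicate the rows offline, then re-import into a fresh table.
# Column names and the dedup key are assumptions for this example.

def deduplicate_csv(src, dst, key_columns):
    """Copy CSV rows from src to dst, keeping only the last row per key."""
    reader = csv.DictReader(src)
    seen = {}
    for row in reader:
        seen[tuple(row[c] for c in key_columns)] = row  # last write wins
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in seen.values():
        writer.writerow(row)
    return len(seen)

raw = ("ts,sensor_id,value\n"
       "2023-01-01T00:00:00Z,7,1.0\n"
       "2023-01-01T00:00:00Z,7,2.5\n")
out = io.StringIO()
kept = deduplicate_csv(io.StringIO(raw), out, ["ts", "sensor_id"])
print(kept)  # 1 row survives after deduplication
```

Even this simplified version shows why the workaround hurts at scale: the whole dataset has to pass through an external process while live ingestion continues, which is exactly the gap a native update/delete would close.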