Add a way to ship 2.0 storage blocks elsewhere #3093

gouthamve commented Aug 20, 2017 (edited)

I wanted to write a tool that ships 2.0 blocks to a remote storage, but it looks like there is interest from others as well, especially as it allows for bulk insert, which is much more efficient.

My current plan: make a tool (a promtool subcommand) that ships the block off to the remote storage, mark it experimental, iterate on the fields and format of the API, and once we have a couple of users, fold it into Prometheus.

@tomwilkie @pauldix Do you think this makes sense? Or do we want to ship the chunks instead of the block? Shipping the block makes it really hard to write the integration in a language other than Go, and it will depend heavily on the structure of TSDB.
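To make the proposal above concrete, here is a rough sketch of what such a promtool-style subcommand could look like. The ULID-named block directories (each holding chunks/, index, meta.json, and tombstones) are the real TSDB on-disk layout, but everything else — the /block endpoint, the tar-over-HTTP framing, and the shipBlock helper — is invented for illustration, since the issue deliberately leaves the wire format open.

```go
// Hypothetical sketch of the proposed promtool subcommand: walk a TSDB
// data directory and upload each completed block to a remote store.
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os"
	"path/filepath"
)

// shipBlock tars a single block directory and POSTs it to the remote
// store. The endpoint path and tar framing are assumptions, not a spec.
func shipBlock(blockDir, endpoint string) error {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	err := filepath.Walk(blockDir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(blockDir, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = rel // store paths relative to the block root
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
	if err != nil {
		return err
	}
	if err := tw.Close(); err != nil {
		return err
	}
	resp, err := http.Post(endpoint+"/block", "application/x-tar", &buf)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("remote store returned %s", resp.Status)
	}
	return nil
}

func main() {
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: shipblocks <tsdb-data-dir> <remote-endpoint>")
		os.Exit(1)
	}
	dataDir, endpoint := os.Args[1], os.Args[2]
	entries, err := os.ReadDir(dataDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		// Block directories are ULID-named; skip the WAL and loose files.
		if !e.IsDir() || e.Name() == "wal" {
			continue
		}
		if err := shipBlock(filepath.Join(dataDir, e.Name()), endpoint); err != nil {
			fmt.Fprintf(os.Stderr, "shipping %s: %v\n", e.Name(), err)
			os.Exit(1)
		}
		fmt.Println("shipped block", e.Name())
	}
}
```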
pauldix commented Aug 21, 2017

I'm not sure whether the format should be tied to chunks or blocks, although if you just send blocks it'll be more efficient, since you're copying over the underlying data structures untouched. It looks like tombstones might cause a problem with this, though, since they'll have to be shipped too. Overall, I think the real goal would be a bulk read/write API. From the read perspective it would take a start and end time and include data from all series in that range, and the write would take some contiguous block of time to write all in one go. If you think it makes sense to use blocks, then I'm +1. We can work with the actual […]
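For clarity, here is roughly the shape of the bulk API pauldix describes, sketched as a Go interface. Every name below is invented for illustration; none of this is an actual Prometheus or TSDB type.

```go
// A rough sketch of the proposed bulk read/write API.
package bulk

// Sample is one timestamped value.
type Sample struct {
	TimestampMs int64
	Value       float64
}

// Series pairs a label set with its samples in the requested window.
type Series struct {
	Labels  map[string]string
	Samples []Sample
}

// BulkStore captures the two calls described above: a read bounded by a
// [mint, maxt] window that returns every series with data in that range,
// and a write that hands over one contiguous window of time in one go.
type BulkStore interface {
	ReadRange(mintMs, maxtMs int64) ([]Series, error)
	WriteRange(mintMs, maxtMs int64, series []Series) error
}
```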
brian-brazil commented Aug 21, 2017

I think 2.0 blocks are what we should use here, and we should keep this coupled with Prometheus. I think an API for bulk read would be a bad idea: the response size would be massive, and we already have one of those in the form of the query endpoint. Bulk write is a separate topic.
pauldix commented Aug 21, 2017

@brian-brazil fair enough. If the write API sends blocks, we'll be able to work with it.
brian-brazil added the component/local storage, component/ui, component/remote storage, priority/P2, and kind/enhancement labels on Aug 21, 2017
brian-brazil changed the title from "Remote write bulk api" to "Add a way to ship 2.0 storage blocks elsewhere" on Aug 21, 2017
brian-brazil removed the component/remote storage and component/ui labels on Aug 21, 2017
brian-brazil commented

@fabxc With your recent change over in TSDB, don't we just need to add a parameter to the snapshot API to make this practical?
brian-brazil commented

And that API is now there, so I'm calling this done.
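For reference, the endpoint this refers to is TSDB's snapshot admin API as Prometheus exposes it, which must be enabled with --web.enable-admin-api; skip_head is the parameter that makes it practical here, since it skips data still in the head block so only completed blocks are snapshotted. A minimal call in Go, assuming a local Prometheus on :9090:

```go
// Trigger a TSDB snapshot via the admin API and print the response,
// which on success names the snapshot directory created under the
// Prometheus data directory's snapshots/ subdirectory.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	resp, err := http.Post(
		"http://localhost:9090/api/v1/admin/tsdb/snapshot?skip_head=true",
		"application/json", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```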
brian-brazil closed this on Mar 8, 2018
lock bot commented Mar 22, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.