It would be useful for cluster users and scalability to be able to open batches/transactions, add a number of pins (or pin removals), and then close the batch (at which point the update is sent to the network).
Batches are just a way to group several updates together before sending them out to other peers (this is different from local datastore batching/transactions).
Once #1008 is done, this could be approached, but it will need a bunch of things:
TODO: figure out how the consensus component interface changes and where the batch maintenance should go: LogPins vs. BatchPin + BatchCommit.
TODO: figure out how the API changes: should the user submit pins to a batch one by one, or send many pins at once and have the batch built from them?
Optional: go-libp2p-raft should probably use BatchingFSM (FSM ApplyBatch() libp2p/go-libp2p-raft#61) for efficiency reasons. This means upgrading it and fixing an issue with the new raft logger.
TODO: figure out how the API changes: let the user submit many pins that go to a batch one by one, or allow sending many pins all at once and make the batch with that
I think this boils down to the error handling in case the batch is larger than the allowed batch size.
An application that can produce its output incrementally, like a Python generator, is usually much more lightweight and can start processing and pushing changes while it is still reading the input.
Whether the whole change needs to be aborted when the pushed changes cannot be committed as a whole may only be an issue for certain types of input.
This would allow an early commit while processing: opening a new changeset for the critical changes that need to be committed together, then writing non-critical changes until the batch is full or another critical change comes along.
It would also make it possible to implement QoS at the application level: committing as soon as a time-critical write comes along, while background writes simply accumulate in the batch beforehand.