This is more of an application question than anything else, so please feel free to direct me elsewhere. Many thanks for this wonderful project!
I run a small service, KVdb, and I'd like to improve its read latency from various points around the Internet. Initially, I'm leaning toward a single-master + multiple-slave setup as a stepping stone to a more proper distributed system :) The Badger API docs led me to the Stream framework, which is used by the backup and restore functionality, so perhaps something like the following would work?
Assumptions/tradeoffs:
- Flat topology of 5~10 slaves
- Master-slave network latencies of 50~200ms
- Reads should be fast, writes can be an order of magnitude slower
- All data will be blindly replicated for now
On the master, for each slave that connects (rough sketch after the list):
- Create a new Stream
- Select keys since the slave's last version timestamp
- Send keys over the network to the slave
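Roughly what I have in mind for the master side, assuming badger v2's Stream API (there, Send receives a *pb.KVList; newer releases pass a *z.Buffer instead). sendToSlave is just a placeholder for whatever network transport I end up using:

```go
package main

import (
	"context"

	badger "github.com/dgraph-io/badger/v2"
	"github.com/dgraph-io/badger/v2/pb"
)

// streamToSlave streams every key whose latest version is newer than the
// slave's last-seen timestamp. sendToSlave is a hypothetical transport hook.
func streamToSlave(db *badger.DB, sinceTs uint64, sendToSlave func(*pb.KVList) error) error {
	stream := db.NewStream()
	stream.LogPrefix = "Replication"
	// Skip keys the slave has already seen.
	stream.ChooseKey = func(item *badger.Item) bool {
		return item.Version() > sinceTs
	}
	// Each batch of KVs produced by the stream goes straight to the wire.
	stream.Send = sendToSlave
	return stream.Orchestrate(context.Background())
}
```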
On the slave (sketched below):
- Connect to master
- Request key stream since last version timestamp
- Use KVLoader to ingest keys into our local database
- Repeat
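And a matching sketch of the slave's ingest loop, again assuming badger v2, where db.NewKVLoader is what the built-in restore path uses. recvBatch is a hypothetical receive hook that yields the batches the master sent and io.EOF when the round ends:

```go
package main

import (
	"io"

	badger "github.com/dgraph-io/badger/v2"
	"github.com/dgraph-io/badger/v2/pb"
)

// ingestFromMaster applies one round of streamed KVs to the local store.
// recvBatch is a hypothetical transport hook returning io.EOF when done.
func ingestFromMaster(db *badger.DB, recvBatch func() (*pb.KVList, error)) error {
	loader := db.NewKVLoader(16) // buffer up to 16 pending write batches
	for {
		list, err := recvBatch()
		if err == io.EOF {
			break // master finished this round
		}
		if err != nil {
			return err
		}
		for _, kv := range list.Kv {
			if err := loader.Set(kv); err != nil {
				return err
			}
		}
	}
	// Flush buffered writes; only then record the new version timestamp.
	return loader.Finish()
}
```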
Also on the slave:
- Redirect write activity to master. (I don't want to go down the Raft rabbit hole at this point.)
- Refuse reads (or soft-fail) if the slave is too far behind the master (e.g., by a time delta), depending on the desired consistency; see the sketch below
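For that last point, I imagine the staleness gate could be as simple as comparing wall-clock lag against a tolerance. Names here are hypothetical, not Badger APIs; lastSyncedUnixNano would be set after each successful replication round:

```go
package main

import (
	"errors"
	"sync/atomic"
	"time"
)

// lastSyncedUnixNano is set atomically each time a replication round
// completes; purely local bookkeeping, not part of Badger.
var lastSyncedUnixNano int64

var ErrTooStale = errors.New("replica too far behind master")

// checkFreshness soft-fails reads when the local copy is older than maxLag,
// letting the caller redirect the read to the master instead.
func checkFreshness(maxLag time.Duration) error {
	last := time.Unix(0, atomic.LoadInt64(&lastSyncedUnixNano))
	if time.Since(last) > maxLag {
		return ErrTooStale
	}
	return nil
}
```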
Am I oversimplifying, plain wrong, or is this at least in the right direction?