feat(zebra-scan): Create a key storage database in RAM #7904
Comments
This looks like a significant change, I was hoping a
maybe a `HashMap` should be enough to begin with
I am pretty sure that the storage will get complicated enough to have its own submodule (errors, constants like the maximum number of keys, etc.).
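The `HashMap`-based starting point suggested above could look something like this minimal sketch. All names here (`KeyStorage`, `add_key`, `MAX_KEYS`) are illustrative assumptions, not Zebra's actual API; a constant like `MAX_KEYS` is the kind of thing that would live in the storage submodule's constants.

```rust
use std::collections::HashMap;

/// Hypothetical upper bound on stored keys (illustrative value).
const MAX_KEYS: usize = 1000;

/// A minimal in-memory key store; names are illustrative, not Zebra's API.
#[derive(Default)]
struct KeyStorage {
    /// Maps a viewing key string to the height scanning should start from.
    keys: HashMap<String, u32>,
}

impl KeyStorage {
    /// Adds a key, rejecting inserts past the maximum.
    fn add_key(&mut self, key: String, birthday_height: u32) -> Result<(), &'static str> {
        if self.keys.len() >= MAX_KEYS {
            return Err("key storage is full");
        }
        self.keys.insert(key, birthday_height);
        Ok(())
    }

    /// Returns an iterator over all stored keys, so a scanning task
    /// can traverse them.
    fn keys(&self) -> impl Iterator<Item = (&String, &u32)> {
        self.keys.iter()
    }
}

fn main() {
    let mut storage = KeyStorage::default();
    storage
        .add_key("zxviews-example".to_string(), 419_200)
        .unwrap();
    println!("stored {} keys", storage.keys().count());
}
```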
Ah, that sounds good, I misunderstood the scope of this ticket. Something similar might also be useful for storing the initial scan tasks, with methods to check on their progress, if we want to implement cancelling and restarting those tasks, or sending them new viewing keys to scan down the line.
Let's open a separate ticket to design the scanning strategy?
**Optional suggestion about swapping out storage layers**

It might be helpful to have a separate crate for private key and secret scanning storage. That way, people can swap it out for another crate with the same interface. For example, some applications might just want to store secrets in RAM, and avoid the dependency on a disk-based database. But we could also do this using a Rust feature, so we don't need to make that decision now. I also think we should consider re-using RocksDB here, because we already have well-tested low-level interfaces to it. That would involve splitting out:
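The swappable-storage idea above could be sketched with a trait that both a RAM-backed and a disk-backed store implement. The trait and type names below (`ScanStorage`, `MemoryStorage`) are hypothetical, not an existing Zebra interface:

```rust
/// A hypothetical interface that both a RAM-backed and a RocksDB-backed
/// store could implement; these names are illustrative, not Zebra's API.
trait ScanStorage {
    fn insert_key(&mut self, key: String);
    fn all_keys(&self) -> Vec<String>;
}

/// RAM-backed implementation for applications that want to avoid
/// a dependency on a disk-based database.
#[derive(Default)]
struct MemoryStorage {
    keys: Vec<String>,
}

impl ScanStorage for MemoryStorage {
    fn insert_key(&mut self, key: String) {
        self.keys.push(key);
    }

    fn all_keys(&self) -> Vec<String> {
        self.keys.clone()
    }
}

fn main() {
    // Callers depend only on the trait, so a disk-backed store
    // with the same interface could be swapped in later.
    let mut storage: Box<dyn ScanStorage> = Box::new(MemoryStorage::default());
    storage.insert_key("zxviews-example".to_string());
    println!("{} keys stored", storage.all_keys().len());
}
```

Whether this lives in its own crate or behind a Rust feature flag is exactly the decision the comment above suggests deferring.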
That refactor would need a separate ticket. But for now, can we just pass a
I estimated this ticket based on our decision at the meeting: we'll just pass a
Well, that means there is nothing to do in this ticket. I think we should add all the database stuff here instead of opening more tickets.
Could we do this as part of #7905 and estimate this issue for re-using RocksDB here?
I think we should block this on #7905 to avoid that.
I think there are two separate changes here:
Since we want to be able to do some work in parallel, splitting the work into those two tickets would help.
In my experience, blocking on code sometimes causes more rework, because that code includes assumptions that aren't in the design. (And those assumptions don't work with other code from other tickets that are happening in parallel.) I'd like to look at our design over the next week, and work out how to avoid blocking across the key, scanning task, and results storage work, because that can get really complicated to manage. Defining simple interfaces will reduce those dependencies.

**Analysis**

But for now, let's focus on the specific design question in this ticket. So what's the design question here? Here's one way to find missing parts of the design: create a list of key operations, and say what happens to the keys, scanning tasks, and results with each operation. For example:
**Questions**

I think our current design covers everything except deleting results, and maybe how the scanner task finds out about new or deleted keys. So the design questions are:

**My Suggestions**

In the meeting we said that we'd just pass a
So here are some things we could do in this ticket:
(Or these things can be done in multiple tickets, that's up to Alfredo and Pili.)

And here are some things that depend on larger work:
What do you think? Is that a good size for this ticket? Are there any missing dependencies?
As part of the scanning project, we are going to be holding user-defined viewing keys and scanning results in a storage database.
Proposed structure:
This key storage database should always be ready to be traversed by the scanning task.
As the keys are initially added to the config, no delete methods should be implemented as part of this ticket.
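A minimal sketch of that scope: storage is populated once from the config, has no delete methods, and can be traversed by the scanning task at any time. The names below (`ScanConfig`, `Storage`, `from_config`) and the results type are illustrative assumptions, not Zebra's actual types:

```rust
use std::collections::HashMap;

/// Hypothetical config section holding the user-defined viewing keys.
struct ScanConfig {
    viewing_keys: Vec<String>,
}

/// Storage populated once from the config. It deliberately has no
/// delete methods, matching the scope of this ticket.
struct Storage {
    /// Maps each viewing key to its scanning results (illustrative type).
    results: HashMap<String, Vec<String>>,
}

impl Storage {
    /// Loads every configured viewing key with an empty result set.
    fn from_config(config: &ScanConfig) -> Self {
        let results = config
            .viewing_keys
            .iter()
            .map(|key| (key.clone(), Vec::new()))
            .collect();
        Storage { results }
    }
}

fn main() {
    let config = ScanConfig {
        viewing_keys: vec!["key-a".to_string(), "key-b".to_string()],
    };
    let storage = Storage::from_config(&config);

    // The scanning task can traverse the storage at any time.
    for (key, results) in &storage.results {
        println!("scanning for {key}: {} results so far", results.len());
    }
}
```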