feat: enable fcu deduplication #11
Conversation
```rust
#[derive(Clone)]
pub(crate) struct ProxyHttpInnerAuthrpc {
    config: ConfigAuthrpc,
    fcu_cache: Cache<[u8; 3 * HASH_LEN], ()>, // head + safe + finalised
}
```
Why not do the following:
key = head hash, value = safe + finalized?
This would be MUCH simpler as I see it.
Or another option - let's have (alloy::Address, alloy::Address, alloy::Address)
This too would be simpler to build and look at
the FCUs that we are deduplicating are advancing the safe head; most of them happen on the same unsafe head.
i.e. they look like:
| unsafe | safe | finalised |
|---|---|---|
| 1000 | 800 | 600 |
| 1000 | 801 | 600 |
| 1000 | 802 | 600 |
| 1000 | 803 | 600 |
| ... | ... | ... |
| 1001 | 870 | 600 |
| 1001 | 871 | 600 |
| 1001 | 872 | 600 |
| 1001 | 873 | 600 |
and then somewhere in between the finalised head might also advance.
also, it's up to the sequencer to decide on the reorg, so the above sequence might go askew just because of it.
tl;dr: we should not get in the sequencer's way too much in something as dumb as a proxy. if we did forward some particular unsafe/safe/finalised triplet already, we can safely reason that there's no need to do that again. but deciding on behalf of the sequencer about particular re-combinations of those hashes is too much already.
let's have (alloy::Address, alloy::Address, alloy::Address)
I don't understand this.
address is 20 bytes. block hash is 32. how are they interchangeable?
p.s. perhaps you meant using FixedBytes<32> => still I am not sure it's worth using them.
reason is:
- the message we get is already a string representation.
- if we simply concatenate 3x 66 bytes of those string representations, we already get a unique id of the FCU message
- now if we convert them into bytes, we spend time on the conversion while not achieving anything extra. the hash-map key would just shrink from 198 to 96 bytes, which I doubt would give us a performance boost bigger than the penalty we'd pay for the string-to-bytes conversion
str is not Sized and therefore cannot be used as a key; that's why I use a fixed-size array. so if it's a tuple, it has to be a tuple of arrays.
but what's the difference between ([u8; 66], [u8; 66], [u8; 66]) vs. just [u8; 198]? my intuition is that hashing one array would be (negligibly) faster than hashing 3 arrays.
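For concreteness, a minimal sketch of the tuple-of-arrays variant, assuming HASH_LEN is 66 (the length of a 0x-prefixed hex hash string); the fcu_key helper is purely illustrative:

```rust
const HASH_LEN: usize = 66; // "0x" + 64 hex characters

// Key made of three fixed-size byte arrays (head, safe, finalised).
type FcuKey = ([u8; HASH_LEN], [u8; HASH_LEN], [u8; HASH_LEN]);

// Copies each hash string into its own fixed-size array;
// try_into() fails if a string is not exactly HASH_LEN bytes long.
fn fcu_key(head: &str, safe: &str, finalized: &str) -> Option<FcuKey> {
    Some((
        head.as_bytes().try_into().ok()?,
        safe.as_bytes().try_into().ok()?,
        finalized.as_bytes().try_into().ok()?,
    ))
}
```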
it's not about hashing (probably equal, or close to it), but about code ergonomics.
This is opinionated, but IMO using a tuple for this is simpler than a flat array.
Instead of this:
```rust
let state = params[0].as_object()?;
let head = state.get("headBlockHash")?.as_str()?;
let safe = state.get("safeBlockHash")?.as_str()?;
let finalized = state.get("finalizedBlockHash")?.as_str()?;

let mut key = [0u8; 3 * HASH_LEN];
key[0..HASH_LEN].copy_from_slice(head.as_bytes());
key[HASH_LEN..2 * HASH_LEN].copy_from_slice(safe.as_bytes());
key[2 * HASH_LEN..3 * HASH_LEN].copy_from_slice(finalized.as_bytes());
```
You would have this:

```rust
#[derive(Clone)]
pub(crate) struct ProxyHttpInnerAuthrpc {
    config: ConfigAuthrpc,
    fcu_cache: Cache<(String, String, String), ()>, // head + safe + finalised
}
```

```rust
let state = params[0].as_object()?;
let head = state.get("headBlockHash")?.as_str()?.to_string();
let safe = state.get("safeBlockHash")?.as_str()?.to_string();
let finalized = state.get("finalizedBlockHash")?.as_str()?.to_string();

let key = (head, safe, finalized);
if !self.fcu_cache.contains_key(&key) {
    self.fcu_cache.insert(key, ());
    return None;
}
```
I would prefer to avoid allocating strings just for ergonomics. this code is on a hot path.
in fact, if there is a good way to get a [u8; 66] view into the underlying str, that would be even better, since it would avoid copying data into the array (and using a tuple would make much more sense then)
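A minimal sketch of such a view, relying only on core's TryFrom<&[u8]> for &[u8; N]; note that an owned cache key would still have to copy the bytes on insert, so whether this saves anything depends on how the cache looks keys up:

```rust
// Borrowed fixed-size view into a 0x-prefixed hex hash string, no copying.
// Returns None unless the string is exactly 66 bytes long.
fn as_hash_bytes(s: &str) -> Option<&[u8; 66]> {
    s.as_bytes().try_into().ok()
}
```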
You place this data into a HashMap, so the baseline is that you have to give up ownership of some data.
As you use a reference to jrpc_req, you cannot give up ownership of it, so you need to copy the data.
You could change the should_intercept function to take jrpc_req: JrpcRequestMetaMaybeBatch; then you could take ownership of the field and do this without copies.
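A hedged sketch of that idea, assuming the FCU params are available as an owned serde_json::Value once should_intercept takes the request by value; extract_fcu_key and the exact JSON shape are assumptions for illustration only:

```rust
use serde_json::Value;

// Moves the three hash Strings out of an owned params value, so no extra
// string allocations are made. This consumes part of the params, which is
// only viable if the caller has handed the request over by value.
fn extract_fcu_key(mut params: Value) -> Option<(String, String, String)> {
    // take() swaps the first param for Null and gives us ownership of it
    let Value::Object(mut state) = params.get_mut(0)?.take() else {
        return None;
    };

    // remove() hands back the owned Value, letting us move the String out
    let mut take_str = |key: &str| match state.remove(key) {
        Some(Value::String(s)) => Some(s),
        _ => None,
    };

    let head = take_str("headBlockHash")?;
    let safe = take_str("safeBlockHash")?;
    let finalized = take_str("finalizedBlockHash")?;
    Some((head, safe, finalized))
}
```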
Co-authored-by: Solar Mithril <mikawamp@gmail.com>
this PR enables FCU deduplication for the case when op-node begins to advance the safe head from multiple sequencers (active + candidates), which results in the same FCU call being delivered multiple times.
in addition, the backing functionality can be reused to implement chaos engineering (like there is in bproxy).