feat: add SyncTransaction endpoint #1224
Conversation
f8b6e1e to 2f04a15
2f04a15 to cea5e7f
bobbinth
left a comment
Thank you! Looks good. I left some comments inline. The main things are:
- We need to store more data in the `transactions` table to avoid parsing raw blocks.
- We need to figure out how to limit response size (or at least send an error if the response is too big).
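The size-limiting idea from the second bullet can be sketched as a simple budget loop. This is only an illustration of the approach, not the PR's implementation; the 5MB cap is the figure mentioned later in this review, and `take_within_budget` is a hypothetical helper name.

```rust
// Illustrative size-budget loop: keep taking transactions until the estimated
// response size would exceed the cap, then stop and report how many fit.
const MAX_RESPONSE_BYTES: usize = 5 * 1024 * 1024; // 5MB cap mentioned in the review

fn take_within_budget(estimated_sizes: &[usize]) -> usize {
    let mut total = 0;
    let mut taken = 0;
    for &size in estimated_sizes {
        if total + size > MAX_RESPONSE_BYTES {
            break;
        }
        total += size;
        taken += 1;
    }
    taken
}
```

The caller would then record where it stopped (e.g. the last included block) so the client can resume from there on the next request.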
bobbinth
left a comment
Thank you! Looks good. Not a full review but left some comments inline. The main thing is to fix the methodology for response size estimation. Once you come up with methodology, could you describe it before implementing?
bobbinth
left a comment
Looks good! Thank you. I left some more comments inline.
One more general comment: there is a much more efficient way to serialize note inclusion paths which would guarantee that we fit into the 5MB limit even if a block contained

I just addressed the offset comment, replacing it with a cursor-based approach. Every other comment was previously addressed 👌🏼.
bobbinth
left a comment
Looks good! Thank you. I left a couple more comments inline. The main one is that I'm not sure the logic for filtering out the last block is correct.
bobbinth
left a comment
Looks good! Thank you. I left a couple more comments inline.
Also, not for this PR, but it would be great to test this functionality somehow - though I'm not really sure what the best way to do it is. Let's create an issue for this.
```rust
//
// Note: 500 bytes per output note is an over-estimate but ensures we don't
// exceed memory limits when these transactions are later converted to proto records.
let header_base_size = 4 + 32 + 16 + 64; // block_num + tx_id + account_id + commitments
```
This could be const, right?
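A minimal sketch of the `const` suggestion; the field widths (block_num + tx_id + account_id + commitments) are copied from the comment in the diff above, and the name `HEADER_BASE_SIZE` is illustrative:

```rust
// Hypothetical const form of the header size estimate, computed at compile time.
// Field widths: block_num (4) + tx_id (32) + account_id (16) + commitments (64).
const HEADER_BASE_SIZE: usize = 4 + 32 + 16 + 64;
```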
```rust
let note_records = self
    .state
    .get_notes_by_id(tx_header.output_notes.clone())
    .await
    .map_err(internal_error)?;
```
This might have been discussed before (maybe for a follow up) but going to the store for every single transaction seems potentially wasteful if in a single query you can fit note responses for more than one of them. I think joining the tables is also not very desirable here but that could be another alternative.
I'd add it to the list of follow-ups. We could make a single query that brings back note data for all transactions in the response - but that's a bit more involved and shouldn't make a huge difference for an in-process database (but would be very important if the database was on a different machine).
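The batched alternative discussed above could look roughly like this. All types here are illustrative stand-ins, not the node's actual records, and the "query" is modeled as a single in-memory lookup:

```rust
use std::collections::HashMap;

// Illustrative stand-ins; the real node has its own record types.
#[derive(Clone, Debug, PartialEq)]
struct NoteRecord {
    id: u64,
}

struct TxHeader {
    output_notes: Vec<u64>,
}

// Hypothetical batched lookup: collect every note ID across all transactions,
// issue one lookup, then regroup the results per transaction in memory.
fn fetch_notes_batched(
    txs: &[TxHeader],
    store: &HashMap<u64, NoteRecord>,
) -> Vec<Vec<NoteRecord>> {
    // One "query" for all IDs at once instead of one per transaction.
    let fetched: HashMap<u64, NoteRecord> = txs
        .iter()
        .flat_map(|t| t.output_notes.iter())
        .filter_map(|id| store.get(id).map(|n| (*id, n.clone())))
        .collect();

    // Regroup per transaction, skipping IDs the store didn't have.
    txs.iter()
        .map(|t| {
            t.output_notes
                .iter()
                .filter_map(|id| fetched.get(id).cloned())
                .collect()
        })
        .collect()
}
```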
```rust
last_block_num = Some(tx.block_num);
last_transaction_id = Some(tx.transaction_id.clone());
```
Looks like the clone() (and the assignment) happen per-row but are really only needed for the last row.
I split this logic a bit to avoid the extra clones.
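One shape that fix can take, sketched with placeholder `(block_num, transaction_id)` tuples rather than the actual row type: iterate without cloning, then take the ID once from the last row.

```rust
// Sketch: instead of cloning the transaction ID on every row inside the loop,
// clone it exactly once from the final row after iteration.
fn last_transaction_id(rows: &[(u32, String)]) -> Option<String> {
    rows.last().map(|(_, tx_id)| tx_id.clone())
}
```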
```rust
all_transactions
    .into_iter()
    .take_while(|row| row.block_num != last_block_num)
    .collect::<Vec<_>>(),
```
nit: I think this collect could be avoided as well, right?
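One way to drop the `collect`, assuming the consumer can accept any iterator (the `Row` type here is a placeholder):

```rust
// Sketch: return the take_while adapter directly instead of collecting into a
// Vec, letting the caller consume it lazily.
#[derive(Clone, Debug, PartialEq)]
struct Row {
    block_num: u32,
}

fn keep_before_block(rows: Vec<Row>, last_block_num: u32) -> impl Iterator<Item = Row> {
    rows.into_iter()
        .take_while(move |row| row.block_num != last_block_num)
}
```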
```rust
let last_included_block = last_block_num.map_or(*block_range.end(), |block_num| {
    BlockNumber::from_raw_sql(block_num).expect("valid block number from database")
});
```
Since this is the case where the max payload size was not reached, would this always be equal to the end of the block range?
Yes, makes sense. The response, in practice, is the same, but in the next iteration we avoid going back to blocks that we already checked and saw didn't match our criteria. I'm moving to block_range.end().
```rust
#![allow(
    clippy::cast_possible_wrap,
    reason = "We will not approach the item count where i64 and usize cause issues"
)]
```
Would this be better to put into the function itself?
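The function-scoped form suggested here might look like the following; the function name and signature are illustrative, not taken from the PR:

```rust
// Sketch: scope the lint allow to the one function performing the cast,
// instead of allowing it for the whole module with an inner attribute.
#[allow(clippy::cast_possible_wrap)]
fn item_count_as_i64(count: usize) -> i64 {
    count as i64
}
```

Narrowing the scope keeps the lint active everywhere else a potentially wrapping cast might be introduced.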
```rust
// If no more data, we're done
if chunk.is_empty() {
    break;
}
```
nit: I think I'd remove this if because it's functionally very similar to the if after the for loop, and returning early doesn't seem too beneficial since chunk would contain no transactions to loop through.
Overall logic looking good. Left a few comments, mainly related to avoiding unnecessary allocations, and a few minor nits (feel free to disregard if you think they don't apply).
```proto
// List of transaction records.
repeated TransactionRecord transaction_records = 2;
```
I'd name this just `transactions` - let's add it to the follow-ups.
```proto
// A transaction header.
transaction.TransactionHeader transaction_header = 2;
```
nit: maybe I'd change this to just `header`. Let's add this to the follow-ups as well.
closes #1207