Firehose consumers do not receive root block in case of too big seq commit #2893
Labels: bug
Comments
Hi @dholms! I am pinging you here since you are the last one in the git blame for this file :)

bump :(

The intention going forward is that the commit block should be included as the root block. It is also possible it isn't getting passed through the relay? We'll need to investigate.

Oh rip I didn't see your PR on this 😅 Let me take a closer look, I might prefer yours
Describe the bug
Firehose consumers do not receive the root block in the case of a too-big seq commit.
To Reproduce
I wrote a corresponding unit test in the linked PR that reproduces the issue; the gist of the check is sketched below.
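A sketch of that check (not the actual test from the linked PR; I'm using @ipld/car's `CarReader` here only as a generic way to parse the CAR slice, and `assertRootBlockPresent` is a name I made up):

```ts
import { CarReader } from '@ipld/car'

// Consumer-side check: the `blocks` field of a #commit frame is a CAR slice,
// and even for tooBig commits it should carry the commit (root) block.
async function assertRootBlockPresent(carBytes: Uint8Array): Promise<Uint8Array> {
  const reader = await CarReader.fromBytes(carBytes)
  const [root] = await reader.getRoots()
  if (!root) {
    throw new Error('CAR slice declares no root CID')
  }
  const rootBlock = await reader.get(root)
  if (!rootBlock) {
    // This is what we observe for tooBig commits: the root CID is declared,
    // but no block bytes for it are present in the slice.
    throw new Error(`root block ${root.toString()} is missing from the CAR slice`)
  }
  return rootBlock.bytes
}
```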
Actual behavior
car.blocks
are always emptyExpected behavior
car.blocks
must be always one (the root one)Additional context
Line no. 29 looks suspicious:
atproto/packages/pds/src/sequencer/events.ts
Lines 25 to 31 in 81ae1b1
Here is why:

- `.add` of `BlockMap` returns a promise and no one waits for it to resolve (a forgotten `await`).
- `.add` is typed as `LexValue`, which goes through `dataToCborBlock(lexToIpld(value))`. That does not make sense, since we just move the block from one `BlockMap` to another one; `newBlocks.get` returns bytes already (sketched below).
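For concreteness, a minimal sketch of what the tooBig path seems to be doing versus what I think it should do. This is not the actual events.ts code: `justRootBlocks` is a made-up helper, and the assumption that `BlockMap` exposes a `set(cid, bytes)` method is mine.

```ts
import { BlockMap } from '@atproto/repo'
import { CID } from 'multiformats/cid'

// Hypothetical helper mirroring the tooBig branch: build a map that holds only
// the commit (root) block, copied out of the commit's new blocks.
function justRootBlocks(newBlocks: BlockMap, rootCid: CID): BlockMap {
  const justRoot = new BlockMap()

  // Suspected buggy shape: `add` takes a LexValue, re-encodes it through
  // dataToCborBlock(lexToIpld(...)), and returns a promise that nothing
  // awaits, so `justRoot` can still be empty when the CAR slice is written:
  //
  //   justRoot.add(newBlocks.get(rootCid))

  // Sketch of a fix (assuming BlockMap has set(cid, bytes)): the block is
  // already CBOR-encoded, so just move the bytes across, no promise involved.
  const rootBytes = newBlocks.get(rootCid)
  if (!rootBytes) {
    throw new Error(`commit block ${rootCid.toString()} missing from newBlocks`)
  }
  justRoot.set(rootCid, rootBytes)
  return justRoot
}
```

Either awaiting the `add` call or copying the already-encoded bytes directly would keep the root block in the CAR slice; the latter also avoids re-encoding.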
P.S. I dug into this because I got reports from my lovely Python devs that `commit.blocks` received from the firehose message frame can be missing, even though it is marked as a required field in the lexicon. From our observations, it is always missing in combination with `tooBig=true`. I will continue my journey, but if you have any ideas I would appreciate them.