
karknu (Contributor) commented Jan 18, 2021

No description provided.

karknu marked this pull request as ready for review January 18, 2021 10:39
karknu requested review from dcoutts and nfrisby January 18, 2021 10:39
dcoutts (Contributor) left a comment

Excellent.

It's probably a general thing we should clarify in our docs for typed protocols, the network framework, etc., and also audit in our existing code:

Do not allocate resources within a protocol handler. Have all resources allocated (with appropriate bracketing) and passed into the protocol handler. Protocol handlers get killed unceremoniously, and the (continuation) style in which they are written does not support resource cleanup.
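
A minimal sketch of that guideline, using invented placeholder names (`SomeResource`, `acquire`, `release`, `protocolHandler` are not real ouroboros-network identifiers): the caller brackets the resource and merely passes it to the handler, so an unceremonious kill cannot leak it.

```haskell
import Control.Exception (bracket)

-- Placeholder resource; stands in for e.g. a ChainDB reader.
data SomeResource = SomeResource

acquire :: IO SomeResource
acquire = pure SomeResource

release :: SomeResource -> IO ()
release _ = pure ()

-- The handler only receives an already-allocated resource; it neither
-- allocates nor frees it, so being killed mid-protocol leaks nothing.
protocolHandler :: SomeResource -> IO ()
protocolHandler _resource = pure ()

-- The caller owns the lifetime: bracket releases the resource even when
-- the handler is interrupted by an asynchronous exception.
runHandler :: IO ()
runHandler = bracket acquire release protocolHandler

main :: IO ()
main = runHandler
```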

)
=> Tracer m (TraceChainSyncServerEvent blk)
-> ChainDB m blk
-> Reader m blk (WithPoint blk (SerialisedHeader blk))

Contributor

Yes great, pass the resource in, rather than allocating it within the protocol handler.

Comment on lines +467 to +494
bracket
  (chainSyncHeaderServerReader (getChainDB kernel) registry)
  ChainDB.readerClose

Contributor

👍
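
For readers skimming the diff, a self-contained mock of the same shape (`MockReader`, `openMockReader`, `closeMockReader`, `serverLoop` are all invented stand-ins; only the bracket/close structure mirrors the snippet above, whose acquire step is `chainSyncHeaderServerReader (getChainDB kernel) registry` and whose release step is `ChainDB.readerClose`):

```haskell
import Control.Exception (bracket)
import Data.IORef (IORef, newIORef, writeIORef)

-- Invented stand-in for the ChainDB reader handle.
newtype MockReader = MockReader (IORef Bool)

-- Stand-in for the acquire step in the diff above.
openMockReader :: IO MockReader
openMockReader = MockReader <$> newIORef True

-- Stand-in for the release step (ChainDB.readerClose in the diff).
closeMockReader :: MockReader -> IO ()
closeMockReader (MockReader open) = writeIORef open False

-- The server loop only uses the reader it is handed; if the thread is
-- killed here, the surrounding bracket still closes the reader.
serverLoop :: MockReader -> IO ()
serverLoop _reader = pure ()

main :: IO ()
main = bracket openMockReader closeMockReader serverLoop
```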

karknu force-pushed the karknu/chainsync_leak branch from a6a6ca7 to 3d66c2e on January 18, 2021 13:11
-> m (Reader m blk (WithPoint blk (Serialised blk)))
chainSyncBlockServerReader chainDB registry = ChainDB.newReader chainDB registry getSerialisedBlockWithPoint


Contributor

If you have another reason to update this PR, please also remove these extraneous newlines while you're at it.

Contributor (Author)

Removed

nfrisby (Contributor) commented Jan 18, 2021

@karknu Is this PR ready for merge? I see that Duncan Approved. You said it slowed the leak, so I'm happy to get it in before the upcoming release.

I hope to open a PR today that supersedes this one, also to beat the release, but I might lose that race. So let's have your improvement in place ASAP. Thanks.

nfrisby (Contributor) commented Jan 18, 2021

(The failing Windows CI build does not currently block merge, especially in light of this unrelated diff.)

karknu force-pushed the karknu/chainsync_leak branch from 3d66c2e to 1932e92 on January 18, 2021 16:26
karknu (Contributor, Author) commented Jan 18, 2021

> @karknu Is this PR ready for merge? I see that Duncan Approved. You said it slowed the leak, so I'm happy to get it in before the upcoming release.
>
> I hope to open a PR today that supersedes this one, also to beat the release, but I might lose that race. So let's have your improvement in place ASAP. Thanks.

Rebased and will merge.

karknu force-pushed the karknu/chainsync_leak branch from 1932e92 to 39f8d9c on January 18, 2021 18:26
dcoutts (Contributor) commented Jan 18, 2021

bors merge

iohk-bors bot commented Jan 18, 2021

iohk-bors bot merged commit bc4b47b into master Jan 18, 2021
iohk-bors bot deleted the karknu/chainsync_leak branch January 18, 2021 22:46
coot pushed a commit that referenced this pull request May 16, 2022
2880: Allways close chainsync servers reader state r=dcoutts a=karknu



Co-authored-by: Karl Knutsson <karl.knutsson@iohk.io>