
routing: shutdown chanrouter correctly. #8497

Open

wants to merge 3 commits into base: master
Conversation

ziggie1984
Collaborator

@ziggie1984 ziggie1984 commented Feb 22, 2024

Fixes #8489
EDIT: Fixes #8721

In the issue linked above, the channel graph could not be synced correctly, so the ChanRouter failed to start:

2024-02-20 11:18:39.217 [INF] CRTR: Syncing channel graph from height=830127 (hash=00000000000000000003a7ed3b7a5f5fd5571a658972e9db0af2a650f6ade198) to height=831246 (hash=00000000000000000002946973960d53538a7d93333ff7d4653a37a577ba4b58)

...

2024-02-20 11:19:11.325 [WRN] LNWL: Query(34) from peer 142.132.193.144:8333 failed, rescheduling: did not get response before timeout
2024-02-20 11:19:11.326 [DBG] BTCN: Sending getdata (witness block 000000000000000000028352e09a42f6d26d0514a3d483f7f1fb56b2c2954361) to 142.132.193.144:8333 (outbound)
2024-02-20 11:19:15.327 [WRN] LNWL: Query(34) from peer 142.132.193.144:8333 failed and reached maximum number of retries, not rescheduling: did not get response before timeout
2024-02-20 11:19:15.327 [DBG] LNWL: Canceled batch 34
2024-02-20 11:19:15.328 [INF] DISC: Authenticated gossiper shutting down
2024-02-20 11:19:15.328 [INF] DISC: Authenticated Gossiper is stopping

Query 34 failed, and therefore the startup of the ChanRouter failed as well.

We fail here and never call the Stop function of the channel router.
https://github.com/lightningnetwork/lnd/blob/master/routing/router.go#L628

However, when cleaning up all the other subsystems, we get stuck:

goroutine 1652 [select]:
github.com/lightningnetwork/lnd/routing.(*ChannelRouter).UpdateEdge(0xc0002190e0, 0xc00028fea0, {0x0, 0x0, 0x0})
        github.com/lightningnetwork/lnd/routing/router.go:2605 +0x155
github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).updateChannel(0xc0004a2790, 0xc0004f0580, 0xc00028fea0)
        github.com/lightningnetwork/lnd/discovery/gossiper.go:2182 +0x1f1
github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).retransmitStaleAnns(0xc0004a2790, {0x0?, 0x100c004d10870?, 0x31f6c60?})
        github.com/lightningnetwork/lnd/discovery/gossiper.go:1643 +0x272
github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).networkHandler(0xc0004a2790)
        github.com/lightningnetwork/lnd/discovery/gossiper.go:1342 +0x19d
created by github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).start in goroutine 1
        github.com/lightningnetwork/lnd/discovery/gossiper.go:599 +0x145

This is because we don't close the quit channel of the channel router, so the Authenticated Gossiper cannot stop either. The cleanup process gets stuck, holding up the shutdown of all subsystems and causing side effects because other subsystems are still running.

2024-02-20 11:19:15.328 [INF] DISC: Authenticated gossiper shutting down
2024-02-20 11:19:15.328 [INF] DISC: Authenticated Gossiper is stopping

Goroutine Dump:

goroutine 1 [semacquire]:
sync.runtime_Semacquire(0xc0039a30e0?)
        runtime/sema.go:62 +0x25
sync.(*WaitGroup).Wait(0xc0008ff7a0?)
        sync/waitgroup.go:116 +0x48
github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).stop(0xc0004a2790)
        github.com/lightningnetwork/lnd/discovery/gossiper.go:746 +0x115
github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).Stop.func1()
        github.com/lightningnetwork/lnd/discovery/gossiper.go:732 +0x69
sync.(*Once).doSlow(0x3?, 0xc00030e6a0?)
        sync/once.go:74 +0xbf
sync.(*Once).Do(...)
        sync/once.go:65
github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).Stop(0xc0039a32d8?)
        github.com/lightningnetwork/lnd/discovery/gossiper.go:730 +0x3c
github.com/lightningnetwork/lnd.cleaner.run({0xc001c1bc00, 0x1e209aa?, 0x4?})
        github.com/lightningnetwork/lnd/server.go:1858 +0x42
github.com/lightningnetwork/lnd.(*server).Start(0xcb01c?)
        github.com/lightningnetwork/lnd/server.go:2248 +0x8e
github.com/lightningnetwork/lnd.Main(0xc0001d0100, {{0x0?, 0x7f4703ac7c40?, 0x101c0000b2000?}}, 0xc000104f60, {0xc0000b3e60, 0xc000222180, 0xc0002221e0, 0xc000222240, {0x0}})
        github.com/lightningnetwork/lnd/lnd.go:684 +0x3be5
main.main()
        github.com/lightningnetwork/lnd/cmd/lnd/main.go:38 +0x1ee

So we need to think about how to prevent these situations, because I think we don't close the quit channel for almost any subsystem when its start fails.
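To make the failure mode concrete, here is a minimal, self-contained sketch (simplified stand-in types, not the real ChannelRouter/gossiper code; the failing step is hypothetical) of why a caller blocks forever when a failed Start never closes the quit channel, and how a deferred cleanup on the error path fixes it:

package main

import (
	"errors"
	"fmt"
	"sync"
)

type router struct {
	quit    chan struct{}
	updates chan string
	wg      sync.WaitGroup
	stopped sync.Once
}

// UpdateEdge mirrors the select pattern seen in the goroutine dump: it only
// returns once the update is consumed or the quit channel is closed.
func (r *router) UpdateEdge(update string) error {
	select {
	case r.updates <- update:
		return nil
	case <-r.quit:
		return errors.New("router shutting down")
	}
}

// Stop closes quit exactly once and waits for any goroutines to exit.
func (r *router) Stop() {
	r.stopped.Do(func() {
		close(r.quit)
		r.wg.Wait()
	})
}

// Start shows the fix: on every error path we make sure Stop runs, so quit
// is closed and dependent subsystems (e.g. the gossiper) can unblock.
func (r *router) Start() (err error) {
	defer func() {
		if err != nil {
			r.Stop()
		}
	}()

	// Hypothetical failing step, standing in for the graph sync that
	// timed out in the logs above.
	return errors.New("syncGraphWithChain: block fetch timed out")
}

func main() {
	r := &router{quit: make(chan struct{}), updates: make(chan string)}
	if err := r.Start(); err != nil {
		fmt.Println("start failed:", err)
	}

	// Without the deferred Stop above, this call would block forever,
	// which is exactly the stuck gossiper goroutine shown in the dump.
	fmt.Println(r.UpdateEdge("stale channel update"))
}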


coderabbitai bot commented Feb 22, 2024

Important

Review skipped

Auto reviews are limited to specific labels.

Labels to auto review (1)
  • llm-review

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.


@ziggie1984
Collaborator Author

But this is only part of the fix for why the node is not able to sync the graph to the chain. We definitely need to retry the block fetch rather than fail immediately if we cannot get the block from the first peer. That problem is already tracked here:

btcsuite/btcwallet#904

@ziggie1984
Collaborator Author

cc @yyforyongyu @Roasbeef

@ziggie1984
Collaborator Author

Swapped the order in which we add the cleanup stop function to the cleaner. Let's see if the tests pass.

@@ -539,6 +539,13 @@ func (r *ChannelRouter) Start() error {
		return nil
	}

	defer func() {
Collaborator

Do we still need this given the change of order in the following commit?

Collaborator Author

agree we can remove this now.

server.go (resolved)
@@ -1847,173 +1847,174 @@ func (s *server) Start() error {
	cleanup := cleaner{}

	s.start.Do(func() {
		cleanup = cleanup.add(s.customMessageServer.Stop)
Collaborator

this change is interesting... I need to take a second look to see if this can be done safely, as there are some stop patterns that require a certain state to be reached before they can stop, such as:

lnd/peer/ping_manager.go

Lines 201 to 205 in acc5951

func (m *PingManager) Stop() error {
	if m.pingTicker == nil {
		return errors.New("PingManager cannot be stopped because it " +
			"isn't running")
	}

Collaborator Author

I think that's fine, because we just call the stop function, and if we cannot stop it we just print a warning and continue with the cleanup of the other subsystems. We cannot know at which state the startup might fail, so I think we should call the stop function as a precaution.
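For reference, a simplified sketch of the warn-and-continue behaviour described here (the real cleaner lives in lnd's server.go; this is just an illustrative reimplementation):

package main

import (
	"errors"
	"log"
)

// cleaner collects the stop functions of already-started subsystems so they
// can be torn down if a later startup step fails.
type cleaner []func() error

// add registers another stop function to run on a failed startup.
func (c cleaner) add(stop func() error) cleaner {
	return append(c, stop)
}

// run calls the stop functions in reverse registration order. A failing stop
// (e.g. a subsystem that never reached the state it needs to stop cleanly)
// only produces a warning; the remaining subsystems are still cleaned up.
func (c cleaner) run() {
	for i := len(c) - 1; i >= 0; i-- {
		if err := c[i](); err != nil {
			log.Printf("WRN: cleanup failed: %v", err)
		}
	}
}

func main() {
	cleanup := cleaner{}
	cleanup = cleanup.add(func() error { return nil })
	cleanup = cleanup.add(func() error {
		return errors.New("PingManager cannot be stopped because it isn't running")
	})

	// Both stops are attempted; the error above is logged, not fatal.
	cleanup.run()
}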

@ziggie1984
Collaborator Author

@yyforyongyu while adding the interruptibility to the startup of the server, I figured out that we need to make sure that each stop call is atomic (only happens once). Otherwise we first call it in the cleanup method, and when we return an error it is also called in the server.Stop() function. While testing I hit such a panic with the invoice registry and also the txpublisher.

But I think when the tests pass, the switch of the cleanup order should have no side effects, and it can prevent some cases where subsystems depend on each other and therefore cannot shut down correctly if one of them does not close its quit channel.
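As an illustration of the double-stop panic mentioned above, here is a minimal sketch (illustrative types, not the actual lnd subsystems) of the atomic guard that makes Stop idempotent:

package main

import (
	"fmt"
	"sync/atomic"
)

type subsystem struct {
	stopped uint32 // to be used atomically
	quit    chan struct{}
}

func newSubsystem() *subsystem {
	return &subsystem{quit: make(chan struct{})}
}

// Stop is safe to call more than once: only the first caller closes quit.
// Without the CAS guard, a second close(s.quit) would panic.
func (s *subsystem) Stop() error {
	if !atomic.CompareAndSwapUint32(&s.stopped, 0, 1) {
		return nil
	}
	close(s.quit)
	return nil
}

func main() {
	s := newSubsystem()

	// The cleanup path and server.Stop() may both end up calling Stop;
	// with the guard the second call is a harmless no-op.
	fmt.Println(s.Stop(), s.Stop())
}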

@saubyk saubyk added the P1 MUST be fixed or reviewed label Jun 25, 2024
@lightninglabs-deploy

@ziggie1984, remember to re-request review from reviewers when ready

Collaborator

@yyforyongyu yyforyongyu left a comment

Left some comments, will check the itest logs to understand more about the new behavior.

@@ -48,6 +49,9 @@ var (
// ChannelEventStore maintains a set of event logs for the node's channels to
// provide insight into the performance and health of channels.
type ChannelEventStore struct {
	started uint32 // To be used atomically.
Collaborator

since there are only two states, I think we can use atomic.Bool instead
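For comparison, a small sketch of the two equivalent guards (illustrative only; atomic.Bool requires Go 1.19+):

package main

import (
	"fmt"
	"sync/atomic"
)

type storeU32 struct {
	started uint32 // to be used atomically
}

// tryStart uses the CAS-on-uint32 pattern from the current diff.
func (s *storeU32) tryStart() bool {
	return atomic.CompareAndSwapUint32(&s.started, 0, 1)
}

type storeBool struct {
	started atomic.Bool
}

// tryStart uses the suggested atomic.Bool: Swap returns the previous value,
// so a true result means Start already ran.
func (s *storeBool) tryStart() bool {
	return !s.started.Swap(true)
}

func main() {
	a, b := &storeU32{}, &storeBool{}
	fmt.Println(a.tryStart(), a.tryStart()) // true false
	fmt.Println(b.tryStart(), b.tryStart()) // true false
}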

@@ -142,6 +146,10 @@ func NewChannelEventStore(config *Config) *ChannelEventStore {
// information from the store. If this function fails, it cancels its existing
// subscriptions and returns an error.
func (c *ChannelEventStore) Start() error {
	if !atomic.CompareAndSwapUint32(&c.started, 0, 1) {
		return nil
Collaborator

Let's always return an error when Start is called twice?

Collaborator

Would also put the log.Info("ChannelEventStore starting") log before this check

Collaborator Author

hmm I think the current design calls them twice, erroring out might cause problems, let's see what the itests say with the new design.

@@ -203,6 +211,10 @@ func (c *ChannelEventStore) Start() error {

// Stop terminates all goroutines started by the event store.
func (c *ChannelEventStore) Stop() {
	if !atomic.CompareAndSwapUint32(&c.stopped, 0, 1) {
Collaborator

Same here re error out on duplicate Stop, and more logging would be great.

docs/release-notes/release-notes-0.18.1.md (outdated, resolved)
@@ -101,6 +101,9 @@ func (r *htlcReleaseEvent) Less(other queue.PriorityQueueItem) bool {
// created by the daemon. The registry is a thin wrapper around a map in order
// to ensure that all updates/reads are thread safe.
type InvoiceRegistry struct {
	started sync.Once
Collaborator

let's use atomic.Bool instead to save us some indentation.

lnd.go (outdated)
@@ -675,10 +675,28 @@ func Main(cfg *Config, lisCfg ListenerCfg, implCfg *ImplementationCfg,

// With all the relevant chains initialized, we can finally start the
// server itself.
	if err := server.Start(); err != nil {
	errChan := make(chan error)
	go server.Start(errChan)
Collaborator

the alternative would be, seems easier to follow,

	go func() {
		errChan <- server.Start()
	}()

Collaborator Author

yes looks better, changed.
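For context, a hedged sketch of the interruptible-startup pattern being adopted here (the startServer stand-in and the shutdown channel are illustrative, not the exact lnd signal-interceptor API):

package main

import (
	"fmt"
	"time"
)

// startServer stands in for server.Start(); pretend startup takes a while
// and may fail partway through.
func startServer() error {
	time.Sleep(50 * time.Millisecond)
	return nil
}

func main() {
	// Run the start in its own goroutine, as suggested above, so Main can
	// react to a shutdown request even while startup is still in flight.
	errChan := make(chan error, 1)
	go func() {
		errChan <- startServer()
	}()

	shutdown := make(chan struct{}) // e.g. fed by the signal interceptor

	select {
	case err := <-errChan:
		if err != nil {
			fmt.Println("unable to start server:", err)
			return
		}
		fmt.Println("server started")

	case <-shutdown:
		// A shutdown arrived during startup: instead of blocking
		// behind a stuck Start, the cleanup path can now run.
		fmt.Println("shutdown requested during startup")
	}
}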

Make sure that each subsystem only starts and stops once. This makes
sure we don't close e.g. quit channels twice.
This commit does two things. It starts up the server in a way that
it can be interrupted and shut down gracefully.
Moreover it makes sure that subsystems clean themselves up when
they fail to start. This ensures that dependent subsystems can
shut down gracefully as well and the shutdown process does not get stuck.

	s.txPublisher.Stop()

	if err := s.txPublisher.Stop(); err != nil {
Collaborator Author

@yyforyongyu not sure, but I forgot why the txpublisher had no error return? Added one now.

@ziggie1984
Collaborator Author

Let's see whether all the itests pass after the change to error out when a start/stop is called twice.

Collaborator

@yyforyongyu yyforyongyu left a comment

Looking good, just a few nits, and it needs a rebase - I think there's a new subserver added, so we may need to change that here too.

log.Info("ChannelEventStore starting")
log.Info("ChannelEventStore starting...")

if c.started.Swap(true) {
Collaborator

nice, TIL this Swap method.

@@ -198,13 +207,18 @@ func (c *ChannelEventStore) Start() error {
		cancel: cancel,
	})

	log.Info("ChannelEventStore started")
Collaborator

nit: could use defer instead

@@ -213,6 +227,10 @@ func (c *ChannelEventStore) Stop() {
	// Stop the ticker after the goroutine reading from it has exited, to
	// avoid a race.
	c.cfg.FlapCountTicker.Stop()

	log.Infof("ChannelEventStore shutdown complete")
Collaborator

nit: I usually do an info-level log on starting/stopping, then a debug-level log on started/stopped to prevent too many logs - not sure if it really helps since restarts should be infrequent.

	}
	// Start InvoiceExpiryWatcher and prepopulate it with existing
	// active invoices.
	err = i.expiryWatcher.Start(
Collaborator

👍


	if err := s.chanRouter.Start(); err != nil {
		// The authGossiper depends on the chanRouter and therefore
Collaborator

Can we cherry-pick this change into a new commit - it seems more like a potential bug fix.

@@ -704,7 +704,6 @@ func (t *TxPublisher) Stop() error {
log.Info("TxPublisher stopped")

return nil

Collaborator Author

🙏


=======
Collaborator

😢
