
Start up optimizations grab bag #1017

Merged · benthecarman merged 6 commits into master from final-startup on Feb 12, 2024

Conversation

@benthecarman (Collaborator) commented Feb 7, 2024

#1016

A bunch of startup optimizations. In my testing this brings the startup time on a wallet with a lot of channels and a configured fedimint down from 12 seconds to 6 seconds.

  • 1st mainly just adds the logs I have been using everywhere. It also optimizes new wallets: the first node is now created in the background instead of waiting for it to initialize before returning (see the background-task sketch after this list).
  • 2nd backgrounds the saving of our node storage on startup. This save only exists so we persist any changes made to our LSP config while starting up (just migrations); missing them isn't the end of the world, they'll be persisted on the next startup.
  • 3rd makes it so we don't block on retrying spendable outputs; these can happen async and there isn't a reason to wait.
  • 4th fixes two issues with fedimint startup. We would block on setting the preferred guardian; this is now done in the background. We also downloaded the federation info every time even though we only needed it when our db didn't have it (we were already only using the downloaded info in that case), so now we only download when it's not in our db (see the cache-first sketch below). It also does some minor touch-ups, like initializing federation_id once instead of calling federation_info.federation_id(), which hashes every time.
  • 5th immediately starts fetching the JWT for auth. Before, we would only fetch it once we got our first VSS request; starting early saves about 200ms, because otherwise we wait until we have read all of indexed db before making this call. I needed to modify the auth client to only allow one call to retrieve_new_jwt at a time, otherwise the fetch would sometimes not complete by the time the first VSS call came in and we'd request multiple JWTs (see the single-flight sketch below). Otherwise this just adds some timing logs and tiny optimizations to indexed db.
  • 6th syncs the scorer in the background rather than upfront, which made startup ~500ms faster for me. We still read the stored scorer on startup, so we won't be without one until the sync completes.
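
To illustrate points 1–3 and 6, here is a minimal sketch of the fire-and-forget pattern, assuming hypothetical `Wallet`, `create_first_node`, and `sync_scorer` names; `tokio::spawn` stands in for whatever wasm-compatible spawn helper the codebase actually uses:

```rust
use std::sync::Arc;

struct Wallet;

impl Wallet {
    async fn create_first_node(&self) {
        // ... long-running node initialization and persistence ...
    }

    async fn sync_scorer(&self) {
        // ... refresh the scorer; a stored copy was already loaded at startup,
        // so the wallet is never without one while this runs ...
    }
}

async fn startup() -> Arc<Wallet> {
    let wallet = Arc::new(Wallet);

    // Before: startup awaited these directly. Now they are spawned and
    // startup returns immediately, so callers get a usable wallet sooner.
    let w = wallet.clone();
    tokio::spawn(async move { w.create_first_node().await });

    let w = wallet.clone();
    tokio::spawn(async move { w.sync_scorer().await });

    wallet
}

#[tokio::main]
async fn main() {
    let _wallet = startup().await;
}
```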
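Point 4's cache-first lookup could look like the following sketch; `Db`, `FederationId`, `FederationInfo`, and `download_federation_info` are illustrative stand-ins, not the actual fedimint API:

```rust
use anyhow::Result;

#[derive(Clone)]
struct FederationInfo;
struct FederationId;
struct Db;

impl Db {
    fn get_federation_info(&self, _id: &FederationId) -> Result<Option<FederationInfo>> {
        Ok(None) // stub: pretend nothing is cached yet
    }
    fn save_federation_info(&self, _info: &FederationInfo) -> Result<()> {
        Ok(())
    }
}

async fn download_federation_info(_id: &FederationId) -> Result<FederationInfo> {
    Ok(FederationInfo) // stub for the network round trip
}

// Only hit the network when the db doesn't already have the info.
async fn get_or_download(db: &Db, id: &FederationId) -> Result<FederationInfo> {
    if let Some(info) = db.get_federation_info(id)? {
        return Ok(info); // cached: skip the download entirely
    }
    let info = download_federation_info(id).await?;
    db.save_federation_info(&info)?; // persist so the next startup skips it too
    Ok(info)
}
```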
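And point 5's "only one retrieve_new_jwt at a time" behavior can be sketched with an async Mutex held across the whole check-and-fetch, so two concurrent callers can't both see an empty token and both hit the auth server (`fetch_jwt_from_server` is a hypothetical stand-in for the real HTTP call):

```rust
use tokio::sync::Mutex;

struct AuthClient {
    jwt: Mutex<Option<String>>,
}

impl AuthClient {
    async fn retrieve_new_jwt(&self) -> String {
        // Hold the lock across the check and the fetch: a second caller waits
        // here and then reuses the token the first caller stored, instead of
        // requesting a duplicate JWT.
        let mut guard = self.jwt.lock().await;
        if let Some(token) = guard.as_ref() {
            return token.clone();
        }
        let token = fetch_jwt_from_server().await;
        *guard = Some(token.clone());
        token
    }
}

async fn fetch_jwt_from_server() -> String {
    "jwt".to_string() // stub for the real auth round trip
}
```

Kicking off `retrieve_new_jwt` in a spawned task right at startup then lets the first VSS request find the token already warm.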

@benthecarman (Collaborator, Author)

Moved the other startup optimizations here.

@benthecarman added the good first issue and fedimint labels and removed the good first issue label on Feb 7, 2024
@benthecarman force-pushed the final-startup branch 3 times, most recently from 4046e59 to 7b7e445 on February 8, 2024 13:25
Comment on lines -623 to -585
```rust
// If we are stopped, don't sync
if nm.stop.load(Ordering::Relaxed) {
    return;
}
```

@benthecarman (Collaborator, Author)

Removed this because we just immediately do the same check again in the loop below.
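
A rough sketch of the surrounding shape, assuming the loop below begins with the identical check (`stop` stands in for `nm.stop`):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

fn sync_loop(stop: &AtomicBool) {
    loop {
        // First statement of every iteration: the same early exit the removed
        // pre-loop guard provided, which is why that guard was redundant.
        if stop.load(Ordering::Relaxed) {
            return;
        }
        // ... perform one sync pass, then wait for the next interval ...
    }
}
```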

@benthecarman force-pushed the final-startup branch 2 times, most recently from a9afd33 to 2941589 on February 8, 2024 17:24
@TonyGiorgio (Contributor) left a comment

Mostly ACK otherwise, just a comment about syncing chain data before nodes.

mutiny-core/src/lib.rs (outdated; resolved)
mutiny-wasm/src/lib.rs (outdated; resolved)
@TonyGiorgio (Contributor)

[Screenshot 2024-02-09 at 4:15:14 PM]

My on-chain balance went away when I refreshed after swapping into a channel. I refreshed again, waited ~30s, and then it appeared again. I think chain data needs to be cached or something? Not sure why it went away; I never saw this issue before.

@TonyGiorgio (Contributor)

[Screenshot 2024-02-09 at 4:21:02 PM]

Just saw a stream of thousands of duplicate requests getting block height and tip until it panicked, right after a refresh. The browser completely crashed and couldn't refresh.

@benthecarman (Collaborator, Author)

> My on-chain balance went away when I refreshed after swapping into a channel. I refreshed again, waited ~30s, and then it appeared again. I think chain data needs to be cached or something? Not sure why it went away; I never saw this issue before.

This happened to me once when testing the DLC stuff. I think it's a bug in BDK with spending unconfirmed UTXOs. I assume/hope it'll be fixed when we update.

@benthecarman (Collaborator, Author)

> Just saw a stream of thousands of duplicate requests getting block height and tip until it panicked, right after a refresh. The browser completely crashed and couldn't refresh.

Do you have any steps to reproduce? The syncing logic hasn't changed at all; it just starts after we start the nostr stuff instead of before.

@TonyGiorgio (Contributor)

>> Just saw a stream of thousands of duplicate requests getting block height and tip until it panicked, right after a refresh. The browser completely crashed and couldn't refresh.
>
> Do you have any steps to reproduce? The syncing logic hasn't changed at all; it just starts after we start the nostr stuff instead of before.

I just did a lot of refreshes as I went through E2E testing.

@benthecarman (Collaborator, Author)

Seems like that is happening during BDK sync; maybe just move it back to where it was until #1002 is merged.

@benthecarman merged commit 2874b2d into master on Feb 12, 2024
9 checks passed
@benthecarman deleted the final-startup branch on February 12, 2024 14:34