[#1037] feat: dont block all addresses during initial syncing #1038
Conversation
This reverts commit 89ff333.
Pull Request Overview
This PR optimizes the blockchain synchronization process by implementing selective address blocking during initial sync, preventing duplicate price connections, and refining logging. The key improvement is changing from blocking all addresses during initialization to blocking addresses individually while they're being synced.
- Addresses are now blocked individually during sync rather than all at once during initialization
- Price connection process is moved to after initialization to prevent duplicates
- Enhanced logging for better debugging and monitoring of sync progress
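The per-address blocking in the first bullet can be sketched as follows. This is a minimal illustration with hypothetical names (`blockedAddresses`, `syncAddress`, `isBlocked`), not the PR's actual code:

```typescript
// Minimal sketch of per-address blocking during initial sync.
// Instead of one global gate that blocks every address until the whole
// sync finishes, each address is blocked only while its own history is
// being fetched, and unblocked as soon as it is done.
const blockedAddresses = new Set<string>();

async function syncAddress(
  address: string,
  fetchHistory: (a: string) => Promise<void>
): Promise<void> {
  blockedAddresses.add(address); // block only this address
  try {
    await fetchHistory(address); // sync its transaction history
  } finally {
    blockedAddresses.delete(address); // unblock immediately when done
  }
}

function isBlocked(address: string): boolean {
  return blockedAddresses.has(address);
}
```

Incoming websocket messages for an address would then wait only while that specific address is still syncing, instead of waiting for the entire initial sync.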
Reviewed Changes
Copilot reviewed 8 out of 9 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| services/chronikService.ts | Main sync optimization logic, address blocking changes, and initialization flow updates |
| services/addressService.ts | Added batch address syncing function for efficient database operations |
| services/transactionService.ts | Enhanced logging for price connection process |
| tests/unittests/chronikService.test.ts | Updated test to match new Address parameter type |
| docker-compose*.yml | Fixed Supertokens core version to 10.0.0 |
| constants/index.ts | Increased initialization delay from 500ms to 2000ms |
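The batch address syncing mentioned for `services/addressService.ts` is only described at a high level. A rough sketch of the idea, with a hypothetical `Db` interface standing in for the real data layer (none of these names are from the PR), could look like:

```typescript
// Hypothetical sketch of batched address syncing: mark a whole batch of
// addresses in one database round-trip instead of one UPDATE per address.
// `Db`, `updateMany`, and `setAddressesSyncing` are stand-ins, not the
// PR's actual API.
interface Db {
  updateMany(
    table: string,
    where: Record<string, unknown>,
    data: Record<string, unknown>
  ): Promise<number>;
}

async function setAddressesSyncing(
  db: Db,
  addressIds: string[],
  syncing: boolean
): Promise<number> {
  // one call for the whole batch, instead of addressIds.length calls
  return db.updateMany("Address", { id: addressIds }, { syncing });
}
```

The efficiency win is purely in round-trips: N per-row updates become a single batched statement.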
```diff
   return
 }
-while (this.mempoolTxsBeingProcessed > MAX_MEMPOOL_TXS_TO_PROCESS_AT_A_TIME) {
+while (this.mempoolTxsBeingProcessed >= MAX_MEMPOOL_TXS_TO_PROCESS_AT_A_TIME) {
```
Copilot AI commented on Aug 11, 2025:
Changing the condition from `>` to `>=` will cause the system to wait when exactly MAX_MEMPOOL_TXS_TO_PROCESS_AT_A_TIME transactions are being processed, potentially reducing throughput unnecessarily. The original `>` condition was correct, as it allows processing up to the maximum limit.
Suggested change:
```diff
-while (this.mempoolTxsBeingProcessed >= MAX_MEMPOOL_TXS_TO_PROCESS_AT_A_TIME) {
+while (this.mempoolTxsBeingProcessed > MAX_MEMPOOL_TXS_TO_PROCESS_AT_A_TIME) {
```
If this.mempoolTxsBeingProcessed === MAX_MEMPOOL_TXS_TO_PROCESS_AT_A_TIME, it means we already have the max being processed, so it should keep waiting.
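The `>=` guard defended here behaves like a simple concurrency cap. A self-contained sketch (with a small `MAX` and a hypothetical `processMempoolTx` wrapper, not the PR's actual code) shows that at most the maximum number of transactions run at once:

```typescript
// Sketch of the mempool throttle discussed above: with ">=", a new
// transaction waits whenever the number already in flight has reached
// the cap, so at most MAX run concurrently. The sleep-based polling
// mirrors the while-loop style of the original; values are illustrative.
const MAX_MEMPOOL_TXS_TO_PROCESS_AT_A_TIME = 2;
let mempoolTxsBeingProcessed = 0;
let peakConcurrency = 0;

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function processMempoolTx(handle: () => Promise<void>): Promise<void> {
  while (mempoolTxsBeingProcessed >= MAX_MEMPOOL_TXS_TO_PROCESS_AT_A_TIME) {
    await sleep(5); // wait until a slot frees up
  }
  mempoolTxsBeingProcessed++;
  peakConcurrency = Math.max(peakConcurrency, mempoolTxsBeingProcessed);
  try {
    await handle();
  } finally {
    mempoolTxsBeingProcessed--;
  }
}
```

With `>` instead of `>=`, the loop would admit one extra transaction past the cap, which is exactly the off-by-one the review thread settles.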
```ts
this.initializing = false
await connectAllTransactionsToPrices()
```
Copilot AI commented on Aug 11, 2025:
Setting this.initializing = false before calling connectAllTransactionsToPrices() could cause race conditions. If WebSocket messages arrive after initialization is marked complete but before price connections finish, they may not wait for address syncing to complete properly.
Suggested change:
```diff
-this.initializing = false
-await connectAllTransactionsToPrices()
+await connectAllTransactionsToPrices()
+this.initializing = false
```
I don't see a reason to wait for the missed txs to be synced with prices before processing txs that arrived during the sync. The only reason I can see would be too many DB connections, which I don't think is reason enough, but I could be wrong.
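For reference, the ordering Copilot suggests can be sketched like this (stub implementations, not the PR's code): the `initializing` gate only opens after price linking finishes, so any handler that waits on the flag can never observe half-connected prices.

```typescript
// Sketch of the suggested ordering: finish price linking first, then
// clear the gate. Websocket handlers that check `initializing` before
// processing will therefore always see prices fully connected.
let initializing = true;
let pricesConnected = false;

async function connectAllTransactionsToPrices(): Promise<void> {
  await new Promise<void>((r) => setTimeout(r, 10)); // simulate DB work
  pricesConnected = true;
}

async function finishInitialization(): Promise<void> {
  await connectAllTransactionsToPrices();
  initializing = false; // gate opens only after prices are connected
}
```

The maintainer's counterpoint stands either way: whether this ordering matters depends on whether anything gated on `initializing` actually needs the price connections to be complete.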
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Klakurka left a comment:
Works great! It's really fast locally so I want to see how it is in prod.
Just some integration test failures.
Related to #1037
Description
Optimize syncing by avoiding unnecessary address refetching, preventing duplicate price connections, refining logs, and having the initial sync block addresses message processing one-by-one, not all at the same time.
Also pins the Supertokens core to version 10.0.0.
Test plan
In this branch, run `make dev-from-dump` using prod's DB as `dump.sql` in the repo root folder. Check the logs. Make a TX to an address and see it appear and say it is waiting for syncing to unblock it. See when that address is synced, and the new TX message being processed.
While the sync is happening, maybe run something like `select count(*) from Address where syncing = 0;` in the DB to see how many addresses have been cleared so far (and will now process new txs normally).