Start server before workload and registry ingestion #49
Conversation
Speed up server startup by moving ingestion to background

- Removed blocking registry and workload ingestion from server startup in `cli.py`
- Server now starts immediately after configuration validation and database migration
- Ingestion now happens asynchronously in the polling manager's background thread
- Eliminated the artificial startup delay in `polling_manager.py` that waited for the minimum polling interval
- Background thread starts polling after a 3-second grace period instead of waiting for initial ingestion to complete
- Server is ready to accept connections in seconds instead of minutes
- Ingestion failures no longer prevent server startup; polling will retry automatically

This change transforms startup from a synchronous blocking model to asynchronous background processing, dramatically reducing time-to-ready while maintaining the same functionality through the existing polling mechanism.
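As a sketch of the pattern these bullets describe (hypothetical names, not the actual `cli.py` / `polling_manager.py` code), the server hands ingestion to a daemon thread and returns immediately:

```python
import logging
import threading
import time

log = logging.getLogger(__name__)

GRACE_PERIOD_SECONDS = 3  # assumed value, taken from the PR description


def start_background_ingestion(ingest, interval_seconds):
    """Run `ingest` (a zero-argument callable doing registry +
    workload ingestion) periodically in a daemon thread, so the
    server can begin serving before the first ingestion finishes."""

    def loop():
        time.sleep(GRACE_PERIOD_SECONDS)  # short grace period, then poll
        while True:
            try:
                ingest()
            except Exception:
                # Ingestion failures no longer block startup;
                # the next polling cycle simply retries.
                log.exception("ingestion failed; retrying next cycle")
            time.sleep(interval_seconds)

    thread = threading.Thread(target=loop, name="ingestion-poller", daemon=True)
    thread.start()
    return thread
```

The accept loop then runs on the main thread and is ready as soon as configuration validation and migrations finish, however long ingestion takes.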
PR Review

Summary
This PR successfully transforms server startup from synchronous blocking to asynchronous background processing, dramatically reducing startup time. The changes are clean and well-targeted.

✅ Strengths

🔍 Observations

Critical - Grace Period Verification
Question: is the grace period implemented elsewhere, or should the PR description be updated to reflect the actual 3-second startup check?

Startup Flow

Potential Issue - First Poll Timing
This is fine, but worth noting that the "seconds to ready" claim depends heavily on polling intervals.

🎯 Recommendations
1. Update the PR description (if applicable)
2. Consider an initial delay (optional enhancement, not blocking) - see the sketch below
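For recommendation 2, one option (illustrative only, not the existing `polling_manager.py` API) is to decouple the first poll's delay from the steady-state interval, so time-to-first-ingestion stays bounded even when the polling interval is long:

```python
import threading


def schedule_polling(poll, initial_delay, interval):
    """Fire `poll` once after `initial_delay` seconds, then
    repeat every `interval` seconds, all on daemon timers."""

    def run():
        poll()
        timer = threading.Timer(interval, run)
        timer.daemon = True
        timer.start()

    first = threading.Timer(initial_delay, run)
    first.daemon = True
    first.start()
```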
✅ Security & Performance
VerdictLGTM with minor documentation clarification ✅ The code changes are solid. Just verify the grace period description matches implementation. |
Still seeing the error in the logs:

```
$ thv logs mcp-optimizer --proxy -f
6:03PM INFO Loaded configuration from state for mcp-optimizer
6:03PM INFO Starting tooling server mcp-optimizer...
6:03PM INFO OIDC validation disabled, using local user authentication
6:03PM INFO Using local user authentication for user: aponcedeleonch
6:03PM INFO Setting up streamable-http transport...
6:03PM INFO Deploying workload mcp-optimizer from image mcp-optimizer:latest...
6:03PM INFO Container created: mcp-optimizer
6:03PM INFO Starting streamable-http transport for mcp-optimizer...
6:03PM INFO Setting up transparent proxy to forward from host port 38035 to http://127.0.0.1:45878
6:03PM INFO Applied middleware: mcp-parser
6:03PM INFO Applied middleware: auth
6:03PM INFO No auth info handler provided; skipping /.well-known/ endpoint
6:03PM INFO HTTP transport started for mcp-optimizer on port 38035
6:03PM INFO MCP server mcp-optimizer started successfully
6:03PM INFO Server mcp-optimizer belongs to group optim, updating 0 registered client(s)
6:03PM INFO No target clients found for server mcp-optimizer
6:03PM INFO Running as detached process (PID: 49983)
6:03PM INFO MCP server not initialized yet, skipping health check for mcp-optimizer
6:03PM INFO MCP server not initialized yet, skipping health check for mcp-optimizer
6:04PM INFO MCP server not initialized yet, skipping health check for mcp-optimizer
```

Maybe we could also put in a background process checking for ToolHive? Or at least make it non-blocking?
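A non-blocking watcher along those lines could probe ToolHive from a daemon thread and report readiness via a callback instead of holding up startup. A minimal sketch, assuming a reachable HTTP endpoint; the URL and callback are placeholders:

```python
import threading
import time
import urllib.request


def watch_toolhive(url, on_ready, interval_seconds=2.0):
    """Probe `url` in the background until it answers, then call
    `on_ready()`. The caller is never blocked waiting for ToolHive."""

    def probe():
        while True:
            try:
                with urllib.request.urlopen(url, timeout=1) as resp:
                    if resp.status < 500:
                        on_ready()
                        return
            except OSError:
                pass  # ToolHive not up yet; keep polling
            time.sleep(interval_seconds)

    threading.Thread(target=probe, name="toolhive-watch", daemon=True).start()
```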
Sorry, hit the close button by mistake, it's right next to the Comment one.
#48 (comment) This could be an improvement or not, depending on how you look at it. It doesn't explain why the server never connects.
If the thv proxy error logs are being caused by something else, do we still want to make this change? I think it makes sense to move ingestion outside of server startup regardless. |
This is probably a good change, but I don't think it will fix all the problems. See the comment here. It looks like the initialise endpoint needs to be available in less than 0.1s after the process is started. This may not be the best place to do this. |
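To check that timing claim empirically, a rough harness (hypothetical command and URL, not part of this repo) can measure the gap between process start and the first successful response, for comparison against the ~0.1s budget:

```python
import subprocess
import time
import urllib.request


def time_to_first_response(cmd, url, max_wait=5.0):
    """Start the server with `cmd` and return the seconds elapsed
    until `url` first answers."""
    proc = subprocess.Popen(cmd)
    start = time.monotonic()
    try:
        while True:
            try:
                urllib.request.urlopen(url, timeout=0.05)
                return time.monotonic() - start
            except OSError:
                if time.monotonic() - start > max_wait:
                    raise TimeoutError("server never answered")
                time.sleep(0.01)
    finally:
        proc.terminate()
```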