Fix Ollama test timeout on CPU-only CI runners #77

Merged

0xba1a merged 2 commits into bala/add_ollama_local_model on Dec 2, 2025
Conversation
Fix Ollama test timeout: add timeout to requests.post and model warm-up fixture
Co-authored-by: 0xba1a <2942888+0xba1a@users.noreply.github.com>
Copilot (Contributor, Author):

Just as a heads up, I was blocked by some firewall rules while working on your feedback.

Warning: Firewall rules blocked me from connecting to one or more addresses. I tried to connect to the following addresses, but was blocked by firewall rules:

If you need me to access, download, or install something from one of these locations, you can either:
Copilot AI changed the title from "[WIP] Add ollama-local support to Microbots" to "Fix Ollama test timeout on CPU-only CI runners" on Dec 2, 2025
0xba1a approved these changes on Dec 2, 2025
0xba1a added a commit that referenced this pull request on Dec 3, 2025
* Introduce ollama-local support to Microbots
* Add step to clean disk space
* Use small coding model instead of qwen3, which is 20 GiB
* Replace "result" with "thoughts" in mock messages
* Fix member name
* Update test/bot/test_reading_bot.py (Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>)
* Update test/llm/README_OLLAMA_TESTING.md (Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>)
* Update src/microbots/llm/ollama_local.py (Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>)
* Update src/microbots/llm/ollama_local.py (Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>)
* Add ollama-local support to Microbots (#75)
  * Initial plan
  * Address all PR review comments in a single commit (Co-authored-by: 0xba1a <2942888+0xba1a@users.noreply.github.com>)
  Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
  Co-authored-by: 0xba1a <2942888+0xba1a@users.noreply.github.com>
* Fix unit test failure
* Change "result" to "thoughts" in test_llm.py
* Add invalid response unit test to exercise the exception handling code
* Fix test_llm failure and convert it into a unit test
* Update tests to install ollama if it is not available
* Pass model name and port from test code to llm using environment variables
* Use qwen3 model from a constant and update test.yml to squeeze space from the GitHub runner
* Update unit test based on changes made
* Use a smaller model for testing in the GitHub runner
* Disable the clean-up code since a smaller model is used
* Update 2bot test to run faster
* Run only ollama tests for faster check
* Further modifications to run only ollama tests
* Pass json requirement as part of user message
* Add a brief timeout for the model to be ready after being pulled
* Fix Ollama test timeout on CPU-only CI runners (#77)
  * Initial plan
  * Fix Ollama test timeout: add timeout to requests.post and model warm-up fixture (Co-authored-by: 0xba1a <2942888+0xba1a@users.noreply.github.com>)
  Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
  Co-authored-by: 0xba1a <2942888+0xba1a@users.noreply.github.com>
* Install ollama model using Marketplace action
* Increase timeout for local model based tests
* Increase response timeout for local model
* Increase timeout to 10 minutes
* Test mistral model
* Try qwen2.5-coder
* Try mistral with new system heuristics
* Flexible verification of ollama tests
* Handle non-JSON response of local model
* Run ollama_local tests separately
* Fix retry check logic and append llm message before adding user correction message
* Add back rerun in test.yml
* Ignore even the json exception from local llm
* Disable the final assert also
* Remove unnecessary else case in _create_llm function

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: 0xba1a <2942888+0xba1a@users.noreply.github.com>
Problem

CI test test_writing_bot_ollama fails with ConnectionError: Remote end closed connection without response, caused by the Ollama model's cold start on CPU-only runners.

Changes

ollama_local.py: Set timeout=(30, 600) on requests.post() to handle model loading time (30 s connect, 600 s read).
conftest.py: New ollama_model_warmed_up session-scoped fixture pre-loads the model before tests run.
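The timeout change can be sketched roughly as follows. This is a hypothetical illustration, not the repository's actual code: the endpoint, payload shape, model name, and function name query_ollama are all assumptions. The relevant detail is the (connect, read) tuple passed to requests.post.

```python
import requests

# Assumed default Ollama chat endpoint; the real module may read this
# from configuration or an environment variable.
OLLAMA_URL = "http://localhost:11434/api/chat"

def query_ollama(messages, model="qwen2.5-coder"):
    """Send a chat request to a local Ollama server.

    timeout=(30, 600) means: fail within 30 s if the server is
    unreachable (connect timeout), but allow up to 600 s for the
    response body (read timeout), since a cold model can take
    minutes to load and generate on a CPU-only runner.
    """
    payload = {"model": model, "messages": messages, "stream": False}
    response = requests.post(OLLAMA_URL, json=payload, timeout=(30, 600))
    response.raise_for_status()
    return response.json()
```

A single scalar timeout would apply the same limit to both phases; the tuple form lets the client detect a dead server quickly while still tolerating a slow first response.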
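The warm-up fixture could look roughly like this sketch. Only the fixture name ollama_model_warmed_up and the session scope come from the PR description; the endpoint, model name, and helper _warm_up_model are assumptions. Because the fixture is session-scoped, the throwaway request runs once, so the first timed test no longer pays the cold-start cost.

```python
import pytest
import requests

# Assumed Ollama generate endpoint and model; adjust to the real config.
OLLAMA_GENERATE_URL = "http://localhost:11434/api/generate"

def _warm_up_model(model="qwen2.5-coder"):
    # One throwaway prompt forces Ollama to load the model into memory.
    requests.post(
        OLLAMA_GENERATE_URL,
        json={"model": model, "prompt": "hi", "stream": False},
        timeout=(30, 600),
    )

@pytest.fixture(scope="session")
def ollama_model_warmed_up():
    """Pre-load the model once per test session before any test runs."""
    _warm_up_model()
    yield
```

Tests that talk to the local model would then list ollama_model_warmed_up as a parameter to guarantee the warm-up has happened before they start timing.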