test: add brainlayer run_tests.sh orchestrator #256
Conversation
@codex review

@greptileai review

@coderabbitai review

✅ Actions performed: Review triggered.
Perfect! Now let me create a summary of my review work:

Bugbot Review Summary

I completed a thorough code review of PR #256.

🔧 Critical Fix Applied

Bug Found: The pytest command included the `-x` flag (exit on first failure).

Impact: If any test in the unit suite failed, pytest would stop immediately, preventing the MCP tool registration tests and Bun test suite from running.

Fix Applied: Removed the `-x` flag so all phases run to completion.

✅ Validation Results
…tion

Bugbot review found that the `-x` flag (exit on first failure) contradicts the script's design goal to 'never short-circuit on the first failing command'. This change ensures all test phases (unit suite, MCP tool registration, bun tests) run regardless of individual failures, with exit codes properly aggregated via bitwise OR.

Co-authored-by: Etan Heyman <EtanHey@users.noreply.github.com>
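The run-everything-then-OR pattern the commit describes can be sketched as follows. This is a hypothetical illustration, not the PR's exact code; `run_step` and the phase labels are stand-ins:

```sh
#!/bin/sh
# Sketch of the orchestrator pattern: run every phase, never short-circuit,
# and fold each phase's exit code into an overall status with bitwise OR.
overall=0

run_step() {
  label=$1; shift
  echo "==> $label"
  "$@"
  rc=$?
  overall=$((overall | rc))  # overall stays 0 only if every phase passed
  return 0                   # a failing phase never aborts the script
}

run_step "unit suite (passes)"          true
run_step "MCP registration (fails)"     false  # overall becomes 1; execution continues
run_step "bun tests (passes)"           true   # still runs despite the earlier failure

echo "overall exit code: $overall"
```

Because the aggregation is a bitwise OR, any single failing phase makes the final status nonzero while every later phase still executes.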
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: ca1989390e
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```sh
if [ "$BRAINLAYER_USE_UV" = "1" ] && command -v uv >/dev/null 2>&1; then
  uv run pytest "$@"
```
Avoid forcing uv-managed pytest by default
When uv is installed, this branch always executes uv run pytest, which switches test execution to uv’s project environment and dependency sync path instead of the caller’s active Python environment. uv run --help documents a default PyPI index and that --active is needed to prefer the active venv, so in offline/pre-provisioned setups this can fail before running any tests (I reproduced this behavior: uv created .venv then failed fetching dependencies). Because BRAINLAYER_USE_UV defaults to 1, the script is brittle by default on machines that happen to have uv installed.
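A less brittle variant, sketched here as a suggestion rather than the PR's actual code, would default the opt-in to off and pass `--active` (the flag the review cites) so uv prefers the caller's environment:

```sh
# Sketch of a hardened run_pytest: uv is used only on explicit opt-in, and
# --active asks uv to prefer the active virtualenv over its managed project
# environment. Defaulting BRAINLAYER_USE_UV to 0 is a suggested change, not
# the script's current behavior (the script defaults it to 1).
run_pytest() {
  if [ "${BRAINLAYER_USE_UV:-0}" = "1" ] && command -v uv >/dev/null 2>&1; then
    uv run --active pytest "$@"
  else
    pytest "$@"
  fi
}
```

With this shape, offline or pre-provisioned machines that happen to have uv installed fall back to the already-active environment unless uv is explicitly requested.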
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 954b3bcadd
```sh
run_step "pytest unit suite" run_pytest "$TEST_ROOT/" -v --tb=short -m "not integration"
run_step \
  "pytest MCP tool registration" \
  run_pytest "$TEST_ROOT/test_think_recall_integration.py::TestMCPToolCount" -v --tb=short
```
Don't bind MCP check to overridable test root
The MCP leg is built from BRAINLAYER_TEST_ROOT, so any scoped run (e.g., pointing BRAINLAYER_TEST_ROOT at a temp/subset test directory) fails with pytest collection error file or directory not found for test_think_recall_integration.py::TestMCPToolCount even when all tests in that root pass. This makes the advertised test-root override unusable for partial runs and causes false failures in local/CI orchestration unless the override directory also mirrors this specific file.
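A defensive guard along the lines the reviewer implies could skip the MCP leg when the target file is absent from the overridden root. This is a sketch; the `run_step`/`run_pytest` stubs below are hypothetical stand-ins for the script's real helpers:

```sh
# Hypothetical stubs standing in for the script's real helpers.
run_step() { label=$1; shift; echo "==> $label"; "$@"; }
run_pytest() { echo "pytest $*"; }

# Run the MCP leg only when its file exists under the (possibly overridden)
# test root, so scoped BRAINLAYER_TEST_ROOT runs skip it instead of failing
# with a pytest collection error.
maybe_run_mcp_leg() {
  mcp_test="$TEST_ROOT/test_think_recall_integration.py"
  if [ -f "$mcp_test" ]; then
    run_step "pytest MCP tool registration" \
      run_pytest "$mcp_test::TestMCPToolCount" -v --tb=short
  else
    echo "skip: $mcp_test not found under TEST_ROOT"
  fi
}
```

This keeps the advertised test-root override usable for partial runs: a temp or subset directory that lacks the file produces a skip, not a false failure.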


Summary

- Adds `scripts/run_tests.sh` as a cross-language gate that runs the repo pytest legs plus any Bun `.test.ts` files under `tests/`.

Test plan

- `ruff check src/ tests/`
- `uv run pytest tests/test_run_tests_script.py`
- `bash -n scripts/run_tests.sh`
- `./scripts/run_tests.sh` ran (✗: `numba` rejecting NumPy 2.4)

Note

Medium Risk: introduces a new repo-wide test gate script and hardcodes a specific integration test target; CI behavior and failure reporting could change if environments lack `bun`/`uv` or tests move.

Overview
- Adds `scripts/run_tests.sh` as a unified test entrypoint that runs the Python test suite (optionally via `uv`) plus any TypeScript `.test.ts` files via `bun`, while continuing through all phases and aggregating failures with a bitwise-OR exit code.
- Adds contract tests (`tests/test_run_tests_script.py`) that stub `pytest`/`bun` to verify exit-code aggregation and that the bun phase is skipped when no TypeScript tests exist.

Reviewed by Cursor Bugbot for commit 954b3bc.
Note

- Adds the `run_tests.sh` orchestrator script with pytest and bun test support.
- Discovers tests under `TEST_ROOT`; if no `.test.ts` files exist, the bun step is skipped rather than failed.
- Uses `uv run` when `BRAINLAYER_USE_UV=1` and `uv` is available, otherwise falls back to direct invocation.
- Contract tests stub `pytest` and `bun` to verify exit code aggregation and bun-skip behavior.

Macroscope summarized 954b3bc.
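The stub-based contract-testing idea summarized above can be sketched like this. It is illustrative only (the real tests live in `tests/test_run_tests_script.py`); the exit codes chosen for the fake tools are arbitrary:

```sh
# Sketch: put fake `pytest` and `bun` executables on PATH so the orchestrator's
# aggregation logic can be exercised without running real test suites.
stub_dir=$(mktemp -d)
printf '#!/bin/sh\nexit 1\n' > "$stub_dir/pytest"  # simulate a failing pytest leg
printf '#!/bin/sh\nexit 0\n' > "$stub_dir/bun"     # simulate a passing bun leg
chmod +x "$stub_dir/pytest" "$stub_dir/bun"
PATH="$stub_dir:$PATH"

pytest; py_rc=$?
bun;    bun_rc=$?
overall=$((py_rc | bun_rc))  # bitwise-OR aggregation, as in the script
echo "overall=$overall"
```

Swapping the stubs' exit codes lets a test assert each aggregation case (both pass, one fails, both fail) without depending on real Python or Bun installations.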