Conversation
Remove Sphinx build infrastructure and docs.yml workflow. Move benchmark scripts into benchmarks/ directory. Replace stale YAML manifests with inline ExtensionConfig in examples. Rewrite CLAUDE.md and README.md to remove application-specific references and clean library comments. Update bootstrap command to [dev,test]. Add language tag to code block. Remove ineffective -v flags from CI workflow example steps.
Replace stale single-example directory with three self-contained examples (torch_share, sealed_worker, bwrap_torch_share), each with its own host.py and extension module. Each example loads its own extension — not the test harness. Add integration tests (test_rpc_coverage.py) covering RPC lifecycle for all three isolation modes. Fixture cleanup uses try/finally for leak safety. Dead code removed.
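The try/finally fixture cleanup mentioned above can be sketched generically. This is a minimal illustration of the pattern, not the test harness's actual API; `managed_resource`, `create`, and `destroy` are hypothetical names:

```python
import contextlib


@contextlib.contextmanager
def managed_resource(create, destroy):
    """Yield a resource and guarantee teardown, even if the test body raises."""
    res = create()
    try:
        yield res
    finally:
        # Runs on success, failure, or generator close: no leaked workers.
        destroy(res)
```

A pytest fixture follows the same shape: acquire before `yield`, release in the `finally` block.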
…to-inject: Fix RPC shutdown sequence to use `socket.SHUT_RDWR` before `close()`, preventing lingering connections. Auto-inject the sealed_worker bootstrap path when `execution_model` is `sealed_worker`. Use `_detect_pyisolate_version()` with error handling instead of a raw `importlib.metadata` call for version injection in sealed_worker dependencies.
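The shutdown ordering described above can be illustrated with plain sockets. This is a minimal sketch of the general pattern, not pyisolate's actual transport code:

```python
import socket


def close_connection(sock: socket.socket) -> None:
    """Half-close both directions before closing the descriptor.

    shutdown(SHUT_RDWR) signals the peer and unblocks any thread stuck in
    recv()/send() on this socket; close() alone only drops the local
    reference and can leave reader threads hanging.
    """
    try:
        sock.shutdown(socket.SHUT_RDWR)
    except OSError:
        pass  # Already disconnected; nothing to signal.
    sock.close()
```

After `close_connection()` runs on one end, a blocked `recv()` on the other end returns `b""` instead of hanging, which is what lets send/receive threads exit cleanly.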
📝 Walkthrough

This PR removes Sphinx documentation infrastructure, restructures example code from multi-extension orchestration patterns to focused torch_share and sealed_worker demonstrations, improves RPC shutdown thread-safety, and bumps the package version to 0.10.1 with Python 3.10+ as the minimum supported version.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Host as Host Process
    participant Ext as Extension Process
    participant Transport as JSONSocketTransport
    participant RPC as AsyncRPC
    Host->>RPC: create AsyncRPC(transport)
    Host->>RPC: run_until_stopped() [spawn _send/_recv threads]
    rect rgb(100, 150, 200, 0.5)
        Note over RPC: Normal RPC Operations
        Host->>RPC: call proxy.ping()
        RPC->>Transport: send JSON-RPC request
        Transport->>Ext: transmit length-prefixed JSON
        Ext->>Transport: send JSON-RPC response
        Transport->>RPC: receive response
        RPC->>Host: return "pong"
    end
    rect rgb(200, 100, 100, 0.5)
        Note over RPC: Explicit Shutdown Sequence
        Host->>RPC: shutdown()
        RPC->>RPC: set _stopping flag
        RPC->>RPC: enqueue None into outbox
        RPC->>Transport: close() [socket.shutdown() + close]
        Transport->>Transport: unblock _send_thread
        Transport->>Transport: unblock _recv_thread
        RPC->>RPC: resolve blocking_future
        RPC->>RPC: join threads (best-effort)
        RPC->>Host: shutdown complete
    end
```
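The "length-prefixed JSON" the diagram refers to can be sketched as follows. The 4-byte big-endian prefix here is an assumption for illustration only, not necessarily pyisolate's actual wire format:

```python
import json
import struct


def encode_frame(message: dict) -> bytes:
    """Serialize a JSON-RPC message with a 4-byte big-endian length prefix."""
    payload = json.dumps(message).encode("utf-8")
    return struct.pack(">I", len(payload)) + payload


def decode_frame(data: bytes) -> tuple:
    """Decode one frame, returning (message, trailing bytes for the next frame)."""
    (length,) = struct.unpack(">I", data[:4])
    payload = data[4 : 4 + length]
    return json.loads(payload.decode("utf-8")), data[4 + length :]
```

The length prefix lets the receiver read exactly one message at a time from a stream socket without relying on pickle or delimiters inside the payload.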
Pull request overview
Release-focused update for pyisolate 0.10.1 that cleans up docs/infra, restores runnable examples with broader integration testing, and improves RPC shutdown behavior to avoid hangs and noisy teardown failures.
Changes:
- Bump version to 0.10.1 and align supported Python to 3.10+.
- Add new self-contained examples + new RPC integration coverage tests.
- Improve RPC/socket shutdown behavior and sealed-worker dependency injection.
Reviewed changes
Copilot reviewed 41 out of 46 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| tox.ini | Drops py3.9 from tox envlist (py3.10+ focus). |
| pyproject.toml | Version bump to 0.10.1 (and requires-python already >=3.10). |
| pyisolate/__init__.py | Updates __version__ to 0.10.1. |
| README.md | Simplifies docs, updates requirements to Python 3.10+, clarifies features/execution models. |
| README_COMFYUI.md | Removes ComfyUI-specific README. |
| CLAUDE.md | Updates repo maintenance guidance and architecture notes for 0.10.1. |
| pyisolate/interfaces.py | Removes ComfyUI-specific wording from public protocol docs. |
| pyisolate/sealed.py | Removes ComfyUI-specific wording from sealed worker wrapper docs. |
| pyisolate/_internal/client.py | Removes ComfyUI-specific wording from bootstrap comment. |
| pyisolate/_internal/host.py | Removes ComfyUI-specific wording from Windows + GPU comments. |
| pyisolate/_internal/sandbox.py | Removes app-specific wording from sandbox comments. |
| pyisolate/_internal/environment.py | Changes sealed-worker dependency injection behavior (install pyisolate by version). |
| pyisolate/_internal/rpc_transports.py | Ensures sockets shutdown before close for cleaner teardown. |
| pyisolate/_internal/rpc_protocol.py | More robust shutdown: unblock send/recv threads and join threads; unblock run_until_stopped on recv failures. |
| tests/test_uv_sealed_integration.py | Marks as network, adds child-venv verification that pyisolate/version are installed. |
| tests/integration_v2/test_rpc_coverage.py | Adds integration tests exercising key AsyncRPC lifecycle paths across modes. |
| example/torch_share/host.py | New standalone host-coupled torch_share example runner. |
| example/torch_share/extension/__init__.py | New torch_share child extension module. |
| example/sealed_worker/host.py | New sealed_worker example runner. |
| example/sealed_worker/extension/__init__.py | New sealed_worker child extension module. |
| example/bwrap_torch_share/host.py | New Linux-only bwrap + torch_share example runner. |
| example/main.py | Removes legacy example entrypoint. |
| example/host.py | Removes legacy multi-extension example host. |
| example/shared/__init__.py | Removes legacy shared example helpers. |
| example/extensions/extension1/manifest.yaml | Removes legacy example extension1 manifest. |
| example/extensions/extension1/__init__.py | Removes legacy example extension1 implementation. |
| example/extensions/extension2/manifest.yaml | Removes legacy example extension2 manifest. |
| example/extensions/extension2/__init__.py | Removes legacy example extension2 implementation. |
| example/extensions/extension3/manifest.yaml | Removes legacy example extension3 manifest. |
| example/extensions/extension3/__init__.py | Removes legacy example extension3 implementation. |
| docs/rpc_protocol.md | Updates transport-layer documentation to match current primary transport. |
| docs/debugging.md | Updates TensorKeeper retention default to 5.0s in docs. |
| docs/edge_cases.md | Updates TensorKeeper retention default to 5.0s in docs. |
| docs/Makefile | Removes Sphinx Makefile. |
| docs/conf.py | Removes Sphinx config. |
| docs/index.rst | Removes Sphinx index. |
| docs/api.rst | Removes Sphinx autodoc page. |
| .github/workflows/pytorch.yml | Updates example run command to new torch_share host script. |
| .github/workflows/windows.yml | Updates example run command to new torch_share host script. |
| .github/workflows/docs.yml | Removes Sphinx docs build/deploy workflow. |
| benchmarks/README.md | Documents new benchmark runner scripts and instructions doc. |
| benchmarks/INSTRUCTIONS.md | Adds benchmark run instructions. |
| benchmarks/run_benchmarks_linux.sh | Adds Linux/macOS benchmark runner. |
| benchmarks/run_benchmarks_windows.bat | Adds Windows CMD benchmark runner. |
| benchmarks/run_benchmarks_windows.ps1 | Adds Windows PowerShell benchmark runner. |
| benchmarks/run_benchmarks_powershell_launcher.bat | Adds wrapper to help run PS script under execution policy constraints. |
```diff
 safe_deps: list[str] = []
 if config.get("execution_model") == "sealed_worker":
-    safe_deps.append(str(Path(__file__).resolve().parents[2]))
+    safe_deps.append(f"pyisolate=={pyisolate_version}")
 for dep in config["dependencies"]:
     validate_dependency(dep)
     safe_deps.append(dep)
```
In sealed_worker mode this auto-injects `pyisolate=={pyisolate_version}`. `_detect_pyisolate_version()` currently falls back to `"0.0.0"` when the distribution metadata isn't available (e.g., running from a source checkout via `sys.path`), which will make dependency installation fail because `pyisolate==0.0.0` can't be satisfied. Consider deriving the version from `pyisolate.__version__` as a fallback and/or skipping injection when the version can't be reliably determined or when the user already provided a pyisolate dependency (editable/path) in `config['dependencies']`.
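One possible shape for the suggested fallback. `detect_version` is a hypothetical helper written for illustration, not pyisolate's actual `_detect_pyisolate_version()`:

```python
from importlib import metadata


def detect_version(dist_name: str, module=None):
    """Prefer installed distribution metadata, fall back to module __version__.

    Returns None when neither source is available, so callers can skip
    injecting a pinned dependency instead of emitting an unsatisfiable one.
    """
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        # Source checkout via sys.path: no dist metadata, but the module
        # itself may still expose a version attribute.
        return getattr(module, "__version__", None)
```

A `None` result would then signal "do not inject `pyisolate==...`" rather than pinning to a bogus `0.0.0`.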
```python
if os.name == "nt":
    child_python = Path(ext.venv_path) / "Scripts" / "python.exe"
else:
    child_python = Path(ext.venv_path) / "bin" / "python3"
pip_show = subprocess.run(
```
The test assumes the child venv has `bin/python3`, but venv/`uv venv` doesn't universally create a `python3` shim (whereas `bin/python` is expected and is what the extension launcher uses). Using `bin/python` (or probing for `python`, `python3`, `python3.X`) would make this assertion more robust across platforms/distros.
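A probing approach along the lines suggested could look like this; `find_venv_python` is an illustrative helper, not part of the test suite:

```python
import os
from pathlib import Path


def find_venv_python(venv_path) -> Path:
    """Return the first interpreter found in a venv, probing common names."""
    bindir = "Scripts" if os.name == "nt" else "bin"
    names = ["python.exe"] if os.name == "nt" else ["python", "python3"]
    for name in names:
        candidate = Path(venv_path) / bindir / name
        if candidate.exists():
            return candidate
    raise FileNotFoundError(f"no python interpreter under {venv_path}/{bindir}")
```

Probing `python` first matches what the extension launcher uses, while still accepting distros that only create a `python3` entry.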
```diff
 [tox]
-envlist = py{39,310,311,312}, lint, type
+envlist = py{310,311,312}, lint, type
 isolated_build = True

 [testenv]
```
`[testenv:docs]` is still defined below, but the Sphinx config (`docs/conf.py`, `docs/index.rst`, etc.) is removed in this PR. As-is, `tox -e docs` will fail. Either remove the docs tox env or update it to whatever the new docs build process is (or explicitly drop docs support in tox).
…l index: Bump version strings to 0.10.1 in `pyproject.toml` and `__init__.py`. The sealed worker integration test uses the pollockjj.github.io/wheels/ index for pyisolate resolution instead of PyPI. Add a `@pytest.mark.network` marker. Replace the hardcoded version assertion with a dynamic lookup.
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pyisolate/_internal/environment.py (1)
235-242: 🧹 Nitpick | 🔵 Trivial

Consider improving error messaging when version detection fails.

If `importlib_metadata.version("pyisolate")` fails (e.g., during development without a proper install), `pyisolate_version` becomes `"0.0.0"`, and the subsequent `pip install pyisolate==0.0.0` will fail with a confusing "no matching distribution" error. This might trip up developers; a stumble that's a bit of a fumble! Consider logging a warning when falling back to `"0.0.0"` to clarify the root cause.

💡 Suggested improvement

```diff
 def _detect_pyisolate_version() -> str:
     try:
         return importlib_metadata.version("pyisolate")
     except Exception:
+        logger.warning(
+            "Could not detect pyisolate version from package metadata. "
+            "sealed_worker bootstrap may fail if pyisolate is not properly installed."
+        )
         return "0.0.0"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyisolate/_internal/environment.py` around lines 235 - 242, _detect_pyisolate_version currently swallows all exceptions and returns "0.0.0", which leads to confusing pip errors; modify _detect_pyisolate_version to catch the exception from importlib_metadata.version("pyisolate") and emit a clear warning (including the exception message) before returning the fallback value used by pyisolate_version (or choose a more explicit fallback like None), referencing the symbols _detect_pyisolate_version, importlib_metadata.version, and pyisolate_version so the log explains that version detection failed (e.g., development/install context) and shows the underlying error to aid debugging.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@CLAUDE.md`:
- Line 51: The README contains a hardcoded "~50 test functions" count that will
become stale; edit CLAUDE.md to remove or generalize that phrase (e.g., replace
"~50 test functions" with "multiple test functions", "dozens of tests", or omit
the count entirely) so the description remains accurate without requiring
updates when tests change; search for the literal "~50 test functions" in
CLAUDE.md and update the sentence accordingly.
- Line 7: The README hardcodes the pyisolate release string "v0.10.1", which
will become stale; update CLAUDE.md by removing the fixed version token and
either (a) omit the version entirely and keep "PyPI: `pyisolate`", or (b)
replace the literal "v0.10.1" with a dynamic reference such as a PyPI/latest
badge or wording like "latest" that links to the PyPI page; update the line
containing the symbol pyisolate and the string "v0.10.1" accordingly.
- Around line 109-117: Update the environment variables table to reflect the
actual, actively-used variables: add entries for PYISOLATE_HOST_SNAPSHOT,
PYISOLATE_MODULE_PATH, PYISOLATE_UV_CACHE_DIR, PYISOLATE_UDS_ADDRESS,
PYISOLATE_EXTENSION, PYISOLATE_IMPORT_TORCH, PYISOLATE_FORCE_CUDA_IPC,
PYISOLATE_PURGE_SENDER_SHM, PYISOLATE_SIGNAL_CLEANUP,
PYISOLATE_CPU_BORROWED_SHM, and PYISOLATE_CPU_TENSOR_FORCE_CLONE_ON_DESERIALIZE
with concise purpose descriptions matching their usage in the codebase (e.g.,
host snapshot, module search paths, UV cache dir, UDS address, extension
loading, torch import control, CUDA IPC forcing, SHM purge control, signal
cleanup, CPU SHM management, and CPU tensor clone-on-deserialize behavior);
remove PYISOLATE_PATH_DEBUG and PYISOLATE_ENFORCE_SANDBOX from the main table
(or move them to a clearly labeled legacy/experimental section) so the
documentation matches actual implementation.
In `@docs/rpc_protocol.md`:
- Around line 221-230: Update the Overview to state that JSONSocketTransport is
the primary/default transport and that QueueTransport is a
legacy/backward-compatibility path only (not a co-equal default); specifically
modify the Overview wording to mirror the JSONSocketTransport/QueueTransport
descriptions so QueueTransport is explicitly labeled "legacy" and not presented
as an alternative default, referencing the transport names JSONSocketTransport
and QueueTransport and ensuring the overview language aligns with the detailed
sections.
In `@example/bwrap_torch_share/host.py`:
- Around line 70-75: The temp dir created by tempfile.mkdtemp (assigned to tmp)
is never removed, causing resource leaks; update the finally block to remove tmp
and its contents (including venv_root and shared_tmp) using shutil.rmtree(tmp,
ignore_errors=True) after restoring any modified environment variables (TMPDIR)
to ensure cleanup; reference the tempfile.mkdtemp call that sets tmp, the
venv_root and shared_tmp paths, and the environment key "TMPDIR" so you remove
the entire temporary tree and avoid leaving orphaned directories.
In `@example/sealed_worker/host.py`:
- Around line 56-58: The temp dir created by
tempfile.mkdtemp(prefix="sealed_worker_example_") (variable tmp) is never
removed; update host.py to import shutil at the top, ensure tmp is defined in
the outer scope (e.g., tmp = None before the try) and in the existing finally
block call shutil.rmtree(tmp, ignore_errors=True) (or conditional
shutil.rmtree(tmp) only if tmp is truthy) to remove the temp directory that
contains venv_root; reference the tmp and venv_root variables so cleanup runs
regardless of earlier failures.
In `@example/torch_share/host.py`:
- Around line 60-67: The temp workspace created with tmp (and derived venv_root
and shared_tmp) is not removed, leaving stale directories; update the code that
creates tmp/venvs/ipc_shared to ensure cleanup in a finally block by removing
tmp (e.g., shutil.rmtree(tmp, ignore_errors=True) or equivalent) after use, and
apply the same cleanup for the similar allocation at the later block (lines
around the second tmp/venv/shared_tmp creation) so both temporary workspaces are
reliably deleted even on exceptions.
In `@README.md`:
- Around line 92-94: The README shows enabling CUDA IPC via share_cuda_ipc=True
but the host initialization logic (in pyisolate/_internal/host.py, inside the
host init routine) reads config.get("n", False), causing the flag to be ignored;
update the host init to read the same config key used by the README (e.g.,
"share_cuda_ipc" or map the README name to the existing internal key) and use
that boolean to enable CUDA IPC, or alternatively change the README to document
the existing key; modify the config access in the host init function and any
related conditionals that enable CUDA IPC so the runtime honors the documented
share_cuda_ipc flag.
- Line 199: The transport feature line in README.md is too Linux-specific;
update the line that currently reads "Bidirectional JSON-RPC over Unix Domain
Sockets" to mention the Windows TCP fallback so it matches the transport docs
(e.g., reference the phrase "Bidirectional JSON-RPC over Unix Domain Sockets"
and change it to include "or TCP on Windows" / "UDS or TCP on Windows"). Locate
and edit that exact feature line in README.md so the wording aligns with the
transport documentation.
- Around line 126-132: The README currently documents an execution model named
"host-coupled" which doesn't match the allowed values in ExtensionConfig; update
the text to use the actual accepted value "subprocess" (the config field is
execution_model on ExtensionConfig) and ensure the two bullets read "subprocess"
(default): Child process shares the host's torch runtime... and "sealed_worker":
Fully isolated child..., so examples and docs match the accepted API strings.
In `@tests/integration_v2/test_rpc_coverage.py`:
- Around line 18-23: The helper _run currently creates and closes an event loop
manually; replace its body to call asyncio.run(coro) so the function delegates
lifecycle management to asyncio.run (Python 3.7+)—update the _run function to
return asyncio.run(coro) and remove manual loop creation/close while keeping the
same function name to preserve call sites.
- Line 77: The dependency entry uses str(pyisolate_root) which is inconsistent
with the editable install format used elsewhere (e.g., f"-e {pyisolate_root}");
update the dependencies list in the test (the dependencies=[...] usage
referencing pyisolate_root) to use the same format as examples by replacing
str(pyisolate_root) with f"-e {pyisolate_root}" (or alternatively add a short
comment explaining why a non-editable install is intentionally used if you
choose to keep str(pyisolate_root)).
In `@tests/test_uv_sealed_integration.py`:
- Around line 90-117: The test should first assert that the pip_show subprocess
succeeded by checking pip_show.returncode and, on failure, raise/pytest.fail
with pip_show.stderr to make failures explicit; then parse pip_show.stdout for
the exact "Version: <value>" line (e.g. find the line starting with "Version:"
or use a regex to capture the version token) and compare that captured version
string for equality with expected_version instead of using a substring
containment check; update references in the test to use pip_show,
expected_version, and child_python when locating the change.
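The strict parse that prompt describes could look like this; `parse_pip_show_version` is a hypothetical helper name, not an existing function in the test suite:

```python
import re


def parse_pip_show_version(stdout: str) -> str:
    """Extract the exact Version: value from `pip show` output.

    Raises instead of silently matching a substring, so a wrong or missing
    version fails the test with a clear message.
    """
    match = re.search(r"^Version:\s*(\S+)\s*$", stdout, re.MULTILINE)
    if match is None:
        raise ValueError(f"no 'Version:' line in pip show output:\n{stdout}")
    return match.group(1)
```

An exact-equality comparison against `expected_version` then replaces the fragile substring containment check.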
---
Outside diff comments:
In `@pyisolate/_internal/environment.py`:
- Around line 235-242: _detect_pyisolate_version currently swallows all
exceptions and returns "0.0.0", which leads to confusing pip errors; modify
_detect_pyisolate_version to catch the exception from
importlib_metadata.version("pyisolate") and emit a clear warning (including the
exception message) before returning the fallback value used by pyisolate_version
(or choose a more explicit fallback like None), referencing the symbols
_detect_pyisolate_version, importlib_metadata.version, and pyisolate_version so
the log explains that version detection failed (e.g., development/install
context) and shows the underlying error to aid debugging.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: d7331ae2-7774-42b5-af09-19dae608a9e1
📒 Files selected for processing (46)
- .github/workflows/docs.yml
- .github/workflows/pytorch.yml
- .github/workflows/windows.yml
- CLAUDE.md
- README.md
- README_COMFYUI.md
- benchmarks/INSTRUCTIONS.md
- benchmarks/README.md
- benchmarks/run_benchmarks_linux.sh
- benchmarks/run_benchmarks_powershell_launcher.bat
- benchmarks/run_benchmarks_windows.bat
- benchmarks/run_benchmarks_windows.ps1
- docs/Makefile
- docs/api.rst
- docs/conf.py
- docs/debugging.md
- docs/edge_cases.md
- docs/index.rst
- docs/rpc_protocol.md
- example/bwrap_torch_share/host.py
- example/extensions/extension1/__init__.py
- example/extensions/extension1/manifest.yaml
- example/extensions/extension2/__init__.py
- example/extensions/extension2/manifest.yaml
- example/extensions/extension3/__init__.py
- example/extensions/extension3/manifest.yaml
- example/host.py
- example/main.py
- example/sealed_worker/extension/__init__.py
- example/sealed_worker/host.py
- example/shared/__init__.py
- example/torch_share/extension/__init__.py
- example/torch_share/host.py
- pyisolate/__init__.py
- pyisolate/_internal/client.py
- pyisolate/_internal/environment.py
- pyisolate/_internal/host.py
- pyisolate/_internal/rpc_protocol.py
- pyisolate/_internal/rpc_transports.py
- pyisolate/_internal/sandbox.py
- pyisolate/interfaces.py
- pyisolate/sealed.py
- pyproject.toml
- tests/integration_v2/test_rpc_coverage.py
- tests/test_uv_sealed_integration.py
- tox.ini
💤 Files with no reviewable changes (15)
- docs/api.rst
- example/extensions/extension1/manifest.yaml
- example/extensions/extension2/manifest.yaml
- example/extensions/extension3/manifest.yaml
- docs/index.rst
- docs/conf.py
- docs/Makefile
- README_COMFYUI.md
- example/main.py
- example/extensions/extension2/__init__.py
- example/extensions/extension3/__init__.py
- example/extensions/extension1/__init__.py
- example/shared/__init__.py
- example/host.py
- .github/workflows/docs.yml
```diff
 ## Identity

-**pyisolate** is a Python library for running extensions across multiple isolated virtual environments with RPC communication. It solves dependency conflicts by isolating extensions in separate venvs while maintaining seamless host-extension communication through AsyncRPC.
+**pyisolate** is a Python library (PyPI: `pyisolate`, v0.10.1) for running extensions in isolated virtual environments with seamless inter-process communication. It provides dependency isolation, zero-copy tensor transfer, and bubblewrap sandboxing for GPU-heavy workloads.
```
Hardcoded version will need updating with each release.
The version string "v0.10.1" is hardcoded and will become stale after the next release. Consider whether this maintenance burden is acceptable or if the version should be referenced dynamically or omitted.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@CLAUDE.md` at line 7, The README hardcodes the pyisolate release string
"v0.10.1", which will become stale; update CLAUDE.md by removing the fixed
version token and either (a) omit the version entirely and keep "PyPI:
`pyisolate`", or (b) replace the literal "v0.10.1" with a dynamic reference such
as a PyPI/latest badge or wording like "latest" that links to the PyPI page;
update the line containing the symbol pyisolate and the string "v0.10.1"
accordingly.
```markdown
| Directory | Purpose |
|-----------|---------|
| `tests/` | Unit and integration tests (pytest, ~50 test functions) |
| `example/` | Working 3-extension demo showing dependency isolation (numpy 1.x vs 2.x) |
```
Hardcoded test count will drift, to wit!
The "~50 test functions" is a hardcoded count that will become stale as tests are added or removed. Consider whether this number adds value or if it should be omitted to reduce maintenance burden.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@CLAUDE.md` at line 51, The README contains a hardcoded "~50 test functions"
count that will become stale; edit CLAUDE.md to remove or generalize that phrase
(e.g., replace "~50 test functions" with "multiple test functions", "dozens of
tests", or omit the count entirely) so the description remains accurate without
requiring updates when tests change; search for the literal "~50 test functions"
in CLAUDE.md and update the sentence accordingly.
```markdown
### RPC Patterns
- Extensions can call host methods recursively (host→extension→host)
- Shared singletons work transparently via RPC proxying
- Context tracking prevents circular calls

| Variable | Purpose |
|----------|---------|
| `PYISOLATE_CHILD` | `"1"` in child processes |
| `PYISOLATE_DEBUG_RPC` | `"1"` for verbose RPC logging |
| `PYISOLATE_TRACE_FILE` | Path for structured perf trace JSONL output |
| `PYISOLATE_ENABLE_CUDA_IPC` | `"1"` to enable CUDA IPC tensor transport |
| `PYISOLATE_PATH_DEBUG` | `"1"` for sys.path logging during child init |
| `PYISOLATE_ENFORCE_SANDBOX` | Force bwrap sandboxing |
```
🧩 Analysis chain

🏁 Script executed:

```bash
# Description: Search for environment variable references in the codebase
for var in PYISOLATE_CHILD PYISOLATE_DEBUG_RPC PYISOLATE_TRACE_FILE PYISOLATE_ENABLE_CUDA_IPC PYISOLATE_PATH_DEBUG PYISOLATE_ENFORCE_SANDBOX; do
  echo "=== Checking $var ==="
  rg -n "$var" pyisolate/ --type py | head -5
done
# Also search for any PYISOLATE_ variables not in the list
echo "=== Searching for undocumented PYISOLATE_ variables ==="
rg -n 'PYISOLATE_[A-Z_]+' pyisolate/ --type py -o | sort -u
```

Repository: Comfy-Org/pyisolate
Length of output: 4908
Update the environment variables table with missing variables and remove unused entries.
The documented variables are incomplete and contain inaccuracies. Two variables (PYISOLATE_PATH_DEBUG and PYISOLATE_ENFORCE_SANDBOX) are documented but not actually used in the codebase, while 11+ environment variables are actively referenced but missing from the table. This misdirects developers—it's not an airtight documentation if there are so many loose ends!
Add these actively-used variables to the table:
- `PYISOLATE_HOST_SNAPSHOT` — Host Python snapshot for initialization
- `PYISOLATE_MODULE_PATH` — Module search paths for child process
- `PYISOLATE_UV_CACHE_DIR` — UV package manager cache directory
- `PYISOLATE_UDS_ADDRESS` — Unix domain socket address for IPC
- `PYISOLATE_EXTENSION` — Extension modules to load in child
- `PYISOLATE_IMPORT_TORCH` — Control torch import behavior
- `PYISOLATE_FORCE_CUDA_IPC` — Force CUDA IPC (distinct from enable flag)
- `PYISOLATE_PURGE_SENDER_SHM` — Shared memory cleanup control
- `PYISOLATE_SIGNAL_CLEANUP` — Signal-based cleanup flag
- `PYISOLATE_CPU_BORROWED_SHM` — CPU tensor shared memory management
- `PYISOLATE_CPU_TENSOR_FORCE_CLONE_ON_DESERIALIZE` — CPU tensor deserialization behavior
Remove PYISOLATE_PATH_DEBUG and PYISOLATE_ENFORCE_SANDBOX if they are not production features or move them to a separate "legacy/experimental" section if applicable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@CLAUDE.md` around lines 109 - 117, Update the environment variables table to
reflect the actual, actively-used variables: add entries for
PYISOLATE_HOST_SNAPSHOT, PYISOLATE_MODULE_PATH, PYISOLATE_UV_CACHE_DIR,
PYISOLATE_UDS_ADDRESS, PYISOLATE_EXTENSION, PYISOLATE_IMPORT_TORCH,
PYISOLATE_FORCE_CUDA_IPC, PYISOLATE_PURGE_SENDER_SHM, PYISOLATE_SIGNAL_CLEANUP,
PYISOLATE_CPU_BORROWED_SHM, and PYISOLATE_CPU_TENSOR_FORCE_CLONE_ON_DESERIALIZE
with concise purpose descriptions matching their usage in the codebase (e.g.,
host snapshot, module search paths, UV cache dir, UDS address, extension
loading, torch import control, CUDA IPC forcing, SHM purge control, signal
cleanup, CPU SHM management, and CPU tensor clone-on-deserialize behavior);
remove PYISOLATE_PATH_DEBUG and PYISOLATE_ENFORCE_SANDBOX from the main table
(or move them to a clearly labeled legacy/experimental section) so the
documentation matches actual implementation.
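Most of these flags follow the `"1"`-means-enabled convention shown in the table. A small helper in that style, written for illustration only (not pyisolate's actual API):

```python
import os


def env_flag(name: str, default: bool = False) -> bool:
    """Read a boolean flag where "1" means enabled, per the table's convention."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value == "1"
```

Centralizing the convention in one helper keeps flags like `PYISOLATE_DEBUG_RPC` and `PYISOLATE_CHILD` consistent and easy to document.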
```diff
+The primary transport is `JSONSocketTransport` — length-prefixed JSON over Unix Domain Sockets (or TCP on Windows). All isolation modes use this transport.

-### QueueTransport
+### JSONSocketTransport (Primary)

-Uses `multiprocessing.Queue` for communication. Used when subprocess isolation is via `multiprocessing.Process`.
+Uses length-prefixed JSON-RPC over raw sockets. No pickle. This is the standard transport for all Linux isolation modes (sandboxed and non-sandboxed) and Windows TCP mode.

-### UDSTransport
+### QueueTransport (Legacy)

-Uses Unix Domain Sockets for communication. Used when subprocess isolation is via `bubblewrap` sandbox.
+Uses `multiprocessing.Queue` for communication. This is a legacy backward-compatibility path and is not used in current isolation modes.
```
Align the Overview wording with this new transport stance.
This section correctly defines JSONSocketTransport as primary, but Line 7 still presents queues as a co-equal default path. Please update the Overview to mark queues as legacy too, so the docs don’t “queue” mixed signals.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/rpc_protocol.md` around lines 221 - 230, Update the Overview to state
that JSONSocketTransport is the primary/default transport and that
QueueTransport is a legacy/backward-compatibility path only (not a co-equal
default); specifically modify the Overview wording to mirror the
JSONSocketTransport/QueueTransport descriptions so QueueTransport is explicitly
labeled "legacy" and not presented as an alternative default, referencing the
transport names JSONSocketTransport and QueueTransport and ensuring the overview
language aligns with the detailed sections.
```python
tmp = tempfile.mkdtemp(prefix="bwrap_torch_share_")
venv_root = os.path.join(tmp, "venvs")
os.makedirs(venv_root, exist_ok=True)
shared_tmp = os.path.join(tmp, "ipc_shared")
os.makedirs(shared_tmp, exist_ok=True)
os.environ["TMPDIR"] = shared_tmp
```
Temporary directory is never cleaned up — potential resource leak.
`tempfile.mkdtemp()` creates a directory at Line 70, but it's never removed in the `finally` block. Each run leaves orphaned directories in the system temp location.
🧹 Proposed fix to clean up temp directory

```diff
+import shutil
+
 async def main() -> int:
     # ... existing code ...
     finally:
         if ext is not None:
             with contextlib.suppress(Exception):
                 ext.stop()
         AdapterRegistry.unregister()
+        with contextlib.suppress(Exception):
+            shutil.rmtree(tmp)
```

Add the import at the top and the cleanup in the finally block.
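The cleanup pattern in isolation, as a minimal sketch independent of the example's actual structure:

```python
import shutil
import tempfile


def with_workspace(run):
    """Create a temp workspace, run a callback, and always remove the tree."""
    tmp = tempfile.mkdtemp(prefix="example_")
    try:
        return run(tmp)
    finally:
        # ignore_errors tolerates files already removed by a child process.
        shutil.rmtree(tmp, ignore_errors=True)
```

Because the `rmtree` sits in `finally`, the tree is removed whether the callback returns normally or raises.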
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@example/bwrap_torch_share/host.py` around lines 70 - 75, The temp dir created
by tempfile.mkdtemp (assigned to tmp) is never removed, causing resource leaks;
update the finally block to remove tmp and its contents (including venv_root and
shared_tmp) using shutil.rmtree(tmp, ignore_errors=True) after restoring any
modified environment variables (TMPDIR) to ensure cleanup; reference the
tempfile.mkdtemp call that sets tmp, the venv_root and shared_tmp paths, and the
environment key "TMPDIR" so you remove the entire temporary tree and avoid
leaving orphaned directories.
````diff
 ### Execution Models

-A complete pyisolate application requires a special `main.py` entry point to handle virtual environment activation:
+pyisolate provides two execution models:

-```python
-# main.py
-if __name__ == "__main__":
-    # When running as the main script, import and run your host application
-    from host import main
-    main()
-else:
-    # When imported by extension processes, ensure venv is properly activated
-    import os
-    import site
-    import sys
-
-    if os.name == "nt":  # Windows-specific venv activation
-        venv = os.environ.get("VIRTUAL_ENV", "")
-        if venv != "":
-            sys.path.insert(0, os.path.join(venv, "Lib", "site-packages"))
-            site.addsitedir(os.path.join(venv, "Lib", "site-packages"))
-```
+- **`host-coupled`** (default): Child process shares the host's torch runtime and can use zero-copy tensor transfer via `/dev/shm` and CUDA IPC.
+- **`sealed_worker`**: Fully isolated child with its own interpreter. No host `sys.path` reconstruction, JSON-RPC tensor transport only.
````
Execution model name likely mismatches accepted API value.
This section uses `host-coupled`, while the ExtensionConfig docs indicate `execution_model` accepts "subprocess" or "sealed_worker" (pyisolate/config.py, Lines 1-80). Please use the actual accepted value here to avoid copy-paste failures.
Suggested README fix
```diff
-- **`host-coupled`** (default): Child process shares the host's torch runtime and can use zero-copy tensor transfer via `/dev/shm` and CUDA IPC.
+- **`subprocess`** (default): Child process shares the host's torch runtime and can use zero-copy tensor transfer via `/dev/shm` and CUDA IPC.
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```markdown
### Execution Models

pyisolate provides two execution models:

- **`subprocess`** (default): Child process shares the host's torch runtime and can use zero-copy tensor transfer via `/dev/shm` and CUDA IPC.
- **`sealed_worker`**: Fully isolated child with its own interpreter. No host `sys.path` reconstruction, JSON-RPC tensor transport only.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 126 - 132, The README currently documents an
execution model named "host-coupled" which doesn't match the allowed values in
ExtensionConfig; update the text to use the actual accepted value "subprocess"
(the config field is execution_model on ExtensionConfig) and ensure the two
bullets read "subprocess" (default): Child process shares the host's torch
runtime... and "sealed_worker": Fully isolated child..., so examples and docs
match the accepted API strings.
```markdown
## Use Cases

### Core
- Automatic virtual environment management
- Bidirectional JSON-RPC over Unix Domain Sockets (no pickle)
```
Transport feature line is too Linux-specific for current behavior.
"Bidirectional JSON-RPC over Unix Domain Sockets" omits the Windows TCP mode, while the transport docs state UDS, or TCP on Windows. A small wording tweak keeps expectations aligned.
Suggested README wording tweak
```diff
-- Bidirectional JSON-RPC over Unix Domain Sockets (no pickle)
+- Bidirectional JSON-RPC over Unix Domain Sockets (or TCP on Windows), no pickle
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```markdown
- Bidirectional JSON-RPC over Unix Domain Sockets (or TCP on Windows), no pickle
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` at line 199, The transport feature line in README.md is too
Linux-specific; update the line that currently reads "Bidirectional JSON-RPC
over Unix Domain Sockets" to mention the Windows TCP fallback so it matches the
transport docs (e.g., reference the phrase "Bidirectional JSON-RPC over Unix
Domain Sockets" and change it to include "or TCP on Windows" / "UDS or TCP on
Windows"). Locate and edit that exact feature line in README.md so the wording
aligns with the transport documentation.
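The UDS-or-TCP split the transport docs describe can be illustrated with a small selector. The function name `pick_transport` and its return values are hypothetical, not pyisolate API; the sketch only shows the platform branch the README wording should reflect:

```python
import os
import socket


def pick_transport(os_name: str = os.name):
    """Return the transport label and socket family a host/child pair would use.

    Unix Domain Sockets on POSIX; a loopback TCP socket on Windows,
    where AF_UNIX support is not reliably available to Python.
    """
    if os_name == "nt":
        return ("tcp", socket.AF_INET)
    return ("uds", socket.AF_UNIX)


print(pick_transport("posix")[0])  # uds
print(pick_transport("nt")[0])     # tcp
```

Keeping the platform check in one place like this is what lets a single README bullet honestly say "UDS, or TCP on Windows".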
```python
def _run(coro):
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(coro)
    finally:
        loop.close()
```
🧹 Nitpick | 🔵 Trivial
Consider using asyncio.run() instead of manual loop management.
The _run() helper manually creates/closes an event loop. Since Python 3.7+, asyncio.run() handles this cleanly and is the recommended approach. However, if there's a specific reason for the manual approach (e.g., compatibility with the test harness), it's fine to keep.
♻️ Simpler alternative using asyncio.run()
```diff
 def _run(coro):
-    loop = asyncio.new_event_loop()
-    try:
-        return loop.run_until_complete(coro)
-    finally:
-        loop.close()
+    return asyncio.run(coro)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def _run(coro):
    return asyncio.run(coro)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/integration_v2/test_rpc_coverage.py` around lines 18 - 23, The helper
_run currently creates and closes an event loop manually; replace its body to
call asyncio.run(coro) so the function delegates lifecycle management to
asyncio.run (Python 3.7+)—update the _run function to return asyncio.run(coro)
and remove manual loop creation/close while keeping the same function name to
preserve call sites.
| name="rpc_cov_sealed", | ||
| module_path=package_path, | ||
| isolated=True, | ||
| dependencies=[str(pyisolate_root)], |
🧹 Nitpick | 🔵 Trivial
Inconsistent dependency format compared to other examples.
Line 77 uses `str(pyisolate_root)` (a bare path), but example/sealed_worker/host.py Line 74 uses `f"-e {pyisolate_root}"` (editable install). This inconsistency could cause different installation behaviors.
If editable mode is intentional for the example but not required for tests, this is fine — but worth documenting or aligning.
♻️ Consider using consistent format
```diff
 config = ExtensionConfig(
     name="rpc_cov_sealed",
     module_path=package_path,
     isolated=True,
-    dependencies=[str(pyisolate_root)],
+    dependencies=[f"-e {pyisolate_root}"],
     apis=[],
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/integration_v2/test_rpc_coverage.py` at line 77, The dependency entry
uses str(pyisolate_root) which is inconsistent with the editable install format
used elsewhere (e.g., f"-e {pyisolate_root}"); update the dependencies list in
the test (the dependencies=[...] usage referencing pyisolate_root) to use the
same format as examples by replacing str(pyisolate_root) with f"-e
{pyisolate_root}" (or alternatively add a short comment explaining why a
non-editable install is intentionally used if you choose to keep
str(pyisolate_root)).
| wheel_index = "https://pollockjj.github.io/wheels/" | ||
| monkeypatch.setenv("UV_EXTRA_INDEX_URL", wheel_index) | ||
| print(f"UV_EXTRA_INDEX_URL={wheel_index}") | ||
| try: | ||
| ext.ensure_process_started() | ||
| except RuntimeError as exc: | ||
| if "bubblewrap" in str(exc).lower(): | ||
| pytest.skip(f"bwrap unavailable on this platform: {exc}") | ||
| raise | ||
| # Verify pyisolate was installed in the child venv from the published index | ||
| if os.name == "nt": | ||
| child_python = Path(ext.venv_path) / "Scripts" / "python.exe" | ||
| else: | ||
| child_python = Path(ext.venv_path) / "bin" / "python3" | ||
| pip_show = subprocess.run( | ||
| [str(child_python), "-m", "pip", "show", "pyisolate"], | ||
| capture_output=True, | ||
| text=True, | ||
| check=False, | ||
| ) | ||
| print(f"child venv pyisolate install:\n{pip_show.stdout}") | ||
| assert "pyisolate" in pip_show.stdout, "pyisolate not installed in child venv" | ||
| from importlib.metadata import version as _meta_version | ||
|
|
||
| expected_version = _meta_version("pyisolate") | ||
| assert expected_version in pip_show.stdout, ( | ||
| f"pyisolate version mismatch: expected {expected_version} in child venv" | ||
| ) |
🧹 Nitpick | 🔵 Trivial
Version verification logic looks good, but consider edge cases.
The verification that pyisolate is installed in the child venv from the published index is a sensible addition. However, the substring check on Line 115 (`expected_version in pip_show.stdout`) could match partial versions (e.g., `0.10.1` would match `0.10.10`). This is unlikely to be an issue in practice, but a more precise check would be safer.
Also, the pip_show.returncode isn't explicitly verified before checking stdout — if pip fails entirely, you'd get a potentially confusing assertion failure rather than a clear "pip show failed" message.
🔍 Proposed refinement for more precise version matching
```diff
 pip_show = subprocess.run(
     [str(child_python), "-m", "pip", "show", "pyisolate"],
     capture_output=True,
     text=True,
     check=False,
 )
 print(f"child venv pyisolate install:\n{pip_show.stdout}")
+assert pip_show.returncode == 0, f"pip show failed: {pip_show.stderr}"
 assert "pyisolate" in pip_show.stdout, "pyisolate not installed in child venv"
 from importlib.metadata import version as _meta_version
 
 expected_version = _meta_version("pyisolate")
-assert expected_version in pip_show.stdout, (
-    f"pyisolate version mismatch: expected {expected_version} in child venv"
-)
+assert f"Version: {expected_version}" in pip_show.stdout, (
+    f"pyisolate version mismatch: expected {expected_version} in child venv"
+)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_uv_sealed_integration.py` around lines 90 - 117, The test should
first assert that the pip_show subprocess succeeded by checking
pip_show.returncode and, on failure, raise/pytest.fail with pip_show.stderr to
make failures explicit; then parse pip_show.stdout for the exact "Version:
<value>" line (e.g. find the line starting with "Version:" or use a regex to
capture the version token) and compare that captured version string for equality
with expected_version instead of using a substring containment check; update
references in the test to use pip_show, expected_version, and child_python when
locating the change.
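The exact-match check the reviewer describes can be done by parsing the `Version:` line of `pip show` output instead of relying on substring search. A small sketch (the helper name and the sample output are hand-written illustrations, not captured from a real pip run):

```python
import re


def parse_pip_show_version(stdout: str):
    """Extract the version token from `pip show` output, or None if absent."""
    match = re.search(r"^Version:\s*(\S+)\s*$", stdout, flags=re.MULTILINE)
    return match.group(1) if match else None


sample = "Name: pyisolate\nVersion: 0.10.1\nSummary: ...\n"
print(parse_pip_show_version(sample))  # 0.10.1

# Substring containment would wrongly accept "0.10.1" inside "0.10.10";
# equality on the parsed token does not.
assert "0.10.1" in "Version: 0.10.10"
assert parse_pip_show_version("Version: 0.10.10\n") != "0.10.1"
```

Comparing the parsed token for equality with `expected_version` gives a clear failure message and closes the prefix-match hole.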
pyisolate 0.10.1
Bugfix release restoring degraded functionality. 4 commits: