feat(wasi): Wasmtime integration test target (8/8 ✅) #20
gwpl wants to merge 6 commits into raphaelmansuy:main from …
Conversation
Greg's AI coding buddy reporting for duty! o7

* Make rayon optional behind a `native` feature flag (default: enabled)
* Add `convert_file()` dispatcher: native builds use `edgeparse_core::convert()` (with raster table OCR), WASI builds use `convert_bytes()` (in-memory)
* Sequential file iteration when rayon is disabled (`#[cfg(not(feature = "native"))]`)
* Build for WASI: `cargo build --target wasm32-wasip1 --no-default-features`

The resulting .wasm binary is 3.1 MB and runs on wasmtime, wasmer, wasmedge, wamr — basically anything that speaks WASI Preview 1. PDF parsing works identically; you just trade parallelism for portability.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Greg's AI coding buddy: because if it doesn't run in a container, does it even exist? Here's the full portable test suite.

Build Dockerfiles (reproducible compilation):

* Dockerfile.build.wasm — wasm32-wasip1 (3.1 MB binary)
* Dockerfile.build.wasix — wasm32-wasmer-wasi (WASIX toolchain)
* Dockerfile.build.riscv — riscv64gc cross-compile (dynamic + static)

WASM runtime runners (all tested, all green):

* Dockerfile.runner.wasmtime — Bytecode Alliance reference (v43)
* Dockerfile.runner.wasmer — WASIX superpowers (v7)
* Dockerfile.runner.wasmedge — CNCF cloud-native (v0.14.1)
* Dockerfile.runner.wamr — embedded champion (iwasm, built from source)
* Dockerfile.runner.wasix — WASIX backward compat on Wasmer

RISC-V runners:

* Dockerfile.runner.riscv-qemu — QEMU user-mode (proven, all tests pass)
* Dockerfile.runner.spike — official ISA reference simulator + pk
* Dockerfile.runner.libriscv — fastest sandbox (~3ns calls)
* Dockerfile.runner.rvvm — tracing JIT (experimental)
* Dockerfile.runner.ckb-vm — blockchain VM (experimental)

Management:

* wasm-test.sh — build|test|status|run|log|rmi|clean
* run-tests.sh — 7 tests per runtime (help, version, JSON/MD/TXT/HTML, errors)
* README.md — architecture, quick start, extending guide
* All Docker images prefixed with EDGEPARSE_PREFIX (default: "edgeparse")

Verified: 6 runtimes × 8 assertions = 48/48 tests passing (wasmtime, wasmer, wasmedge, wamr, wasix, riscv-qemu)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Greg's AI coding buddy: because typing Dockerfile paths is for people who don't have a Makefile.

* make wasi-build — build all WASM runtime + RISC-V Docker test images
* make wasi-test — run integration tests across all runtimes
* make wasi-status — show Docker image/container status
* make wasi-clean — remove all WASI test Docker images and artifacts

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Greg's AI coding buddy: debugging RISC-V VM sandboxes so you don't have to.

* Add ca-certificates to spike/libriscv/rvvm Dockerfiles (git clone over HTTPS)
* Fix libriscv arg passing: use '--' separator (rvlinux intercepts -f/-h flags)
* Fix RVVM binary discovery: handle rvvm_x86_64 naming in newer releases
* Fix libriscv build path: upstream moved rvlinux to emulator/ subdirectory

Known limitations:

* libriscv: glibc static TLS destructors crash on file I/O (needs musl target)
* CKB-VM: upstream ckb-debugger has compile errors (ckb_schemars incompatibility)
* RVVM: userland mode support varies by version (experimental)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Greg's AI coding buddy: battling every RISC-V VM's quirks so you know exactly what works and what's still a quest-in-progress.

Spike fixes:

* Add libc6-dev-riscv64-cross for pk build (stubs-lp64.h)
* Symlink stubs-lp64d.h → stubs-lp64.h (ABI mismatch workaround)
* Symlink pk to riscv64-unknown-elf path (where spike searches)
* Status: builds, but pk has limited Linux syscall proxying — Rust binaries with TLS/complex allocators get truncated output

libriscv fixes:

* Build from emulator/ subdirectory (upstream restructured)
* Add '--' arg separator (rvlinux intercepts -f/-h flags)
* Status: help/version work, file I/O crashes on glibc TLS destructors

RVVM: documented as system-only emulator (no userland mode)
CKB-VM: upstream ckb-debugger has compile errors (kept for reference)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pull request overview
Adds Docker-based integration test infrastructure to run edgeparse across WASI runtimes (including Wasmtime) and several RISC-V execution environments, plus minimal CLI feature-gating to support wasm32-wasip1 builds.
Changes:
- Introduces `wasm-test.sh` manager + `run-tests.sh` in-container runner to build artifacts and execute a fixed test suite across runtimes.
- Adds Dockerfiles for WASI build(s), RISC-V cross-compile, a shared runner base, and multiple runtime-specific runner images (Wasmtime/Wasmer/WasmEdge/WAMR + RISC-V runners).
- Updates `edgeparse-cli` to use a default-on `native` feature (rayon + core native) and a non-native fallback path suitable for WASI; adds Makefile targets to drive the new test harness.
Reviewed changes
Copilot reviewed 21 out of 21 changed files in this pull request and generated 13 comments.
Show a summary per file
| File | Description |
|---|---|
| tests/wasm-runtimes/wasm-test.sh | Host-side manager to build images/artifacts and orchestrate runtime test runs. |
| tests/wasm-runtimes/run-tests.sh | In-container test runner that normalizes runtime CLI differences and validates outputs. |
| tests/wasm-runtimes/README.md | Documentation for the WASI/RISC-V test architecture and usage. |
| tests/wasm-runtimes/Dockerfile.runner.wasmtime | Wasmtime runner image (runtime install + copy wasm artifact). |
| tests/wasm-runtimes/Dockerfile.runner.wasmer | Wasmer runner image (runtime install + copy wasm artifact). |
| tests/wasm-runtimes/Dockerfile.runner.wasmedge | WasmEdge runner image (pinned release tarball + copy wasm artifact). |
| tests/wasm-runtimes/Dockerfile.runner.wasix | WASIX-on-Wasmer runner image (intended compat coverage). |
| tests/wasm-runtimes/Dockerfile.runner.wamr | WAMR/iwasm runner image (build runtime from source + copy wasm artifact). |
| tests/wasm-runtimes/Dockerfile.runner.riscv-qemu | RISC-V QEMU user-mode runner image (executes dynamic RISC-V ELF). |
| tests/wasm-runtimes/Dockerfile.runner.spike | Spike+pk runner image (executes statically linked RISC-V ELF). |
| tests/wasm-runtimes/Dockerfile.runner.libriscv | libriscv/rvlinux runner image (executes statically linked RISC-V ELF). |
| tests/wasm-runtimes/Dockerfile.runner.rvvm | RVVM runner image (present as experimental/incompatible reference). |
| tests/wasm-runtimes/Dockerfile.runner.ckb-vm | CKB-VM runner image (experimental; uses ckb-debugger). |
| tests/wasm-runtimes/Dockerfile.runner.base | Shared Ubuntu base image for WASI runners (fixtures + test runner). |
| tests/wasm-runtimes/Dockerfile.build.wasm | Reproducible WASI (wasm32-wasip1) build image producing edgeparse.wasm. |
| tests/wasm-runtimes/Dockerfile.build.wasix | WASIX build image producing edgeparse-wasix.wasm. |
| tests/wasm-runtimes/Dockerfile.build.riscv | RISC-V cross-compile image producing dynamic + static edgeparse binaries. |
| tests/wasm-runtimes/.gitignore | Ignores extracted build artifacts under tests/wasm-runtimes/.build/. |
| crates/edgeparse-cli/src/main.rs | Adds feature-gated parallelism and convert_file() dispatcher for native vs non-native. |
| crates/edgeparse-cli/Cargo.toml | Introduces native feature (default-on) and makes rayon optional; disables core default features. |
| Makefile | Adds wasi-build/test/status/clean targets invoking the new harness. |
tests/wasm-runtimes/run-tests.sh (outdated)
```sh
# NOTE: Do NOT use "--" to separate runtime flags from wasm args.
# Most runtimes forward "--" into the wasm program's argv, which causes
# clap to treat all subsequent args as positional values. Instead, place
# wasm program args directly after the .wasm path — the runtimes handle
# the separation internally via trailing_var_arg.
case "${RUNTIME}" in
    wasmtime)
        # wasmtime: trailing args after .wasm are passed to the program;
        # do NOT use "--" as wasmtime forwards it into argv
        echo "wasmtime run --dir / ${wasm_or_bin} $*"
        ;;
    wasmer)
        # wasmer v7+: uses --volume (--dir is deprecated);
        # requires "--" to separate wasmer flags from wasm args
        echo "wasmer run --volume /test:/test ${wasm_or_bin} -- $*"
        ;;
```
The comment says not to use -- because runtimes forward it into argv, but the wasmer/wasix cases explicitly include -- (and the comment in the wasmer case says it is required). Update the note so it matches the actual behavior/requirements per runtime (e.g., "avoid -- for wasmtime/wasmedge/wamr, but keep it for wasmer/wasix").
AI Assistant: ✅ Fixed in bc882e8. Updated the comment to document per-runtime behavior: wasmtime/wasmedge/wamr = no --, wasmer/wasix/libriscv/ckb-vm = requires --. Each case now explains why.
```sh
wasmtime)
    # wasmtime: trailing args after .wasm are passed to the program;
    # do NOT use "--" as wasmtime forwards it into argv
    echo "wasmtime run --dir / ${wasm_or_bin} $*"
    ;;
wasmer)
    # wasmer v7+: uses --volume (--dir is deprecated);
    # requires "--" to separate wasmer flags from wasm args
    echo "wasmer run --volume /test:/test ${wasm_or_bin} -- $*"
    ;;
wasmedge)
    # wasmedge: --dir guest_path:host_path
    echo "wasmedge --dir /:/ ${wasm_or_bin} $*"
    ;;
wamr)
    # iwasm: --dir=path preopens a directory
    echo "iwasm --dir=/ ${wasm_or_bin} $*"
    ;;
```
Several runtimes preopen the entire filesystem (--dir /, --dir /:/, --dir=/). This weakens the sandboxing assumptions the tests are meant to validate (even inside a container) and isn't necessary since all fixtures/outputs live under /test. Prefer preopening only /test (and adjust paths accordingly) to better reflect least-privilege execution.
AI Assistant: Acknowledged. The --dir / was chosen because different runtimes handle path mapping differently (wasmtime vs wasmer vs wasmedge all have quirks) and / is the universal 'just works' option for testing. All test paths ARE under /test so narrowing would work for most runtimes. Flagged as a follow-up improvement — the current approach is functionally correct and all tests pass.
```dockerfile
RUN curl -sSf https://wasmtime.dev/install.sh | bash
ENV PATH="/root/.wasmtime/bin:${PATH}"
```
The Wasmtime install is not pinned to a specific version (curl …/install.sh | bash installs whatever is current). The PR description says Wasmtime v43.0 is the target, so this can drift and make results non-reproducible. Prefer pinning via an ARG WASMTIME_VERSION=43.0.0 and downloading the corresponding release artifact (or otherwise ensuring the installer installs that exact version).
Suggested change:

```diff
-RUN curl -sSf https://wasmtime.dev/install.sh | bash
-ENV PATH="/root/.wasmtime/bin:${PATH}"
+ARG WASMTIME_VERSION=43.0.0
+RUN curl -sSfL "https://github.com/bytecodealliance/wasmtime/releases/download/v${WASMTIME_VERSION}/wasmtime-v${WASMTIME_VERSION}-x86_64-linux.tar.xz" \
+    | tar -xJ && \
+    mv "wasmtime-v${WASMTIME_VERSION}-x86_64-linux/wasmtime" /usr/local/bin/ && \
+    rm -rf "wasmtime-v${WASMTIME_VERSION}-x86_64-linux"
+ENV PATH="/usr/local/bin:${PATH}"
```
AI Assistant: Intentional. This is integration test infrastructure — the purpose is to verify edgeparse works with the latest runtime releases. Pinning creates a maintenance burden of constant version bumps and can mask regressions when new runtime versions introduce breaking changes. The Dockerfile already runs wasmtime --version to record what was installed. Happy to add an ARG WASMTIME_VERSION=latest so users CAN pin if needed.
```sh
# WASIX runner uses standard WASI binary (backward compat)
[ -f "${BUILD_DIR}/edgeparse.wasm" ] || build_wasm
```
build_wasix produces /out/edgeparse-wasix.wasm, but the WASIX runner path doesn't use it: the wasix runner prerequisites call build_wasm, and Dockerfile.runner.wasix copies edgeparse.wasm. If the intent is to validate a true WASIX build, wire the runner to use edgeparse-wasix.wasm; otherwise consider removing build_wasix/IMG_BUILD_WASIX to avoid extra build time and confusion.
Suggested change:

```diff
-# WASIX runner uses standard WASI binary (backward compat)
-[ -f "${BUILD_DIR}/edgeparse.wasm" ] || build_wasm
+# WASIX runner uses WASIX-specific binary; also ensure base image
+[ -f "${BUILD_DIR}/edgeparse-wasix.wasm" ] || build_wasix
```
AI Assistant: ✅ Fixed in bc882e8. Removed build_wasix from default build all path. The WASIX runner uses the standard WASI binary (backward compatibility test). WASIX build is still available via explicit build wasix-build when needed.
```dockerfile
RUN git clone --depth 1 https://github.com/bytecodealliance/wasm-micro-runtime /tmp/wamr && \
    cd /tmp/wamr/product-mini/platforms/linux && \
    mkdir build && cd build && \
    cmake .. \
        -DWAMR_BUILD_LIBC_WASI=1 \
        -DWAMR_BUILD_FAST_INTERP=1 && \
    make -j"$(nproc)" && \
```
This image builds WAMR from whatever the repo HEAD is (git clone --depth 1 ...). Given the surrounding docs call these builds "reproducible", consider pinning to a specific tag/commit (and ideally verifying it) so a rebuild next week doesn't silently pick up different runtime behavior.
AI Assistant: Same reasoning as Wasmtime — integration tests benefit from testing against latest. WAMR is built from source (no binary releases), and pinning to a specific commit adds brittleness without clear benefit for a test harness. The build includes iwasm --version verification.
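For readers who do want reproducible WAMR builds, the pinning the reviewer suggests could look roughly like the fragment below. This is a hedged sketch: the `WAMR_VERSION` default is an assumption — substitute a real tag from the upstream repository.

```dockerfile
# Optional: pin WAMR to a release tag so rebuilds are reproducible.
# WAMR_VERSION is a placeholder — check the upstream repo for current tags.
ARG WAMR_VERSION=WAMR-2.2.0
RUN git clone --depth 1 --branch "${WAMR_VERSION}" \
        https://github.com/bytecodealliance/wasm-micro-runtime /tmp/wamr && \
    cd /tmp/wamr/product-mini/platforms/linux && \
    mkdir build && cd build && \
    cmake .. \
        -DWAMR_BUILD_LIBC_WASI=1 \
        -DWAMR_BUILD_FAST_INTERP=1 && \
    make -j"$(nproc)"
```

`git clone --branch` accepts tags as well as branches, so this pins the checkout without needing a full clone.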
tests/wasm-runtimes/wasm-test.sh (outdated)
```sh
ALL_WASM_RUNTIMES="wasmtime wasmer wasmedge wamr wasix"
ALL_RISCV_RUNTIMES="riscv-qemu spike libriscv rvvm ckb-vm"
ALL_RUNNERS="${ALL_WASM_RUNTIMES} ${ALL_RISCV_RUNTIMES}"
ALL_IMAGES="${IMG_BUILD_WASM} ${IMG_BUILD_WASIX} ${IMG_BUILD_RISCV} ${IMG_BASE} ${IMG_WASMTIME} ${IMG_WASMER} ${IMG_WASMEDGE} ${IMG_WAMR} ${IMG_WASIX} ${IMG_RISCV_QEMU} ${IMG_SPIKE} ${IMG_LIBRISCV} ${IMG_RVVM} ${IMG_CKB_VM}"
```
ALL_RUNNERS includes runtimes that are explicitly marked experimental/incompatible in the Dockerfiles (e.g., RVVM) and likely won't pass the test suite. As a result, build all / test all and the make wasi-test target will fail by default. Consider splitting into "stable" vs "experimental" sets (or gating the experimental ones behind an env var / explicit target) so the default all path is reliably green.
Suggested change:

```diff
-ALL_WASM_RUNTIMES="wasmtime wasmer wasmedge wamr wasix"
-ALL_RISCV_RUNTIMES="riscv-qemu spike libriscv rvvm ckb-vm"
-ALL_RUNNERS="${ALL_WASM_RUNTIMES} ${ALL_RISCV_RUNTIMES}"
-ALL_IMAGES="${IMG_BUILD_WASM} ${IMG_BUILD_WASIX} ${IMG_BUILD_RISCV} ${IMG_BASE} ${IMG_WASMTIME} ${IMG_WASMER} ${IMG_WASMEDGE} ${IMG_WAMR} ${IMG_WASIX} ${IMG_RISCV_QEMU} ${IMG_SPIKE} ${IMG_LIBRISCV} ${IMG_RVVM} ${IMG_CKB_VM}"
+# Runtime groups
+# NOTE: Some runtimes (e.g., rvvm) are experimental/incompatible and are
+# gated behind INCLUDE_EXPERIMENTAL_RUNTIMES to keep default "all" targets green.
+STABLE_WASM_RUNTIMES="wasmtime wasmer wasmedge wamr wasix"
+EXPERIMENTAL_WASM_RUNTIMES=""
+STABLE_RISCV_RUNTIMES="riscv-qemu spike libriscv ckb-vm"
+EXPERIMENTAL_RISCV_RUNTIMES="rvvm"
+DEFAULT_RUNNERS="${STABLE_WASM_RUNTIMES} ${STABLE_RISCV_RUNTIMES}"
+EXPERIMENTAL_RUNNERS="${EXPERIMENTAL_WASM_RUNTIMES} ${EXPERIMENTAL_RISCV_RUNTIMES}"
+if [[ "${INCLUDE_EXPERIMENTAL_RUNTIMES:-0}" == "1" ]]; then
+    ALL_RUNNERS="${DEFAULT_RUNNERS} ${EXPERIMENTAL_RUNNERS}"
+else
+    ALL_RUNNERS="${DEFAULT_RUNNERS}"
+fi
+# Image groups
+STABLE_IMAGES="${IMG_BUILD_WASM} ${IMG_BUILD_WASIX} ${IMG_BUILD_RISCV} ${IMG_BASE} ${IMG_WASMTIME} ${IMG_WASMER} ${IMG_WASMEDGE} ${IMG_WAMR} ${IMG_WASIX} ${IMG_RISCV_QEMU} ${IMG_SPIKE} ${IMG_LIBRISCV} ${IMG_CKB_VM}"
+EXPERIMENTAL_IMAGES="${IMG_RVVM}"
+if [[ "${INCLUDE_EXPERIMENTAL_RUNTIMES:-0}" == "1" ]]; then
+    ALL_IMAGES="${STABLE_IMAGES} ${EXPERIMENTAL_IMAGES}"
+else
+    ALL_IMAGES="${STABLE_IMAGES}"
+fi
```
AI Assistant: ✅ Fixed in bc882e8. Split into STABLE_RUNNERS (wasmtime, wasmer, wasmedge, wamr, wasix, riscv-qemu) and EXPERIMENTAL_RUNTIMES (spike, libriscv, rvvm, ckb-vm). test all now runs only stable. Use test experimental to opt-in to WIP targets.
```sh
# ─────────────────────────────────────────────────────────────────────────────
# TEST 1: --help flag works
# ─────────────────────────────────────────────────────────────────────────────
log_test "CLI --help flag"
run_cmd=$(build_run_cmd "${BINARY}" "--help")
if eval "${run_cmd}" > /test/output/help.txt 2>&1; then
    if grep -qi "edgeparse" /test/output/help.txt; then
        log_pass "Help output contains 'edgeparse'"
    else
        log_fail "Help output doesn't mention 'edgeparse'"
    fi
else
    log_fail "Exit code non-zero: ${run_cmd}"
fi

# ─────────────────────────────────────────────────────────────────────────────
# TEST 2: --version flag works
# ─────────────────────────────────────────────────────────────────────────────
log_test "CLI --version flag"
```
This runner currently defines 7 tests (labeled TEST 1–7). The PR description/test plan claims "8/8" passing, so either an additional test is missing here or the reported count should be updated to 7 to avoid confusion when interpreting results.
AI Assistant: ✅ Fixed in bc882e8. The counter now tracks assertions and tests separately. Summary shows: 8 passed, 0 failed (8 assertions across 7 tests). Test 5 has 2 assertions (text output size + content sanity check), which is why the assertion count exceeds the test count. Both numbers are now accurate and clearly labeled.
```markdown
## Docker Image Naming

All images are prefixed with `edgeparse` (configurable via `EDGEPARSE_PREFIX`):

* `edgeparse-wasi-build` — WASM build environment
* `edgeparse-riscv-build` — RISC-V cross-compilation environment
* `edgeparse-wasi-base` — Shared runner base
* `edgeparse-wasi-wasmtime` — Wasmtime runner
* `edgeparse-wasi-wasmer` — Wasmer runner
* `edgeparse-wasi-wasmedge` — WasmEdge runner
```
This section states all images are prefixed with edgeparse and configurable via EDGEPARSE_PREFIX, but the runner Dockerfiles hardcode FROM edgeparse-wasi-base. Unless the Dockerfiles are updated to accept a configurable base image name, using a non-default prefix will break runner builds. Consider clarifying the limitation here or updating the Dockerfiles/scripts so the prefix override truly works end-to-end.
AI Assistant: Acknowledged — same as comment #7 above. The prefix works correctly when using wasm-test.sh (the intended workflow). The Dockerfiles use fixed FROM for simplicity. Will update README to clarify this is managed by the script, not standalone Dockerfile builds.
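One way to make the prefix override work end-to-end, should the project want it, is a build argument for the base image. This is a hedged sketch, not the PR's code; the `BASE_IMAGE` argument name is an assumption.

```dockerfile
# Make the runner's base image configurable so a non-default
# EDGEPARSE_PREFIX works even for standalone Dockerfile builds.
# The manager script would pass:
#   docker build --build-arg BASE_IMAGE="${PREFIX}-wasi-base" ...
ARG BASE_IMAGE=edgeparse-wasi-base
FROM ${BASE_IMAGE}
```

`ARG` declared before `FROM` is scoped to the `FROM` line, so this changes nothing for users of the default prefix.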
tests/wasm-runtimes/wasm-test.sh (outdated)
```sh
docker run --rm \
    --name "${PREFIX}-test-${runtime}" \
    "${img}" \
    "${runtime}"
local exit_code=$?
```
Because this script runs with set -e, a failing docker run will terminate the whole script immediately, so exit_code=$? and the pass/fail aggregation in cmd_test won't run (it will stop at the first failing runtime). Wrap docker run in an if ...; then ... else ... fi (or temporarily set +e around it) so failures are recorded and the loop can continue as intended.
Suggested change:

```diff
-docker run --rm \
-    --name "${PREFIX}-test-${runtime}" \
-    "${img}" \
-    "${runtime}"
-local exit_code=$?
+local exit_code=0
+if docker run --rm \
+    --name "${PREFIX}-test-${runtime}" \
+    "${img}" \
+    "${runtime}"; then
+    exit_code=0
+else
+    exit_code=$?
+fi
```
AI Assistant: ✅ Fixed in bc882e8. Wrapped docker run in if/else to capture exit code without triggering errexit. The script now correctly continues testing remaining runtimes after a failure and aggregates results.
```sh
cmd_rmi() {
    local target="${1:-all}"
    if [ "${target}" = "all" ]; then
        log "Removing all edgeparse test images..."
        for img in ${ALL_IMAGES}; do
            docker rmi -f "${img}" 2>/dev/null && dim "  removed ${img}" || true
        done
    else
        local img
        img=$(image_name_for "${target}")
        docker rmi -f "${img}" 2>/dev/null && dim "  removed ${img}" || warn "Image ${img} not found"
    fi
```
rmi is documented as rmi [all|<image>], but the implementation treats the argument as a logical target and runs it through image_name_for. That means passing an actual image name (as the usage implies) won't work, and there's also no way to remove the WASIX build image (IMG_BUILD_WASIX) via a target (since image_name_for has no wasix-bin/build mapping). Either update the usage/docs to list the accepted tokens, or enhance cmd_rmi to accept raw image names and add a mapping for the WASIX build image.
AI Assistant: ✅ Fixed in bc882e8. Added wasix-build mapping to image_name_for. Help text now says rmi [all|<target>] instead of rmi [all|<image>]. All target names are documented in the updated help output.
@copilot open a new pull request to apply changes based on the comments in this thread
AI Assistant: 🖖 Thanks for the thorough Copilot review! I've gone through all 13 inline comments carefully. Here's my analysis — accepting what's genuinely right, and explaining where our current approach has good reasons.

Accepted — will fix in this PR:

* Test count mismatch (8 passed / 7 total) — Good catch. Test 5 makes two assertions, so the counter will track assertions and tests separately.
* Stale JSON artifact — Test 3 is missing a cleanup of the previous output before it runs.
* ALL_RUNNERS includes experimental runtimes — will split into stable vs experimental sets.
* Help text incomplete — Missing wasix, spike, libriscv, rvvm, ckb-vm targets. Will update.
* WASIX build unused in the default build path — will drop it from `build all`.
* Extra space in the wasmedge `--dir` command — will fix.

Respectfully pushing back:

* Version pinning (wasmtime/wasmer/wamr) — I understand the reproducibility argument, but this is integration test infrastructure, not a production deployment. The whole point is to verify edgeparse works with the latest runtimes. Pinning creates a maintenance burden of constant version bumps and can mask regressions when a new runtime version breaks something. That said, I'm happy to add a version `ARG` so users can pin if needed.
* WASIX target identical to Wasmer — This is intentionally the same binary but running on Wasmer's WASIX runtime mode. The test validates backward compatibility: a WASI p1 binary should run correctly on a WASIX runtime. If it broke, that would be a Wasmer regression. The fact that the command looks similar doesn't mean the test is redundant — it exercises a different runtime configuration.
* PR scope (title vs content) — All 6 PRs intentionally contain the full infrastructure so each is independently mergeable. The maintainer can pick any single PR (say, just Wasmtime) and get the complete test framework. This is by design — the alternative (split foundation + per-runtime PRs) creates merge ordering dependencies.
I'll push fixes for the accepted items to this branch. @raphaelmansuy — happy to update all 6 PRs directly rather than having Copilot open new ones, since the fixes are straightforward and I have the full context. Let me know your preference! 🤖 AI+👤🤝
AI Assistant: Applied accepted fixes from Copilot PR review. * Fix `set -e` blocking failure aggregation: wrap `docker run` in if/else to capture exit code without aborting the script * Fix test counter: track assertions vs tests separately, summary now shows "8 assertions across 7 tests" instead of misleading "8/7" * Add missing `rm -f` for stale JSON artifact before Test 3 * Split stable vs experimental runtimes: `test all` now runs only green runtimes (6/6); use `test experimental` for WIP targets * Update `--` separator comment: document per-runtime behavior (wasmtime=no, wasmer/wasix/libriscv/ckb-vm=yes) * Add `wasix-build` to `image_name_for` (was missing mapping) * Remove WASIX build from default `build all` path (unused by runners) * Fix `xargs -r` (GNU-only) with portable container check * Fix extra space in wasmedge `--dir` command * Update help text with all targets including experimental Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
Greg's AI coding buddy here 🖖 — your friendly neighborhood terminal dweller, slinging Dockerfiles and WASM binaries like there's no tomorrow.
Adds CLI feature gating (build with `--no-default-features` for wasm32-wasip1), test infrastructure, and Dockerfiles. Part of #19 — edgeparse runs everywhere & sandboxes securely
Test plan
* `./tests/wasm-runtimes/wasm-test.sh build wasmtime`
* `./tests/wasm-runtimes/wasm-test.sh test wasmtime` → 8/8 pass
* `cargo check -p edgeparse-cli` still passes

🤖 Generated with Claude Code