feat(test): replace inline AI mock servers with fixture-based system #13234
Merged: nic-6443 merged 17 commits into apache:master on Apr 22, 2026
Conversation
…-based mocks

Replace inline mock servers with fixture references via the X-AI-Fixture header:
- ai-rate-limiting.t: port 16724 → 127.0.0.1:1980 + openai/chat-model-echo.json
- ai-rate-limiting-consumer-isolation.t: same conversion
- ai-rate-limiting-expression.t: port 16725 → 127.0.0.1:1980 + anthropic fixtures
- ai-aliyun-content-moderation.t: LLM endpoints → 127.0.0.1:1980 + aliyun/chat-with-harmful.json; moderation mock kept inline (plugin forces path=/)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Remove inline mock server (port 6724) and replace with fixture headers:
- Chat tests use the openai/chat-basic.json fixture
- Embeddings tests use the vertex-ai/predictions-embeddings.json fixture
- All endpoints changed from localhost:6724 to 127.0.0.1:1980
- Keep extra_yaml_config and extra_init_worker_by_lua (GCP token mock)
- Remove the mock server auth check log assertion from TEST 6

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Replace inline mock servers (port 6724) with fixture references via the X-AI-Fixture header pointing to t/fixtures/ files. Endpoints changed from localhost:6724 to 127.0.0.1:1980 (built-in test server).

Converted files:
- ai-proxy-anthropic.t
- ai-proxy-azure-openai.t
- ai-proxy-gemini.t
- ai-proxy-openrouter.t
- ai-proxy-multi.openai-compatible.t
- ai-proxy.openai-compatible.t
- ai-proxy-kafka-log.t
- ai-proxy-multi.t

Port 7737 SSE tests left unchanged (pre-existing SSE server).

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Replace inline content_by_lua_block mock servers across all AI test files
with a centralized fixture-based system. Tests now specify mock responses
via the X-AI-Fixture request header, pointing to static fixture files in
t/fixtures/.
Changes:
- Add t/lib/fixture_loader.lua: loads fixture files, auto-detects
Content-Type (.json/.sse), supports {{model}} template substitution
and X-AI-Fixture-Status header for custom status codes
- Add AI endpoints to t/lib/server.lua: v1_chat_completions, v1_messages,
v1_embeddings, v1_responses, delay_v1_chat_completions, random,
status_gpt4, aliyun_moderation
- Create t/fixtures/ with 31 fixture files organized by provider
(openai, anthropic, protocol-conversion, vertex-ai, aliyun, prometheus)
- Convert 22 test files from inline mocks to fixture-based approach
- Add t/plugin/ai-proxy-fixture.t validation test (14 tests)
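The loader's documented behavior (extension-based Content-Type detection, {{model}} substitution, fixture-name validation) can be sketched in Python. The real implementation is Lua in t/lib/fixture_loader.lua; the helper names here (`is_valid_name`, `detect_content_type`, `substitute_model`) are illustrative, not the actual API:

```python
import os
import re

# Extension -> Content-Type map, mirroring the auto-detection described above
CONTENT_TYPES = {".json": "application/json", ".sse": "text/event-stream"}

def is_valid_name(name):
    """Reject fixture names that could escape t/fixtures/ (path traversal)."""
    return not (".." in name or name.startswith("/"))

def detect_content_type(name):
    """Pick the Content-Type from the fixture file extension, or None."""
    return CONTENT_TYPES.get(os.path.splitext(name)[1])

def substitute_model(body, model):
    """Replace {{model}} placeholders with the model from the request body.
    A function replacement keeps characters that are special in replacement
    strings (e.g. backslashes) from being interpreted as escapes."""
    return re.sub(r"\{\{model\}\}", lambda _m: model, body)

def load_fixture(root, name, model=None):
    """Load a fixture relative to root, e.g. 'openai/chat-basic.json'."""
    if not is_valid_name(name):
        raise ValueError("invalid fixture name: " + name)
    content_type = detect_content_type(name)
    if content_type is None:
        raise ValueError("unsupported fixture extension")
    with open(os.path.join(root, name)) as f:
        body = f.read()
    if model is not None:
        body = substitute_model(body, model)
    return body, content_type
```

The path-traversal check is what t/plugin/ai-proxy-fixture.t's validation tests exercise alongside the JSON/SSE and status-code paths.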
Files kept with inline mocks (require dynamic/stateful behavior):
- ai-proxy-multi.balancer.t (5 server blocks on different ports)
- ai-proxy-multi3.t (shared dict counter for health checks)
- ai-aliyun-content-moderation.t (moderation endpoint: dynamic body
inspection)
- ai-proxy.t TEST 20/22/23/24b (header forwarding validation)
- ai-proxy.t TEST 31 (timing-sensitive fragmented SSE)
- Add error handling for fixture_loader.load() in the aliyun moderation mock
- Use a function replacement in gsub to avoid % interpretation in model names
- Add server.lua endpoints: bad_request, internalservererror, check_extra_options, test_params_in_overridden_endpoint
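The gsub fix above guards against replacement-string escape interpretation: in Lua's string.gsub, `%` in the replacement is special, so a model name containing it can corrupt the output. The same pitfall exists in Python's re.sub, where a plain replacement string is scanned for backreferences; passing a function inserts the text verbatim. A minimal illustration (the model name is made up):

```python
import re

body = '{"model": "{{model}}"}'
# Hypothetical model name containing '\g<0>', which a plain re.sub
# replacement string treats as a reference to the whole match
model = r"weird\g<0>name"

# Plain string replacement: '\g<0>' expands to the matched '{{model}}'
corrupted = re.sub(r"\{\{model\}\}", model, body)

# Function replacement: the model name is inserted verbatim
safe = re.sub(r"\{\{model\}\}", lambda _m: model, body)
```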
Pull request overview
This PR refactors AI-related tests to replace per-test inline mock upstream servers with a shared fixture-based mock system served by the existing test upstream (t/lib/server.lua), with responses selected via the X-AI-Fixture request header.
Changes:
- Added a reusable fixture loader (t/lib/fixture_loader.lua) and wired new AI mock endpoints into the shared test server (t/lib/server.lua).
- Converted many AI plugin tests to target the shared mock upstream (127.0.0.1:1980) and select responses using X-AI-Fixture / X-AI-Fixture-Status.
- Introduced a structured fixture repository under t/fixtures/, plus documentation (t/fixtures/README.md) and a dedicated validation test (t/plugin/ai-proxy-fixture.t).
Reviewed changes
Copilot reviewed 56 out of 56 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| t/plugin/prometheus-ai-proxy2.t | Removes inline mock server config and points test routes at shared upstream. |
| t/plugin/prometheus-ai-proxy.t | Switches to shared upstream and uses X-AI-Fixture for mock responses (incl. delay endpoint). |
| t/plugin/ai-request-rewrite2.t | Removes inline upstream mocks and repoints override endpoints to shared upstream. |
| t/plugin/ai-request-rewrite.t | Removes inline upstream mocks and repoints override endpoints to shared upstream. |
| t/plugin/ai-rate-limiting.t | Converts upstream mocking to fixtures and updates requests to include X-AI-Fixture. |
| t/plugin/ai-rate-limiting-expression.t | Converts Anthropic upstream mocking to fixtures (JSON + SSE) via X-AI-Fixture. |
| t/plugin/ai-rate-limiting-consumer-isolation.t | Converts OpenAI upstream mocking to fixtures via X-AI-Fixture. |
| t/plugin/ai-proxy3.t | Switches OpenAI upstream to fixture-based responses (incl. null content fixture). |
| t/plugin/ai-proxy2.t | Switches upstream to fixture-based responses and uses X-AI-Fixture-Status to simulate failures. |
| t/plugin/ai-proxy.t | Switches most upstream mocking to fixtures; keeps some inline mocks for header-forwarding / special SSE cases. |
| t/plugin/ai-proxy.openai-compatible.t | Migrates openai-compatible tests to fixture-backed upstream. |
| t/plugin/ai-proxy-vertex-ai.t | Migrates Vertex AI-related tests to fixture-backed upstream (incl. predictions embeddings fixture). |
| t/plugin/ai-proxy-protocol-conversion.t | Migrates protocol conversion edge-case SSE tests to fixture files. |
| t/plugin/ai-proxy-openrouter.t | Migrates OpenRouter tests to fixture-backed upstream. |
| t/plugin/ai-proxy-multi2.t | Migrates ai-proxy-multi2 tests to fixtures and shared upstream. |
| t/plugin/ai-proxy-multi.t | Migrates ai-proxy-multi tests to fixtures/shared upstream (while keeping some existing SSE coverage). |
| t/plugin/ai-proxy-multi.openai-compatible.t | Migrates multi + openai-compatible tests to fixture-backed upstream. |
| t/plugin/ai-proxy-kafka-log.t | Migrates kafka-log tests to fixture-backed upstream. |
| t/plugin/ai-proxy-gemini.t | Migrates Gemini tests to fixture-backed upstream. |
| t/plugin/ai-proxy-fixture.t | New test file validating fixture loader behavior (JSON/SSE/status/template/path traversal). |
| t/plugin/ai-proxy-azure-openai.t | Migrates Azure OpenAI tests to fixture-backed upstream. |
| t/plugin/ai-proxy-anthropic.t | Migrates Anthropic tests to fixture-backed upstream. |
| t/plugin/ai-aliyun-content-moderation.t | Replaces embedded JSON bodies with fixture loader reads for moderation responses. |
| t/lib/server.lua | Adds shared AI mock endpoints (chat/messages/embeddings/responses/delay) and Aliyun moderation mock logic. |
| t/lib/fixture_loader.lua | New module: loads fixture files, validates fixture names, applies {{model}} substitution, dispatches by header. |
| t/fixtures/vertex-ai/predictions-embeddings.json | Adds Vertex AI embeddings prediction fixture. |
| t/fixtures/protocol-conversion/usage-only-final-chunk.sse | Adds SSE fixture for “usage-only final chunk” conversion edge case. |
| t/fixtures/protocol-conversion/system-prompt-ok.json | Adds JSON fixture used for system-prompt conversion test. |
| t/fixtures/protocol-conversion/openrouter-first-chunk.sse | Adds SSE fixture for first-chunk role+content edge case. |
| t/fixtures/protocol-conversion/openrouter-double-finish.sse | Adds SSE fixture for double-finish_reason edge case. |
| t/fixtures/protocol-conversion/openai-to-anthropic-stream.sse | Adds SSE fixture used for OpenAI→Anthropic stream conversion tests. |
| t/fixtures/protocol-conversion/null-finish-reason.sse | Adds SSE fixture for JSON null finish_reason handling. |
| t/fixtures/protocol-conversion/empty-sse-frames.sse | Adds SSE fixture with empty data frames between events. |
| t/fixtures/protocol-conversion/deepseek-usage-null.sse | Adds DeepSeek-style SSE fixture where usage is null on non-final chunks. |
| t/fixtures/protocol-conversion/anthropic-mismatch.sse | Adds Anthropic-format SSE fixture for mismatch/502 conversion test. |
| t/fixtures/prometheus/chat-basic.json | Adds minimal OpenAI-like JSON response fixture for Prometheus metrics tests. |
| t/fixtures/openai/responses-streaming.sse | Adds Responses API streaming SSE fixture. |
| t/fixtures/openai/responses-basic.json | Adds Responses API JSON fixture. |
| t/fixtures/openai/null-content.json | Adds OpenAI chat response fixture with content: null. |
| t/fixtures/openai/embeddings-list.json | Adds OpenAI embeddings list fixture. |
| t/fixtures/openai/chat-tools.json | Adds OpenAI tool-calling JSON fixture. |
| t/fixtures/openai/chat-tools-streaming.sse | Adds OpenAI tool-calling streaming SSE fixture. |
| t/fixtures/openai/chat-streaming.sse | Adds basic OpenAI chat streaming SSE fixture. |
| t/fixtures/openai/chat-multi-chunk.sse | Adds SSE fixture with multiple events in a single chunk scenario. |
| t/fixtures/openai/chat-model-echo.json | Adds {{model}} templated fixture for model passthrough assertions. |
| t/fixtures/openai/chat-fragmented.sse | Adds fragmented chat SSE fixture. |
| t/fixtures/openai/chat-basic.json | Adds basic OpenAI chat completion fixture (with usage fields). |
| t/fixtures/anthropic/messages-with-cache.json | Adds Anthropic messages fixture including cache token usage. |
| t/fixtures/anthropic/messages-tool-use.json | Adds Anthropic tool-use response fixture. |
| t/fixtures/anthropic/messages-streaming.sse | Adds Anthropic streaming SSE fixture. |
| t/fixtures/anthropic/messages-streaming-with-cache.sse | Adds Anthropic streaming SSE fixture including cache token usage. |
| t/fixtures/anthropic/messages-basic.json | Adds basic Anthropic messages JSON fixture. |
| t/fixtures/aliyun/moderation-safe.json | Adds Aliyun moderation “safe” fixture. |
| t/fixtures/aliyun/moderation-risk.json | Adds Aliyun moderation “risk” fixture. |
| t/fixtures/aliyun/chat-with-harmful.json | Adds upstream chat response fixture containing harmful text for moderation tests. |
| t/fixtures/README.md | Documents fixture structure, supported formats, and how to use fixture headers in tests. |
- Add license header to t/fixtures/README.md
- Add *.sse to .licenserc.yaml paths-ignore (data files, not source code)
- Add an *.sse editorconfig exception (SSE data may contain intentional trailing whitespace)
- Add final newlines to JSON fixture files
- Fix trailing whitespace in ai-proxy2.t
- TEST 4 in ai-proxy-multi.t now uses X-AI-Fixture-Status: 401 to simulate upstream auth rejection (was incorrectly changed to 200)
- Fix more_header -> more_headers typo in ai-rate-limiting.t
- Add openai/unauthorized.json fixture
- server.lua: v1_chat_completions now has an auth-checking + message-echo fallback when the X-AI-Fixture header is absent, restoring behavior needed by the ai-request-rewrite tests (auth validation, message concatenation)
- server.lua: _M.random() returns proper OpenAI JSON format instead of plain text, matching what ai-request-rewrite.t TEST 8 expects
- ai-proxy.t TEST 12: updated the assertion for the new /random response format
- ai-proxy2.t TEST 4,8: fixed the response_body assertion to allow optional spaces after colons in JSON (fixture files are pretty-printed)
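The restored fallback can be sketched roughly as follows, in Python for illustration (the real handler is Lua in t/lib/server.lua; the "Bearer token" credential and the helper names here are assumptions, not the actual values):

```python
def v1_chat_completions(headers, body_json, serve_fixture):
    """Dispatch: serve a fixture when X-AI-Fixture is set, otherwise fall
    back to auth checking plus message echo -- the behavior the
    ai-request-rewrite tests rely on."""
    fixture = headers.get("X-AI-Fixture")
    if fixture:
        return serve_fixture(fixture)

    # Fallback path: validate credentials first (token value is assumed)
    if headers.get("Authorization") != "Bearer token":
        return 401, {"error": "unauthorized"}

    # Echo the concatenated message contents in OpenAI response shape
    text = " ".join(m["content"] for m in body_json.get("messages", []))
    return 200, {
        "choices": [{"message": {"role": "assistant", "content": text}}]
    }
```

Keeping the fixture path first means converted tests are unaffected, while tests that omit the header still get the dynamic auth/echo behavior.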
- Sync ai-proxy-fixture.t from enterprise (eval qr patterns, direct tests)
- Fix ai-proxy.t TEST 12 response_body_like trailing newline
- Fix ai-aliyun-content-moderation.t:
  - Restore the /v1/chat/completions location in the 6724 mock
  - Fix the 'if not content' bug (was 'if not load_err')
  - Restore the prometheus refresh_interval config
- ai-proxy-fixture.t TEST 10: use the plugin config model (gpt-4o) in the assertion, since ai-proxy overwrites the model from plugin options
- ai-proxy.t TEST 14: use an inline SSE mock server for proper chunked delivery instead of the fixture system (which sends all content at once)
- ai-proxy3.t TEST 2: fix the access_log regex to match the full upstream URI including port and path
…nt Host header)

ai-proxy uses a cosocket, so nginx's upstream_host stays at the default $http_host (localhost from test-nginx), not the actual upstream target.
- server.lua: add test-type dispatch (options, header_forwarding) + query param logging
- server.lua: _M.random() returns 'path override works' for path override verification
- ai-proxy.t TEST 4: remove fixture, verify wrong auth gets 401
- ai-proxy.t TEST 11: remove fixture, verify options merging via test-type=options
- ai-proxy2.t TEST 2: remove fixture, verify wrong api_key gets 401
- ai-proxy2.t TEST 8: remove fixture, verify query params via error_log
- ai-proxy-multi.t TEST 11: remove fixture, verify options merging
- ai-proxy-vertex-ai.t TEST 6: use header_forwarding to verify the GCP token reaches the upstream
- Remove 3 unused fixture files
ai-request-rewrite.t TEST 8 calls json.decode() on the response, so _M.random() must return valid JSON. The assertion was updated in both ai-request-rewrite and ai-proxy-multi to match.
The ai-request-rewrite plugin uses extract_response_text(), which expects a choices[0].message.content structure, not a flat {data: ...} object.
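A hypothetical Python mirror of that expectation shows why the mock must return the nested OpenAI shape; with a flat object, extraction yields nothing (function name matches the plugin's, but this sketch is illustrative, not the Lua source):

```python
def extract_response_text(resp):
    """Pull the assistant text from an OpenAI-shaped chat response.
    Returns None when the nested choices[0].message.content path is
    missing -- e.g. for a flat {"data": ...} object."""
    try:
        return resp["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError):
        return None
```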
…sion 5 regressions where static fixtures bypassed upstream request validation:
1. ai-proxy-protocol-conversion.t TEST 10: Anthropic system prompt → OpenAI messages[0].role=system conversion now validated via test-type: system-prompt
2. ai-proxy-protocol-conversion.t TEST 12: Anthropic tools → OpenAI tools[].type=function conversion now validated via test-type: tools
3. ai-proxy.t TEST 26/27: Responses API stream_options injection guard restored; the v1_responses handler now rejects bodies containing stream_options before serving the fixture
4. ai-proxy-vertex-ai.t TEST 3/4: OpenAI embeddings → Vertex instances shape conversion now validated via test-type: vertex-embeddings
5. ai-proxy-multi.t TEST 4 / ai-proxy-multi2.t TEST 2: removed the X-AI-Fixture-Status: 401 hack; let the v1_chat_completions auth check naturally reject wrong credentials
membphis approved these changes Apr 21, 2026
moonming approved these changes Apr 21, 2026
Baoyuantop approved these changes Apr 22, 2026
Description
This replaces inline content_by_lua_block mock servers across all AI test files with a centralized fixture-based system, cutting ~1200 lines of duplicated mock code. Tests now specify mock responses via the X-AI-Fixture request header, pointing to static fixture files in t/fixtures/.

What changed

Infrastructure:
- t/lib/fixture_loader.lua: loads fixture files, auto-detects Content-Type (.json/.sse), supports {{model}} template substitution and X-AI-Fixture-Status for custom status codes
- t/lib/server.lua: added AI endpoints (v1_chat_completions, v1_messages, v1_embeddings, v1_responses, delay_v1_chat_completions, etc.) that dispatch to fixture_loader
- t/fixtures/: 31 fixture files organized by provider (openai, anthropic, protocol-conversion, vertex-ai, aliyun, prometheus)
- 22 test files converted from inline mocks to the fixture-based approach.
Files kept with inline mocks (require dynamic/stateful behavior):
How it works
The fixture loader reads the file and serves it with the correct Content-Type. The {{model}} template gets replaced with the model from the request body at serve time.
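That serve-time flow can be sketched in Python (a minimal sketch, assuming the default status is 200 unless X-AI-Fixture-Status overrides it; the function names are illustrative, not the Lua API):

```python
import json

def serve(headers, raw_body, read_fixture):
    """Serve-time flow: pick the fixture named by X-AI-Fixture, take the
    model from the JSON request body for {{model}} substitution, and let
    X-AI-Fixture-Status override the default 200 status code."""
    body = read_fixture(headers["X-AI-Fixture"])
    try:
        model = json.loads(raw_body).get("model")
    except (ValueError, TypeError, AttributeError):
        model = None
    if model:
        body = body.replace("{{model}}", model)
    status = int(headers.get("X-AI-Fixture-Status", 200))
    return status, body
```

Because the substitution happens per request, one fixture such as openai/chat-model-echo.json can back passthrough assertions for any model name.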