Hey
I am building an agentic UI and ran across aimock last week. The homepage pitch sold me instantly:
> From zero to fixtures in one command
> Record. Save. Replay.
That is exactly the loop I want. No API keys in CI. No flake. Tests that just work.
I wired it up with Playwright, flipped on `--record`, and pointed the OpenAI client at the mock server. The first run felt like magic.
Then I ran the rest of my suite and looked at the fixtures dir:
```
fixtures/recorded/
  openai-2026-04-30T08-15-22-a3f9c1b2.json
  openai-2026-04-30T08-15-23-4e7d8a91.json
  openai-2026-04-30T08-15-24-9b2c3f04.json
  openai-2026-04-30T08-15-26-1c7e9d83.json
  ... 47 more
```
The culprit (`aimock/src/recorder.ts`, line 335 at d7dfea8):

```ts
const filename = `${providerKey}-${timestamp}-${crypto.randomUUID().slice(0, 8)}.json`;
```
So now I have to figure out which file belongs to which test: open each one, match the `userMessage` back to the test that produced it, rename it, move it into a folder. And the next run does the exact same thing again, because the timestamp is always fresh, so I cannot even tell new fixtures from stale ones without diffing contents.
This feels off. The recorder already knows which test the request belongs to; I am sending it via `X-Test-Id`:
```ts
test.beforeEach(async ({ page }, testInfo) => {
  await page.setExtraHTTPHeaders({
    "x-test-id": testInfo.titlePath.join(" › "),
  });
});
```
`getTestId(req)` already reads it, and the match-count journal is already namespaced by it. The only place it does not flow through is fixture naming.
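To make that concrete, here is roughly the naming I have in mind (a hypothetical sketch; `sanitizeTestId` and `fixtureFileFor` are names I made up for illustration, not aimock's actual API):

```typescript
// Hypothetical sketch: derive a stable fixture path from the X-Test-Id
// header instead of a timestamp plus a random suffix.
// Not aimock's real API — just the idea.

function sanitizeTestId(testId: string): string {
  // Turn "agent chat › handles tool call" into a filesystem-safe slug.
  return testId
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "_")
    .replace(/^_+|_+$/g, "");
}

function fixtureFileFor(testId: string, providerKey: string): string {
  return `fixtures/recorded/${sanitizeTestId(testId)}/${providerKey}.json`;
}
```

Same inputs always produce the same path, so re-recording overwrites the right file instead of minting a new one.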
## Feature request
I believe snapshot-style recording would be a big improvement: the same mental model as Jest snapshots or Playwright's `toMatchSnapshot`. Fixture files live next to the test that owns them, re-running a test extends or refreshes its own file, and other tests are not touched.
Concretely, something like:
```
fixtures/recorded/
  agent_chat_handles_tool_call/
    openai.json
  agent_chat_streams_correctly/
    openai.json
  agent_chat_recovers_from_error/
    openai.json
```
- One directory or file per test and/or provider
- Stable names I can grep for
- Stable diffs to review in PRs
All it would take is letting the config specify the fixture path, e.g. `fixtures/recorded/[xTestId]/[provider].json`, plus a way to merge fixtures on re-record.
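In case it helps, the placeholder resolution could be as simple as this (illustrative only; the `[xTestId]` / `[provider]` placeholder names come from this proposal, and the resolver is made up, not an existing aimock config API):

```typescript
// Hypothetical resolver for the proposed path template. Unknown
// placeholders are left untouched; substituted values are made
// filesystem-safe.

function resolveFixturePath(
  template: string,
  vars: Record<string, string>,
): string {
  return template.replace(/\[(\w+)\]/g, (match: string, key: string) => {
    const value = vars[key];
    if (value === undefined) return match; // leave unknown placeholders as-is
    return value.replace(/[^\w.-]+/g, "_"); // keep path segments safe
  });
}
```

For example, `resolveFixturePath("fixtures/recorded/[xTestId]/[provider].json", { xTestId: "agent chat streams correctly", provider: "openai" })` yields `fixtures/recorded/agent_chat_streams_correctly/openai.json`.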
Happy to try creating a PR if you like.