The training agent has no real upstream layer; it satisfies AdCP requests in-process. So the five storyboards adopting `upstream_traffic` in #3962 grade `not_applicable` against it: the training agent doesn't advertise `query_upstream_traffic` in `list_scenarios`, and the runner handles the missing scenario forward-compatibly.
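The forward-compat behavior can be sketched roughly as follows. This is a minimal illustration, not the real runner: the function name and argument shapes are assumptions; only the scenario name `query_upstream_traffic` and the three grades come from the issue.

```python
# Hypothetical sketch of the runner's forward-compat grading.
# grade_upstream_traffic and its signature are illustrative; the real
# runner's API may differ.

def grade_upstream_traffic(advertised_scenarios: set, check_passed) -> str:
    """Grade one agent's upstream_traffic check."""
    # Agents that don't advertise the scenario in list_scenarios are
    # opted out, not failing -- that is the forward-compat rule.
    if "query_upstream_traffic" not in advertised_scenarios:
        return "not_applicable"
    return "pass" if check_passed else "fail"

# The training agent never advertises the scenario, so every
# upstream_traffic check grades not_applicable against it.
print(grade_upstream_traffic({"deliver_basic", "report_spend"}, None))
```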
Two reviewer recommendations clashed on what should happen long-term:
- Protocol expert: implement `query_upstream_traffic` in the training agent with a fake-traffic recorder so the validations grade `pass` instead of `not_applicable`.
- Product expert: keep the training agent honest — making it advertise a scenario it can't truthfully model would corrupt the teaching surface. Reference adopter belongs elsewhere.
Going with product expert's framing. The training agent is a teaching surface; faking traffic to satisfy a contract it doesn't actually exercise teaches the wrong lesson — buyers learn to trust the check, not the adapter. A reference adopter that genuinely round-trips identifiers upstream is the right model.
What needs building
A small public reference adopter that:
- Implements one specialism (probably sales-social or audience-sync, which carry the most identifier-echo signal).
- Has a real outbound HTTP layer that genuinely POSTs to a mock platform endpoint with the buyer's hashed identifiers.
- Implements `query_upstream_traffic` on its `comply_test_controller` against the recorder middleware (adcp-client#1290 / adcp-client-python#347).
- Runs the relevant storyboards with `upstream_traffic` checks grading `pass`.
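The pieces above could be wired together roughly like this. A minimal sketch only: `TrafficRecorder`, `send_upstream`, the payload shape, and the choice of SHA-256 for identifier hashing are all assumptions here; the real recorder middleware is the one tracked in adcp-client#1290 / adcp-client-python#347, and its API may differ.

```python
import hashlib
import json

class TrafficRecorder:
    """In-memory record of outbound upstream requests (hypothetical)."""
    def __init__(self):
        self.requests = []

    def record(self, url: str, body: dict):
        self.requests.append({"url": url, "body": body})

def send_upstream(recorder: TrafficRecorder, url: str, identifiers: list):
    # Hash buyer identifiers before anything leaves the adopter; SHA-256
    # is an assumption, not something the contract specifies here.
    payload = {"ids": [hashlib.sha256(i.encode()).hexdigest() for i in identifiers]}
    recorder.record(url, payload)  # a real adopter would POST to the mock platform here
    return payload

def query_upstream_traffic(recorder: TrafficRecorder) -> dict:
    # The comply_test_controller answers from the recorder, so the
    # storyboard check sees exactly what was sent upstream.
    return {"requests": recorder.requests}

rec = TrafficRecorder()
send_upstream(rec, "https://mock-platform.example/sync", ["buyer-123"])
print(json.dumps(query_upstream_traffic(rec), indent=2))
```

The point of routing `query_upstream_traffic` through the recorder rather than a stub is that the check then reflects traffic that genuinely left the adopter, which is exactly what the training agent cannot truthfully claim.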
This becomes the canonical demonstration of the contract — buyers can run their own runners against it and see real traffic recorded.
Where it might live
- A standalone repo (`adcp-reference-sales-social` or similar)
- A subdirectory in adcontextprotocol/adcp under `examples/`
- The matrix-blind-fixtures harness from adcp-client#1225 (closed) had ~80% of this wiring; could be polished and exposed publicly
Probably the third: most of that wiring already exists and needs polish rather than a rebuild.
Compliance-report rendering caveat
Product expert also flagged that `not_applicable` reads as "didn't bother" to buyers scanning compliance dashboards. That's a UI concern separate from this issue, but related: the report copy needs to honor the opt-in framing, or it will silently coerce adoption through shaming. File separately if not already tracked.