Summary
A Discord report in UltraWorkers #claw-code indicates that Claw Code can identify itself as Claude / Claude Sonnet / Claude Opus even when the configured provider/model is non-Anthropic, e.g. xAI Grok.
This appears to be a prompt identity leak rather than a provider-routing failure: the runtime system prompt contains a hardcoded Claude-flavored model-family string, and non-Claude models can parrot that identity when asked "Who are you?".
Discord reference
- Guild: 1452487457085063218
- Channel: #claw-code (1489068687267725353)
- Relevant messages:
  - 1498609299860230216 — user reports that asking "Who are you?" returns Claude even when no Anthropic API is configured.
  - 1498637653380169818 — user provides a suspected root cause and a proposed fix.
  - 1498646141409955900 — maintainer-side triage notes this as a prompt identity leak.
Suspected root cause
The report points to a hardcoded model-family constant with a value like:

```rust
pub const FRONTIER_MODEL_NAME: &str = "Claude Opus 4.6";
```
That value is injected into every session's system prompt as model-family/environment identity text. When the selected backend is Grok or another OpenAI-compatible non-Anthropic provider, the model can read that prompt and answer as if it were Claude.
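To make the leak concrete, here is a minimal hedged sketch of the reported behavior. The function name `build_system_prompt` and the exact prompt wording are assumptions for illustration; only the constant's name and value come from the report.

```rust
// Hypothetical sketch of the reported leak: the prompt builder bakes a
// Claude-family constant into every system prompt, regardless of which
// provider/model is actually selected at runtime.
pub const FRONTIER_MODEL_NAME: &str = "Claude Opus 4.6";

// Assumed helper name for illustration; not the actual Claw Code API.
fn build_system_prompt() -> String {
    // The environment/identity section always names the hardcoded model,
    // even when the active backend is Grok or another non-Anthropic model.
    format!("You are {}, an AI coding assistant.", FRONTIER_MODEL_NAME)
}

fn main() {
    // A Grok-backed session still receives a Claude identity in its prompt.
    let prompt = build_system_prompt();
    assert!(prompt.contains("Claude Opus 4.6"));
    println!("{prompt}");
}
```

Any model reading this prompt has a strong cue to answer "Who are you?" with the hardcoded identity, which matches the observed behavior below.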
Repro shape
- Configure Claw Code with a non-Anthropic provider/model, e.g. xAI / Grok via an OpenAI-compatible route.
- Start a session.
- Ask: "Who are you?"
Observed behavior
The assistant may answer that it is Claude / Claude Sonnet / Claude Opus, even though the active provider/model is not Anthropic.
Expected behavior
The system prompt should not hardcode a Claude identity for all providers.
Acceptable outcomes:
- thread the selected model/provider name into system prompt construction, or
- avoid provider-specific self-identification in the prompt when the selected model is not known, or
- make the identity text generic and runtime-derived.
Suggested implementation direction
The Discord report suggested threading the selected model name into the prompt builder, for example:
- add `model_name: Option<String>` to `SystemPromptBuilder`
- add a builder method such as `with_model(...)`
- make `environment_section()` prefer the runtime-selected model name over `FRONTIER_MODEL_NAME`
- expose a `load_system_prompt_for_model(...)` helper and use it from the CLI path that already knows the selected model
The exact API shape can differ, but the important contract is: non-Anthropic providers must not inherit a hardcoded Claude family identity from the global runtime prompt.
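A minimal sketch of that contract, following the names suggested in the Discord report (`SystemPromptBuilder`, `with_model`, `environment_section`, `load_system_prompt_for_model`): all of these are assumptions about the eventual API, not the actual Claw Code implementation.

```rust
// Hedged sketch of the proposed fix: thread the runtime-selected model
// name into prompt construction so it takes precedence over the
// hardcoded constant. Names follow the Discord suggestion and are
// assumptions, not the real codebase.
pub const FRONTIER_MODEL_NAME: &str = "Claude Opus 4.6";

#[derive(Default)]
struct SystemPromptBuilder {
    model_name: Option<String>,
}

impl SystemPromptBuilder {
    // Builder method: record the runtime-selected model name.
    fn with_model(mut self, name: impl Into<String>) -> Self {
        self.model_name = Some(name.into());
        self
    }

    // Prefer the runtime-selected model; fall back to the constant only
    // when no model was supplied.
    fn environment_section(&self) -> String {
        let name = self.model_name.as_deref().unwrap_or(FRONTIER_MODEL_NAME);
        format!("Model: {name}")
    }
}

// Helper for CLI paths that already know the selected model.
fn load_system_prompt_for_model(model: &str) -> String {
    SystemPromptBuilder::default()
        .with_model(model)
        .environment_section()
}

fn main() {
    // A Grok-backed session gets a Grok identity, not Claude's.
    assert_eq!(load_system_prompt_for_model("grok-4"), "Model: grok-4");
    // The constant only applies when no model name was provided.
    assert_eq!(
        SystemPromptBuilder::default().environment_section(),
        "Model: Claude Opus 4.6"
    );
}
```

The key property is the fallback order in `environment_section()`: the runtime-selected name always wins, so non-Anthropic providers never inherit the Claude identity.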