
System messages in prompts #1448

@michael-roe

Description

In the protocol as currently specified, prompts can only contain user or assistant messages, not system messages. (The SDKs enforce this; at least, the TypeScript SDK does; I haven't checked the others.) However, many LLMs (especially the open-source ones) use system messages to tell the model what kind of task it is performing, e.g. whether it is doing back-translation (it is given the answer and should come up with a suitable question) or normal question answering. So you might want a prompt in MCP to include the system prompt the model needs.
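To make the restriction concrete, here is a simplified sketch of the relevant shape from the MCP spec; the field names follow the published schema, but the content union is trimmed to text for illustration.

```typescript
// Simplified sketch of the MCP PromptMessage shape. Note the Role union
// admits only "user" and "assistant" -- there is no "system" variant, so
// a system turn cannot be expressed in a prompt result.
type Role = "user" | "assistant";

interface PromptMessage {
  role: Role;
  content: { type: "text"; text: string };
}

const messages: PromptMessage[] = [
  { role: "user", content: { type: "text", text: "Given this answer, write a question." } },
  // { role: "system", content: ... }  // rejected: "system" is not assignable to Role
];
```

Because the restriction lives in the type itself, SDK validation can reject a system turn before it ever reaches the client.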

As a matter of security policy, some inference providers may want to disallow system messages; if so, that should be enforced at the LLM server end, not in MCP. (Users can always write their own MCP server that does not perform security checks …)

Feature request: allow MCP prompts to contain system messages.

Without this feature, the workaround is for the MCP client to use some ad hoc method for detecting that a message was really meant as a system message, and to change its role before sending it to the LLM.
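One such ad hoc workaround might look like the following sketch: the server smuggles the system turn through as a user message with an agreed-upon text prefix, and the client rewrites the role before forwarding. The `[[system]]` marker and `promoteSystemMessages` helper are invented conventions for illustration, not anything the MCP spec or SDKs define.

```typescript
// Hypothetical client-side workaround: promote specially-marked user
// messages to system messages before they are sent to the LLM.
type ChatMessage = { role: "system" | "user" | "assistant"; text: string };

// Invented convention shared between this server and this client.
const SYSTEM_MARKER = "[[system]] ";

function promoteSystemMessages(
  msgs: { role: "user" | "assistant"; text: string }[]
): ChatMessage[] {
  return msgs.map((m) =>
    m.role === "user" && m.text.startsWith(SYSTEM_MARKER)
      ? { role: "system", text: m.text.slice(SYSTEM_MARKER.length) }
      : m
  );
}
```

This is exactly the kind of fragile, out-of-band agreement the feature request would make unnecessary.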

Labels

enhancement (New feature or request)