Is your feature request related to a problem? Please describe.
Currently, Prompts are explicitly user-controlled and Resources are application-driven, which limits an LLM's ability to discover and use them dynamically. This can lead to missed opportunities for automation, context awareness, and intelligent suggestions, especially in complex workflows where the model could benefit from proactively surfacing relevant prompts or resources.
Reference: https://modelcontextprotocol.io/docs/learn/server-concepts#core-server-features
Describe the solution you'd like
I’d like to propose making Prompts and Resources optionally LLM-controlled, similar to how Tools are handled. This would allow models to:
- Discover and suggest relevant prompts based on context.
- Dynamically retrieve and include resources to enrich responses.
- Use parameter completion and metadata to guide users intelligently.
- Operate under user oversight through approval mechanisms, visibility settings, and activity logs.
This hybrid control model would preserve user agency while unlocking more powerful and adaptive workflows; a rough sketch of one possible client-side approach follows.
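As a minimal illustration, the sketch below surfaces server-advertised prompts to the model as tool-style capabilities and gates execution behind a user approval callback. It assumes the official MCP Python SDK (`mcp` package); the `my-mcp-server` command, the `prompt__` naming convention, and the `approve` callback are hypothetical names chosen for illustration, not part of the spec.

```python
# Sketch: surface MCP prompts to the model as tool-style capabilities,
# gated behind user approval. Assumes the official MCP Python SDK;
# "my-mcp-server", the "prompt__" prefix, and approve() are illustrative.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


def prompt_to_tool_schema(prompt) -> dict:
    """Translate a server-advertised prompt into the kind of schema
    clients already hand to the model for Tools."""
    return {
        "name": f"prompt__{prompt.name}",
        "description": prompt.description or "",
        "parameters": {a.name: {"required": bool(a.required)}
                       for a in (prompt.arguments or [])},
    }


async def run(approve) -> None:
    params = StdioServerParameters(command="my-mcp-server")  # hypothetical server
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # 1. Discovery: advertise prompts alongside tools.
            prompts = (await session.list_prompts()).prompts
            schemas = [prompt_to_tool_schema(p) for p in prompts]
            print("Model-visible capabilities:", [s["name"] for s in schemas])

            # 2. Invocation: the model picks one; the user stays in the loop.
            if schemas and approve(schemas[0]["name"], {}):
                name = schemas[0]["name"].removeprefix("prompt__")
                result = await session.get_prompt(name, arguments={})
                print(result.messages)  # ready to splice into the conversation


if __name__ == "__main__":
    # Trivial auto-approve stand-in for a real confirmation UI.
    asyncio.run(run(approve=lambda name, args: True))
```

The same pattern would apply to Resources, with `list_resources()`/`read_resource()` in place of the prompt calls.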
Describe alternatives you've considered
Additional context
By extending LLM control to prompts and resources, we can enable richer interactions like:
- Auto-suggesting prompts in context menus or command palettes.
- Pre-fetching resources based on conversation history or inferred needs (sketched after this list).
- Creating adaptive workflows that evolve with user input and model reasoning.
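As one hedged example of the pre-fetching idea above, the sketch below ranks a server's advertised resources against recent conversation text and reads the top matches ahead of time. The `prefetch_resources` helper and the keyword-overlap scoring are purely illustrative; the only real API surface assumed is `ClientSession.list_resources()` and `read_resource()` from the MCP Python SDK.

```python
# Sketch: naive resource pre-fetching based on recent conversation text.
# Only ClientSession.list_resources()/read_resource() are real SDK calls;
# the keyword-overlap scoring is a deliberately simple placeholder.
from mcp import ClientSession


async def prefetch_resources(session: ClientSession,
                             history: list[str], top_k: int = 2) -> dict:
    resources = (await session.list_resources()).resources
    recent = " ".join(history[-5:]).lower()

    def score(res) -> int:
        # Count how many words from the resource's name/description
        # appear in the recent conversation window.
        text = f"{res.name} {res.description or ''}".lower()
        return sum(1 for word in set(text.split()) if word in recent)

    ranked = sorted(resources, key=score, reverse=True)
    return {str(r.uri): await session.read_resource(r.uri)
            for r in ranked[:top_k] if score(r) > 0}
```

A production client would likely replace the scoring with the model's own judgment (or embeddings) and still route fetched content through the same visibility and approval controls proposed above.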
Similar ask: langchain-ai/langchain-mcp-adapters#62