
[Feature Request] Optimized Schema Generation for MCP Servers #2277

@baenio

Hey there 👋

First of all, thank you for the awesome work you’re doing with ZenStack — it’s been really inspiring to follow, especially your blog post about connecting databases to MCP. That article motivated me to try building an MCP server using ZenStack for access-controlled tool functions.

Context

I’ve been experimenting with integrating ZenStack into an MCP server so that an AI can work against a complex database through controlled tool functions. The idea was to leverage ZenStack’s model-based access control to safely expose data operations such as User_findFirst (roughly the wiring sketched below).
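
For reference, the setup looked roughly like this. A minimal sketch, assuming the official TypeScript MCP SDK and ZenStack’s `enhance` API; the server name and the trimmed-down `where` schema are placeholders, not the real generated schema:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { PrismaClient } from "@prisma/client";
import { enhance } from "@zenstackhq/runtime";
import { z } from "zod";

const prisma = new PrismaClient();

// Access-policy-aware client: rules declared in the ZModel schema are
// enforced on every query made through `db`
const db = enhance(prisma, { user: { id: "current-user-id" } });

const server = new McpServer({ name: "db-tools", version: "1.0.0" });

// Expose one read operation as an MCP tool. The input schema declared here
// is serialized into the tool listing the model sees; plugging in the full
// ZenStack-generated Zod schema is what blows past the context limit.
server.tool(
  "User_findFirst",
  { where: z.record(z.unknown()).optional() }, // placeholder, not the real schema
  async ({ where }) => {
    const user = await db.user.findFirst({ where: where as any });
    return { content: [{ type: "text" as const, text: JSON.stringify(user) }] };
  }
);
```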

However, I ran into a major limitation with our database, which has a very large and complex schema. After adding even a single tool (e.g., User_findFirst), the MCP console threw an error: the request came to roughly 410k tokens, while the available context limit was only about 130k.

Investigation

After some testing, it became clear that the issue stems from the schema size — both the Zod and JSON schemas generated by ZenStack are simply too large for MCP’s context limits.
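
One quick way to see the problem is to serialize a generated input schema and estimate its token footprint. A sketch, not the exact reproduction; the import path and the `UserInputSchema.findFirst` name depend on how the Zod plugin is configured in your project:

```ts
import { zodToJsonSchema } from "zod-to-json-schema";
// Assumed import -- the actual path/name depends on your zod plugin output
import { UserInputSchema } from "@zenstackhq/runtime/zod/input";

const jsonSchema = zodToJsonSchema(UserInputSchema.findFirst);
const payload = JSON.stringify(jsonSchema, null, 2);

// Rough heuristic: ~4 characters per token for JSON-ish text
console.log(`${payload.length} chars, ~${Math.round(payload.length / 4)} tokens`);
```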

Manually defining smaller or simplified schemas for every tool would be technically possible but not practical or scalable for large projects. This makes it currently infeasible to use ZenStack with MCP for any non-trivial database schema.
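
For a single tool, the manual route looks manageable; hypothetically, something like the hand-trimmed schema below. Repeated across dozens of models and every supported operation, it stops being maintainable (all field names here are illustrative):

```ts
import { z } from "zod";

// Hand-trimmed replacement for the generated User.findFirst input schema.
// In practice every tool needs its own hand-maintained copy like this,
// which silently drifts as the real database schema evolves.
const userFindFirstInput = z.object({
  where: z
    .object({
      id: z.string().optional(),
      email: z.string().optional(),
    })
    .optional(),
  select: z
    .object({
      id: z.boolean().optional(),
      email: z.boolean().optional(),
      posts: z.boolean().optional(), // relation reduced to a flag, no nesting
    })
    .optional(),
});
```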

Feature Request / Discussion

Would it be possible to introduce a way to generate "optimized" or "compressed" schemas specifically for MCP servers?

For example:

  • Partial schema generation: Include only fields used by the selected operation.
  • Simplified schema export: Replace detailed type definitions with minimal references or summaries.
  • Configurable schema depth: Allow developers to specify how deep related models should be expanded.
  • Schema reuse / referencing: Use shared references to avoid repetition in Zod/JSON schemas.
  • Minified JSON schema output: Optionally generate schemas in a minified format (no whitespace, reduced metadata) to significantly decrease payload size and reduce token usage during LLM interactions (see the sketch after this list).
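
A couple of these are already approximable in userland with zod-to-json-schema options, which hints at what a built-in solution could look like. A sketch under assumptions: `userFindFirstArgs` stands in for the generated schema, and `pruneDepth` is a hypothetical helper, not an existing API:

```ts
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

// Stand-in for the full ZenStack-generated args schema
declare const userFindFirstArgs: z.ZodTypeAny;

// Schema reuse: emit $ref pointers to shared definitions instead of
// inlining the same nested model expansion over and over
const withRefs = zodToJsonSchema(userFindFirstArgs, { $refStrategy: "root" });

// Minified output: JSON.stringify without indentation already drops
// all whitespace from the serialized schema
const minified = JSON.stringify(withRefs);

// Configurable depth (hypothetical): collapse anything below a cutoff
// into an opaque object so deep relation chains are never expanded
function pruneDepth(node: unknown, maxDepth: number, depth = 0): unknown {
  if (typeof node !== "object" || node === null) return node;
  if (depth >= maxDepth) return { type: "object" };
  if (Array.isArray(node)) return node.map((n) => pruneDepth(n, maxDepth, depth + 1));
  return Object.fromEntries(
    Object.entries(node).map(([k, v]) => [k, pruneDepth(v, maxDepth, depth + 1)])
  );
}

const shallow = pruneDepth(withRefs, 8);
```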

Why This Matters

Such optimization would:

  • Make ZenStack much more practical for MCP-based AI integrations.
  • Reduce token usage drastically.
  • Maintain the benefits of access control and type safety.
  • Enable developers with large or complex schemas to actually use ZenStack-powered MCP servers in production.

Summary

  • Issue: ZenStack-generated schemas are too large for MCP context limits when working with big databases.
  • Goal: Add support for schema optimization (e.g., depth limiting, partial exports, or simplified schemas).
  • Benefit: Makes ZenStack viable for MCP + AI applications at scale.

Would love to hear your thoughts on whether this could fit into ZenStack’s roadmap or if there are any existing workarounds you’d recommend.

Thanks again for all your work - it’s an amazing project! 🙌
