Is your feature request related to a problem? Please describe.
The Problem: The "Hidden Tax" on AI Orchestration
As Go adoption grows for Agentic AI workflows (Gemini/GPT-4o), passing raw go-github structs to LLMs incurs a massive "Noise Tax."
Currently, a standard Issue or PullRequest struct carries HATEOAS links, internal NodeIDs, and redundant pointer metadata. This noise accounts for roughly 60% of the serialized payload, crowding out the LLM context window.
For enterprise users building RAG/Agents on Google Cloud, this results in:
- High Latency: Slower ingestion times.
- High Cost: Wasted spend on input tokens for metadata the LLM doesn't need.
- Hallucinations: Lower Signal-to-Noise ratio distracts the model.
Describe the solution you'd like
The Solution: ToAgentContext() (The Silicon Diet)
I propose adding a high-density serialization method ToAgentContext() for core entities (Issue, PullRequest, Repository, Comment).
Proposed Behavior:
- Transform "fat structs" into `map[string]any` optimized for RAG.
- Strip all HATEOAS links, URLs, and non-narrative metadata.
- Enforce strict token limits on body content to prevent context overflow.
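A minimal sketch of what this could look like. The `Issue` struct below is a hypothetical, trimmed-down stand-in for `github.Issue` (the real struct has many more pointer fields), and `maxBodyRunes` is an illustrative placeholder for the proposed token limit, not an API that exists in go-github:

```go
package main

import "fmt"

// Issue is a hypothetical subset of go-github's github.Issue,
// shown here only to illustrate the proposed method.
type Issue struct {
	Number  *int
	Title   *string
	State   *string
	Body    *string
	HTMLURL *string // HATEOAS-style link: stripped by ToAgentContext
	NodeID  *string // internal GraphQL ID: stripped by ToAgentContext
}

// maxBodyRunes is an illustrative body-length budget standing in
// for the proposed "strict token limit".
const maxBodyRunes = 500

// ToAgentContext returns a high-density map for LLM consumption:
// links and internal IDs are dropped, nil fields are omitted,
// and the body is truncated to the budget.
func (i *Issue) ToAgentContext() map[string]any {
	ctx := map[string]any{}
	if i.Number != nil {
		ctx["number"] = *i.Number
	}
	if i.Title != nil {
		ctx["title"] = *i.Title
	}
	if i.State != nil {
		ctx["state"] = *i.State
	}
	if i.Body != nil {
		body := []rune(*i.Body)
		if len(body) > maxBodyRunes {
			body = body[:maxBodyRunes]
		}
		ctx["body"] = string(body)
	}
	return ctx
}

func main() {
	n, title, state := 42, "Crash on start", "open"
	url, node, body := "https://api.github.com/...", "I_kwDO...", "Long narrative body"
	issue := &Issue{Number: &n, Title: &title, State: &state, HTMLURL: &url, NodeID: &node, Body: &body}
	fmt.Println(issue.ToAgentContext()) // URL and NodeID never reach the map
}
```

The key design point is that noisy fields are simply never copied into the output map, so callers cannot accidentally forward them to the model.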
Strategic Value:
This enables Go to compete directly with Python frameworks for high-efficiency AI agents by treating Context as a scarce resource.
Describe alternatives you've considered
Alternative: Users manually marshal/unmarshal structs and scrub data.
Downside: Error-prone, boilerplate-heavy, and prevents standardization across the ecosystem.
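For context, the manual approach looks roughly like the following: marshal the full struct to JSON, unmarshal into a generic map, and delete noisy keys by hand. The `scrub` helper and its denylist are hypothetical, shown only to illustrate the boilerplate every consumer would otherwise reinvent:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// scrub round-trips a value through JSON and deletes noisy keys.
// Every caller must maintain its own denylist of fields to drop --
// exactly the error-prone boilerplate the proposal standardizes away.
func scrub(v any) (map[string]any, error) {
	raw, err := json.Marshal(v)
	if err != nil {
		return nil, err
	}
	var m map[string]any
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, err
	}
	for _, k := range []string{"url", "html_url", "node_id", "events_url"} {
		delete(m, k)
	}
	return m, nil
}

func main() {
	payload := map[string]any{"title": "Bug", "node_id": "I_abc", "html_url": "https://..."}
	clean, _ := scrub(payload)
	fmt.Println(clean)
}
```

Beyond the duplication, this denylist silently goes stale whenever upstream adds a new metadata field, which is why a maintained method on the library's own types is preferable.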
Additional context
Benchmarks (Verified in PR #3967):
- Standard Payload: ~2,000 bytes
- Agent Context: ~790 bytes
- SIZE REDUCTION: ~60% 📉
Implementation:
A full implementation is already submitted and passing all tests in PR #3967.