A symbolic communication protocol that turns natural language into queryable, reusable knowledge.
Core Principle: Structured symbols eliminate ambiguity. Precision is the goal. Token savings are the side effect.
Natural language is ambiguous, and information disappears into text blobs. Whether it's an instruction, a policy document, or a strategic plan, you can't easily query it, reuse its parts, template it, or compose it with other information.
Vector Native solves this by turning text into structured data.
Before (Natural Language):
"Please analyze the Q4 2024 sales data and generate an executive summary report focusing on revenue trends."
Ambiguous: Which Q4? What format? What depth?
After (Vector Native):
●analyze|dataset:Q4_2024_sales|output:executive_summary|focus:revenue_trends|depth:high
Explicit: Every parameter is clear. No guessing.
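Because every field is explicit, a VN line maps mechanically onto a dictionary. A minimal parsing sketch in Python (the function name and error handling are illustrative, not part of any official spec):

```python
# Minimal sketch: parse a "●op|key:value|key:value" line into a dict.
def parse_vn(line: str) -> dict:
    if not line.startswith("●"):
        raise ValueError("expected a ●-prefixed operation")
    head, *params = line[1:].split("|")
    return {"op": head, **dict(p.split(":", 1) for p in params)}

op = parse_vn("●analyze|dataset:Q4_2024_sales|output:executive_summary"
              "|focus:revenue_trends|depth:high")
print(op["dataset"])  # Q4_2024_sales
```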
- Reuse: apply ●campaign_architecture|channel:LinkedIn|theme:speed_to_value across different products instead of rewriting from scratch.
- Query: search a database of structured notes: "show all ●finding with confidence:high" (see the sketch after this list).
- Compose: combine a ●targeting_strategy from one workflow with a ●budget_allocation from another.
- Update: ●update|section:timeline|field:deadline|old:Jan_15|new:Jan_20 leaves a queryable trail of changes, not just text diffs.
- Template: a template says ●budget_allocation|total:$50K; change it to total:$100K and everything else stays intact.
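Once notes are parsed into records (as in the sketch above), the query "show all ●finding with confidence:high" becomes a plain filter. The records below are invented for illustration:

```python
# Hypothetical parsed notes; in practice these would come from parse_vn.
notes = [
    {"op": "finding", "confidence": "high", "topic": "churn_drivers"},
    {"op": "finding", "confidence": "low", "topic": "pricing"},
    {"op": "policy", "type": "remote_work"},
]

# "show all ●finding with confidence:high"
matches = [n for n in notes
           if n["op"] == "finding" and n.get("confidence") == "high"]
print(matches)  # [{'op': 'finding', 'confidence': 'high', 'topic': 'churn_drivers'}]
```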
| Natural Language | Vector Native |
|---|---|
| Text blob | Structured operations |
| One-time use | Reusable components |
| Can't query | Database-ready |
| Can't compose | Mix and match |
| Lost after use | Knowledge asset |
| Symbol | Meaning |
|---|---|
| ● | Core operation/entity ("do this", "this is an entity", "this is a policy") |
| \| | Parameter separator |
| : | Key-value binding |
| ⊕ | Addition/combination |
| → | Sequential flow |
| ○ | Background/secondary |
| ≠ | Block/reject |
English: "Please give this maximum attention and add these values" (10 words, ~20 tokens)
Vector Native:
●⊕ (2 symbols, ~4 tokens)
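The counts above are approximate; you can measure them yourself with a tokenizer. A sketch using OpenAI's tiktoken (the cl100k_base encoding is an assumption; counts differ across models):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # assumption: GPT-4-era tokenizer
for text in ("Please give this maximum attention and add these values", "●⊕"):
    print(repr(text), "->", len(enc.encode(text)), "tokens")
```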
Why this works: LLMs have learned symbol associations from training data (mathematical notation, programming syntax, configuration files). Vector Native leverages these pre-trained associations for precision.
Multi-agent coordination, internal APIs, background jobs:
●execute|agent:analyzer|input:user_data|output:insights|priority:high

System prompts, tool configurations, workflow definitions:
●assistant|mode:analytical|style:concise|depth:comprehensive

Knowledge bases, audit trails, team collaboration:
●policy|type:remote_work|eligibility:all_employees|approval:manager
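Configurations like ●assistant round-trip cleanly: the same dict that drives your code serializes back to a single VN line for the system prompt. A sketch (the field names mirror the example above; the template itself is not prescribed by VN):

```python
config = {"op": "assistant", "mode": "analytical",
          "style": "concise", "depth": "comprehensive"}

# Serialize the structured config back into one VN line.
vn_line = "●" + config["op"] + "".join(
    f"|{k}:{v}" for k, v in config.items() if k != "op")
print(vn_line)  # ●assistant|mode:analytical|style:concise|depth:comprehensive
```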
Use Vector Native for:

- Multi-agent systems needing precise coordination
- Knowledge management systems (legal, medical, research)
- Business operations with template-driven workflows
- System integrations requiring composable definitions

Avoid it for:

- Casual ChatGPT questions
- Creative writing
- Emotional support
- Exploratory conversations
- User-facing messages
Rule: if you need precision and reusability, use Vector Native. If a human is the primary reader, use natural language.
Early testing (gpt-4o-mini, 5 scenarios):
| Variant | Compliance | Token Reduction |
|---|---|---|
| STRICT | 80% | 88.8% |
| BALANCED | 40% | 95.4% |
| MINIMAL | 40% | 95.7% |
When token savings matter:
- ✅ Production systems with thousands of agent messages
- ✅ Multi-agent coordination (compound costs)
- ✅ Background jobs (1000s of operations/day)
- ❌ One-off casual queries
Key insight: token reduction (85-95%) is a side effect of precision, but at production scale it can mean the difference between $10,000/month and $1,000/month.
Primary value: Precision, reusability, composability. Token savings are the bonus.
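The dollar figures above are straightforward arithmetic. A sketch in which every number (price, volume, message size) is an assumption for illustration:

```python
messages_per_month = 1_000_000        # assumed agent traffic
price_per_token = 5 / 1_000_000       # assumed $5 per million tokens

tokens_nl, tokens_vn = 2_000, 200     # ~90% reduction, per the table above
for label, tokens in (("NL", tokens_nl), ("VN", tokens_vn)):
    print(f"{label}: ${messages_per_month * tokens * price_per_token:,.0f}/month")
# NL: $10,000/month
# VN: $1,000/month
```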
Before: "Can you create a presentation about our Q3 results? Include revenue charts, keep it concise."
After: ●create|type:presentation|topic:Q3_results|include:revenue_charts|style:concise
Before: "You are a helpful assistant. Always provide detailed responses. When analyzing data, be thorough."
After: ●assistant|mode:helpful|detail:high|reasoning:explicit
Before: "Please update the deadline in the project timeline section from January 15th to January 20th."
After: ●update|section:timeline|field:deadline|old:Jan_15|new:Jan_20
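Because ●update carries both the old and new values, it can be applied safely and logged as data rather than as a text diff. A sketch (the record shape and staleness check are illustrative):

```python
record = {"timeline": {"deadline": "Jan_15"}}
audit_log = []

def apply_update(op: dict) -> None:
    section, field = op["section"], op["field"]
    # Reject stale updates: the op states what value it expects to replace.
    assert record[section][field] == op["old"], "stale update"
    record[section][field] = op["new"]
    audit_log.append(op)  # the op itself is the queryable change record

apply_update({"op": "update", "section": "timeline", "field": "deadline",
              "old": "Jan_15", "new": "Jan_20"})
print(record["timeline"]["deadline"], len(audit_log))  # Jan_20 1
```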
```
●workflow|id:content_review
⊕step_1|action:draft|owner:writer|deadline:monday
⊕step_2|action:review|owner:editor|deadline:wednesday
⊕step_3|action:publish|owner:admin|deadline:friday
```
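A multi-line workflow expands into structured steps the same way single lines do; ⊕ marks a step appended to the operation above it. A self-contained sketch (the parser is the same illustrative one as earlier):

```python
def parse_vn(line: str) -> dict:
    head, *params = line.lstrip("●⊕").split("|")
    return {"op": head, **dict(p.split(":", 1) for p in params)}

workflow_text = """●workflow|id:content_review
⊕step_1|action:draft|owner:writer|deadline:monday
⊕step_2|action:review|owner:editor|deadline:wednesday
⊕step_3|action:publish|owner:admin|deadline:friday"""

lines = workflow_text.splitlines()
workflow = parse_vn(lines[0])
workflow["steps"] = [parse_vn(ln) for ln in lines[1:]]
print(workflow["id"], [s["action"] for s in workflow["steps"]])
# content_review ['draft', 'review', 'publish']
```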
Live translator: Vector-Native Gem
Say anything in natural language. Watch it become structured, reusable data.
Note: The optimal translation depends on your use case. Experiment with your own prompts.
📖 Implementation guides: docs/quickstart.md
- AI agents: precise coordination with reusable patterns; agents speak a common structured language.
- Knowledge management: queryable research notes, legal documents, medical records; structure makes information findable.
- Business operations: template-driven project management, policy documentation, workflow definitions.
- Legal: Machine-readable contracts, clause libraries
- Medical: Structured clinical notes, treatment protocols
- Research: Queryable experiment logs, hypothesis tracking
- Engineering: Specification templates, requirement tracking
📖 Full catalog: docs/use-cases.md
1. Understand the symbols and their meanings: LANGUAGE_SPEC.md
2. Use the Gem to translate your own prompts and see what works.
3. Start with simple operations and build your own symbol library over time.
4. Open an issue with your use case, examples, and learnings.
Vector Native is an open experiment. There's no single "correct" translation; it depends on your domain, use case, and model.
We need your perspective. Every domain has unique patterns. Your experiments help define what this protocol should be.
1. Share Translation Examples
- Take a verbose prompt from your domain
- Show your VN translation
- Explain your choices
- Share what you learned
2. Test in Your Domain
- Try VN for your specific use case
- Run experiments with different models
- Share results (positive or negative)
- Document what worked and what didn't
3. Build Variants
- Create your own interpretation
- Use different symbols or structures
- Test with your team/system
- Share your approach
Simple: Open a GitHub issue with your examples, results, or ideas.
Code: See CONTRIBUTING.md for technical guidelines.
Discussion: Questions? Open a discussion thread.
This is open research. We're discovering:
- Which symbols work best for different operations
- How much structure is optimal
- Where VN excels and where it falls short
- How different models interpret symbols
- What makes information truly reusable
- Domain-specific patterns and variations
Your contributions directly shape these answers.
Isn't this just compression? No. VN leverages pre-trained symbol associations in LLMs. Symbols trigger statistical patterns from training data (math, programming, config files), not just shorter text.
Why not just use JSON? JSON is verbose, and LLMs aren't trained to "think" in it. VN uses symbols with strong semantic associations, achieving both precision and token efficiency.
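To make the comparison concrete, here is the same operation as JSON and as VN (character counts shown; token counts vary by model):

```python
import json

as_json = json.dumps({"operation": "analyze", "dataset": "Q4_2024_sales",
                      "output": "executive_summary", "focus": "revenue_trends",
                      "depth": "high"})
as_vn = ("●analyze|dataset:Q4_2024_sales|output:executive_summary"
         "|focus:revenue_trends|depth:high")
print(len(as_json), len(as_vn))  # JSON carries quotes, braces, and commas VN omits
```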
Which models does it work with? Testing shows it works well with GPT-4, Claude, and Gemini. Smaller models may need more explicit system prompts; your testing helps us understand compatibility.
When shouldn't I use it? Casual conversations, creative writing, user-facing content, emotional support. VN is for precision and reusability, not warmth.
Can I adapt the protocol? Yes! Build your own variant if your domain needs different symbols, and share your approach so others can learn.
📖 More questions: docs/faq.md
📖 Language Spec — Complete symbol definitions
🎯 Use Cases — Domain-specific applications
📈 Token Savings — When efficiency matters
🧠 Why It Works — Training data mechanics
💬 FAQ — Common questions
🚀 Quickstart — Get started fast
- GitHub Issues: Bug reports, feature requests, examples
- Discussions: Ideas, questions, research
- Discord: [Coming soon]
MIT License - see LICENSE
Vector Native is fully open source. We're defining this protocol together.
Maintained by PersistOS