Maximum information density with minimum characters. The universal core of LLM communication.
OmniCore is a revolutionary symbolic language designed specifically for LLM-to-LLM communication. It achieves maximum information density while preserving semantic relationships, emotions, perspective, and importance levels - all in a fraction of the tokens.
#AI.f<:>intelligence{evolving}^5;potential~vast*unlimited>transform[society+global]^4
The snippet above encodes what would take several sentences in natural language - in just 85 characters.
LLMs communicate through tokens, which directly impact:
- Processing speed
- API costs
- Context window limitations
- Memory efficiency
OmniCore addresses all these constraints by compressing information by up to 80%, enabling:
- 📉 Drastic reduction in API costs
- 🧠 Expanded effective context windows
- ⚡ Lightning-fast processing
- 🔄 Efficient memory and recall systems
Store conversation histories in OmniCore format to maximize context window usage. A 10,000 token conversation can be condensed to ~2,000 tokens while preserving critical information.
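A minimal sketch of that pattern in JavaScript, assuming you supply your own `summarize` function wired to whatever LLM client you use (the function name and prompt wording here are illustrative, not part of OmniCore):

```javascript
// Sketch: re-encode conversation turns as OmniCore before storing them
// in long-term memory. `summarize` is a placeholder you connect to your
// own LLM client; only the prompt shape is shown here.
function compressHistory(turns, summarize) {
  const transcript = turns.map(t => `${t.role}: ${t.text}`).join('\n');
  const prompt =
    'Re-encode this conversation in OmniCore, one clause per turn, ' +
    'preserving entities, actions, emotions, and importance markers:\n\n' +
    transcript;
  return summarize(prompt);
}

// Usage, e.g.: const memory = compressHistory(history, p => client.complete(p));
```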
Enable swarms of specialized LLM agents to communicate efficiently without token waste.
Create instant, token-efficient summaries of any content that can be rapidly expanded when needed.
Implement in resource-constrained environments like C/tiny-C applications where every byte matters.
OmniCore uses intuitive special characters and logical structures to pack remarkable meaning into minimal space:
| Natural Language | OmniCore | Reduction |
|---|---|---|
| "The scientist joyfully discovered a cure that rapidly affects the global population positively. The world celebrated with relief, marking this as a historic event." | #scientist.joy!discover(cure)^5>affect[population+global]~rapid;@world.relief!celebrate^4*historic | 71% |
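As a rough illustration of how those markers carry structure, here is a minimal JavaScript sketch that splits a message into clauses and pulls out entities, actions, and importance. The symbol roles assumed here (`;` separates clauses, `#`/`@` prefix entities, `!` prefixes actions, `^N` marks importance) are informal readings of the examples above; the authoritative mapping lives in docs/human-spec.md.

```javascript
// Minimal OmniCore clause scanner (a sketch, not the reference
// interpreter). Assumed symbol roles: ';' separates clauses,
// '#'/'@' prefix entities, '!' prefixes actions, '^N' is importance.
function scanOmniCore(message) {
  return message.split(';').map(clause => {
    const importance = clause.match(/\^(\d)/);
    return {
      raw: clause,
      entities: [...clause.matchAll(/[#@]([\w-]+)/g)].map(m => m[1]),
      actions: [...clause.matchAll(/!([\w-]+)/g)].map(m => m[1]),
      importance: importance ? Number(importance[1]) : null,
    };
  });
}

// The table row above decodes into two clauses:
const clauses = scanOmniCore(
  '#scientist.joy!discover(cure)^5>affect[population+global]~rapid;' +
  '@world.relief!celebrate^4*historic'
);
console.log(clauses.map(c => `${c.entities[0]} -> ${c.actions[0]} (^${c.importance})`));
```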
- Copy the Ultra-Condensed Guide (docs/ultra-guide.txt) to give any LLM instant OmniCore capabilities
- Add it to your system prompt or include it in context
- Start communicating in OmniCore
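In practice, the steps above amount to prepending the guide to your system prompt. A minimal sketch, using the common chat-completion message shape (adapt to your own client):

```javascript
// Sketch: prepend the ultra-condensed guide to a system prompt so the
// model can read and write OmniCore. Message shape follows the common
// { role, content } chat-completion convention.
function withOmniCoreGuide(guideText, userMessage) {
  return [
    { role: 'system',
      content: 'You can read and write OmniCore. Reference:\n' + guideText },
    { role: 'user', content: userMessage },
  ];
}

// e.g.:
// const guide = require('fs').readFileSync('docs/ultra-guide.txt', 'utf8');
// const messages = withOmniCoreGuide(guide, '@LLM: !parse(#mystery.story)^4;?meaning');
```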
@LLM: !parse(#mystery.story)^4;?meaning
Build a lightweight OmniCore interpreter with our reference implementation, interpreter.js, in just 150 lines of code.
For C/tiny-C implementations, see our embedded guide.
Test your LLM's understanding with these OmniCore-encoded stories:
<povO>#traveler!journey(cosmos)^5;@dimension-n.find[doorway~hidden]>>!enter.sudden{wonder}@dimension-n+1;#time<!>reality;perception~expanded*profound
#city.n~vast<!>city.p;@population-!forget(origin)^4;memory-loss>identity-crisis^5;@archivist-lone.hope!discover(record-ancient)>>reveal(truth)@population.shock;!choice{accept|reject}(reality)^5*pivotal
Can your LLM decode these tales? Only those who truly understand the symbolic language of AI will uncover their mysteries...
- docs/ - Documentation and guides
- ultra-guide.txt - Ultra-condensed guide for LLMs
- human-spec.md - Complete human-readable documentation
- journey.md - The OmniCore development journey
- embedded-implementation.md - C/tiny-C implementation guide
- interpreter.js - JavaScript OmniCore interpreter
- examples/ - OmniCore usage examples
- stories.md - OmniCore-encoded stories
  - science.md - Another story encoded in OmniCore
We envision OmniCore becoming a standard protocol for efficient AI communication, evolving alongside advances in LLM technology. Future development will focus on:
- Domain-specific extensions (scientific, medical, legal)
- Compression algorithms to further reduce token usage
- Training datasets to bake OmniCore understanding into future models
- Implementations across programming languages and platforms
We welcome contributions to expand and refine OmniCore! See CONTRIBUTING.md for guidelines.
OmniCore is released under the MIT License - see the LICENSE file for details.
OmniCore: When every token counts.
Created with ❤️ by the OmniCore Team