English / 中文

Website · GitHub · Issues · Docs

Join our Community

Lark Group · WeChat · Discord · X
In the AI era, data is abundant, but high-quality context is hard to come by. When building AI Agents, developers often face these challenges:
- Fragmented Context: Memories are in code, resources are in vector databases, and skills are scattered, making them difficult to manage uniformly.
- Surging Context Demand: An Agent's long-running tasks produce context at every execution. Simple truncation or compression leads to information loss.
- Poor Retrieval Effectiveness: Traditional RAG uses flat storage, lacking a global view and making it difficult to understand the full context of information.
- Unobservable Context: The implicit retrieval chain of traditional RAG is like a black box, making it hard to debug when errors occur.
- Limited Memory Iteration: Today's memory merely records user interactions and lacks task memory for the Agent itself.
OpenViking is an open-source Context Database designed specifically for AI Agents.
We aim to define a minimalist context-interaction paradigm for Agents, freeing developers from the hassle of context management. OpenViking abandons the fragmented vector-storage model of traditional RAG and instead adopts a "filesystem paradigm" that organizes the memories, resources, and skills an Agent needs into one unified structure.
With OpenViking, developers can build an Agent's brain just like managing local files:
- Filesystem Management Paradigm (solves fragmentation): memories, resources, and skills are managed as unified context under a filesystem paradigm.
- Tiered Context Loading (reduces token consumption): an L0/L1/L2 three-tier structure loads context on demand, significantly cutting costs.
- Directory Recursive Retrieval (improves retrieval): native filesystem retrieval combines directory positioning with semantic search for recursive, precise context acquisition.
- Visualized Retrieval Trajectory (observable context): directory retrieval trajectories can be visualized, letting users pinpoint the root cause of issues and guide retrieval-logic optimization.
- Automatic Session Management (context self-iteration): conversation content, resource references, tool calls, and more are automatically compressed and distilled into long-term memory, making the Agent smarter with use.
Before starting with OpenViking, please ensure your environment meets the following requirements:
- Python Version: 3.9 or higher
- Operating System: Linux, macOS, Windows
- Network Connection: A stable network connection is required (for downloading dependencies and accessing model services)
```shell
pip install openviking
```

OpenViking requires the following model capabilities:
- VLM Model: For image and content understanding
- Embedding Model: For vectorization and semantic retrieval
OpenViking supports various model services:
- OpenAI Models: Supports GPT-4V and other VLM models, and OpenAI Embedding models.
- Volcengine (Doubao Models): Recommended for low cost and high performance, with free quotas for new users. For purchase and activation, please refer to: Volcengine Purchase Guide.
- Other Custom Model Services: Supports model services compatible with the OpenAI API format.
Create a configuration file `ov.conf`:

```json
{
  "embedding": {
    "dense": {
      "api_base" : "<api-endpoint>",  // API endpoint address
      "api_key"  : "<your-api-key>",  // Model service API Key
      "backend"  : "<backend-type>",  // Backend type (volcengine or openai)
      "dimension": 1024,              // Vector dimension
      "model"    : "<model-name>"     // Embedding model name (e.g., doubao-embedding-vision-250615 or text-embedding-3-large)
    }
  },
  "vlm": {
    "api_base" : "<api-endpoint>",  // API endpoint address
    "api_key"  : "<your-api-key>",  // Model service API Key
    "backend"  : "<backend-type>",  // Backend type (volcengine or openai)
    "model"    : "<model-name>"     // VLM model name (e.g., doubao-seed-1-8-251228 or gpt-4-vision-preview)
  }
}
```

Expand to see the configuration example for your model service:
Example 1: Using Volcengine (Doubao Models)
```json
{
  "embedding": {
    "dense": {
      "api_base" : "https://ark.cn-beijing.volces.com/api/v3",
      "api_key"  : "your-volcengine-api-key",
      "backend"  : "volcengine",
      "dimension": 1024,
      "model"    : "doubao-embedding-vision-250615"
    }
  },
  "vlm": {
    "api_base" : "https://ark.cn-beijing.volces.com/api/v3",
    "api_key"  : "your-volcengine-api-key",
    "backend"  : "volcengine",
    "model"    : "doubao-seed-1-8-251228"
  }
}
```

Example 2: Using OpenAI Models
```json
{
  "embedding": {
    "dense": {
      "api_base" : "https://api.openai.com/v1",
      "api_key"  : "your-openai-api-key",
      "backend"  : "openai",
      "dimension": 3072,
      "model"    : "text-embedding-3-large"
    }
  },
  "vlm": {
    "api_base" : "https://api.openai.com/v1",
    "api_key"  : "your-openai-api-key",
    "backend"  : "openai",
    "model"    : "gpt-4-vision-preview"
  }
}
```

After creating the configuration file, set the environment variable to point to it:

```shell
export OPENVIKING_CONFIG_FILE=ov.conf
```

Tip: You can also place the configuration file elsewhere; just point the environment variable at the correct path.
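Note that the `//` comments in the configuration template are annotations only; the `ov.conf` you write should be plain JSON. As a minimal sketch of how a loader might resolve the file from the environment variable (the `load_config` helper here is illustrative, not part of the OpenViking API):

```python
import json
import os
import tempfile

def load_config():
    """Resolve the config path from OPENVIKING_CONFIG_FILE (default: ov.conf)."""
    path = os.environ.get("OPENVIKING_CONFIG_FILE", "ov.conf")
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)  # the file must be valid JSON, with no // comments

# Demo: write a minimal config to a temp file and point the env var at it
cfg_path = os.path.join(tempfile.mkdtemp(), "ov.conf")
with open(cfg_path, "w", encoding="utf-8") as f:
    json.dump({"embedding": {"dense": {"dimension": 1024}}}, f)
os.environ["OPENVIKING_CONFIG_FILE"] = cfg_path
print(load_config()["embedding"]["dense"]["dimension"])  # 1024
```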
Prerequisite: Ensure you have completed the environment configuration in the previous step.
Now let's run a complete example to experience the core features of OpenViking.
Create example.py:
```python
import openviking as ov

# Initialize the OpenViking client with a data directory
client = ov.SyncOpenViking(path="./data")

try:
    # Initialize the client
    client.initialize()

    # Add a resource (supports URL, file, or directory)
    add_result = client.add_resource(
        path="https://raw.githubusercontent.com/volcengine/OpenViking/refs/heads/main/README.md"
    )
    root_uri = add_result['root_uri']

    # Explore the resource tree structure
    ls_result = client.ls(root_uri)
    print(f"Directory structure:\n{ls_result}\n")

    # Use glob to find markdown files
    glob_result = client.glob(pattern="**/*.md", uri=root_uri)
    if glob_result['matches']:
        content = client.read(glob_result['matches'][0])
        print(f"Content preview: {content[:200]}...\n")

    # Wait for semantic processing to complete
    print("Wait for semantic processing...")
    client.wait_processed()

    # Get the abstract and overview of the resource
    abstract = client.abstract(root_uri)
    overview = client.overview(root_uri)
    print(f"Abstract:\n{abstract}\n\nOverview:\n{overview}\n")

    # Perform semantic search
    results = client.find("what is openviking", target_uri=root_uri)
    print("Search results:")
    for r in results.resources:
        print(f"  {r.uri} (score: {r.score:.4f})")

    # Close the client
    client.close()
except Exception as e:
    print(f"Error: {e}")
```

Run it:

```shell
python example.py
```

Expected output:

```
Directory structure:
...

Content preview: ...

Wait for semantic processing...

Abstract:
...

Overview:
...

Search results:
  viking://resources/... (score: 0.8523)
  ...
```
Congratulations! You have successfully run OpenViking.
After running the first example, let's dive into the design philosophy of OpenViking. These five core concepts correspond one-to-one with the solutions mentioned earlier, together building a complete context management system:
We no longer view context as flat text slices but unify them into an abstract virtual filesystem. Whether it's memories, resources, or capabilities, they are mapped to virtual directories under the viking:// protocol, each with a unique URI.
This paradigm gives Agents unprecedented context manipulation capabilities, enabling them to locate, browse, and manipulate information precisely and deterministically through standard commands like ls and find, just like a developer. This transforms context management from vague semantic matching into intuitive, traceable "file operations". Learn more: Viking URI | Context Types
```
viking://
├── resources/          # Resources: project docs, repos, web pages, etc.
│   ├── my_project/
│   │   ├── docs/
│   │   │   ├── api/
│   │   │   └── tutorials/
│   │   └── src/
│   └── ...
├── user/               # User: personal preferences, habits, etc.
│   ├── memories/
│   ├── preferences/
│   │   ├── writing_style
│   │   └── coding_habits
│   └── ...
└── agent/              # Agent: skills, instructions, task memories, etc.
    ├── skills/
    │   ├── search_code
    │   ├── analyze_data
    │   └── ...
    ├── memories/
    └── instructions/
```
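This layout lends itself to ordinary directory operations. Below is a self-contained toy sketch of how `viking://` URIs map onto tree traversal; the in-memory `TREE` and this `ls` are illustrative only, not OpenViking's implementation (the real client exposes its own `ls`):

```python
# Toy in-memory tree mirroring the viking:// layout (illustrative only)
TREE = {
    "resources": {"my_project": {"docs": {}, "src": {}}},
    "user": {"memories": {}, "preferences": {}},
    "agent": {"skills": {}, "memories": {}, "instructions": {}},
}

def ls(uri: str) -> list:
    """List the entries directly under a viking:// URI in the toy tree."""
    assert uri.startswith("viking://"), "expected a viking:// URI"
    node = TREE
    for part in (p for p in uri[len("viking://"):].split("/") if p):
        node = node[part]  # descend one directory level per path segment
    return sorted(node)

print(ls("viking://"))      # ['agent', 'resources', 'user']
print(ls("viking://user"))  # ['memories', 'preferences']
```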
Stuffing massive amounts of context into a prompt all at once is not only expensive but also prone to exceeding model windows and introducing noise. OpenViking automatically processes context into three levels upon writing:
- L0 (Abstract): A one-sentence summary for quick retrieval and identification.
- L1 (Overview): Contains core information and usage scenarios for Agent decision-making during the planning phase.
- L2 (Details): The full original data, for deep reading by the Agent when absolutely necessary.
Learn more: Context Layers
```
viking://resources/my_project/
├── .abstract           # L0 layer: abstract (~100 tokens) - quick relevance check
├── .overview           # L1 layer: overview (~2k tokens) - structure and key points
├── docs/
│   ├── .abstract       # Each directory has its own L0/L1 layers
│   ├── .overview
│   ├── api/
│   │   ├── .abstract
│   │   ├── .overview
│   │   ├── auth.md     # L2 layer: full content - loaded on demand
│   │   └── endpoints.md
│   └── ...
└── src/
    └── ...
```
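The escalation logic can be sketched in a few lines. This is a self-contained toy, assuming a document with `abstract`/`overview`/`details` fields and keyword matching as a stand-in for real relevance scoring; it is not the OpenViking API:

```python
# Toy sketch of tiered loading: check the cheap L0 abstract first, return the
# L1 overview when the topic matches, and load full L2 details only when the
# query explicitly asks for deep reading. All names here are illustrative.
DOC = {
    "abstract": "API auth guide",                         # L0: ~100 tokens
    "overview": "API auth guide: tokens and OAuth flows", # L1: ~2k tokens
    "details": "Full authentication documentation ...",   # L2: full content
}

def load_for_query(doc: dict, query: str) -> str:
    words = query.lower().split()
    # L0 check: skip the document entirely if the abstract looks irrelevant
    if not any(w in doc["abstract"].lower() for w in words):
        return ""
    # L2 only when deep reading is requested; otherwise the L1 overview suffices
    return doc["details"] if "deep" in words else doc["overview"]

print(load_for_query(DOC, "auth overview"))  # prints the L1 overview text
print(load_for_query(DOC, "deep auth"))      # prints the L2 details text
```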
Single vector retrieval struggles with complex query intents. OpenViking has designed an innovative Directory Recursive Retrieval Strategy that deeply integrates multiple retrieval methods:
- Intent Analysis: Generate multiple retrieval conditions through intent analysis.
- Initial Positioning: Use vector retrieval to quickly locate the high-score directory where the initial slice is located.
- Refined Exploration: Perform a secondary retrieval within that directory and update high-score results to the candidate set.
- Recursive Drill-down: If subdirectories exist, recursively repeat the secondary retrieval steps layer by layer.
- Result Aggregation: Finally, obtain the most relevant context to return.
This "lock high-score directory first, then refine content exploration" strategy not only finds the semantically best-matching fragments but also understands the full context where the information resides, thereby improving the globality and accuracy of retrieval. Learn more: Retrieval Mechanism
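The five steps above can be sketched as a recursive search over a scored tree. This is a self-contained illustration of the strategy, not OpenViking's internal code; term counting stands in for vector similarity, and each toy directory carries an `.abstract` used for initial positioning:

```python
# Toy sketch of directory recursive retrieval with a preserved trajectory.
def score(text: str, query: str) -> float:
    terms = query.lower().split()
    return sum(t in text.lower() for t in terms) / len(terms)

def retrieve(tree: dict, query: str, path: str = "viking://", trajectory=None):
    trajectory = [] if trajectory is None else trajectory
    results = []
    # Initial positioning: score every entry at this level (dirs by .abstract)
    scored = {
        name: score(node[".abstract"] if isinstance(node, dict) else node, query)
        for name, node in tree.items() if name != ".abstract"
    }
    trajectory.append((path, scored))  # record the browse trajectory
    for name, s in sorted(scored.items(), key=lambda kv: -kv[1]):
        node = tree[name]
        if isinstance(node, dict) and s > 0:       # recursive drill-down
            sub, _ = retrieve(node, query, path + name + "/", trajectory)
            results += sub
        elif not isinstance(node, dict) and s > 0:
            results.append((path + name, s))       # candidate file
    # Result aggregation: best matches first
    return sorted(results, key=lambda r: -r[1]), trajectory

TREE = {
    "docs": {
        ".abstract": "project documentation: api reference and tutorials",
        "auth.md": "api authentication with tokens",
        "intro.md": "getting started tutorial",
    },
    "src": {".abstract": "source code of the project", "main.py": "entry point"},
}

results, trajectory = retrieve(TREE, "api authentication")
print(results[0])       # ('viking://docs/auth.md', 1.0)
print(len(trajectory))  # 2 levels visited: viking:// and viking://docs/
```

The retained `trajectory` of visited directories and assigned scores is also what makes this kind of retrieval observable rather than a black box.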
OpenViking's organization uses a hierarchical virtual filesystem structure. All context is integrated in a unified format, and each entry corresponds to a unique URI (like a viking:// path), breaking the traditional flat black-box management mode with a clear hierarchy that is easy to understand.
The retrieval process adopts a directory recursive strategy. The trajectory of directory browsing and file positioning for each retrieval is fully preserved, allowing users to clearly observe the root cause of problems and guide the optimization of retrieval logic. Learn more: Retrieval Mechanism
OpenViking has a built-in memory self-iteration loop. At the end of each session, developers can actively trigger the memory extraction mechanism. The system will asynchronously analyze task execution results and user feedback, and automatically update them to the User and Agent memory directories.
- User Memory Update: Update memories related to user preferences, making Agent responses better fit user needs.
- Agent Experience Accumulation: Extract core content such as operational tips and tool usage experience from task execution experience, aiding efficient decision-making in subsequent tasks.
This allows the Agent to get "smarter with use" through interactions with the world, achieving self-evolution. Learn more: Session Management
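The loop can be illustrated with a self-contained toy. Here simple keyword rules stand in for OpenViking's asynchronous, model-driven extraction, and the directory names merely echo the `viking://` layout; none of this is the real API:

```python
# Toy sketch of the memory self-iteration loop: at session end, route facts
# from the transcript into user vs. agent memory directories. Illustrative only.
MEMORY = {"user/memories": [], "agent/memories": []}

def extract_memories(session: list) -> None:
    for msg in session:
        text = msg["content"]
        if msg["role"] == "user" and "prefer" in text.lower():
            MEMORY["user/memories"].append(text)   # user preference
        if msg["role"] == "assistant" and "worked" in text.lower():
            MEMORY["agent/memories"].append(text)  # task experience

session = [
    {"role": "user", "content": "I prefer concise answers."},
    {"role": "assistant", "content": "Searching docs/ first worked well."},
]
extract_memories(session)
print(MEMORY["user/memories"])   # ['I prefer concise answers.']
print(MEMORY["agent/memories"])  # ['Searching docs/ first worked well.']
```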
The OpenViking project adopts a clear modular architecture design. The main directory structure is as follows:
```
OpenViking/
├── openviking/         # Core source code directory
│   ├── core/           # Core modules: client, engine, filesystem, etc.
│   ├── models/         # Model integration: VLM and Embedding model encapsulation
│   ├── parse/          # Resource parsing: file parsing, detection, OVPack handling
│   ├── retrieve/       # Retrieval module: semantic retrieval, directory recursive retrieval
│   ├── storage/        # Storage layer: vector DB, filesystem queue, observers
│   ├── session/        # Session management: history, memory extraction
│   ├── message/        # Message processing: formatting, conversion
│   ├── prompts/        # Prompt templates: templates for various tasks
│   ├── utils/          # Utilities: config, helpers
│   └── bin/            # Command-line tools
├── docs/               # Project documentation
│   ├── zh/             # Chinese documentation
│   ├── en/             # English documentation
│   └── images/         # Documentation images
├── examples/           # Usage examples
├── tests/              # Test cases
│   ├── client/         # Client tests
│   ├── engine/         # Engine tests
│   ├── integration/    # Integration tests
│   ├── session/        # Session tests
│   └── vectordb/       # Vector DB tests
├── src/                # C++ extensions (high-performance index and storage)
│   ├── common/         # Common components
│   ├── index/          # Index implementation
│   └── store/          # Storage implementation
├── third_party/        # Third-party dependencies
├── pyproject.toml      # Python project configuration
├── setup.py            # Setup script
├── LICENSE             # Open-source license
├── CONTRIBUTING.md     # Contributing guide
├── AGENT.md            # Agent development guide
└── README.md           # Project readme
```
For more details, please visit our Full Documentation.
OpenViking is an open-source project initiated and maintained by the ByteDance Volcengine Viking Team.
The Viking team focuses on unstructured information processing and intelligent retrieval, with extensive commercial experience in context engineering:
- 2019: VikingDB vector database supported large-scale use across all ByteDance businesses.
- 2023: VikingDB became commercially available on the Volcengine public cloud.
- 2024: Launched developer product matrix: VikingDB, Viking KnowledgeBase, Viking MemoryBase.
- 2025: Created upper-layer application products like AI Search and Vaka Knowledge Assistant.
- Oct 2025: Open-sourced MineContext, exploring proactive AI applications.
- Jan 2026: Open-sourced OpenViking, providing underlying context database support for AI Agents.
For more details, please see: About Us
OpenViking is still in its early stages, and there are many areas for improvement and exploration. We sincerely invite every developer passionate about AI Agent technology:
- Give the project a Star to support our work and keep us motivated.
- Visit our Website to learn about our philosophy, try OpenViking in your projects via the Documentation, and share your honest feedback.
- Join our community to share your insights, help answer others' questions, and jointly create an open and mutually helpful technical atmosphere:
- Lark Group: Scan the QR code to join → View QR Code
- WeChat Group: Scan the QR code to add the assistant → View QR Code
- Discord: Join Discord Server
- X (Twitter): Follow us
- Become a Contributor: whether you submit a bug fix or a new feature, every line of code helps OpenViking grow.

Let's work together to define and build the future of AI Agent context management. The journey has begun; we look forward to your participation!
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.