.github/AI_QUICK_START.md
to directly interact with your codebase, run scripts, and access the local knowledge base.
You can use any CLI-based AI agent that you are comfortable with. The following sections will show you how to set up
the knowledge base, and then provide an optional guide for configuring the Gemini CLI as one possible agent.

## 3. Setup the AI Environment (Required)

This section covers the mandatory steps to create the local vector database. The embedding process uses a Google
Gemini model, so a Gemini API key is required regardless of which AI agent you choose for chatting.

The API key authenticates your requests to Google's Gemini models.

This file is already listed in `.gitignore` to prevent you from accidentally committing your key.
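If you want to sanity-check the result of Step 3.2, a quick `grep` confirms the key entry exists without printing the secret. This is only an illustrative sketch: the variable name `GEMINI_API_KEY` and the demo path are assumptions, so substitute whatever name the repo's `.env` template actually uses.

```bash
# Illustrative sketch; GEMINI_API_KEY is an assumed variable name --
# use whichever name your repo's .env template specifies.
mkdir -p /tmp/ai-env-demo
printf 'GEMINI_API_KEY=your-key-here\n' > /tmp/ai-env-demo/.env

# Count matching entries instead of printing them, so the secret stays hidden:
grep -c '^GEMINI_API_KEY=' /tmp/ai-env-demo/.env
```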
### Step 3.3: Start the AI Servers
The AI tooling now leverages Model Context Protocol (MCP) servers that automatically manage the ChromaDB instances for both the knowledge base and agent memories. You no longer need to manually build the knowledge base or initialize the memory collection.
1. **Start the AI Servers**: In a new terminal window, run the following command. This will start all necessary MCP servers.
   ```bash
   npm run ai:server-all
   ```
   Keep this process running in the background. The servers will automatically create and embed the knowledge base, and manage the memory core.
**Note**: If you wish to run the ChromaDB instances in separate terminals to view their logs or errors, you can still use `npm run ai:server` (for the knowledge base) and `npm run ai:server-memory` (for the memory core) individually. However, this is optional, as `npm run ai:server-all` handles everything.
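When troubleshooting, it can help to confirm that a ChromaDB instance is actually listening before digging further. The port and heartbeat path below are assumptions based on ChromaDB defaults; the repo's npm scripts may bind different ports.

```bash
# Assumption: the knowledge-base instance uses ChromaDB's default port 8000
# and the v1 heartbeat route; adjust to match the repo's server scripts.
if curl -sf http://localhost:8000/api/v1/heartbeat > /dev/null 2>&1; then
    echo "knowledge-base server is up"
else
    echo "knowledge-base server not reachable"
fi
```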
## 4. Configuring an Agent (Optional Example: Gemini CLI)
Once the AI servers are running, you can interact with them using any terminal-based AI agent. This is an optional step showing how to install the Gemini CLI.
```bash
npm i -g @google/gemini-cli
```
## 5. The AI-Native Workflow: A Partnership
The goal of this system is to create a powerful partnership between you and the AI agent. To facilitate this, we use a system of configuration and guide files that automate the agent's setup and define its behavior.
### Agent Configuration and Behavior
The agent's behavior is primarily controlled by the following files:
- **`.gemini/settings.json`**: This file is the entry point for the agent's configuration. It tells the agent which files to load into its context at the beginning of a session, and also defines the Model Context Protocol (MCP) servers to be used for the session.
- **`.gemini/GEMINI.md`**: This file provides the initial instructions for the agent, including the mandatory pre-response check to ensure it has completed the session initialization.
- **`AGENTS_STARTUP.md`**: This guide contains the detailed, step-by-step instructions for the agent's session initialization. It ensures the agent has a foundational understanding of the Neo.mjs architecture before it begins any task.
- **`AGENTS.md`**: This is the rulebook for the **AI**. It contains the critical "per-turn" instructions, protocols, and constraints the agent must follow to be an effective contributor. This includes the "Ticket-First" gate and the Memory Core protocol.
- **`WORKING_WITH_AGENTS.md`**: This is the playbook for **you**, the human developer. It provides essential strategies for guiding the agent, handling common issues, and maximizing its performance.
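
To make the relationship between these files concrete, here is a minimal sketch of what a `.gemini/settings.json` might contain. This is not the repo's actual file: the server script path is hypothetical, and only the two keys relevant to this guide are shown.

```json
{
    "contextFileName": "GEMINI.md",
    "mcpServers": {
        "neo.mjs-knowledge-base": {
            "command": "node",
            "args": ["./path/to/knowledge-base-server.mjs"]
        }
    }
}
```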
**Note on Agent Specificity**: The `.gemini/settings.json` and `.gemini/GEMINI.md` files are specific to the Gemini CLI. Other AI agents may use different configuration files or methods to achieve similar results.
Your first step before starting a session should be to familiarize yourself with the `WORKING_WITH_AGENTS.md` guide.
### The Automated Workflow in Action
Here’s how the intended workflow looks with the automated setup:
1. **Start your AI Agent**: From the repo root, launch your AI agent of choice (e.g., Gemini CLI).
   ```bash
   gemini
   ```
2. **Give a High-Level Prompt**: The agent will automatically perform its initialization based on the configuration files. Once it's ready, you can give it a goal.
   > **Your Prompt:** "Explain the Neo.mjs two-tier reactivity model and provide a simple code example."
3. **The AI Takes Over**: An agent following the `AGENTS.md` protocol will then:
   * **Formulate a query**: It will determine that "reactivity" is the key concept.
   * **Execute the tool**: It will use the `neo.mjs-knowledge-base` MCP server's `query_documents` tool to query the knowledge base for "reactivity".
   * **Synthesize the answer**: After getting the paths to the most relevant guides and source files, it will read them and use that fresh, accurate context to generate a comprehensive explanation.
This is the crucial difference: you are delegating the *research* task to the AI, making it a true partner that can autonomously navigate and understand your codebase.
## 6. Common Troubleshooting

- **API Key Errors**: If queries fail with authentication issues, try regenerating the key in Google AI Studio or check your usage quotas.
- **`gemini` Command Not Found**: If you installed the Gemini CLI but the command isn't found, ensure your system's PATH includes the global npm binaries directory. Run `npm bin -g` to find it (npm 8 and earlier); npm 9 removed `npm bin`, so on newer versions look in the `bin` directory under the global prefix.
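A version-agnostic way to locate the global binaries directory is to derive it from the global prefix. The `bin` suffix below assumes macOS/Linux; on Windows, npm places global binaries in the prefix directory itself.

```bash
# `npm bin -g` works on npm 8 and earlier; npm 9 removed the subcommand.
# On macOS/Linux the global binaries live under the global prefix:
echo "$(npm config get prefix)/bin"
```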