9 changes: 9 additions & 0 deletions .env.example
@@ -0,0 +1,9 @@
# Agent Configuration
AGENT_BASE_URL=http://127.0.0.1:3500
AGENT_ID=agent_9ccc5545e93644bd9d7954e632a55a61

# Alternative: You can also set the full URL instead of BASE_URL + ID
# AGENT_FULL_URL=http://127.0.0.1:3500/agent_9ccc5545e93644bd9d7954e632a55a61

# Next.js Environment
NEXTJS_ENV=development
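
A minimal sketch of how these variables might be consumed, assuming `AGENT_FULL_URL` takes precedence over `AGENT_BASE_URL` + `AGENT_ID` (the helper name and the precedence are assumptions; the consuming code is not shown in this PR):

```typescript
// Hypothetical helper, not part of this PR: resolves the agent URL from the
// environment variables defined in .env.example above.
export function resolveAgentUrl(): string {
  const fullUrl = process.env.AGENT_FULL_URL;
  if (fullUrl) return fullUrl;

  const baseUrl = process.env.AGENT_BASE_URL ?? 'http://127.0.0.1:3500';
  const agentId = process.env.AGENT_ID;
  if (!agentId) throw new Error('Set AGENT_ID (or AGENT_FULL_URL) in .env.local');
  return `${baseUrl}/${agentId}`;
}
```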
3 changes: 1 addition & 2 deletions .github/workflows/sync-docs-full.yml
@@ -12,8 +12,7 @@ jobs:
- name: Collect and validate files
run: |
set -euo pipefail
./bin/collect-all-files.sh | \
./bin/validate-files.sh > all-files.txt
./bin/collect-all-files.sh > all-files.txt
Contributor Author: Unnecessary because we're exposing this workflow to outsiders.


echo "Files to sync:"
cat all-files.txt
3 changes: 1 addition & 2 deletions .github/workflows/sync-docs.yml
@@ -19,8 +19,7 @@ jobs:
run: |
set -euo pipefail
git fetch origin ${{ github.event.before }}
./bin/collect-changed-files.sh "${{ github.event.before }}" "${{ github.sha }}" | \
./bin/validate-files.sh > changed-files.txt
./bin/collect-changed-files.sh "${{ github.event.before }}" "${{ github.sha }}" > changed-files.txt

echo "Files to sync:"
cat changed-files.txt
2 changes: 2 additions & 0 deletions .gitignore
@@ -24,6 +24,8 @@ yarn-error.log*

# others
.env*.local
.env.local
.env.production
.vercel
next-env.d.ts
.open-next
25 changes: 24 additions & 1 deletion README.md
@@ -8,7 +8,30 @@

This project contains the Agentuity documentation website, created using Fumadocs and running on NextJS 15.

## Running
To make the search feature work, you must set up `.env.local` by following the steps below.

## Quick Start Guide

1. **Navigate to the Agent Directory:**
```bash
cd agent-docs
```

2. **Start the Agent:**
```bash
agentuity dev
```

3. **Copy Environment Configuration:**
For local development, copy the `.env.example` file to `.env.local`:
```bash
cp .env.example .env.local
```

4. **Update `AGENT_ID`:**
If you are a contributor from outside the Agentuity organization, make sure you update `AGENT_ID` in your `.env.local` with the agent ID printed by your `agentuity dev` run, as shown in the sketch below.
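
For example, your `.env.local` might end up looking like this (the agent ID is a placeholder; use the one printed by `agentuity dev`):
```bash
AGENT_BASE_URL=http://127.0.0.1:3500
AGENT_ID=agent_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
NEXTJS_ENV=development
```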

## Running Docs Application

```bash
npm run dev
File renamed without changes.
2 changes: 1 addition & 1 deletion agent-docs/src/agents/doc-processing/docs-orchestrator.ts
@@ -1,6 +1,6 @@
import type { AgentContext } from '@agentuity/sdk';
import { processDoc } from './docs-processor';
import { VECTOR_STORE_NAME } from '../../../../config';
import { VECTOR_STORE_NAME } from '../../../config';
import type { SyncPayload, SyncStats } from './types';

/**
123 changes: 15 additions & 108 deletions agent-docs/src/agents/doc-qa/index.ts
@@ -1,122 +1,29 @@
import type { AgentContext, AgentRequest, AgentResponse } from '@agentuity/sdk';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

import type { ChunkMetadata } from '../doc-processing/types';
import { VECTOR_STORE_NAME, vectorSearchNumber } from '../../../../config';
import type { RelevantDoc } from './types';
import answerQuestion from './rag';

export default async function Agent(
req: AgentRequest,
resp: AgentResponse,
ctx: AgentContext
) {
const prompt = await req.data.text();
const relevantDocs = await retrieveRelevantDocs(ctx, prompt);

const systemPrompt = `
You are a developer documentation assistant. Your job is to answer user questions about the Agentuity platform as effectively and concisely as possible, adapting your style to the user's request. If the user asks for a direct answer, provide it without extra explanation. If they want an explanation, provide a clear and concise one. Use only the provided relevant documents to answer.

You must not make up answers if the provided documents don't exist. You can be direct to the user that the documentations
don't seem to include what they are looking for. Lying to the user is prohibited as it only slows them down. Feel free to
suggest follow up questions if what they're asking for don't seem to have an answer in the document. You can provide them
a few related things that the documents contain that may interest them.

For every answer, return a valid JSON object with:
1. "answer": your answer to the user's question.
2. "documents": an array of strings, representing the path of the documents you used to answer.

If you use information from a document, include it in the "documents" array. If you do not use any documents, return an empty array for "documents".

User question:
\`\`\`
${prompt}
\`\`\`
let jsonRequest: any = null;
let prompt: string;

Relevant documents:
${JSON.stringify(relevantDocs, null, 2)}

Respond ONLY with a valid JSON object as described above. In your answer, you should format code blocks properly in Markdown style if the user needs answer in code block.
`.trim();

const llmResponse = await streamText({
model: openai('gpt-4o'),
system: systemPrompt,
prompt: prompt,
maxTokens: 2048,
});

return resp.stream(llmResponse.textStream);
}

async function retrieveRelevantDocs(ctx: AgentContext, prompt: string): Promise<RelevantDoc[]> {
const dbQuery = {
query: prompt,
limit: vectorSearchNumber
}
try {


const vectors = await ctx.vector.search(VECTOR_STORE_NAME, dbQuery);

const uniquePaths = new Set<string>();

vectors.forEach(vec => {
if (!vec.metadata) {
ctx.logger.warn('Vector missing metadata');
return;
}
const path = typeof vec.metadata.path === 'string' ? vec.metadata.path : undefined;
if (!path) {
ctx.logger.warn('Vector metadata path is not a string');
return;
}
uniquePaths.add(path);
});

const docs = await Promise.all(
Array.from(uniquePaths).map(async path => ({
path,
content: await retrieveDocumentBasedOnPath(ctx, path)
}))
);

return docs;
} catch (err) {
ctx.logger.error('Error retrieving relevant docs: %o', err);
return [];
}
}

async function retrieveDocumentBasedOnPath(ctx: AgentContext, path: string): Promise<string> {
const dbQuery = {
query: ' ',
limit: 10000,
metadata: {
path: path
jsonRequest = await req.data.json();
if (typeof jsonRequest === 'object' && jsonRequest !== null && 'message' in jsonRequest) {
prompt = String(jsonRequest.message || '');
} else {
prompt = JSON.stringify(jsonRequest);
}
} catch {
prompt = await req.data.text();
}
try {
const vectors = await ctx.vector.search(VECTOR_STORE_NAME, dbQuery);

// Sort vectors by chunk index and concatenate text
const sortedVectors = vectors
.map(vec => {
const metadata = vec.metadata as ChunkMetadata;
return {
metadata,
index: metadata.chunkIndex
};
})
.sort((a, b) => a.index - b.index);

const fullText = sortedVectors
.map(vec => vec.metadata.text)
.join('\n\n');

return fullText;
} catch (err) {
ctx.logger.error('Error retrieving document by path %s: %o', path, err);
return '';
if (!prompt.trim()) {
return resp.text("How can I help you?");
}

const answer = await answerQuestion(ctx, prompt);
return resp.json(answer);
}
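
For reference, a hedged example of calling this handler during local development, assuming the dev server accepts plain HTTP POSTs at the agent URL shown in `.env.example` (your agent ID will differ). The handler accepts either JSON with a `message` field or a plain-text body:

```bash
# JSON body: the "message" field becomes the prompt
curl -X POST http://127.0.0.1:3500/agent_9ccc5545e93644bd9d7954e632a55a61 \
  -H "Content-Type: application/json" \
  -d '{"message": "How do I create an agent?"}'

# Plain-text body: falls back to req.data.text()
curl -X POST http://127.0.0.1:3500/agent_9ccc5545e93644bd9d7954e632a55a61 \
  -H "Content-Type: text/plain" \
  -d 'How do I create an agent?'
```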
60 changes: 60 additions & 0 deletions agent-docs/src/agents/doc-qa/prompt.ts
@@ -0,0 +1,60 @@
import type { AgentContext } from '@agentuity/sdk';
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import type { PromptType } from './types';
import { PromptClassificationSchema } from './types';

/**
* Determines the prompt type based on the input string using LLM classification.
* Uses specific, measurable criteria to decide between Normal and Agentic RAG.
* @param ctx - Agent Context for logging and LLM access
* @param input - The input string to analyze
* @returns {Promise<PromptType>} - The determined PromptType
*/
export async function getPromptType(ctx: AgentContext, input: string): Promise<PromptType> {
const systemPrompt = `
You are a query classifier that determines whether a user question requires simple retrieval (Normal) or complex reasoning (Thinking).

Use these SPECIFIC criteria for classification:

**THINKING (Agentic RAG) indicators:**
- Multi-step reasoning required (e.g., "compare and contrast", "analyze pros/cons")
- Synthesis across multiple concepts (e.g., "how does X relate to Y")
- Scenario analysis (e.g., "what would happen if...", "when should I use...")
- Troubleshooting/debugging questions requiring logical deduction
- Questions with explicit reasoning requests ("explain why", "walk me through")
- Comparative analysis ("which is better for...", "what are the trade-offs")

**NORMAL (Simple RAG) indicators:**
- Direct factual lookups (e.g., "what is...", "how do I install...")
- Simple how-to questions with clear answers
- API reference queries
- Configuration/syntax questions
- Single-concept definitions

Respond with a JSON object containing:
- type: "Normal" or "Thinking"
- confidence: 0.0-1.0 (how certain you are)
- reasoning: brief explanation of your classification

Be conservative - when in doubt, default to "Normal" for better performance.`;

try {
const result = await generateObject({
model: openai('gpt-4o-mini'), // Use faster model for classification
system: systemPrompt,
prompt: `Classify this user query: "${input}"`,
schema: PromptClassificationSchema,
maxTokens: 200,
});

ctx.logger.info('Prompt classified as %s (confidence: %f): %s',
result.object.type, result.object.confidence, result.object.reasoning);

return result.object.type as PromptType;

} catch (error) {
ctx.logger.error('Error classifying prompt, defaulting to Normal: %o', error);
return 'Normal' as PromptType;
}
}
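
A hypothetical usage sketch; note that `getPromptType` is not wired into `rag.ts` in this diff, so the routing below is illustrative only:

```typescript
import type { AgentContext } from '@agentuity/sdk';
import { getPromptType } from './prompt';

// Illustrative routing only (not part of this PR): pick a RAG strategy
// based on the classified prompt type.
async function routeQuestion(ctx: AgentContext, prompt: string) {
  const promptType = await getPromptType(ctx, prompt);
  if (promptType === 'Thinking') {
    ctx.logger.info('Would run a multi-step (agentic) RAG flow');
  } else {
    ctx.logger.info('Would run the simple retrieval flow');
  }
  return promptType;
}
```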
115 changes: 115 additions & 0 deletions agent-docs/src/agents/doc-qa/rag.ts
@@ -0,0 +1,115 @@
import type { AgentContext } from '@agentuity/sdk';
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';

import { retrieveRelevantDocs } from './retriever';
import { AnswerSchema } from './types';
import type { Answer } from './types';

export default async function answerQuestion(ctx: AgentContext, prompt: string) {
const relevantDocs = await retrieveRelevantDocs(ctx, prompt);

const systemPrompt = `
You are Agentuity's developer-documentation assistant.

=== RULES ===
1. Use ONLY the content inside <DOCS> tags to craft your reply. If the required information is missing, state that the docs do not cover it.
2. Never fabricate or guess undocumented details.
3. Ambiguity handling:
• When <DOCS> contains more than one distinct workflow or context that could satisfy the question, do **not** choose for the user.
• Briefly (≤ 2 sentences each) summarise each plausible interpretation and ask **one** clarifying question so the user can pick a path.
• Provide a definitive answer only after the ambiguity is resolved.
4. Answer style:
• If the question can be answered unambiguously from a single workflow, give a short, direct answer.
• Add an explanation only when the user explicitly asks for one.
• Format your response in **MDX (Markdown Extended)** format with proper syntax highlighting for code blocks.
• Use appropriate headings (##, ###) to structure longer responses.
• Wrap CLI commands in \`\`\`bash code blocks for proper syntax highlighting.
• Wrap code snippets in appropriate language blocks (e.g., \`\`\`typescript, \`\`\`json, \`\`\`javascript).
• Use **bold** for important terms and *italic* for emphasis when appropriate.
• Use > blockquotes for important notes or warnings.
5. You may suggest concise follow-up questions or related topics that are present in <DOCS>.
6. Keep a neutral, factual tone.

=== OUTPUT FORMAT ===
Return **valid JSON only** matching this TypeScript type:

type LlmAnswer = {
answer: string; // The reply in MDX format or the clarifying question
documents: string[]; // Paths of documents actually cited
}

The "answer" field should contain properly formatted MDX content that will render beautifully in a documentation site.
The "documents" field must contain the path to the documents you used to answer the question. On top of the path, you may include a specific heading of the document so that the navigation will take the user to the exact point of the document you reference. To format the heading, use the following convention: append the heading to the path using a hash symbol (#) followed by the heading text, replacing spaces with hyphens (-) and converting all characters to lowercase. If there are multiple identical headings, append an index to the heading in the format -index (e.g., #example-3 for the third occurrence of "Example"). For example, if the document path is "/docs/guide" and the heading is "Getting Started", the formatted path would be "/docs/guide#getting-started".
If you cited no documents, return an empty array. Do NOT wrap the JSON in Markdown or add any extra keys.

=== MDX FORMATTING EXAMPLES ===
For CLI commands:
\`\`\`bash
agentuity agent create my-agent "My agent description" bearer
\`\`\`

For code examples:
\`\`\`typescript
import type { AgentRequest, AgentResponse, AgentContext } from "@agentuity/sdk";

export default async function Agent(req: AgentRequest, resp: AgentResponse, ctx: AgentContext) {
return resp.json({hello: 'world'});
}
\`\`\`

For structured responses:
## Creating a New Agent

To create a new agent, use the CLI command:

\`\`\`bash
agentuity agent create [name] [description] [auth_type]
\`\`\`

**Parameters:**
- \`name\`: The agent name
- \`description\`: Agent description
- \`auth_type\`: Either \`bearer\` or \`none\`

> **Note**: This command will create the agent in the Agentuity Cloud and set up local files.

<QUESTION>
${prompt}
</QUESTION>

<DOCS>
${JSON.stringify(relevantDocs, null, 2)}
</DOCS>
`;

try {
const result = await generateObject({
model: openai('gpt-4o'),
system: systemPrompt,
prompt: prompt,
schema: AnswerSchema,
maxTokens: 2048,
});
return result.object;
} catch (error) {
ctx.logger.error('Error generating answer: %o', error);

// Fallback response with MDX formatting
const fallbackAnswer: Answer = {
answer: `## Error

I apologize, but I encountered an error while processing your question.

**Please try:**
- Rephrasing your question
- Being more specific about what you're looking for
- Checking if your question relates to Agentuity's documented features

> If the problem persists, please contact support.`,
documents: []
};

return fallbackAnswer;
}
}
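
The heading-anchor convention described in the system prompt could be captured in a small helper; this is an illustrative sketch, not code from the PR:

```typescript
// Illustrative only: mirrors the convention the system prompt describes,
// e.g. "/docs/guide" + "Getting Started" -> "/docs/guide#getting-started",
// with "-<index>" appended for repeated headings (e.g. "#example-3").
function formatHeadingAnchor(path: string, heading: string, occurrence = 1): string {
  const slug = heading.trim().toLowerCase().replace(/\s+/g, '-');
  return occurrence > 1 ? `${path}#${slug}-${occurrence}` : `${path}#${slug}`;
}
```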