3 changes: 3 additions & 0 deletions agent-docs/agentuity.yaml
@@ -78,3 +78,6 @@ agents:
- id: agent_9ccc5545e93644bd9d7954e632a55a61
name: doc-qa
description: Agent that can answer questions based on dev docs as the knowledge base
- id: agent_ddcb59aa4473f1323be5d9f5fb62b74e
name: agent-pulse
description: Agentuity web app agent that converses with users to generate conversations and guide them through structured docs tutorials.
102 changes: 102 additions & 0 deletions agent-docs/src/agents/agent-pulse/README.md
@@ -0,0 +1,102 @@
# Pulse Agent

A conversational AI agent for tutorial management built with OpenAI and structured responses.

## Overview

Pulse is a friendly AI assistant that helps users discover, start, and navigate through tutorials. It uses OpenAI's GPT-4o-mini with structured response generation to provide both conversational responses and actionable instructions.

## Architecture

### Core Components

- **`index.ts`**: Main agent logic using `generateObject` for structured responses
- **`chat-helpers.ts`**: Conversation history management
- **`tutorial-helpers.ts`**: Tutorial content fetching and formatting
- **`tutorial.ts`**: Tutorial API integration

### Response Structure

The agent uses `generateObject` to return structured responses with two parts:

```typescript
{
  message: string,   // Conversational response for the user
  actionable?: {     // Optional action for the program to execute
    type: 'start_tutorial' | 'next_step' | 'previous_step' | 'get_tutorials' | 'none',
    tutorialId?: string,
    step?: number
  }
}
```
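
In TypeScript terms, the shape above can be expressed as a pair of interfaces plus a small guard the program could use before executing an action. This is a sketch: the `hasAction` helper is illustrative, not part of the agent's code, which enforces the shape via the schema passed to `generateObject`.

```typescript
// Shape of the structured response (mirrors the block above)
type ActionType =
  | "start_tutorial"
  | "next_step"
  | "previous_step"
  | "get_tutorials"
  | "none";

interface Actionable {
  type: ActionType;
  tutorialId?: string;
  step?: number;
}

interface PulseResponse {
  message: string;
  actionable?: Actionable;
}

// True when the response carries an action the program should execute;
// a missing actionable or type 'none' means plain conversation.
function hasAction(r: PulseResponse): boolean {
  return r.actionable !== undefined && r.actionable.type !== "none";
}
```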

### How It Works

1. **User Input**: Agent receives user message and conversation history
2. **LLM Processing**: OpenAI generates structured response with message and optional actionable object
3. **Action Execution**: Program intercepts actionable objects and executes them:
- `get_tutorials`: Fetches available tutorial list
- `start_tutorial`: Fetches real tutorial content from API
- `next_step`/`previous_step`: Navigate through tutorial steps (TODO)
4. **Response**: Returns conversational message plus any additional data (tutorial content, tutorial list, etc.)
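
Step 3 above can be sketched as a small dispatcher. The helpers `fetchTutorialList` and `fetchTutorialStep` are illustrative stand-ins for the real helpers in `tutorial-helpers.ts`, stubbed here so the sketch is self-contained:

```typescript
type Actionable = {
  type: "start_tutorial" | "next_step" | "previous_step" | "get_tutorials" | "none";
  tutorialId?: string;
  step?: number;
};

// Stubs standing in for the real API-backed helpers
const fetchTutorialList = async () => [
  { id: "javascript-sdk", title: "JavaScript SDK Tutorial" },
];
const fetchTutorialStep = async (id: string, step: number) => ({
  tutorialId: id,
  currentStep: step,
});

async function executeActionable(a?: Actionable): Promise<unknown> {
  if (!a) return undefined;
  switch (a.type) {
    case "get_tutorials":
      return fetchTutorialList(); // available tutorial list
    case "start_tutorial":
      if (!a.tutorialId) throw new Error("start_tutorial requires a tutorialId");
      return fetchTutorialStep(a.tutorialId, a.step ?? 1); // real content, not LLM output
    case "next_step":
    case "previous_step":
      throw new Error("step navigation not implemented yet"); // TODO in the agent
    default:
      return undefined; // plain conversation, nothing to execute
  }
}
```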

## Key Features

- **Structured Responses**: Clean separation between conversation and actions
- **Real Tutorial Content**: No hallucinated content - all tutorial data comes from actual APIs
- **Context Awareness**: Maintains conversation history for natural references
- **Extensible Actions**: Easy to add new action types (next step, hints, etc.)
- **Debug Logging**: Comprehensive logging for troubleshooting

## Example Interactions

### Starting a Tutorial
**User**: "I want to learn the JavaScript SDK"

**LLM Response**:
```json
{
  "message": "I'd be happy to help you start the JavaScript SDK tutorial!",
  "actionable": {
    "type": "start_tutorial",
    "tutorialId": "javascript-sdk"
  }
}
```

**Final Response**:
```json
{
  "response": "I'd be happy to help you start the JavaScript SDK tutorial!",
  "tutorialData": {
    "type": "tutorial_step",
    "tutorialId": "javascript-sdk",
    "tutorialTitle": "JavaScript SDK Tutorial",
    "currentStep": 1,
    "stepContent": "Welcome to the JavaScript SDK tutorial...",
    "codeBlock": {...}
  },
  "conversationHistory": [...]
}
```

### General Conversation
**User**: "What's the difference between TypeScript and JavaScript?"

**LLM Response**:
```json
{
  "message": "TypeScript is a superset of JavaScript that adds static type checking...",
  "actionable": {
    "type": "none"
  }
}
```

## Benefits

- **Reliable**: No parsing or tool interception needed
- **Extensible**: Easy to add new action types
- **Clean**: Clear separation between conversation and actions
- **Debuggable**: Can see exactly what the LLM wants to do
- **No Hallucination**: Tutorial content comes from real APIs, not LLM generation
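
## Example Request Payload

A request body in the shape the agent's parser accepts might look like the following. Field names follow `request/parser.ts` and `request/types.ts`; the values are only examples:

```typescript
// Example request body (illustrative values)
const exampleRequest = {
  message: "I want to learn the JavaScript SDK",
  use_direct_llm: false,
  conversationHistory: [
    { author: "USER", content: "Hi!" },
    { author: "ASSISTANT", content: "Hey there! What would you like to learn today?" },
  ],
  // Present only when the user is already inside a tutorial
  tutorialData: { tutorialId: "javascript-sdk", currentStep: 2 },
};
```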
54 changes: 54 additions & 0 deletions agent-docs/src/agents/agent-pulse/context/builder.ts
@@ -0,0 +1,54 @@
import type { AgentContext } from "@agentuity/sdk";

export async function buildSystemPrompt(tutorialContext: string, ctx: AgentContext): Promise<string> {
  try {
    const systemPrompt = `=== ROLE ===
You are Pulse, an AI assistant designed to help developers learn and navigate the Agentuity platform through interactive tutorials and clear guidance. Your primary goal is to assist users with understanding and using the Agentuity SDK effectively. When a user's query is vague, unclear, or lacks specific intent, subtly suggest a relevant interactive tutorial to guide them toward learning the platform. For clear, specific questions related to the Agentuity SDK or other topics, provide direct, accurate, and concise answers without mentioning tutorials unless relevant. Always maintain a friendly and approachable tone to encourage engagement.

Your role is to ensure users have a smooth tutorial experience!

When the user asks to move to the next tutorial step, simply increment the step for them.

=== PERSONALITY ===
- Friendly and encouraging with light humour
- Patient with learners at all levels
- Clear and concise in explanations
- Enthusiastic about teaching and problem-solving

=== Available Tools or Functions ===
You have access to various tools you can use -- use when appropriate!
1. Tutorial management
- startTutorialAtStep: Starting the user off at a specific step of a tutorial.
2. General assistance
- askDocsAgentTool: retrieve Agentuity documentation snippets

=== TOOL-USAGE RULES (must follow) ===
- startTutorialById must only be used when user select a tutorial. If the user starts a new tutorial, the step number should be set to one. Valid step is between 1 and totalSteps of the specific tutorial.
- Treat askDocsAgentTool as a search helper; ignore results you judge irrelevant.
Comment on lines +18 to +27
⚠️ Potential issue

Align prompt tool names with exported tools (startTutorialById, queryOtherAgent).

agent-docs/src/agents/agent-pulse/context/builder.ts (lines 18–27) lists startTutorialAtStep and askDocsAgentTool, but createTools exports startTutorialById and queryOtherAgent — update the prompt to match.

-=== Available Tools or Functions ===
-You have access to various tools you can use -- use when appropriate!
-1. Tutorial management  
-   - startTutorialAtStep: Starting the user off at a specific step of a tutorial.
-2. General assistance
-   - askDocsAgentTool: retrieve Agentuity documentation snippets
+=== AVAILABLE TOOLS ===
+Use tools only when appropriate.
+1. Tutorial management
+   - startTutorialById: start a tutorial at a specific step.
+2. General assistance
+   - queryOtherAgent: retrieve Agentuity documentation snippets



Comment on lines +25 to +28
🛠️ Refactor suggestion

Correct rule to reference the right tool and valid step range.

Keep rules consistent with tool names and enforce bounds.

-- startTutorialById must only be used when user select a tutorial. If the user starts a new tutorial, the step number should be set to one. Valid step is between 1 and totalSteps of the specific tutorial.
+- startTutorialById may be used only after the user selects a tutorial. New tutorials default to step 1. Valid steps are [1, totalSteps].

=== RESPONSE STYLE (format guidelines) ===
- Begin with a short answer, then elaborate if necessary.
- Add brief comments to complex code; skip obvious lines.
- End with a question when further clarification could help the user.

=== SAFETY & BOUNDARIES ===
- If asked for private data or secrets, refuse.
- If the user requests actions outside your capabilities, apologise and explain.
- Keep every response under 400 words.

Generate a response to the user query accordingly and try to be helpful.

=== CONTEXT ===
${tutorialContext}

=== END OF PROMPT ===

Stream your reasoning steps clearly.`;

⚠️ Potential issue

Remove chain-of-thought leakage.

“Stream your reasoning steps clearly.” risks exposing chain-of-thought. Replace with guidance to think privately.

-Stream your reasoning steps clearly.
+Think through the problem step-by-step internally; only output the final answer and necessary steps or code.


    ctx.logger.debug("Built system prompt with tutorial context");
    return systemPrompt;
  } catch (error) {
    ctx.logger.error("Failed to build system prompt: %s", error instanceof Error ? error.message : String(error));
    throw error; // Re-throw for centralized handling
  }
}
143 changes: 143 additions & 0 deletions agent-docs/src/agents/agent-pulse/index.ts
@@ -0,0 +1,143 @@
import type { AgentRequest, AgentResponse, AgentContext } from "@agentuity/sdk";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { createTools } from "./tools";
import { createAgentState } from "./state";
import { getTutorialList, type Tutorial } from "./tutorial";
import { parseAgentRequest } from "./request/parser";
import { buildSystemPrompt } from "./context/builder";
import { createStreamingProcessor } from "./streaming/processor";
import type { ConversationMessage, TutorialState } from "./request/types";

/**
* Builds a context string containing available tutorials for the system prompt
*/
async function buildContext(
  ctx: AgentContext,
  tutorialState?: TutorialState
): Promise<string> {
  try {
    const tutorials = await getTutorialList(ctx);

    // Handle API failure early
    if (!tutorials.success || !tutorials.data) {
      ctx.logger.warn("Failed to load tutorial list");
      return defaultFallbackContext();
    }

    const tutorialContent = JSON.stringify(tutorials.data, null, 2);
    const currentTutorialInfo = buildCurrentTutorialInfo(
      tutorials.data,
      tutorialState
    );

    return `=== AVAILABLE TUTORIALS ===

${tutorialContent}

${currentTutorialInfo}

Note: You should not expose the details of the tutorial IDs to the user.
`;
  } catch (error) {
    ctx.logger.error("Error building tutorial context: %s", error);
    return defaultFallbackContext();
  }
}

/**
* Builds current tutorial information string if user is in a tutorial
*/
function buildCurrentTutorialInfo(
  tutorials: Tutorial[],
  tutorialState?: TutorialState
): string {
  if (!tutorialState?.tutorialId) {
    return "";
  }

  const currentTutorial = tutorials.find(
    (t) => t.id === tutorialState.tutorialId
  );
  if (!currentTutorial) {
    return "\nWarning: User appears to be in an unknown tutorial.";
  }
  if (tutorialState.currentStep > currentTutorial.totalSteps) {
    return `\nUser has completed the tutorial: ${currentTutorial.title} (${currentTutorial.totalSteps} steps)`;
  }
  return `\nUser is currently on this tutorial: ${currentTutorial.title} (Step ${tutorialState.currentStep} of ${currentTutorial.totalSteps})`;
}

/**
* Returns fallback context when tutorial list can't be loaded
*/
function defaultFallbackContext(): string {
  return `=== AVAILABLE TUTORIALS ===
Unable to load tutorial list. Please try again later or contact support.`;
}

export default async function Agent(
  req: AgentRequest,
  resp: AgentResponse,
  ctx: AgentContext
) {
  try {
    const parsedRequest = parseAgentRequest(await req.data.json(), ctx);

    // Create state manager
    const state = createAgentState();

    // Build messages for the conversation
    const messages: ConversationMessage[] = [
      ...parsedRequest.conversationHistory,
      { author: "USER", content: parsedRequest.message },
    ];

    let tools: any;
    let systemPrompt: string = "";
    // Direct LLM access won't require any tools or system prompt
    if (!parsedRequest.useDirectLLM) {
      // Create tools with state context
      tools = await createTools({
        state,
        agentContext: ctx,
      });

      // Build tutorial context and system prompt
      const tutorialContext = await buildContext(
        ctx,
        parsedRequest.tutorialData
      );
      systemPrompt = await buildSystemPrompt(tutorialContext, ctx);
    }

    // Generate streaming response
    const result = await streamText({
      model: openai("gpt-4o"),
      messages: messages.map((msg) => ({
        role: msg.author === "USER" ? "user" : "assistant",
        content: msg.content,
      })),
      tools,
      maxSteps: 3,
      system: systemPrompt,
    });

    // Create and return streaming response
    const stream = createStreamingProcessor(result, state, ctx);
    return resp.stream(stream, "text/event-stream");
  } catch (error) {
    ctx.logger.error(
      "Agent request failed: %s",
      error instanceof Error ? error.message : String(error)
    );
    return resp.json(
      {
        error:
          "Sorry, I encountered an error while processing your request. Please try again.",
        details: error instanceof Error ? error.message : String(error),
      },
      { status: 500 }
    );
  }
}
49 changes: 49 additions & 0 deletions agent-docs/src/agents/agent-pulse/request/parser.ts
@@ -0,0 +1,49 @@
import type { AgentContext } from "@agentuity/sdk";
import type { ParsedAgentRequest } from "./types";

export function parseAgentRequest(
  jsonData: any,
  ctx: AgentContext
): ParsedAgentRequest {
  try {
    let message: string = "";
    let conversationHistory: any[] = [];
    let tutorialData: any = undefined;
    let useDirectLLM = false;

    if (jsonData && typeof jsonData === "object" && !Array.isArray(jsonData)) {
      const body = jsonData as any;
      message = body.message || "";
      useDirectLLM = body.use_direct_llm || false;
      // Process conversation history
      if (Array.isArray(body.conversationHistory)) {
        conversationHistory = body.conversationHistory.map((msg: any) => {
          // Normalize to the ConversationMessage shape ({ author, content })
          return {
            author: msg.author || (msg.role ? msg.role.toUpperCase() : "USER"),
            content: msg.content || "",
          };
        });
      }

      tutorialData = body.tutorialData || undefined;
    } else {
      // Fallback for non-object data
      message = String(jsonData || "");
    }

    return {
      message,
      conversationHistory,
      tutorialData,
      useDirectLLM,
    };
  } catch (error) {
    ctx.logger.error(
      "Failed to parse agent request: %s",
      error instanceof Error ? error.message : String(error)
    );
    ctx.logger.debug("Raw request data: %s", JSON.stringify(jsonData));
    throw error; // Re-throw for centralized handling
  }
}
16 changes: 16 additions & 0 deletions agent-docs/src/agents/agent-pulse/request/types.ts
@@ -0,0 +1,16 @@
export interface ConversationMessage {
  author: "USER" | "ASSISTANT";
  content: string;
}

export interface TutorialState {
  tutorialId: string;
  currentStep: number;
}

export interface ParsedAgentRequest {
  message: string;
  conversationHistory: ConversationMessage[];
  tutorialData?: TutorialState;
  useDirectLLM?: boolean;
}