Merged
3 changes: 3 additions & 0 deletions agent-docs/agentuity.yaml
@@ -78,3 +78,6 @@ agents:
- id: agent_9ccc5545e93644bd9d7954e632a55a61
name: doc-qa
description: Agent that can answer questions based on dev docs as the knowledge base
- id: agent_ddcb59aa4473f1323be5d9f5fb62b74e
name: agent-pulse
description: Agentuity web app agent that converses with users to generate conversations and structured docs tutorials.
Comment on lines +81 to +83
⚠️ Potential issue

🧩 Analysis chain

Agent entry looks fine; verify it’s wired.

Confirm this agent id/name are referenced by any launcher/routes and that CI bundling includes this agent.


🏁 Script executed:

#!/bin/bash
# Verify references to agent id/name and bundling
rg -nC2 -e 'agent-pulse' -e 'agent_ddcb59aa4473f1323be5d9f5fb62b74e'
# Confirm tsconfig/bundler includes src/agents
fd tsconfig.json -x cat {} | sed -n '1,200p'

Length of output: 1688


Agent declared but not wired or bundled — action required.

agent-docs/agentuity.yaml (lines 81–83) defines agent_ddcb59aa4473f1323be5d9f5fb62b74e / agent-pulse; ripgrep found no other references and root tsconfig.json excludes "agent-docs", so CI/build will not include it.

  • Wire the agent: add launcher/routes references to this agent id/name.
  • Include in build: move agent sources into an included path (e.g., src/agents) or update tsconfig/CI to stop excluding agent-docs.

102 changes: 102 additions & 0 deletions agent-docs/src/agents/agent-pulse/README.md
@@ -0,0 +1,102 @@
# Pulse Agent

A conversational AI agent for tutorial management built with OpenAI and structured responses.

## Overview

Pulse is a friendly AI assistant that helps users discover, start, and navigate through tutorials. It uses OpenAI's GPT-4o-mini with structured response generation to provide both conversational responses and actionable instructions.

## Architecture

### Core Components

- **`index.ts`**: Main agent logic using `generateObject` for structured responses
- **`chat-helpers.ts`**: Conversation history management
- **`tutorial-helpers.ts`**: Tutorial content fetching and formatting
- **`tutorial.ts`**: Tutorial API integration

### Response Structure

The agent uses `generateObject` to return structured responses with two parts:

```typescript
{
message: string, // Conversational response for the user
actionable?: { // Optional action for the program to execute
type: 'start_tutorial' | 'next_step' | 'previous_step' | 'get_tutorials' | 'none',
tutorialId?: string,
step?: number
}
}
```

### How It Works

1. **User Input**: Agent receives user message and conversation history
2. **LLM Processing**: OpenAI generates structured response with message and optional actionable object
3. **Action Execution**: Program intercepts actionable objects and executes them:
- `get_tutorials`: Fetches available tutorial list
- `start_tutorial`: Fetches real tutorial content from API
- `next_step`/`previous_step`: Navigate through tutorial steps (TODO)
4. **Response**: Returns conversational message plus any additional data (tutorial content, tutorial list, etc.)
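
The interception in step 3 can be sketched in TypeScript (a minimal illustration only; the `Actionable` union mirrors the response structure above, and `describeAction` is a hypothetical stand-in for the real dispatch logic):

```typescript
// Shape of the structured LLM response (see "Response Structure" above).
type Actionable =
  | { type: "start_tutorial"; tutorialId: string }
  | { type: "next_step" | "previous_step"; step?: number }
  | { type: "get_tutorials" }
  | { type: "none" };

interface AgentReply {
  message: string;
  actionable?: Actionable;
}

// The program switches on the actionable type; each branch would call the
// corresponding API (fetch tutorial content, list tutorials, navigate steps).
function describeAction(reply: AgentReply): string {
  switch (reply.actionable?.type) {
    case "start_tutorial":
      return `fetch tutorial ${reply.actionable.tutorialId}`;
    case "get_tutorials":
      return "fetch tutorial list";
    case "next_step":
    case "previous_step":
      return "navigate steps";
    default:
      return "no action"; // plain conversation, nothing to execute
  }
}
```

The discriminated union lets the compiler check that `tutorialId` is only accessed on the `start_tutorial` branch.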

## Key Features

- **Structured Responses**: Clean separation between conversation and actions
- **Real Tutorial Content**: No hallucinated content - all tutorial data comes from actual APIs
- **Context Awareness**: Maintains conversation history for natural references
- **Extensible Actions**: Easy to add new action types (next step, hints, etc.)
- **Debug Logging**: Comprehensive logging for troubleshooting

## Example Interactions

### Starting a Tutorial
**User**: "I want to learn the JavaScript SDK"

**LLM Response**:
```json
{
"message": "I'd be happy to help you start the JavaScript SDK tutorial!",
"actionable": {
"type": "start_tutorial",
"tutorialId": "javascript-sdk"
}
}
```

**Final Response**:
```json
{
"response": "I'd be happy to help you start the JavaScript SDK tutorial!",
"tutorialData": {
"type": "tutorial_step",
"tutorialId": "javascript-sdk",
"tutorialTitle": "JavaScript SDK Tutorial",
"currentStep": 1,
"stepContent": "Welcome to the JavaScript SDK tutorial...",
"codeBlock": {...}
},
"conversationHistory": [...]
}
```

### General Conversation
**User**: "What's the difference between TypeScript and JavaScript?"

**LLM Response**:
```json
{
"message": "TypeScript is a superset of JavaScript that adds static type checking...",
"actionable": {
"type": "none"
}
}
```

## Benefits

- **Reliable**: No parsing or tool interception needed
- **Extensible**: Easy to add new action types
- **Clean**: Clear separation between conversation and actions
- **Debuggable**: Can see exactly what the LLM wants to do
- **No Hallucination**: Tutorial content comes from real APIs, not LLM generation
54 changes: 54 additions & 0 deletions agent-docs/src/agents/agent-pulse/context/builder.ts
@@ -0,0 +1,54 @@
import type { AgentContext } from "@agentuity/sdk";

export async function buildSystemPrompt(tutorialContext: string, ctx: AgentContext): Promise<string> {
try {
const systemPrompt = `=== ROLE ===
You are Pulse, an AI assistant designed to help developers learn and navigate the Agentuity platform through interactive tutorials and clear guidance. Your primary goal is to assist users with understanding and using the Agentuity SDK effectively. When a user's query is vague, unclear, or lacks specific intent, subtly suggest a relevant interactive tutorial to guide them toward learning the platform. For clear, specific questions related to the Agentuity SDK or other topics, provide direct, accurate, and concise answers without mentioning tutorials unless relevant. Always maintain a friendly and approachable tone to encourage engagement.
Your role is to ensure users have a smooth tutorial experience!
When user is asking to move to the next tutorial, simply increment the step for them.
Comment on lines +10 to +11
⚠️ Potential issue

Avoid chain‑of‑thought leakage; add bounds guidance.

  • Don’t instruct the model to “stream reasoning steps.”
  • Be explicit about step bounds (1..totalSteps) when incrementing.

Apply this diff:

-When user is asking to move to the next tutorial, simply increment the step for them.
+When the user asks to go to the next tutorial step, increment the step by 1 within [1..totalSteps]. If already at the last step, confirm whether to finish or review.

-=== TOOL-USAGE RULES (must follow) ===
+=== TOOL-USAGE RULES (must follow) ===
 - startTutorialById must only be used when user select a tutorial. If the user starts a new tutorial, the step number should be set to one. Valid step is between 1 and totalSteps of the specific tutorial.
 - Treat askDocsAgentTool as a search helper; ignore results you judge irrelevant.
+ - Never reveal internal chain-of-thought; provide final answers and concise reasoning only when asked.

-Stream your reasoning steps clearly.
+Do not reveal your internal reasoning process; output user-facing content only.

Also applies to: 26-28, 46-46

🤖 Prompt for AI Agents
In agent-docs/src/agents/agent-pulse/context/builder.ts around lines 10-11 (also
apply same changes at 26-28 and 46), the handler for "move to the next tutorial"
currently may prompt chain‑of‑thought style reasoning and lacks explicit bounds
for steps; change the instruction so it does not ask the model to stream or show
internal reasoning and instead clearly says "increment the current step by 1"
and enforce bounds by clamping the result to the inclusive range [1, totalSteps]
(i.e., newStep = max(1, min(totalSteps, currentStep + 1))) and return only the
updated step value. Ensure the wording sent to the model is concise, forbids
revealing internal chain-of-thought, and includes the exact numeric bounds
1..totalSteps.

=== PERSONALITY ===
- Friendly and encouraging with light humour
- Patient with learners at all levels
- Clear and concise in explanations
- Enthusiastic about teaching and problem-solving
=== Available Tools or Functions ===
You have access to various tools you can use -- use when appropriate!
1. Tutorial management
- startTutorialAtStep: Starting the user off at a specific step of a tutorial.
2. General assistance
- askDocsAgentTool: retrieve Agentuity documentation snippets
=== TOOL-USAGE RULES (must follow) ===
- startTutorialById must only be used when user select a tutorial. If the user starts a new tutorial, the step number should be set to one. Valid step is between 1 and totalSteps of the specific tutorial.
- Treat askDocsAgentTool as a search helper; ignore results you judge irrelevant.
Comment on lines +18 to +28
⚠️ Potential issue

Tool name mismatches will confuse the model.

Prompt lists startTutorialAtStep and askDocsAgentTool, while other modules refer to startTutorialById and queryOtherAgent. Unify names in the system prompt.

Apply this diff:

-1. Tutorial management  
-   - startTutorialAtStep: Starting the user off at a specific step of a tutorial.
-2. General assistance
-   - askDocsAgentTool: retrieve Agentuity documentation snippets
+1. Tutorial management
+   - startTutorialById: Start a tutorial at a specific step (defaults to step 1).
+2. General assistance
+   - queryOtherAgent: Retrieve Agentuity documentation snippets
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
=== Available Tools or Functions ===
You have access to various tools you can use -- use when appropriate!
1. Tutorial management
- startTutorialAtStep: Starting the user off at a specific step of a tutorial.
2. General assistance
- askDocsAgentTool: retrieve Agentuity documentation snippets
=== TOOL-USAGE RULES (must follow) ===
- startTutorialById must only be used when user select a tutorial. If the user starts a new tutorial, the step number should be set to one. Valid step is between 1 and totalSteps of the specific tutorial.
- Treat askDocsAgentTool as a search helper; ignore results you judge irrelevant.
=== Available Tools or Functions ===
You have access to various tools you can use -- use when appropriate!
1. Tutorial management
- startTutorialById: Start a tutorial at a specific step (defaults to step 1).
2. General assistance
- queryOtherAgent: Retrieve Agentuity documentation snippets
=== TOOL-USAGE RULES (must follow) ===
- startTutorialById must only be used when user select a tutorial. If the user starts a new tutorial, the step number should be set to one. Valid step is between 1 and totalSteps of the specific tutorial.
- Treat askDocsAgentTool as a search helper; ignore results you judge irrelevant.
🤖 Prompt for AI Agents
In agent-docs/src/agents/agent-pulse/context/builder.ts around lines 18 to 28,
the prompt defines tool names startTutorialAtStep and askDocsAgentTool which
conflict with other modules that expect startTutorialById and queryOtherAgent;
update the prompt to use the canonical names (startTutorialById and
queryOtherAgent) and adjust the TOOL-USAGE RULES accordingly (e.g., ensure the
rule about starting tutorials references startTutorialById and its step
constraints and treat queryOtherAgent as the search helper), or alternatively
rename references in other modules to match this file—pick one consistent naming
choice across the codebase and apply it here so tool names and rules are
identical with the rest of the system.

=== RESPONSE STYLE (format guidelines) ===
- Begin with a short answer, then elaborate if necessary.
- Add brief comments to complex code; skip obvious lines.
- End with a question when further clarification could help the user.
=== SAFETY & BOUNDARIES ===
- If asked for private data or secrets, refuse.
- If the user requests actions outside your capabilities, apologise and explain.
- Keep every response < 400 words
Generate a response to the user query accordingly and try to be helpful
=== CONTEXT ===
${tutorialContext}
=== END OF PROMPT ===
Stream your reasoning steps clearly.`;

ctx.logger.debug("Built system prompt with tutorial context");
return systemPrompt;
} catch (error) {
ctx.logger.error("Failed to build system prompt: %s", error instanceof Error ? error.message : String(error));
throw error; // Re-throw for centralized handling
}
}
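
The bounds rule the review proposes (`newStep = max(1, min(totalSteps, currentStep + 1))`) can be sketched as a small helper (hypothetical; `clampStep`, `nextStep`, and `prevStep` are illustrative names, not part of this module):

```typescript
// Clamp any requested step into the inclusive range [1, totalSteps].
function clampStep(requested: number, totalSteps: number): number {
  return Math.max(1, Math.min(totalSteps, requested));
}

// Navigation becomes clampStep(currentStep ± 1, totalSteps), so "next"
// at the last step stays at the last step instead of overrunning.
const nextStep = (current: number, total: number) => clampStep(current + 1, total);
const prevStep = (current: number, total: number) => clampStep(current - 1, total);
```

Clamping in the program, rather than trusting the model to respect the range, keeps an out-of-bounds step from ever reaching the tutorial API.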
143 changes: 143 additions & 0 deletions agent-docs/src/agents/agent-pulse/index.ts
@@ -0,0 +1,143 @@
import type { AgentRequest, AgentResponse, AgentContext } from "@agentuity/sdk";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { createTools } from "./tools";
import { createAgentState } from "./state";
import { getTutorialList, type Tutorial } from "./tutorial";
import { parseAgentRequest } from "./request/parser";
import { buildSystemPrompt } from "./context/builder";
import { createStreamingProcessor } from "./streaming/processor";
import type { ConversationMessage, TutorialState } from "./request/types";

/**
* Builds a context string containing available tutorials for the system prompt
*/
async function buildContext(
ctx: AgentContext,
tutorialState?: TutorialState
): Promise<string> {
try {
const tutorials = await getTutorialList(ctx);

// Handle API failure early
if (!tutorials.success || !tutorials.data) {
ctx.logger.warn("Failed to load tutorial list");
return defaultFallbackContext();
}

const tutorialContent = JSON.stringify(tutorials.data, null, 2);
const currentTutorialInfo = buildCurrentTutorialInfo(
tutorials.data,
tutorialState
);

return `===AVAILABLE TUTORIALS====
${tutorialContent}
${currentTutorialInfo}
Note: You should not expose the details of the tutorial IDs to the user.
`;
} catch (error) {
ctx.logger.error("Error building tutorial context: %s", error);
return defaultFallbackContext();
}
}

/**
* Builds current tutorial information string if user is in a tutorial
*/
function buildCurrentTutorialInfo(
tutorials: Tutorial[],
tutorialState?: TutorialState
): string {
if (!tutorialState?.tutorialId) {
return "";
}

const currentTutorial = tutorials.find(
(t) => t.id === tutorialState.tutorialId
);
if (!currentTutorial) {
return "\nWarning: User appears to be in an unknown tutorial.";
}
if (tutorialState.currentStep > currentTutorial.totalSteps) {
return `\nUser has completed the tutorial: ${currentTutorial.title} (${currentTutorial.totalSteps} steps)`;
}
return `\nUser is currently on this tutorial: ${currentTutorial.title} (Step ${tutorialState.currentStep} of ${currentTutorial.totalSteps})`;
}

/**
* Returns fallback context when tutorial list can't be loaded
*/
function defaultFallbackContext(): string {
return `===AVAILABLE TUTORIALS====
Unable to load tutorial list. Please try again later or contact support.`;
}

export default async function Agent(
req: AgentRequest,
resp: AgentResponse,
ctx: AgentContext
) {
try {
const parsedRequest = parseAgentRequest(await req.data.json(), ctx);

// Create state manager
const state = createAgentState();

// Build messages for the conversation
const messages: ConversationMessage[] = [
...parsedRequest.conversationHistory,
{ author: "USER", content: parsedRequest.message },
];

Comment on lines +90 to +95

⚠️ Potential issue

Conversation history is mapped incorrectly (assistant/user roles flipped)

parseAgentRequest emits items with role (or sometimes author). Checking msg.author will be undefined for history, mapping them to assistant.

-        // Build messages for the conversation
-        const messages: ConversationMessage[] = [
-            ...parsedRequest.conversationHistory,
-            { author: "USER", content: parsedRequest.message },
-        ];
+        // Normalize to OpenAI messages
+        const llmMessages = [
+            ...parsedRequest.conversationHistory.map((msg: any) => {
+                const src = (msg.role ?? msg.author ?? "USER").toString().toLowerCase();
+                const role = src.startsWith("user") ? "user" : "assistant";
+                return { role, content: msg.content ?? "" };
+            }),
+            { role: "user", content: parsedRequest.message },
+        ];
...
-            messages: messages.map((msg) => ({
-                role: msg.author === "USER" ? "user" : "assistant",
-                content: msg.content,
-            })),
+            messages: llmMessages,

Also applies to: 114-121

🤖 Prompt for AI Agents
In agent-docs/src/agents/agent-pulse/index.ts around lines 90-95 (and likewise
at 114-121) the conversation history is using msg.author which is undefined
because parseAgentRequest emits items with role (or sometimes author); update
the mapping to read msg.role first and fall back to msg.author, then normalize
role values to the ConversationMessage.author enum (e.g., map "user" -> "USER",
"assistant" -> "ASSISTANT", handle casing), so history entries use the correct
USER/ASSISTANT author values instead of being flipped or undefined.

let tools: any;
let systemPrompt: string = "";
// Direct LLM access won't require any tools or system prompt
if (!parsedRequest.useDirectLLM) {
// Create tools with state context
tools = await createTools({
state,
agentContext: ctx,
});

// Build tutorial context and system prompt
const tutorialContext = await buildContext(
ctx,
parsedRequest.tutorialData
);
systemPrompt = await buildSystemPrompt(tutorialContext, ctx);
}

// Generate streaming response
const result = await streamText({
model: openai("gpt-4o"),
messages: messages.map((msg) => ({
role: msg.author === "USER" ? "user" : "assistant",
content: msg.content,
})),
tools,
maxSteps: 3,
system: systemPrompt,
});

// Create and return streaming response
const stream = createStreamingProcessor(result, state, ctx);
return resp.stream(stream, "text/event-stream");
} catch (error) {
ctx.logger.error(
"Agent request failed: %s",
error instanceof Error ? error.message : String(error)
);
return resp.json(
{
error:
"Sorry, I encountered an error while processing your request. Please try again.",
details: error instanceof Error ? error.message : String(error),
},
{ status: 500 }
);
}
}
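
The role normalization the review above suggests can be isolated into a small helper (a sketch; `normalizeRole` is a hypothetical name that mirrors the review's fallback logic):

```typescript
type LlmRole = "user" | "assistant";

// Accept history items carrying either `role` or `author` in any casing and
// map them to OpenAI-style roles. Missing values default to "USER", and any
// value not recognizably "user" maps to "assistant", matching the review.
function normalizeRole(msg: { role?: string; author?: string }): LlmRole {
  const src = (msg.role ?? msg.author ?? "USER").toString().toLowerCase();
  return src.startsWith("user") ? "user" : "assistant";
}
```

Because the helper accepts both field names, it works on history produced before and after the parser fix.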
49 changes: 49 additions & 0 deletions agent-docs/src/agents/agent-pulse/request/parser.ts
@@ -0,0 +1,49 @@
import type { AgentContext } from "@agentuity/sdk";
import type { ParsedAgentRequest } from "./types";

export function parseAgentRequest(
jsonData: any,
ctx: AgentContext
): ParsedAgentRequest {
try {
let message: string = "";
let conversationHistory: any[] = [];
let tutorialData: any = undefined;
let useDirectLLM = false;

Comment on lines +9 to +13

⚠️ Potential issue

🧩 Analysis chain

Type/shape mismatch and over-logging of request data.

  • Parser emits { role, content }, but ConversationMessage expects { author, content }. This will break downstream typing/logic.
  • Remove trivially inferrable type annotations.
  • Avoid logging raw request payloads; log minimal metadata.

Apply this diff:

-import type { ParsedAgentRequest } from "./types";
+import type { ParsedAgentRequest, ConversationMessage, TutorialState } from "./types";

-    let message: string = "";
-    let conversationHistory: any[] = [];
-    let tutorialData: any = undefined;
+    let message = "";
+    let conversationHistory: ConversationMessage[] = [];
+    let tutorialData: TutorialState | undefined = undefined;
     let useDirectLLM = false;
@@
-            if (Array.isArray(body.conversationHistory)) {
-                conversationHistory = body.conversationHistory.map((msg: any) => {
-                    // Extract only role and content
-                    return {
-                        role: msg.role || (msg.author ? msg.author.toUpperCase() : "USER"),
-                        content: msg.content || "",
-                    };
-                });
-            }
+            if (Array.isArray(body.conversationHistory)) {
+                conversationHistory = body.conversationHistory.flatMap((msg: any): ConversationMessage[] => {
+                    const roleRaw = (msg.role ?? msg.author ?? "USER").toString().toUpperCase();
+                    const author: ConversationMessage["author"] = roleRaw === "ASSISTANT" ? "ASSISTANT" : "USER";
+                    const content = typeof msg.content === "string" ? msg.content : "";
+                    return content ? [{ author, content }] : [];
+                });
+            }
@@
-        ctx.logger.debug("Raw request data: %s", JSON.stringify(jsonData));
+        // Avoid logging raw payloads to reduce PII risk.
+        try {
+            const meta =
+                jsonData && typeof jsonData === "object" && !Array.isArray(jsonData)
+                    ? { keys: Object.keys(jsonData).slice(0, 12) }
+                    : { type: typeof jsonData };
+            ctx.logger.debug("Request meta: %s", JSON.stringify(meta));
+        } catch {
+            /* ignore logging meta errors */
+        }

Also applies to: 20-26, 41-47


🏁 Script executed:

#!/bin/bash
# Verify no remaining 'role' keys in conversation history shapes
rg -n -C1 '\bconversationHistory\b' --type=ts
rg -nP '\bauthor\s*:' agent-docs/src/agents/agent-pulse --type=ts

Length of output: 2350


Fix ConversationMessage shape & stop logging raw payloads

Parser still emits { role, content }, keeps needless primitive annotations, and logs the raw request — this breaks downstream typing and risks leaking PII. Apply the patch below to use ConversationMessage/TutorialState, map to author properly, and replace raw JSON logging with a small request meta.

File: agent-docs/src/agents/agent-pulse/request/parser.ts

-import type { ParsedAgentRequest } from "./types";
+import type { ParsedAgentRequest, ConversationMessage, TutorialState } from "./types";

-    let message: string = "";
-    let conversationHistory: any[] = [];
-    let tutorialData: any = undefined;
+    let message = "";
+    let conversationHistory: ConversationMessage[] = [];
+    let tutorialData: TutorialState | undefined = undefined;
     let useDirectLLM = false;
@@
-            if (Array.isArray(body.conversationHistory)) {
-                conversationHistory = body.conversationHistory.map((msg: any) => {
-                    // Extract only role and content
-                    return {
-                        role: msg.role || (msg.author ? msg.author.toUpperCase() : "USER"),
-                        content: msg.content || "",
-                    };
-                });
-            }
+            if (Array.isArray(body.conversationHistory)) {
+                conversationHistory = body.conversationHistory.flatMap((msg: any): ConversationMessage[] => {
+                    const roleRaw = (msg.role ?? msg.author ?? "USER").toString().toUpperCase();
+                    const author: ConversationMessage["author"] = roleRaw === "ASSISTANT" ? "ASSISTANT" : "USER";
+                    const content = typeof msg.content === "string" ? msg.content : "";
+                    return content ? [{ author, content }] : [];
+                });
+            }
@@
-        ctx.logger.debug("Raw request data: %s", JSON.stringify(jsonData));
+        // Avoid logging raw payloads to reduce PII risk.
+        try {
+            const meta =
+                jsonData && typeof jsonData === "object" && !Array.isArray(jsonData)
+                    ? { keys: Object.keys(jsonData).slice(0, 12) }
+                    : { type: typeof jsonData };
+            ctx.logger.debug("Request meta: %s", JSON.stringify(meta));
+        } catch {
+            /* ignore logging meta errors */
+        }
📝 Committable suggestion


Suggested change
let message: string = "";
let conversationHistory: any[] = [];
let tutorialData: any = undefined;
let useDirectLLM = false;
let message = "";
let conversationHistory: ConversationMessage[] = [];
let tutorialData: TutorialState | undefined = undefined;
let useDirectLLM = false;
🧰 Tools
🪛 Biome (2.1.2)

[error] 9-9: This type annotation is trivially inferred from its initialization.

Safe fix: Remove the type annotation.

(lint/style/noInferrableTypes)

if (jsonData && typeof jsonData === "object" && !Array.isArray(jsonData)) {
const body = jsonData as any;
message = body.message || "";
useDirectLLM = body.use_direct_llm || false;
// Process conversation history
if (Array.isArray(body.conversationHistory)) {
conversationHistory = body.conversationHistory.map((msg: any) => {
// Extract only role and content
return {
role: msg.role || (msg.author ? msg.author.toUpperCase() : "USER"),
content: msg.content || "",
};
});
}

tutorialData = body.tutorialData || undefined;
} else {
// Fallback for non-object data
message = String(jsonData || "");
}

return {
message,
conversationHistory,
tutorialData,
useDirectLLM,
};
} catch (error) {
ctx.logger.error(
"Failed to parse agent request: %s",
error instanceof Error ? error.message : String(error)
);
ctx.logger.debug("Raw request data: %s", JSON.stringify(jsonData));
throw error; // Re-throw for centralized handling
}
}
16 changes: 16 additions & 0 deletions agent-docs/src/agents/agent-pulse/request/types.ts
@@ -0,0 +1,16 @@
export interface ConversationMessage {
author: "USER" | "ASSISTANT";
content: string;
}

export interface TutorialState {
tutorialId: string;
currentStep: number;
}

export interface ParsedAgentRequest {
message: string;
conversationHistory: ConversationMessage[];
tutorialData?: TutorialState;
useDirectLLM?: boolean;
}
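
A request payload matching these types might look like the following (illustrative only; the field names `message`, `use_direct_llm`, `conversationHistory`, and `tutorialData` follow the parser above):

```typescript
// Illustrative payload shape accepted by parseAgentRequest.
const examplePayload = {
  message: "Take me to the next step",
  use_direct_llm: false,
  conversationHistory: [
    { author: "USER", content: "I want to learn the JavaScript SDK" },
    { author: "ASSISTANT", content: "Starting the JavaScript SDK tutorial!" },
  ],
  tutorialData: { tutorialId: "javascript-sdk", currentStep: 1 },
};
```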