
monchewharry/TopicAIChatBot


A multi-modal + RAG AI chatbot for Topic, with tool support.

Dev Note

domain/topic knowledge RAG

  • resources
    • user chat attachments
    • database-backed domain/topic knowledge base
  • indexing
    • convert raw markdown to JSON (_scripts)
    • leverage the Obsidian community plugin dataviewjs
    • store embeddings in Postgres (vector extension)
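The markdown-to-JSON step above can be sketched as a small chunker that turns raw notes into records ready for embedding. `NoteChunk` and `chunkMarkdown` are hypothetical names; the actual `_scripts` pipeline (and the dataviewjs export) may differ.

```typescript
// Hypothetical sketch of the indexing step: split raw markdown into
// JSON records that can later be embedded and stored in Postgres.
interface NoteChunk {
  source: string;   // originating markdown file
  heading: string;  // nearest section heading
  content: string;  // chunk text to embed
}

function chunkMarkdown(source: string, markdown: string): NoteChunk[] {
  const chunks: NoteChunk[] = [];
  let heading = "";
  let buffer: string[] = [];

  const flush = () => {
    const content = buffer.join("\n").trim();
    if (content) chunks.push({ source, heading, content });
    buffer = [];
  };

  for (const line of markdown.split("\n")) {
    if (line.startsWith("#")) {
      flush(); // close the previous section before starting a new one
      heading = line.replace(/^#+\s*/, "");
    } else {
      buffer.push(line);
    }
  }
  flush();
  return chunks;
}
```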

DataBase

  • Table Schema: schema.ts
  • Save Chat
    • Response messages: assistant messages, server tool calls and results
    • Request messages: user messages, client tool results

Note

Saving the client-side tool's result from the request message: the client-side tool's tool-call arrives in an assistant response message, while its tool-result arrives in the next request message if the user asks another question on top of the tool's result.
The message ids are identical for these two messages (from onToolCall), so when saving both to the database we need to generate a new message id for the tool-result.
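A minimal sketch of that save path, assuming a simplified message shape (the real one lives in schema.ts): before persisting, any message that reuses an already-seen id gets a fresh one. `reidentifyToolResult` is a hypothetical helper.

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical simplified shape; the real schema is in schema.ts.
interface ChatMessage {
  id: string;
  role: "user" | "assistant";
  content: unknown;
}

// The tool-call and tool-result share one message id (from onToolCall).
// Give the duplicate a fresh id so both rows can be saved without
// colliding on the primary key.
function reidentifyToolResult(messages: ChatMessage[]): ChatMessage[] {
  const seen = new Set<string>();
  return messages.map((m) => {
    if (seen.has(m.id)) {
      return { ...m, id: randomUUID() }; // duplicate id: regenerate
    }
    seen.add(m.id);
    return m;
  });
}
```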

API Routes

  • Request/response for streamText() at /app/(chat)/api/chat
    • update the chat-saving logic to include topic metadata
  • Send attachments along with the message at /app/(chat)/api/files/upload and store the file in the Blob store (currently images only, not used for RAG).
    • expand allowedFileTypes to include document attachments as user-side RAG resources.
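The allowedFileTypes expansion could look like the sketch below; the exact MIME list and size cap are assumptions, not the repository's actual values.

```typescript
// Hypothetical sketch of the upload route's type check. The original
// template allowed only images; document types are added here so user
// attachments can feed the RAG pipeline.
const allowedFileTypes = [
  "image/jpeg",
  "image/png",
  "application/pdf",  // document attachments for RAG
  "text/markdown",
  "text/plain",
];

function isAllowedAttachment(file: { type: string; size: number }): boolean {
  const maxBytes = 5 * 1024 * 1024; // assumed 5 MB cap
  return allowedFileTypes.includes(file.type) && file.size <= maxBytes;
}
```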

Not touched:

  • get chats by userId at /app/(chat)/api/history
  • user rating at /app/(chat)/api/vote
  • block document at /app/(chat)/api/document

chat

  • /app/api/chat/route.ts

Note

sendExtraMessageFields: when sendExtraMessageFields: true is enabled in useChat, extra fields such as id and createdAt are added to each message. In particular, assistant-role messages may also carry a revisionId.
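The extra-fields shape described above can be illustrated with a small helper; this is illustrative only, not the SDK's implementation.

```typescript
// Sketch of what sendExtraMessageFields: true adds to each outgoing
// message. Field names (id, createdAt, revisionId) follow the note above.
interface ExtraFields {
  id: string;
  createdAt: Date;
  revisionId?: string; // may appear on assistant messages
}

function withExtraFields<T extends { role: string; content: string }>(
  message: T,
  id: string,
): T & ExtraFields {
  return { ...message, id, createdAt: new Date() };
}
```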

Chat UI

  • Display messages from the interactive session
  • Display messages from the database
  • Chat header: model selection, topic selection
  • Chat TopicInputValues
  • Suggested actions: append

Tool

  • Server-side tool: execute
  • Client-side tool: onToolCall
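The split above can be sketched as a dispatcher: a tool with an execute() runs on the server, while one without defers to the browser's onToolCall handler. `dispatchTool` and the registry shape are hypothetical, not the AI SDK's actual internals.

```typescript
// Hypothetical sketch: server-side tools carry execute(); client-side
// tools have no execute and are resolved in the browser via onToolCall.
type ToolDef = {
  execute?: (args: unknown) => Promise<unknown>;
};

async function dispatchTool(
  registry: Record<string, ToolDef>,
  name: string,
  args: unknown,
): Promise<unknown | undefined> {
  const tool = registry[name];
  if (tool?.execute) {
    return tool.execute(args); // server-side tool: run here
  }
  return undefined; // client-side tool: defer to onToolCall in the browser
}
```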

Note

onToolCall triggered by append: the onToolCall function reads topicInputValues directly, but because onToolCall is passed to the useChat hook once, it does not see later updates to topicInputValues — the function's closure captures the initial value (React closure behavior).

Original version (issue)

const [topicInputValues, setTopicInputValues] = useState<TopicInputs>(() => {....});

async function onToolCall(
  { toolCall }: { toolCall: ToolCall<string, unknown> }
): Promise<string | undefined | NatalChartDataOriginal[]> {
  // Bug: this closure captures the initial topicInputValues and
  // never sees later state updates.
  if (topicInputValues.topicId === 'topic-numerology') {
    console.log('topicInputValues.solarDateStr', topicInputValues.solarDateStr);
    const astrolabeData = bySolar(
      topicInputValues.solarDateStr,
      topicInputValues.timeIndex,
      topicInputValues.gender
    );
    ...
  }
};

Fixed version (use useRef to track state changes)

const [topicInputValues, setTopicInputValues] = useState<TopicInputs>(() => {....});
const topicInputValuesRef = useRef(topicInputValues);

useEffect(() => {
  topicInputValuesRef.current = topicInputValues;
}, [topicInputValues]);

async function onToolCall(
  { toolCall }: { toolCall: ToolCall<string, unknown> }
): Promise<string | undefined | NatalChartDataOriginal[]> {
  const currentTopicInputValues = topicInputValuesRef.current; // Get the latest value

  if (currentTopicInputValues.topicId === 'topic-numerology') {
    console.log('topicInputValues.solarDateStr', currentTopicInputValues.solarDateStr);
    const astrolabeData = bySolar(
      currentTopicInputValues.solarDateStr,
      currentTopicInputValues.timeIndex,
      currentTopicInputValues.gender
    );
    ...
  }
};

Client Tools inspired by those authors

Server tools

  • weather
  • coding
  • document editing

VS Code editor settings

// .vscode/settings.json
{
    // mute tailwind unknownAtRules warning 
    "files.associations": {
        "*.css": "tailwindcss"
    }
}

structure

tree -L 4 --gitignore


The following is from the original template's README.

Features

  • Next.js App Router
    • Advanced routing for seamless navigation and performance
    • React Server Components (RSCs) and Server Actions for server-side rendering and increased performance
  • AI SDK
    • Unified API for generating text, structured objects, and tool calls with LLMs
    • Hooks for building dynamic chat and generative user interfaces
    • Supports OpenAI (default), Anthropic, Cohere, and other model providers
  • shadcn/ui
  • Data Persistence
  • NextAuth.js
    • Simple and secure authentication

Model Providers

This template ships with OpenAI gpt-4o as the default. However, with the AI SDK, you can switch LLM providers to OpenAI, Anthropic, Cohere, and many more with just a few lines of code.
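For example, switching providers is essentially a one-line change of the model passed to the AI SDK call — a sketch, assuming the @ai-sdk/openai and @ai-sdk/anthropic provider packages are installed and API keys are configured:

```typescript
import { openai } from "@ai-sdk/openai";
// import { anthropic } from "@ai-sdk/anthropic";
import { streamText } from "ai";

const result = streamText({
  model: openai("gpt-4o"),
  // model: anthropic("claude-3-5-sonnet-20240620"), // swap the provider here
  prompt: "Hello",
});
```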

Deploy Your Own

You can deploy your own version of the Next.js AI Chatbot to Vercel with one click:

Deploy with Vercel

Running locally

You will need to use the environment variables defined in .env.example to run Next.js AI Chatbot. It's recommended you use Vercel Environment Variables for this, but a .env file is all that is necessary.

Note: You should not commit your .env file or it will expose secrets that will allow others to control access to your various OpenAI and authentication provider accounts.

  1. Install Vercel CLI: npm i -g vercel
  2. Link your local instance with your Vercel and GitHub accounts (creates a .vercel directory): vercel link
  3. Download your environment variables: vercel env pull
  4. Install dependencies: pnpm install
  5. Start the dev server: pnpm dev

Your app template should now be running on localhost:3000.

