# Vue + Node.js AI Agent Example

This repository contains a demonstration project illustrating how to build and integrate an autonomous AI agent into a web application using Vue 3 for the frontend, Node.js with Express for the backend, and LangChain.js for AI agent orchestration.
The example showcases a "Proactive E-commerce Assistant" capable of understanding user queries, leveraging custom tools (like a product search), and providing intelligent, personalized responses. This project serves as a practical companion to the InfoQ article: "[Your Article Title on InfoQ]" (Link will be added here once published).
## Table of Contents
- Features
- Architecture
- Prerequisites
- Setup and Installation
- Running the Application
- Key Code Concepts
- Understanding the AI Agent
- Contributing
- License
- About the Author
## Features

- Interactive Chat Interface: A simple Vue 3 chat UI for user-agent interaction.
- LangChain.js Agent: A backend AI agent powered by an LLM (e.g., OpenAI's GPT models) orchestrated with LangChain.js.
- Custom Tools: Demonstrates how to define and integrate custom tools (e.g., `product_search`) that the agent can use autonomously.
- Agent Memory: Basic implementation of conversational memory for context.
- Express.js API: Secure API endpoint for frontend-backend communication.
- Mock Data: Uses a local JSON file for product data simulation.
## Architecture

The application follows a client-server architecture:
- Frontend (Vue 3): Provides the user interface, sends user messages to the backend, and displays agent responses.
- Backend (Node.js with Express):
  - Exposes a `/api/agent/message` endpoint.
  - Hosts the LangChain.js AI agent.
  - Manages the agent's tools and memory.
  - Orchestrates LLM calls and tool execution.
- Large Language Model (LLM): The core intelligence, used by the LangChain.js agent for reasoning and response generation (e.g., OpenAI API).
- Tools: Functions or services (e.g., `productSearchTool`) that the agent can call to interact with external data or perform actions.
- Memory (Optional for this demo): Could be extended to include a vector database (like ChromaDB or Pinecone) for long-term memory or retrieval-augmented generation (RAG).
```
+----------------------+     +-------------------+     +-----------------+     +----------------+
|                      |     |                   |     |                 |     |                |
|    Vue 3 Frontend    |<--->|  Node.js Backend  |<--->|  LangChain.js   |<--->|  LLM Provider  |
| (AgentChatInterface) |     |   (Express API)   |     |   (AI Agent)    |     |  (OpenAI API)  |
|                      |     |                   |     |                 |     |                |
+----------------------+     +---------^---------+     +--------^--------+     +----------------+
                                       |                        |
                          +------------+-----------+  +---------+-------------+
                          |      Custom Tools      |  | Agent Memory (Buffer) |
                          | (e.g., product_search) |  | (Optional: Vector DB) |
                          +------------------------+  +-----------------------+
```
## Prerequisites

Before you begin, ensure you have the following installed:
- Node.js (LTS version recommended)
- npm (comes with Node.js) or yarn
- An OpenAI API Key
## Setup and Installation

1. Clone the repository (if you downloaded the source, just navigate to it):

   ```bash
   # If you created the structure manually, just be in the root directory.
   # If you cloned, then:
   # git clone https://github.com/[YourUsername]/vue-nodejs-ai-agent-example.git
   # cd vue-nodejs-ai-agent-example
   ```
2. Set up environment variables:

   - Create a `.env` file in the `backend` directory based on `.env.example`.
   - Add your OpenAI API key:

     ```bash
     # backend/.env
     OPENAI_API_KEY=your_openai_api_key_here
     ```

   - (Important: Never commit your `.env` file to Git.)
3. Install backend dependencies:

   ```bash
   cd backend
   npm install
   cd ..
   ```
4. Install frontend dependencies:

   ```bash
   cd frontend
   npm install
   cd ..
   ```
## Running the Application

1. Start the backend server:

   ```bash
   cd backend
   npm start
   # Or: node src/server.js if you don't add the start script
   ```

   The backend will typically run on `http://localhost:3000`.
2. Start the frontend development server:

   ```bash
   cd frontend
   npm run dev
   ```

   The frontend will typically run on `http://localhost:5173`.
3. Open your browser and navigate to the frontend URL to interact with the AI E-commerce Assistant.
## Key Code Concepts

- `backend/src/agents/ecommerceAgent.js`: Defines the core AI agent using LangChain.js. It orchestrates the LLM, defines the agent's prompt, sets up memory, and registers the tools the agent can use.
- `backend/src/tools/productSearchTool.js`: An example of a custom tool. The agent can "choose" to call this tool when it determines a product search is necessary. This abstracts complex logic (like querying a database) away from the agent's core reasoning.
- `backend/src/services/productService.js`: A mock service demonstrating how the `productSearchTool` might interact with a data source (here, a local JSON file).
- `backend/src/routes/agentRoutes.js`: The Express.js route that receives messages from the frontend and passes them to the LangChain.js agent for processing.
- `frontend/src/components/AgentChatInterface.vue`: The Vue 3 component handling the chat UI, sending user messages, and displaying agent responses (including the agent's thought process, if `verbose` is enabled on the agent).
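To make the tool/service split concrete, here is a simplified sketch of the search logic that a service like `productService.js` might implement: a keyword filter over mock product data. The product records, field names, and matching rule below are invented for illustration; the project's actual JSON data and fields may differ, and in the real code this function would be wrapped as a LangChain.js tool rather than called directly.

```javascript
// Mock catalog standing in for backend/src/data (illustrative records).
const products = [
  { name: 'Blue Cotton Shirt', category: 'shirts', price: 29.99 },
  { name: 'Red Wool Sweater', category: 'sweaters', price: 49.99 },
  { name: 'Navy Blue Polo Shirt', category: 'shirts', price: 34.99 },
];

// Return products whose name or category matches every keyword in the query.
function productSearch(query) {
  const terms = query.toLowerCase().split(/\s+/);
  return products.filter((p) =>
    terms.every((t) => `${p.name} ${p.category}`.toLowerCase().includes(t))
  );
}

console.log(productSearch('blue shirt').map((p) => p.name));
// → [ 'Blue Cotton Shirt', 'Navy Blue Polo Shirt' ]
```

Keeping the search logic in a plain function like this means the tool wrapper only has to describe *when* to call it; swapping the mock array for a real database query changes nothing the agent sees.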
## Understanding the AI Agent

The LangChain.js agent in this project operates on a "plan and execute" loop. When you send a message:

1. The agent receives your input.
2. It uses the LLM to reason about your request and determine whether it needs to use any of its available tools.
3. If a tool is needed (e.g., `product_search` for "find me a blue shirt"), the agent calls that tool.
4. The observation (result) from the tool call is fed back to the LLM.
5. The LLM then generates a final response to the user, potentially incorporating information from the tool's output.
6. This process repeats until the agent believes it has fulfilled the user's request.
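The loop above can be sketched with a mock LLM that first chooses a tool and then answers once it has an observation. In the real project this loop is handled by LangChain.js (e.g., an `AgentExecutor`); the decision logic, tool output, and step limit below are all invented for illustration.

```javascript
// Illustrative tool registry: name → function, mirroring product_search.
const tools = {
  product_search: (q) => `Found: Navy Blue Polo Shirt ($34.99) matching "${q}"`,
};

// Mock LLM: with no observations it decides to call the tool; once an
// observation exists, it produces a final answer. A real LLM makes this
// decision from the agent prompt and conversation history.
function mockLLM(input, observations) {
  if (observations.length === 0) {
    return { action: 'product_search', actionInput: input };
  }
  return { finalAnswer: `Based on our catalog: ${observations[0]}` };
}

// The reason → act → observe loop, with a step cap to avoid runaway loops.
function runAgent(userMessage, maxSteps = 5) {
  const observations = [];
  for (let step = 0; step < maxSteps; step++) {
    const decision = mockLLM(userMessage, observations);
    if (decision.finalAnswer) return decision.finalAnswer;
    // Execute the chosen tool and feed its result back as an observation.
    observations.push(tools[decision.action](decision.actionInput));
  }
  return 'Stopped: step limit reached.';
}

console.log(runAgent('blue shirt'));
```

The step cap is worth noting: production agent frameworks impose a similar `maxIterations` bound so a confused model cannot loop on tool calls forever.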
## Contributing

Feel free to fork this repository, open issues, and submit pull requests.
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## About the Author

[Your Name] is a [Your Title/Expertise]. You can connect with me on [LinkedIn Profile Link] or read my other articles on InfoQ, including the related piece: "[Your Article Title on InfoQ]" (Link will be added here once published).