This project now ships with a complete light/dark theme toggle, giving users a better visual experience:
- 🌞 Light theme: a clean, bright interface suited to daytime use
- 🌙 Dark theme: an eye-friendly dark interface suited to nighttime use
- 🔄 One-click toggle: click the sun/moon icon in the top-right corner to switch themes
- 💾 Preference memory: the user's theme choice is saved automatically
- 🎯 Smart detection: the system theme preference is detected automatically on first visit
- ⚡ Live switching: all components respond to theme changes in real time, with no page refresh needed
- React Context: global theme state managed with the Context API
- TypeScript: full type-safety support
- Tailwind CSS: a theming system built on CSS variables
- Local storage: theme settings persisted across sessions
- Accessibility: a theme toggle button that meets accessibility standards
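The preference logic described above (stored choice first, system preference as the first-visit fallback) can be sketched as plain TypeScript; the function names here are illustrative, not the project's actual code, and in the app this logic would live inside the React Context provider.

```typescript
// Resolve which theme to apply: a stored user preference wins;
// otherwise fall back to the detected system preference.
type Theme = "light" | "dark";

function resolveInitialTheme(
  stored: string | null,      // e.g. localStorage.getItem("theme")
  systemPrefersDark: boolean, // e.g. matchMedia("(prefers-color-scheme: dark)").matches
): Theme {
  if (stored === "light" || stored === "dark") return stored;
  return systemPrefersDark ? "dark" : "light";
}

// Toggling flips the theme; the caller persists the new choice
// (e.g. localStorage.setItem("theme", next)) and updates the UI.
function toggleTheme(current: Theme): Theme {
  return current === "light" ? "dark" : "light";
}
```

In the real provider, the resolved theme would also be written onto the document root (a `dark` class or CSS variables) so Tailwind styles react instantly without a page reload.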
This project demonstrates a fullstack application using a React frontend and a LangGraph-powered backend agent. The agent is designed to perform comprehensive research on a user's query by dynamically generating search terms, querying the web using Google Search, reflecting on the results to identify knowledge gaps, and iteratively refining its search until it can provide a well-supported answer with citations. This application serves as an example of building research-augmented conversational AI using LangGraph and Google's Gemini models.
- 💬 Fullstack application with a React frontend and LangGraph backend.
- 🧠 Powered by a LangGraph agent for advanced research and conversational AI.
- 🔍 Dynamic search query generation using Google Gemini models.
- 🌐 Integrated web research via Google Search API.
- 🤔 Reflective reasoning to identify knowledge gaps and refine searches.
- 📄 Generates answers with citations from gathered sources.
- 🎨 Light/dark theme toggle with one-click switching; the user's preference is saved automatically.
- 🔄 Hot-reloading for both frontend and backend during development.
The project is divided into two main directories:
- `frontend/`: Contains the React application built with Vite.
- `backend/`: Contains the LangGraph/FastAPI application, including the research agent logic.
Follow these steps to get the application running locally for development and testing.
1. Prerequisites:
- Node.js and npm (or yarn/pnpm)
- Python 3.8+
`GEMINI_API_KEY`: The backend agent requires a Google Gemini API key.
- Navigate to the `backend/` directory.
- Create a file named `.env` by copying the `backend/.env.example` file.
- Open the `.env` file and add your Gemini API key: `GEMINI_API_KEY="YOUR_ACTUAL_API_KEY"`
2. Install Dependencies:
Backend:
cd backend
pip install .
Frontend:
cd frontend
npm install
3. Run Development Servers:
Backend & Frontend:
make dev
This will run the backend and frontend development servers. Open your browser and navigate to the frontend development server URL (e.g., `http://localhost:5173/app`).

Alternatively, you can run the backend and frontend development servers separately. For the backend, open a terminal in the `backend/` directory and run `langgraph dev`. The backend API will be available at `http://127.0.0.1:2024`. It will also open a browser window to the LangGraph UI. For the frontend, open a terminal in the `frontend/` directory and run `npm run dev`. The frontend will be available at `http://localhost:5173`.
The core of the backend is a LangGraph agent defined in `backend/src/agent/graph.py`. It follows these steps:
- Generate Initial Queries: Based on your input, it generates a set of initial search queries using a Gemini model.
- Web Research: For each query, it uses the Gemini model with the Google Search API to find relevant web pages.
- Reflection & Knowledge Gap Analysis: The agent analyzes the search results to determine if the information is sufficient or if there are knowledge gaps. It uses a Gemini model for this reflection process.
- Iterative Refinement: If gaps are found or the information is insufficient, it generates follow-up queries and repeats the web research and reflection steps (up to a configured maximum number of loops).
- Finalize Answer: Once the research is deemed sufficient, the agent synthesizes the gathered information into a coherent answer, including citations from the web sources, using a Gemini model.
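The iterative loop above can be sketched as follows. This is a minimal illustration of the control flow only; all names and types here are hypothetical, not the actual graph nodes in `backend/src/agent/graph.py`.

```typescript
// Illustrative sketch of the research loop: search, reflect,
// and refine with follow-up queries up to a maximum loop count.
interface Reflection {
  sufficient: boolean;        // is the gathered information enough?
  followUpQueries: string[];  // queries to close remaining knowledge gaps
}

type SearchFn = (query: string) => string[];           // web research step
type ReflectFn = (findings: string[]) => Reflection;   // reflection step

function runResearchLoop(
  initialQueries: string[],
  search: SearchFn,
  reflect: ReflectFn,
  maxLoops = 3,
): string[] {
  let queries = initialQueries;
  const findings: string[] = [];
  for (let loop = 0; loop < maxLoops; loop++) {
    // Web research: gather results for every pending query.
    for (const q of queries) findings.push(...search(q));
    // Reflection: decide whether the evidence suffices, or
    // produce follow-up queries for the next iteration.
    const result = reflect(findings);
    if (result.sufficient || result.followUpQueries.length === 0) break;
    queries = result.followUpQueries;
  }
  return findings; // handed to the final answer-synthesis step
}
```

In the real agent, the search and reflection steps are Gemini model calls, and the loop cap corresponds to the configured maximum number of research loops.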
In production, the backend server serves the optimized static frontend build. LangGraph requires a Redis instance and a Postgres database. Redis is used as a pub-sub broker to enable streaming real-time output from background runs. Postgres is used to store assistants, threads, and runs, to persist thread state and long-term memory, and to manage the state of the background task queue with "exactly once" semantics. For more details on how to deploy the backend server, take a look at the LangGraph Documentation. Below is an example of how to build a Docker image that includes the optimized frontend build and the backend server and run it via `docker-compose`.
Note: For the docker-compose.yml example you need a LangSmith API key; you can get one from LangSmith.
Note: If you are not running the docker-compose.yml example, or are exposing the backend server to the public internet, you should update the `apiUrl` in the `frontend/src/App.tsx` file to point at your host. Currently the `apiUrl` is set to `http://localhost:8123` for docker-compose or `http://localhost:2024` for development.
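One way to express that choice is a small helper like the sketch below. This is an assumption for illustration, not the actual contents of `App.tsx`, which may simply hardcode the value as described above.

```typescript
// Hypothetical helper for picking the backend URL.
// An explicit host overrides everything; otherwise choose between
// the LangGraph dev server and the docker-compose production server.
function selectApiUrl(isDev: boolean, host?: string): string {
  if (host) return host;          // e.g. a public deployment URL
  return isDev
    ? "http://localhost:2024"     // langgraph dev server
    : "http://localhost:8123";    // docker-compose server
}
```

In a Vite app, `isDev` would typically come from `import.meta.env.DEV`.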
1. Build the Docker Image:
Run the following command from the project root directory:
docker build -t gemini-fullstack-langgraph -f Dockerfile .
2. Run the Production Server:
GEMINI_API_KEY=<your_gemini_api_key> LANGSMITH_API_KEY=<your_langsmith_api_key> docker-compose up
Open your browser and navigate to `http://localhost:8123/app/` to see the application. The API will be available at `http://localhost:8123`.
- React (with Vite) - For the frontend user interface.
- Tailwind CSS - For styling.
- Shadcn UI - For components.
- LangGraph - For building the backend research agent.
- Google Gemini - LLM for query generation, reflection, and answer synthesis.
This project is licensed under the Apache License 2.0. See the LICENSE file for details.