# AI Code Assistant

This project implements an AI code assistant with real-time streaming responses using Server-Sent Events (SSE). It consists of a FastAPI backend server and a simple HTML/JavaScript frontend.
## Project Structure

- `sse_server.py` - Backend FastAPI server that handles agent communication
- `static/index.html` - Frontend interface for interacting with the code agent
## Backend

The backend is built with FastAPI and provides:

- SSE streaming endpoint (`/sse`) for real-time agent responses
- Integration with LangGraph and CodeAct for code execution capabilities
- Remote code evaluation through a sandbox environment
- CORS support for cross-origin requests
- Static file serving for the frontend
Key components:
- LangChain OpenAI integration for LLM capabilities
- LangGraph for agent state management
- Async streaming response handling
- Server-Sent Events (SSE) for real-time communication
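As a rough sketch of how these pieces fit together, the snippet below encodes agent output as SSE frames from an async generator. The `sse_frame` helper and `stream_tokens` stand-in are illustrative names, not code from `sse_server.py`:

```python
import asyncio
import json

def sse_frame(data, event=""):
    """Encode one Server-Sent Events frame: an optional event name,
    a `data:` line carrying a JSON payload, and the blank-line terminator."""
    lines = []
    if event:
        lines.append("event: " + event)
    lines.append("data: " + json.dumps(data))
    return "\n".join(lines) + "\n\n"

async def stream_tokens(tokens):
    """Illustrative stand-in for the agent's token stream: yields each
    token as an SSE frame, then a final 'done' event."""
    for tok in tokens:
        yield sse_frame({"token": tok})
        await asyncio.sleep(0)  # give the event loop a turn, as a real stream would
    yield sse_frame({}, event="done")

async def collect():
    return [frame async for frame in stream_tokens(["Hello", " world"])]

frames = asyncio.run(collect())
print(frames[0])  # data: {"token": "Hello"}
```

In the real server, such a generator would be wrapped in a FastAPI `StreamingResponse` with `media_type="text/event-stream"` so the browser's `EventSource` can consume it.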
## Frontend

The frontend provides a simple chat interface with:
- Real-time message streaming using the EventSource API
- Markdown rendering for code blocks and formatted text
- Clean, responsive UI for chat interactions
- Error handling for connection issues
## Requirements

- Python 3.8+
- Required environment variables:
  - `ARK_API_BASE` - Base URL for the LLM API
  - `ARK_API_KEY` - API key for authentication
  - `SANDBOX_URL` - URL for the code execution sandbox
  - `AUTH_KEY` - Authentication key for the sandbox
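A startup check for these variables might look like the sketch below. The variable names come from this README; the check itself is illustrative, not code from `sse_server.py`:

```python
import os

# Variable names are taken from this README's requirements list.
REQUIRED = ["ARK_API_BASE", "ARK_API_KEY", "SANDBOX_URL", "AUTH_KEY"]

def missing_env(names, env=os.environ):
    """Return the required variables that are unset or empty."""
    return [name for name in names if not env.get(name)]

missing = missing_env(REQUIRED)
if missing:
    print("Missing required environment variables: " + ", ".join(missing))
```

Failing fast like this at startup is usually friendlier than a cryptic authentication error on the first request.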
## Setup

1. Clone the repository
2. Install dependencies:
   ```bash
   pip install fastapi uvicorn langchain-openai langgraph-codeact
   ```
3. Start the server:
   ```bash
   python sse_server.py
   ```
4. Open a browser and navigate to `http://localhost:8000`
5. Start chatting with the code agent!
## How It Works

1. The user sends a message through the frontend
2. The frontend posts the message to the backend and opens an SSE connection for the response
3. The backend processes the message using the LangGraph agent
4. Responses are streamed back to the frontend in real time
5. The frontend renders the responses with Markdown formatting
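What the browser's `EventSource` does with the stream can be mimicked with a small parser. This sketch illustrates the SSE wire format only; it is not code from `static/index.html`:

```python
def parse_sse(raw):
    """Split a raw SSE stream on blank lines and collect each frame's
    event name and data lines, roughly as the browser's EventSource does."""
    events = []
    for frame in raw.split("\n\n"):
        if not frame.strip():
            continue
        name, data = "message", []
        for line in frame.split("\n"):
            field, _, value = line.partition(": ")
            if field == "data":
                data.append(value)
            elif field == "event":
                name = value
        events.append((name, "\n".join(data)))
    return events

raw = 'data: {"token": "Hi"}\n\ndata: {"token": "!"}\n\nevent: done\ndata: {}\n\n'
print(parse_sse(raw))
# [('message', '{"token": "Hi"}'), ('message', '{"token": "!"}'), ('done', '{}')]
```

On the frontend, the equivalent work is done by `EventSource`'s `message` and named-event listeners, which deliver each frame's data as it arrives.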
## Features

- Real-time streaming responses
- Code execution capabilities
- Markdown rendering for code blocks
- Simple and intuitive UI