Add chatbot with LangGraph, OpenAI integration, and streaming responses #11
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Conversation
Co-authored-by: lambdakris <15176003+lambdakris@users.noreply.github.com>
@copilot this is looking good. Let's go ahead and extract the logic in the get_assistant_response function into the FastAPI app in the backend and wire it up to the frontend.
Done! The chatbot logic has been extracted into the FastAPI backend and wired up to the frontend. Changes in commit 4f7f154:
@copilot Let's take it one step further. Move the chat function in backend/main.py into a dedicated FastAPI router, and let's replace the stub LLM call with a real but very simple LangGraph-based chatbot that uses an OpenAI LLM.
Done! The chat functionality has been refactored into a dedicated FastAPI router and now uses a real LangGraph-based chatbot with OpenAI's GPT-4o-mini model. Changes in commits 38f0ca1 and 265f645 cover the new structure, architecture, required configuration, and benefits.
The LangGraph implementation provides a solid foundation for future enhancements like conversation persistence, retrieval-augmented generation, or tool calling.
@copilot I made a few changes but overall things look good. Let's try one more adjustment to this PR. Modify the chat frontend and backend to work with streaming responses, based on the LangGraph support for streaming and the [Streamlit support for streaming](https://docs.streamlit.io/develop/api-reference/write-magic/st.write_stream).
Implemented! The chatbot now supports real-time streaming responses using LangGraph's `.astream()` API. Changes in commit 0a5b835 cover backend streaming, frontend streaming, and the resulting benefits.
The implementation follows the LangGraph streaming documentation and Streamlit's `st.write_stream()` documentation.

Implements a full-stack chatbot, as requested in issue #10, with the UI in Streamlit and the AI chat logic in a dedicated FastAPI router using LangGraph and OpenAI. The chatbot provides a conversational UI/UX similar to the Streamlit chat tutorial, powered by OpenAI's GPT-4.1-mini model through a LangGraph-based architecture with real-time streaming responses.
Changes
New Files
- `frontend/pages/chat.py`: A new page implementing the chatbot interface with a modular, well-factored structure:
  - `initialize_chat_history()`: Manages session state initialization
  - `display_chat_messages()`: Renders chat history
  - `get_assistant_response_stream()`: Streams responses from the backend API in real time
  - `handle_user_input()`: Processes user input and displays streaming responses
  - `show()`: Main page rendering function
- `backend/routers/chat.py`: Dedicated FastAPI router with LangGraph + OpenAI streaming integration
- `backend/routers/__init__.py`: Router module initialization
- `backend/routers/README.md`: Comprehensive documentation of the chat architecture, streaming implementation, extension points, and configuration

Modified Files
- `frontend/main.py`: Added the chatbot page to the navigation menu
- `backend/main.py`: Refactored to use the router pattern: `app.include_router(chat.router)`
- `backend/pyproject.toml`: Added dependencies: `langgraph` (conversation flow management) and `langchain-openai` (OpenAI integration)
- `.env`: Added `OPENAI_API_KEY` configuration with documentation

Architecture
The chatbot follows a professional full-stack architecture with industry-standard patterns and real-time streaming:
- `/chat` router with LangGraph + OpenAI streaming integration
- Graph flow: `START → [chatbot node] → (OpenAI LLM streaming) → END`

LangGraph Streaming Implementation
The chatbot uses LangGraph's streaming capabilities with Streamlit's streaming display:
- Conversation state managed with `add_messages`
- `.astream()` with `stream_mode="messages"` for token-by-token streaming
- The OpenAI LLM configured with `streaming=True`
- FastAPI's `StreamingResponse` sends tokens as Server-Sent Events
- `st.write_stream()` renders tokens in real time

This separation of concerns provides several benefits.
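As an illustration of the Server-Sent Events framing the backend emits, here is a minimal pure-Python sketch. The `fake_stream` list is a hypothetical stand-in for the chunks that LangGraph's `.astream(..., stream_mode="messages")` would yield; in the real router, a generator like `sse_frames` is what gets wrapped in FastAPI's `StreamingResponse`.

```python
from typing import Iterable, Iterator

def sse_frames(tokens: Iterable[str]) -> Iterator[str]:
    """Frame each LLM token as a Server-Sent Event.

    `tokens` stands in for the chunks the LangGraph stream would yield;
    FastAPI's StreamingResponse can consume a generator of this shape.
    """
    for token in tokens:
        yield f"data: {token}\n\n"

# Hypothetical token stream; the real one comes from the OpenAI LLM.
fake_stream = ["Hel", "lo", "!"]
body = "".join(sse_frames(fake_stream))
```

Each token arrives in its own `data:` frame, which is what lets the frontend render the reply incrementally instead of waiting for the full response.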
Features
The chatbot implementation includes:
- `st.chat_message()` and `st.chat_input()` components for a native chat experience
- `st.write_stream()` for better UX

Configuration
Users need to set the `OPENAI_API_KEY` in their `.env` file. API keys can be obtained from https://platform.openai.com/api-keys.
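As an illustration of what loading the key involves, here is a minimal stdlib-only sketch; the actual app may well use a library such as `python-dotenv` instead, so treat `load_env_file` as a hypothetical helper.

```python
import os

def load_env_file(path: str) -> None:
    """Populate os.environ from simple KEY=VALUE lines in a .env file.

    Illustrative only: ignores blank lines and comments, and does not
    overwrite variables that are already set in the environment.
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Example .env content:
#   OPENAI_API_KEY=sk-...
```

After loading, `langchain-openai` picks up `OPENAI_API_KEY` from the environment automatically.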
Implementation Details
The code follows best practices for full-stack applications:
- Chat logic isolated in a dedicated router (`backend/routers/chat.py`)
- End-to-end streaming via LangGraph's `.astream()` + FastAPI's `StreamingResponse` + Streamlit's `st.write_stream()`

Streaming Flow
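The frontend half of the flow can be sketched in plain Python. `sse_tokens` is a hypothetical stand-in for the parsing done in `get_assistant_response_stream()`: it strips the Server-Sent Events framing off the backend's response body and yields raw tokens, which is the generator shape `st.write_stream()` consumes.

```python
from typing import Iterable, Iterator

def sse_tokens(frames: Iterable[str]) -> Iterator[str]:
    """Parse Server-Sent Event frames back into raw LLM tokens.

    `frames` stands in for the chunks read from the streaming HTTP
    response; the yielded tokens are what st.write_stream() would render.
    """
    for frame in frames:
        frame = frame.rstrip("\n")
        if frame.startswith("data: "):
            yield frame[len("data: "):]

# Hypothetical response body from the /chat endpoint:
frames = ["data: Hel\n\n", "data: lo!\n\n"]
reply = "".join(sse_tokens(frames))
```

Because the parser is a generator, tokens are displayed as soon as each frame arrives rather than after the whole reply is buffered.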
Extension Points
The LangGraph implementation makes it easy to add future enhancements such as conversation persistence, retrieval-augmented generation, and tool calling.
Screenshots
Initial chatbot interface:
Multi-turn conversation with streaming OpenAI integration:
Backend successfully processing streaming chat requests:
Testing
The chatbot has been manually tested and verified to:
- Serve streaming responses from the `/chat` API endpoint

Fixes #10