A FastAPI WebSocket server for interacting with Raspberry Pi robots. It supports face recognition and LLM-based conversation, and it handles multiple robot clients simultaneously.
## Features

- WebSocket-based communication
- Face recognition using the `face_recognition` library
- LLM integration (OpenAI API, with mock mode for testing)
- Async support for multiple robot clients
- Comprehensive test suite with mock clients
## Setup

- Create a virtual environment and install dependencies:

  ```bash
  python -m venv .venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  pip install -r requirements.txt
  ```
- (Optional) Set up environment variables in `.env`:

  ```
  OPENAI_API_KEY=your_api_key_here
  KNOWN_FACES_DIR=path/to/faces/directory
  ```

  If these are not set, the server will use mock responses (see the configuration sketch after this list).
- Start the server:

  ```bash
  python app.py
  ```

  The server runs on `ws://localhost:8000/ws/robot`.
- Run the mock tests:

  ```bash
  pytest tests/test_mock_client.py -v
  ```
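To illustrate the mock fallback mentioned in the environment-variable step above, a configuration module might read these variables and switch to mock responses when they are missing. This is only a sketch under those assumptions (including that `python-dotenv` is available); it is not the project's actual code.

```python
# Hypothetical configuration sketch; variable handling in the real server may differ.
import os

from dotenv import load_dotenv  # python-dotenv (assumed available)

load_dotenv()  # read variables from a local .env file, if present

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
KNOWN_FACES_DIR = os.getenv("KNOWN_FACES_DIR")

# With no API key configured, fall back to canned "mock" responses so the
# server can still run and be tested without external services.
USE_MOCK_LLM = OPENAI_API_KEY is None
```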
## Message Format

Messages from the robot to the server:

```json
{
  "text": "Optional text from speech",
  "frame": "Optional base64 encoded JPEG frame",
  "status": "Optional robot status info"
}
```

Responses from the server to the robot:

Success:

```json
{
  "speech": "Text for the robot to speak"
}
```

Error:

```json
{
  "error": "Error message"
}
```
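For a quick end-to-end check of this message format, a client exchange might look like the following sketch. It uses the third-party `websockets` package, which is an assumption rather than a stated project requirement; any WebSocket client that sends JSON text frames will do.

```python
# Minimal sketch of a robot-side client; not part of the repository itself.
import asyncio
import json

import websockets  # pip install websockets (assumed)


async def main() -> None:
    async with websockets.connect("ws://localhost:8000/ws/robot") as ws:
        # Send a message in the format described above; "frame" and "status"
        # are optional and omitted here.
        await ws.send(json.dumps({"text": "Hello, who am I talking to?"}))

        # The server replies with either {"speech": ...} or {"error": ...}.
        reply = json.loads(await ws.recv())
        print(reply.get("speech") or reply.get("error"))


if __name__ == "__main__":
    asyncio.run(main())
```

Run this while the server is up to print the generated speech text (or an error message).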
## Project Structure

- `app.py` - FastAPI application and WebSocket endpoints
- `schemas.py` - Pydantic models for message validation
- `vision.py` - Face recognition functionality
- `llm.py` - LLM integration and response generation
- `tests/test_mock_client.py` - Mock client tests
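Since `schemas.py` holds the Pydantic models for message validation, those models presumably mirror the JSON formats above. The sketch below is an assumption about their shape; the class names are hypothetical.

```python
# Hypothetical sketch of the kind of models schemas.py might define,
# derived from the documented message formats; the actual code may differ.
from typing import Optional

from pydantic import BaseModel


class RobotMessage(BaseModel):
    """Message sent by a robot client to the server."""
    text: Optional[str] = None    # text from speech
    frame: Optional[str] = None   # base64-encoded JPEG frame
    status: Optional[str] = None  # robot status info


class SpeechResponse(BaseModel):
    """Successful server reply."""
    speech: str


class ErrorResponse(BaseModel):
    """Error reply."""
    error: str
```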