A project that combines IoT sensor data analysis with edge-deployed large language models for intelligent engine vibration analysis.
This project serves as the code foundation for a two-part blog series:
- The Power of LLM models at the Edge - AWS IoT - Part 1
- The Power of LLM models at the Edge - AWS IoT - Part 2
IoT-Edge-LLM demonstrates how to deploy large language models to the edge.
The project consists of two main components:

Frontend:
- Modern React application built with Cloudscape Design components
- Interactive parameter controls for the simulation
- Visualization of vibration data
- Real-time AI analysis results and metrics

Backend:
- FastAPI backend for data processing and API endpoints
- LLM integration via Ollama for local model deployment
- WebSocket communication for streaming responses
- Data processing utilities for vibration analysis
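The data-processing side can be illustrated with a minimal, self-contained sketch. The helper below is hypothetical (not code from this repository): it computes an RMS amplitude and a dominant frequency from a window of vibration samples using a naive DFT, the kind of summary a backend might hand to the LLM for analysis.

```python
import math

def vibration_metrics(samples, sample_rate):
    """Return (RMS amplitude, dominant frequency in Hz) for a window
    of vibration samples, using a naive DFT over the positive bins.
    Illustrative only: a hypothetical helper, not project code."""
    n = len(samples)
    rms = math.sqrt(sum(x * x for x in samples) / n)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2 + 1):  # skip the DC bin
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(samples))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(samples))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return rms, best_bin * sample_rate / n

# Synthetic 10 Hz "engine vibration" signal sampled at 200 Hz
sr = 200
signal = [math.sin(2 * math.pi * 10 * i / sr) for i in range(sr)]
rms, dominant_hz = vibration_metrics(signal, sr)  # ~0.707, 10.0 Hz
```

A real implementation would use an FFT (e.g. `numpy.fft`) rather than this O(n²) loop, but the metric definitions are the same.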
Prerequisites:
- Node.js 16+ for the frontend
- Python 3.9+ for the backend
- Ollama installed for local LLM serving
- A preferred LLM model (default: gemma3:1b or qwen3:1.7b)
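Before starting the servers, it helps to confirm that Ollama actually has one of the models above (pulled with, e.g., `ollama pull gemma3:1b`). Ollama's `/api/tags` endpoint returns the installed models as JSON; the helper below (a hypothetical name, not part of this project) checks such a response for a given model:

```python
import json

def has_model(tags_json: str, wanted: str) -> bool:
    """Check whether a model appears in the JSON body returned by
    Ollama's /api/tags endpoint. Hypothetical helper for illustration."""
    models = json.loads(tags_json).get("models", [])
    return any(m.get("name", "").startswith(wanted) for m in models)

# Shape of a typical /api/tags response, trimmed to the relevant field:
sample = '{"models": [{"name": "gemma3:1b"}, {"name": "qwen3:1.7b"}]}'
print(has_model(sample, "gemma3:1b"))  # True
print(has_model(sample, "llama2"))     # False
```

In practice you would fetch the JSON from `http://localhost:11434/api/tags` (Ollama's default address) rather than use a literal string.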
Backend setup:

1. Navigate to the edge-llm directory:

   cd edge-llm

2. Install dependencies:

   pip install -r requirements.txt

3. Start the backend server:

   python main.py

The server will run on port 8081 by default.
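Once `python main.py` is running, a quick TCP probe confirms the server is listening on its port. This is a generic stdlib sketch (the helper name is my own, not part of the project):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With the backend running, port_open("localhost", 8081) should be True.
```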
Frontend setup:

1. Navigate to the frontend directory:

   cd frontend

2. Install dependencies:

   npm install

3. Start the development server:

   npm run dev

The frontend will be available at http://localhost:5173.