Turn old Android devices into distributed IP cameras with AI monitoring (Flask server + Web dashboard)

suzuran0y/Sentinel


Sentinel Real-Time Monitoring System


🌐 Language --- 🇺🇸 English | 🇨🇳 中文

Sentinel is a distributed real-time vision system framework for local area networks (LAN).

♻️ It repurposes unused Android devices as network camera nodes, enabling:

  • Distributed image acquisition
  • Real-time PC-side video streaming
  • AI-driven monitoring and analysis

It adopts a layered architecture of "mobile capture + PC processing + browser control", supporting real-time image preview, local video recording, and structured event analysis, and can be extended to integrate multimodal AI models.

  • The system consists of a PC Dashboard and an Android client CamFlow.

This project can be used both as a lightweight local monitoring system and as an engineering prototype platform for visual data acquisition and intelligent analysis.

🚀 Before first use, it is strongly recommended to read the chapters in order. Click to jump:
① Project Overview → ② Project Deployment → ③ Run the Project → ④ Dashboard Guide

Table of Contents


1. Project Overview ⌃

Sentinel is a real-time monitoring system that runs on a LAN, and it can also serve as a tool for data acquisition and analysis. It consists of the following two parts:

  • CamFlow (Android client): captures the phone camera feed and uploads it to the server as single-frame JPEG images.
  • PC Dashboard (Flask + Web UI): receives image frames and provides live preview, video recording, screenshot saving, log viewing, and optional AI triggered monitoring.

The system supports running in a local LAN environment without relying on cloud services. However, to run multimodal models, it is recommended to use an online model inference service.


1.1. Core Capabilities ⌃

Sentinel is not designed as a single-purpose monitoring tool. Instead, it aims to build an extensible real-time vision system framework of "mobile capture + PC processing + browser control". Its core capabilities include:

  • 📱 Self-developed Android camera client (CamFlow)
    A regular smartphone can be used as a real-time camera endpoint, without purchasing dedicated IP cameras or extra hardware.

  • 📡 Real-time MJPEG video preview in the browser
    Based on HTTP streaming output; can be viewed directly in a browser without plugins.

  • 📤 Standardized image frame upload interface (HTTP POST)
    The Android client continuously uploads single-frame JPEG images; the interface is clear and extensible.

  • 🎥 Segmented local video recording (MP4) and real-time screenshots
    Supports time-based segmentation for writing video/image files, suitable for long-term operation and archival management.

  • 🧩 Layered, trigger-based visual processing mechanism
    A two-stage architecture of "traditional CV algorithms → model inference" to improve real-time performance and reduce compute/inference costs.

  • 🧠 Structured multimodal visual cognition
    After trigger conditions are met, a vision model is called for semantic analysis and structured outputs, supporting risk grading and event management.

  • 🌐 UDP-based automatic server discovery
    The Android client can automatically discover the server address within the LAN, reducing manual configuration.

  • 📂 Structured logging and configuration management
    Generates local runtime logs and AI event records to support traceability and data analysis.
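To make the upload interface concrete, the sketch below builds the kind of HTTP POST a capture client could send. The endpoint path /upload and the multipart field name "image" come from this README (see the architecture diagram and the 400_missing_image counter later on); the helper names, filename, and timeout are illustrative assumptions, not the project's actual client code.

```python
import io
import uuid
import urllib.request

def build_multipart(field: str, filename: str, payload: bytes, content_type: str):
    """Build a multipart/form-data body containing one file field."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'.encode()
    )
    body.write(f"Content-Type: {content_type}\r\n\r\n".encode())
    body.write(payload)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    headers = {"Content-Type": f"multipart/form-data; boundary={boundary}"}
    return body.getvalue(), headers

def upload_frame(server: str, jpeg_bytes: bytes):
    """POST one JPEG frame to http://<server>/upload (illustrative client)."""
    body, headers = build_multipart("image", "frame.jpg", jpeg_bytes, "image/jpeg")
    req = urllib.request.Request(f"http://{server}/upload", data=body, headers=headers)
    return urllib.request.urlopen(req, timeout=2)
```

Because each request carries exactly one self-contained JPEG, any HTTP-capable device can act as a camera node.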


1.2. System Architecture ⌃

The system consists of three structural layers:

  1. Data Acquisition Layer (Android App)
  2. Service Processing Layer (PC Server)
  3. Presentation & Control Layer (Browser Dashboard)
┌──────────────────────────────────────────────────────────────────────────┐
│                         Android Client (CamFlow)
│
│          Camera Capture → Single JPEG Frame → HTTP POST /upload
└──────────────────────────────────────────────────────────────────────────┘
                                     │
                                     ▼
┌──────────────────────────────────────────────────────────────────────────┐
│                           PC-side Flask Server
│
│   ① FrameBuffer (Latest Frame Cache)
│         ├── Provides MJPEG Stream (/stream)
│         ├── Provides Snapshot
│         └── Provides Recorder Access
│
│   ② Recorder Module
│         └── Writes Segmented Video Files by FPS
│
│   ③ AI Monitor (Optional Module)
│         ├── Motion Trigger (Traditional CV Detection)
│         ├── Vision Model Interface (Pluggable)
│         └── Event Logging / Real-time Feedback (Local / Web Access)
│
│   ④ Config & Log Management
│         ├── config.json
│         └── server.log
└──────────────────────────────────────────────────────────────────────────┘
                                     │
                                     ▼
┌──────────────────────────────────────────────────────────────────────────┐
│                            Browser Dashboard
│
│ Live Preview | Recording Control | Parameter Configuration | Log Viewer
└──────────────────────────────────────────────────────────────────────────┘
  • Architecture Design

    • All image data flows only within the local area network (LAN);
    • FrameBuffer serves as the core shared data structure to avoid repeated decoding;
    • Recording and AI analysis both read from FrameBuffer without interfering with each other;
    • The AI module adopts an interface-based design, allowing flexible integration of different vision models;
    • The Dashboard functions purely as the control and presentation layer, and does not participate in image processing.
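The FrameBuffer role described above can be sketched in a few lines: a thread-safe "latest frame" cache that the /upload handler overwrites and that the streamer, recorder, and AI monitor each read independently. This is a hedged illustration of the pattern, not the project's actual frame_buffer.py; names are invented for clarity.

```python
import threading
from typing import Optional, Tuple

class FrameBuffer:
    """Thread-safe cache holding only the most recent JPEG frame."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._jpeg: Optional[bytes] = None
        self._seq = 0  # increments on every write so readers can detect new frames

    def put(self, jpeg_bytes: bytes) -> None:
        # Producer side: the upload handler simply overwrites the old frame.
        with self._lock:
            self._jpeg = jpeg_bytes
            self._seq += 1

    def latest(self) -> Tuple[int, Optional[bytes]]:
        # Consumer side: stream / recorder / AI each call this without blocking
        # one another for long; jpeg is None before the first upload arrives.
        with self._lock:
            return self._seq, self._jpeg
```

Because consumers only ever read the newest frame, a slow consumer (e.g. the AI module) never causes a backlog; it simply skips frames.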

1.3. Practical Applications ⌃

Sentinel is not only a real-time monitoring tool, but also an extensible platform for visual data acquisition and analysis. Its LAN-based localized operation gives it practical value in the following scenarios:

🏠 Local Privacy-Oriented Monitoring Solution
  • All data is fully stored on the local LAN PC

  • No reliance on cloud storage or third-party platforms

  • No need for additional storage cards or subscription fees

  • Suitable for private environments such as homes, laboratories, and studios

🧠 AI Behavior Analysis Experimental Platform

  • Supports integration of multimodal vision models

  • Output structure can be controlled through Prompt engineering

  • Outputs person count, activity, risk level, and confidence

  • Suitable for behavior recognition and risk analysis research

📊 Data Acquisition & Analysis Prototype System
  • Automatically generates structured JSON event records

  • Local video and logs are traceable

  • Convenient for subsequent statistical analysis and model optimization

  • Can serve as a small-scale visual data collection prototype

🧩 Distributed Vision System Architecture Example

  • Android-side acquisition + PC-side processing

  • State machine–driven trigger-based AI analysis

  • FrameBuffer decoupled architecture design

  • Suitable for teaching and system architecture demonstrations


2. Implemented Features ⌃

2.1. Real-time Video Preview ⌃

Sentinel provides browser-based real-time video preview capability. The camera feed captured by the Android device is continuously uploaded to the server as single JPEG frames, and the server pushes real-time images to the browser via an MJPEG stream.

This mechanism has the following characteristics:

  • Implemented based on MJPEG streaming — the browser can parse and refresh frames in real time without any plugins.

  • No additional client software required — users only need to access the Dashboard address through a browser to view the live video.

  • Low-latency transmission within LAN — under the same Wi-Fi network, end-to-end latency is typically at the millisecond level.

  • Decoupled architectural design — real-time preview reads the latest frame from FrameBuffer and does not interfere with recording or AI analysis modules.

  • Supports dynamic parameter adjustment — stream FPS and JPEG compression quality can be modified via configuration to adapt to different device performance and network conditions.

  • Fully localized storage and data control — unlike traditional monitoring solutions that rely on SD cards or cloud storage, Sentinel saves video data directly on the LAN PC. Local disk storage avoids cloud service fees and third-party data risks, and also facilitates subsequent data analysis, model training, or secondary processing.
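The MJPEG mechanism above can be sketched as a generator that yields multipart JPEG parts; the browser's image element renders each part as it arrives. In the real project this would be wrapped in a Flask Response with mimetype "multipart/x-mixed-replace; boundary=frame" and served at /stream; the generator below is an illustrative stdlib-only sketch, not the project's actual code.

```python
import time

def mjpeg_parts(get_latest_jpeg, fps: float = 10.0, boundary: bytes = b"frame"):
    """Yield one multipart chunk per frame at roughly `fps` frames per second.

    `get_latest_jpeg` is any callable returning the newest JPEG bytes
    (e.g. a read from a latest-frame cache), or None before the first frame.
    """
    delay = 1.0 / fps
    while True:
        jpeg = get_latest_jpeg()
        if jpeg is not None:
            # Each part: boundary marker, a Content-Type header, then raw JPEG.
            yield (b"--" + boundary + b"\r\n"
                   b"Content-Type: image/jpeg\r\n\r\n" + jpeg + b"\r\n")
        time.sleep(delay)
```

Lowering `fps` directly reduces browser-side CPU and bandwidth, which matches the Stream FPS tuning advice later in this README.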


Figure 1 - Dashboard Live Preview Interface (Full-screen Mode)

2.2. System Parameter Customization ⌃

Sentinel is not a fixed-behavior "black-box monitoring tool," but a highly configurable real-time vision system. Users can finely control video streaming, recording strategies, and AI behavior through the Dashboard to adapt to different hardware environments and application scenarios.

System parameters are mainly divided into three categories:

① Video Stream Parameters

  • Stream FPS (real-time preview frame rate), JPEG Quality (image compression quality), Upload FPS (upload rate), ...

Users can balance image quality and network bandwidth, making it suitable for weak network conditions or low-performance device environments.

② Recording Strategy Parameters

  • Record FPS (recording frame rate), Segment Seconds (segment duration), Codec (video encoding format), ...

Configures segmented writing and video encoding to avoid oversized single files and facilitate long-term operation and archival management.

③ AI Behavior Parameters

  • OBSERVE Interval (detection interval), Motion Threshold (motion trigger threshold), Prompt Template / Scene Profile, ...

These parameters allow adjusting "AI trigger sensitivity" and "model decision logic" via the Settings page, optimizing token usage and enabling personalized monitoring scenarios.


Figure 2 - Dashboard Parameter Control Interface (Expanded View)


2.3. AI Trigger-based Monitoring and Controllable Cognitive Output ⌃

With the support of multimodal vision models, Sentinel is capable not only of "seeing the scene," but also of performing structured understanding and risk assessment. The system adopts a layered mechanism of "motion trigger + model analysis":

  • Uses "traditional computer vision algorithms + large model inference" for layered processing, reducing token consumption and API costs;
  • When trigger conditions are met, the system calls the model and enters the OBSERVE state;
  • Invokes the vision model for semantic analysis and outputs structured JSON results;
  • Records observation results and displays them in real time on the Dashboard.
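The cheap first stage can be as simple as frame differencing: compare consecutive grayscale frames and only wake the expensive model stage when the change exceeds a threshold. The real module (app/ai/motion_trigger.py) may differ; the functions below are an illustrative, dependency-free sketch, and the default threshold is an assumption.

```python
def motion_score(prev_frame, cur_frame):
    """Mean absolute difference between two equal-length grayscale pixel buffers (0-255)."""
    if len(prev_frame) != len(cur_frame) or not cur_frame:
        raise ValueError("frames must be non-empty and the same size")
    return sum(abs(a - b) for a, b in zip(prev_frame, cur_frame)) / len(cur_frame)

def motion_triggered(prev_frame, cur_frame, threshold=8.0):
    """True when the scene changed enough to justify an AI model call."""
    return motion_score(prev_frame, cur_frame) >= threshold
```

Raising the threshold (the Dashboard's Motion Threshold parameter) trades sensitivity for fewer, cheaper model invocations.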

AI module capabilities include:

① Structured Semantic Output

Under the current configuration, model analysis outputs follow a unified structure: whether a person is present (has_person), risk level (risk_level), confidence (confidence), scene summary (summary), ...

Structured outputs facilitate subsequent rule engine processing or statistical data analysis.
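As a hedged illustration of this contract, the snippet below parses and sanity-checks one such result. The field names come from this README; the concrete values, the allowed risk levels, and the summary text are invented for the example.

```python
import json

# Hypothetical model reply following the structure described above.
raw = ('{"has_person": true, "risk_level": "low", '
       '"confidence": 0.92, "summary": "One person working at a desk."}')

event = json.loads(raw)

# A downstream rule engine can validate the contract before acting on it.
assert isinstance(event["has_person"], bool)
assert event["risk_level"] in {"low", "medium", "high"}  # assumed value set
assert 0.0 <= event["confidence"] <= 1.0
assert isinstance(event["summary"], str)
```

Machine-checkable fields like these are what make log-based statistics (e.g. over ai_events.jsonl) straightforward.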


② Event-level State Management

The system records the complete lifecycle of a defined event:

SLEEP → OBSERVE state machine, trigger/duration statistics, AI call health monitoring, ...

This allows the system not only to determine whether an event has occurred, but also to manage the evaluation criteria of such decisions.
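A minimal sketch of the SLEEP → OBSERVE machine: the monitor sleeps until the motion trigger fires, stays in OBSERVE while activity continues, and falls back to SLEEP after a quiet period. The two state names come from this README; the quiet-period fallback rule and all timings are assumptions about how such a machine is typically closed, not the project's actual logic.

```python
SLEEP, OBSERVE = "SLEEP", "OBSERVE"

class MonitorStateMachine:
    def __init__(self, quiet_seconds: float = 30.0):
        self.state = SLEEP
        self.quiet_seconds = quiet_seconds
        self._last_motion_at = None  # timestamp of the most recent trigger

    def on_tick(self, now: float, motion: bool) -> str:
        """Advance the machine one tick; `motion` is the trigger stage's verdict."""
        if motion:
            self._last_motion_at = now
            self.state = OBSERVE
        elif self.state == OBSERVE and self._last_motion_at is not None:
            if now - self._last_motion_at >= self.quiet_seconds:
                self.state = SLEEP  # scene has been quiet long enough
        return self.state
```

Because model calls happen only in OBSERVE, idle hours cost nothing beyond the cheap trigger stage.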


③ Flexible Model Rule Configuration

Users can modify multiple prompt-related parameters:

Prompt Template (role definition), Scene Profile (long-term scene context), Session Focus (short-term task focus), Extra Rules (additional constraints)

to make model behavior programmable and controllable.


Figure 3 - AI Enabled Status

2.4. Mobile Camera Capture Application (CamFlow) ⌃

Sentinel does not rely on dedicated surveillance cameras. Instead, it provides a self-developed Android client CamFlow to handle image acquisition and data upload.

CamFlow transforms an ordinary smartphone into a real-time camera endpoint and provides:

① Real-time Image Capture and Upload

  • Invokes the device's native camera
  • Continuously uploads frames in single JPEG format
  • Supports customizable upload frame rate and image quality (code-level interface)

② Automatic Discovery and Connection

  • Supports UDP-based automatic server discovery or manual input
  • Provides real-time debugging information
  • Allows specifying a particular device camera (code-level interface)

③ Runtime Control

  • Supports start/stop capture
  • Allows preview to be disabled for power saving
  • Provides runtime status feedback

CamFlow and the PC side together form a complete system architecture of "mobile acquisition + local processing,"
enabling the construction of a real-time vision system without additional hardware devices.


Figure 4 - CamFlow Application Interface

📘 For detailed functionality and usage instructions, please refer to the CamFlow User Guide.


3. Project Deployment ⌃

Please complete the basic project deployment in the following order:

  • Obtain the project source code
  • Deploy the PC-side server
  • Install the Android client APK

📂 Project Structure

Sentinel/
│
├── server.py                     # Program entry point (starts Flask, initializes runtime environment, launches threads)
├── README.md                     # Project documentation
├── CamFlow_UserGuide.md          # Android user guide
├── requirements.txt              # Dependency installation list
├── .gitignore                    # Git ignore rules
├── LICENSE                       # MIT License
│
├── app/                          # Main application package
│   │
│   ├── __init__.py               # Package initialization
│   │
│   ├── ai/                       # AI-related modules (optional)
│   │   ├── ai_ark.py             # Vision model interface implementation (replaceable)
│   │   ├── ai_monitor_worker.py  # AI monitoring thread (SLEEP/OBSERVE state machine)
│   │   ├── ai_store.py           # AI event writing and persistence
│   │   ├── motion_trigger.py     # Low-cost motion detection module
│   │   └── __init__.py
│   │
│   ├── config/                   # Configuration management module
│   │   ├── config.json           # Automatically generated after saving in Dashboard
│   │   ├── config_store.py       # Configuration validation and read/write logic
│   │   └── __init__.py
│   │
│   ├── core/                     # Core runtime modules
│   │   ├── frame_buffer.py       # Latest frame cache (system data sharing center)
│   │   ├── logger.py             # Logging initialization
│   │   ├── runtime.py            # Global runtime state management
│   │   ├── upload_stats.py       # Upload statistics
│   │   └── __init__.py
│   │
│   ├── net/                      # Networking modules
│   │   ├── net_discovery.py      # UDP auto-discovery service
│   │   └── __init__.py
│   │
│   ├── recorder/                 # Video recording module
│   │   ├── recorder.py           # Video writing logic
│   │   ├── recorder_worker.py    # Recording thread controller
│   │   └── __init__.py
│   │
│   └── web/                      # Web interface and API layer
│       ├── webapp.py             # Flask routes and APIs
│       ├── __init__.py
│       │
│       ├── static/               # Frontend static assets
│       │   ├── dashboard.js
│       │   └── style.css
│       │
│       └── templates/            # HTML templates
│           ├── dashboard.html
│           └── dashboard.txt
│
├── PhoneCamSender/               # Android client source code (Android Studio project)
│   ├── app/                      # Android application module
│   ├── gradle/                   # Gradle configuration
│   ├── build.gradle
│   ├── settings.gradle
│   ├── ...
│   └── CamFlow-v1.0.0-beta.apk   # Precompiled APK
│
├── assets/                       # Images used in README
│   ├── app_main_page.jpg
│   ├── app_setting_page.jpg
│   └── app_failed_hint.jpg
│
├── log/                          # Runtime log directory (empty at first run)
│   ├── server.log                # System runtime log
│   └── ai_events.jsonl           # AI event records
│
└── recordings/                   # Video and snapshot output directory (empty at first run)
    ├── snapshots/
    └── videos/

3.1. Environment Requirements ⌃

| PC Side | Android Side |
| --- | --- |
| Python 3.9 or above | Android 8.0 or above |
| Windows / macOS / Linux | Allow installation from unknown sources |
| Git installed (for cloning the repository) | Connected to the same local area network as the PC |

3.2. Obtain the Project Source Code ⌃

Execute in a terminal:

git clone https://github.com/suzuran0y/Sentinel.git
cd Sentinel

3.3. PC-side Deployment ⌃

3.3.1. Create a Virtual Environment (Recommended)

Windows System

python -m venv venv            # Create virtual environment
venv\Scripts\activate          # Activate virtual environment

macOS / Linux System

python3 -m venv venv
source venv/bin/activate

3.3.2. Install Dependencies

pip install -r requirements.txt

The dependency list includes:

flask>=2.2                     # Web service
numpy>=1.23                    # Image data processing
opencv-python>=4.8             # Image decoding and video writing
volcenginesdkarkruntime        # AI module dependency

3.4. Android-side Deployment ⌃

The Android application CamFlow is responsible for camera capture and image upload.
The application source code is located in the repository path:

PhoneCamSender/

The precompiled APK file is located at:

PhoneCamSender/CamFlow-v1.0.0-beta.apk

3.4.1. Quick Installation via APK (Recommended)

For general users, installing via the APK file is the most convenient method:

  1. Use the APK file included in the cloned repository or download it from the project's Release page;

  2. Open the APK file and install CamFlow on the Android device;

  3. If prompted by the system, allow installation from unknown sources, then complete installation and launch the application.

3.4.2. Build and Install via Android Studio (Optional)

For developers who want to debug or modify features, the app can be built from source:

  1. Open Android Studio;

  2. Open the project directory: Sentinel/PhoneCamSender;

  3. Connect an Android device;

  4. Wait for Gradle synchronization to complete, then run the project.

📘 For detailed Android-side functionality and usage instructions, please refer to the CamFlow User Guide.


4. Run the Project ⌃

After completing project deployment, verify the entire pipeline in the order Start PC → Connect Mobile → Dashboard Displays Video to ensure the full chain is functioning correctly.


You should observe the following success indicators:

  • The terminal outputs something like http://127.0.0.1:<PORT>/ and http://<LAN_IP>:<PORT>/
  • The browser successfully opens the Dashboard page
  • The mobile app shows successful Test connection / Ping (or a connected status message)

Note: After the PC side starts, certain switches will reset to default safe states (for example, ingest / recording are OFF by default).
This is intentional design to prevent unintended recording or data reception.


4.1. Start the PC Side (Server + Dashboard) ⌃

Run the following command in the project root directory:

python server.py

After successful startup, you should see output similar to:

===========================================================
PhoneCam Server Started
Local:   http://127.0.0.1:<PORT>/        for dashboard web  # Recommended address for browser
LAN:     http://<LAN_IP>:<PORT>/         for CamFlow link   # Address for CamFlow device connection
Default ingest: OFF (enable in dashboard)
===========================================================

* Serving Flask app 'app.web.webapp'
* Debug mode: off

⚠️ Important: The camera device must use the LAN address. If the server starts successfully but ingest is not enabled, the Live View being empty is normal behavior.

At this point, the PC-side service has started successfully.


4.2. Start CamFlow (Android App) ⌃

4.2.1. Prerequisites

  • The PC and the mobile device must be connected to the same local area network (same Wi-Fi);
  • The PC-side service must already be running, and the Dashboard must be accessible in the browser.

4.2.2. Connection Methods

  • Automatic Discovery: CamFlow can attempt to discover the server within the LAN. If discovery is successful, the recognized Server Address will be displayed on the app's settings page or main page.

  • Manual Entry: If automatic discovery fails, manually enter the Server Address.

In CamFlow settings, enter the LAN IP address printed when running server.py on the PC:

<LAN_IP> or <LAN_IP>:<PORT>             # Extract from http://<LAN_IP>:<PORT>/
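The discovery step can be pictured as a simple broadcast/reply exchange: the client broadcasts a probe on the LAN and the server answers with its address. The actual wire format of net_discovery.py is not documented in this README, so the JSON messages, port number, and function names below are purely illustrative assumptions.

```python
import json
import socket

DISCOVERY_PORT = 48000  # illustrative port, not the project's actual value

def make_probe() -> bytes:
    """Illustrative probe a client might broadcast."""
    return json.dumps({"who": "camflow", "ask": "server"}).encode()

def parse_reply(data: bytes) -> str:
    """Extract 'ip:port' from an illustrative JSON reply like {'ip': ..., 'port': ...}."""
    msg = json.loads(data.decode())
    return f"{msg['ip']}:{msg['port']}"

def discover(timeout: float = 2.0):
    """Broadcast a probe and wait for the first server reply (None on timeout)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.settimeout(timeout)
        s.sendto(make_probe(), ("255.255.255.255", DISCOVERY_PORT))
        try:
            data, _addr = s.recvfrom(1024)
        except socket.timeout:
            return None
    return parse_reply(data)
```

Manual entry remains the fallback whenever broadcast traffic is blocked (common on guest Wi-Fi networks).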

4.2.3. Start Uploading (Push Frames)

  • Once the Server Address connection is successful, the app will automatically begin capturing and uploading camera data.

  • Tap the upper-right corner of the app's main interface to enter the settings page for further configuration.

4.2.4. PC-side Verification

  • After confirming successful connection in CamFlow, open the browser on the PC and enter the Dashboard address printed by server.py.

  • Since Ingest is OFF by default, the Live View section will initially display Ingest OFF - enable in dashboard instead of the camera feed.

  • Click the Enable Ingest button to check whether the Live View displays video.
    If no video appears, expand the Logs section under Live View (collapsed by default) and check for messages such as ingest disabled, then adjust settings accordingly.

  • Under normal conditions, once Ingest is switched to ON, the Live View section in the Dashboard will immediately update with the camera feed from the device.

At this point, the CamFlow service and data transmission to the PC have been successfully started.
For detailed usage instructions of CamFlow, please refer to the repository documentation: CamFlow User Guide.


4.3. Dashboard Guide ⌃

The Dashboard page consists of dashboard.html + dashboard.js + style.css, and its core data comes from backend APIs.


4.3.1. Page Structure

The layout of the Dashboard is divided into three sections: top bar + left monitoring panel + right settings panel.

  • Top Bar
    • Title: Sentinel System Dashboard & Subtitle: Multi-Device Vision Monitoring & Risk Detection System
    • Button: Shutdown
  • Left Panel
    • Live View: real-time video preview + quick control buttons
    • Monitor Status: system status summary + Logs: system logs
  • Right Panel
    • Settings: all configurable parameter input fields
    • AI Monitoring: AI module switches and configuration
    • Apply / Save / Load: configuration application and persistence

The divider between panels is draggable, allowing adjustment of the display ratio.
Click the Collapse button to hide the right-side Settings panel and enlarge the Live View area;
click Expand Settings later to reopen the collapsed section.


4.3.2. Live View Section

The latest frame uploaded from the mobile device enters the PC-side FrameBuffer, and the browser continuously refreshes the display in Live View via /stream using MJPEG.

① Display Logic

The video shown in Live View depends on whether CamFlow is uploading frames and whether PC-side Ingest (allowing /upload to write into FrameBuffer) is enabled.

  • When Ingest is OFF: displays Ingest OFF - enable in dashboard;
  • When Ingest is ON but no frame has been received yet: displays Waiting for frames....

② Core Button Area

Below Live View, there are three buttons. All are OFF by default:
Enable Ingest, Start Recording, Snapshot.

  1. Enable / Disable Ingest

    • Function: controls whether the server accepts image uploads from CamFlow;
    • Feature: when clicking Disable Ingest, the preview immediately returns to placeholder state, allowing easy observation of data flow;
    • Usage: click to toggle Ingest ON/OFF.
  2. Start / Stop Recording

    • Function: controls whether the recording thread writes FrameBuffer data into segmented video files;
    • Feature: recording and preview functions are independent. Recording only requires Ingest to be ON;
    • Output path structure: recordings/videos/YYYYMMDD/<cam_name>_YYYYMMDD_HHMMSS.mp4, configurable in Settings;
    • Usage: click to toggle Recording ON/OFF.
  3. Snapshot

    • Function: saves the current frame as an image.
    • Output path structure: recordings/snapshots/YYYYMMDD/snapshot_YYYYMMDD_HHMMSS.jpg, configurable in Settings;
    • Usage: click to trigger screenshot.
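The output-path schemes quoted above (recordings/videos/YYYYMMDD/<cam_name>_YYYYMMDD_HHMMSS.mp4 and the snapshot equivalent) can be sketched with two small helpers. The helper names are illustrative; the real project derives these paths from the Output Root and Cam Name settings.

```python
from datetime import datetime
from pathlib import Path

def video_path(output_root: str, cam_name: str, when: datetime) -> Path:
    """recordings/videos/YYYYMMDD/<cam_name>_YYYYMMDD_HHMMSS.mp4"""
    day = when.strftime("%Y%m%d")
    stamp = when.strftime("%Y%m%d_%H%M%S")
    return Path(output_root) / "videos" / day / f"{cam_name}_{stamp}.mp4"

def snapshot_path(output_root: str, when: datetime) -> Path:
    """recordings/snapshots/YYYYMMDD/snapshot_YYYYMMDD_HHMMSS.jpg"""
    day = when.strftime("%Y%m%d")
    stamp = when.strftime("%Y%m%d_%H%M%S")
    return Path(output_root) / "snapshots" / day / f"snapshot_{stamp}.jpg"
```

Grouping files into per-day directories keeps long-running archives easy to browse and prune.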

4.3.3. Monitor Status Section

Monitor Status is an area that aggregates system runtime information and refreshes dynamically according to current system settings.

The table below provides a structured explanation of core fields:

| Field | Format | Meaning | Usage & Troubleshooting |
| --- | --- | --- | --- |
| Ingest | Boolean (ON / OFF) | Whether camera uploads are being accepted | If OFF, /upload will be rejected. If no video appears, check this first |
| Recording | Boolean (ON / OFF) | Whether video recording is currently active | If ON but no file is generated → check Output Root, codec, or recording thread |
| Last frame age | Number (seconds) | Time since the last successfully written frame | Should remain at small fractions of a second. If continuously increasing → upload interrupted or rejected |
| Upload FPS | Number (frames/sec) | Estimated upload frame rate | Close to 0 → CamFlow not uploading or Ingest is OFF |
| Stream FPS | Number (frames/sec) | Browser MJPEG refresh rate | Reduce if browser lags; too high increases CPU usage |
| JPEG Quality | Number (0–100) | Preview image compression quality | Higher quality = clearer image but higher load; high quality + high FPS may cause performance pressure |
| Record FPS | Number (frames/sec) | Recording frame rate | Too high increases disk pressure; too low results in choppy video |
| Segment Seconds | Number (seconds) | Video segmentation duration | Reduce if files become too large; too small generates many files |
| Recording file | String (file path) | Current video file being written | If Recording = ON but empty → recording thread not functioning properly |
| Rec elapsed | Time string (HH:MM:SS) | Duration of current recording session | If not increasing → recording not actually started |
| Seg remaining | Number (seconds) | Remaining time for current segment | Abnormal jumps → time configuration or segmentation logic error |
| Upload counts | Structured counter fields | Upload result statistics | Increases when Ingest is ON; contains multiple subfields |

Upload counts Field Breakdown:

| Subfield | Meaning | Normal Trend | Abnormal Explanation |
| --- | --- | --- | --- |
| 200_ok | Successful upload and write count | Should continuously increase | If not increasing, no valid uploads |
| 400_missing_image | Request missing image field | Should remain 0 | Client field name error |
| 400_decode_failed | Data received but failed to decode | Should remain 0 | Data corruption or non-JPEG content |
| 503_ingest_disabled | Rejected uploads due to Ingest OFF | Increases when Ingest = OFF | Ingest not enabled |

4.3.4. Logs Section

Below Monitor Status, there is a collapsible Logs area used to view real-time server runtime logs.

  • Collapsed by default; click the Logs row to expand;
  • Logs are read from log/server.log (generated during project runtime);
  • When the system encounters issues, open the Logs section to inspect runtime information.

Note: Logs only display the most recent 50 entries.
To review earlier logs, open the log/server.log file directly to view the complete content.


4.3.5. Settings Section

The right-side Settings panel contains system parameter input fields and configuration buttons.
When modifying system configuration, parameters must first be adjusted in the Settings area and then applied to the system.

① Settings Fields

The table below explains configuration options available in the Dashboard.

It is recommended to understand the "purpose" and "recommended range rationale" before modifying parameters, to avoid performance or stability issues.

| Configuration | Format | Purpose | Recommended Range |
| --- | --- | --- | --- |
| Stream FPS | Number (frames/sec) | Controls MJPEG refresh rate in browser | Recommended 8–15. Too high increases CPU and bandwidth usage; below 5 causes noticeable lag. |
| JPEG Quality | Number | Controls JPEG compression quality | Recommended 60–80. Too low causes blur; too high significantly increases CPU load at high FPS. |
| Record FPS | Number (frames/sec) | Controls recording frame rate | Recommended 10–15. Too high increases disk pressure and file size; below 8 reduces smoothness. |
| Segment Secs | Number (seconds) | Controls video segmentation duration | Recommended 60–3600. Too large creates huge files; too small creates many fragments. |
| Output Root | String (file path) | Video and snapshot output directory | Recommended default recordings; avoid system root directories or unauthorized paths. |
| Cam Name | String (identifier) | Camera identifier written into filenames | Recommended simple identifiers (e.g., phone1); avoid spaces or special characters. |
| Codec | String (FourCC) | Video encoding format, e.g., avc1 / mp4v / XVID | Recommended avc1 or XVID (better compatibility). Encoder support varies by system; switch if recording fails. |
| Autosave | Boolean (true / false) | Whether to automatically write to config.json after clicking Apply | Recommended enabled during development; disable during frequent experimentation to avoid overwriting configurations. |

Note: Parameter modifications do not take effect automatically.
Changes must be applied using the buttons below.
It is recommended to stop current system tasks before applying new parameters to ensure full effect.

② AI Monitoring Section

Under default configuration, the AI module is disabled.
To enable it, click the Enable button on the right side of the AI Monitoring panel.
Once enabled, AI settings and status sections will appear on the Dashboard.

For detailed information, refer to 4.4. AI Monitoring Features.

③ Application Buttons

  • Difference between Apply / Save / Load

    • Apply

      • Submits current input parameters to backend /api/config and immediately updates runtime configuration.
      • If parameters are invalid (out of range / wrong type), the backend will reject and return an error message.
    • Save

      • Writes the current configuration to local file (app/config/config.json).
    • Load

      • Loads config.json from disk and refreshes the page form.
  • On first system startup, a pre-filled configuration set is used;
  • To customize system configuration, modify parameters in the Settings panel and click Apply;
  • (If Autosave is not enabled) click Save to persist configuration locally;
  • Later, click Load to reload saved configuration into the form without re-entering values.
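As a sketch of what Apply does under the hood, the snippet below builds a settings payload and POSTs it to /api/config. The field names, validation ranges, and function names here are assumptions drawn from the tables above, not the actual backend schema:

```python
import json
from urllib import request

def build_config_payload(jpeg_quality=75, record_fps=12, segment_secs=600,
                         cam_name="phone1", codec="avc1"):
    """Validate values against the recommended ranges before submitting."""
    if not 60 <= jpeg_quality <= 80:
        raise ValueError("JPEG quality outside the recommended 60-80 range")
    if not 1 <= record_fps <= 30:
        raise ValueError("record FPS out of a sensible range")
    return {"jpeg_quality": jpeg_quality, "record_fps": record_fps,
            "segment_secs": segment_secs, "cam_name": cam_name, "codec": codec}

def apply_config(payload, host="127.0.0.1", port=8000):
    """POST the payload to /api/config -- the equivalent of clicking Apply."""
    req = request.Request(f"http://{host}:{port}/api/config",
                          data=json.dumps(payload).encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # backend rejects invalid parameters
        return json.loads(resp.read())
```

Clicking Save then persists the same values to app/config/config.json on the server side.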

4.3.6. Shutdown

⚠️ This is the β€œterminate service” button.
The top Shutdown button triggers the system shutdown procedure: stop recording, disable ingest, stop the Python service, and attempt to close the page.


4.4. AI Monitoring Features (Optional Feature) βŒƒ

The AI module is an optional enhancement:
Even without configuring an API Key, the system fully supports the basic functions of video upload / live preview / recording / snapshot / logs.

Sentinel’s AI module adopts an event-driven, layered-trigger architecture for visual cognition.
Instead of running inference on every frame, the system combines lightweight motion detection, a dual-state machine control mechanism, and structured event management to achieve low-cost, long-term intelligent monitoring.

The modular design supports pluggable models, structured output contracts, and system decoupling, ensuring scalability, interpretability, and maintainability.


AI Module Main Workflow Framework (From Frame to Dashboard)

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                            Data Input Layer                    
β”‚  CamFlow β†’ POST /upload β†’ FrameBuffer (Latest Frame Cache)  
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β”‚
                               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                Lightweight Motion Trigger Layer (Cost Reduction)    
β”‚  - Compute frame difference ratio                            
β”‚  - If motion_ratio > threshold                                
β”‚  - Trigger frequency controlled by Motion Min Interval        
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β”‚
                               β–Ό If triggered, STATE: SLEEP β†’ OBSERVE
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                     AI State Machine Control Layer            
β”‚    STATE ∈ { SLEEP, OBSERVE }                                
β”‚  SLEEP:                                                       
β”‚    - Only runs motion trigger                                 
β”‚    - Does not call model                                      
β”‚  OBSERVE:                                                     
β”‚    - Calls model every AI Interval                            
β”‚    - Manages dwell timing                                     
β”‚    - Calculates event_duration                                
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β”‚
                               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                      Vision Model Interface Layer              
β”‚  - Construct Prompt (Role + Scene + Session + Extra)         
β”‚  - Compress JPEG (AI JPEG Quality)                            
β”‚  - Call third-party vision model API                          
β”‚  - Receive JSON output                                        
β”‚  - Perform Output Contract validation                         
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β”‚
                               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                 Event Lifecycle Management Layer              
β”‚  - dwell β‰₯ threshold β†’ confirmed                              
β”‚  - No target β‰₯ End Grace β†’ end event                          
β”‚  - Write to ai_events.jsonl                                   
β”‚  - Update AI runtime status                                   
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β”‚
                               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                      Visualization & Display Layer            
β”‚  - /api/ai_status                                             
β”‚  - Display current AI Result                                  
β”‚  - Display Info: Times, Metrics                               
β”‚  - Display History List                                       
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

4.5. Dashboard AI Module Guide βŒƒ

When the Dashboard page is first opened, the AI module is disabled by default.
Click the Enable button on the right side of the AI Monitoring panel to activate the AI module.

The AI module on the Dashboard consists of two parts:

  • AI Settings (Configuration Area): determines whether AI is enabled, call frequency, thresholds, etc.;
  • AI Status (Information Area): displays state machine status, health status, and AI monitoring history.

4.5.1. AI Settings Fields

It is recommended to understand the purpose of each parameter before enabling the AI module, in order to control model call sensitivity and reasonably tune for different scenarios.

| Configuration | Format | Purpose | Recommended Value |
| --- | --- | --- | --- |
| Model / Endpoint | String (model ID or endpoint) | Specifies the vision model to call | Use a model that supports vision analysis, e.g., doubao-seed-2-0-mini-260215. |
| API Key | String (key) | Credential for third-party model service access | Must be obtained from the provider. See 4.6. Third-party Model Integration. |
| OBSERVE Interval (sec) | Number (seconds) | Minimum interval between model calls in the OBSERVE state | 2–3 seconds. Too small increases token consumption. |
| Dwell Threshold (sec) | Number (seconds) | How long a target must persist before an event is confirmed | 3–5 seconds. Raising it reduces occasional false positives. |
| End Grace (sec) | Number (seconds) | Delay before ending an event after the target disappears | 2–3 seconds, to avoid interruption due to brief occlusion. |
| AI JPEG Quality | Number (50–95) | JPEG compression quality of frames sent to the model | 75–90. Too high increases bandwidth and encoding cost. |
| Motion Threshold (ratio) | Decimal (ratio) | Frame-change ratio threshold for motion detection | 0.1–0.3. Lower values increase sensitivity; raise it if AI triggers too frequently. |
| Motion Min Interval (sec) | Number (seconds) | Minimum time between motion triggers | 1–3 seconds, to prevent excessive triggering. |
| Prompt Template (Role) | Long text (role definition) | Defines the model role and output format specification | Explicitly require β€œJSON-only output”. |
| Scene Profile (Long-term) | Long text (long-term context) | Fixed environmental background description | Describe camera position, scene type, etc. |
| Session Focus (Short-term) | Long text (temporary focus) | Temporary monitoring objective | Example: β€œFocus on whether strangers enter the area.” |
| Extra Prompt / Rules | Long text (additional rules) | Additional output format, language, or risk-evaluation rules | Can specify the output language or restrict fields. |

Parameters usually require tuning according to lighting conditions, camera angle, and target types.
After modification, observe 1–2 complete event cycles before evaluating effectiveness.


β‘  Detailed Explanation of Text-type Parameters (Recommended Reading)

The following four fields belong to Prompt engineering parameters and directly affect model output structure and semantic understanding.

  • Prompt Template (Role): defines the model’s role and output structure.

Example:

You are a video surveillance assistant. You will receive a monitoring image and background information. Only output a JSON object including whether a person is present, number of people, activity, risk level, and a brief summary.
  • Scene Profile (Long-term): provides long-term fixed background context to help the model determine what is considered β€œabnormal”.

Example:

Home scenario: the camera faces the living room.
Office scenario: the camera faces the workspace area. Multiple people during working hours are normal.
  • Session Focus (Short-term): temporary task configuration that can change according to short-term requirements.

Example:

Currently focus on whether children approach the balcony area.
  • Extra Prompt / Rules: supplementary output and language constraints.

Example:

Use Chinese for the summary.
If uncertain, keep has_person=false.

β‘‘ Output Contract and Automatic Structure Constraint Mechanism

To ensure stable parsing of model outputs in the Dashboard, the backend implements an Output Contract mechanism.

Regardless of how the task is described in the Prompt Template, the model’s final output must conform to the predefined JSON structure required by the system.


4.5.2. AI Status Section

This area visualizes the runtime status of the backend AI state machine in real time.

β‘  Status Header

Located at the top of the AI Status panel, displaying the current state machine runtime status:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
| STATE: SLEEP / OBSERVE | Event: evt_xxxxx | Dwell: Yes / No | Health: Fine / Error |
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
| Field | Format | Meaning |
| --- | --- | --- |
| STATE | SLEEP / OBSERVE | Whether AI is currently active |
| Event | String (evt_xxxxx) | ID of the currently active event |
| Dwell | Yes / No | Whether the dwell-confirmation threshold has been reached |
| Health | Fine / Error | Whether the most recent model call succeeded |
  • State Machine Description

πŸ”Ή SLEEP: only runs traditional vision algorithms; does not call the vision model; low resource consumption.

πŸ”Ή OBSERVE: periodically calls the vision model, analyzes structured fields, and performs event recording.
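A minimal sketch of this SLEEP / OBSERVE control loop, wired to the OBSERVE Interval and End Grace parameters described above (class and method names are our assumptions; the real logic lives in the backend worker):

```python
class AiStateMachine:
    """Illustrative dual-state controller: SLEEP <-> OBSERVE."""

    def __init__(self, ai_interval=2.0, end_grace=3.0):
        self.state = "SLEEP"
        self.ai_interval = ai_interval  # OBSERVE Interval (sec)
        self.end_grace = end_grace      # End Grace (sec)
        self.last_call = -1e9
        self.last_seen = None

    def on_motion(self, now):
        if self.state == "SLEEP":
            self.state = "OBSERVE"      # motion trigger wakes the AI
            self.last_seen = now

    def should_call_model(self, now):
        """In OBSERVE, call the model at most once per OBSERVE Interval."""
        return self.state == "OBSERVE" and (now - self.last_call) >= self.ai_interval

    def on_model_result(self, now, has_person):
        self.last_call = now
        if has_person:
            self.last_seen = now        # dwell timing keeps accumulating
        elif self.last_seen is not None and (now - self.last_seen) >= self.end_grace:
            self.state = "SLEEP"        # no target for End Grace -> event ends
```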


β‘‘ Last AI Result

To stabilize output structure, the system enforces an Output Contract mechanism.
The model must output a JSON object containing at least the following fields:

{
  "has_person": bool,
  "person_count": int,
  "activity": str,
  "risk_level": "info|warn|critical",
  "confidence": float,
  "summary": str
}

After structured extraction, the Dashboard presents a human-readable format (example):

Person: No | Count: 0 | Activity: unknown | Risk: info | Conf: 99%
Summary: This indoor CCTV scene contains no people...
| Field | JSON Key | Format | Description |
| --- | --- | --- | --- |
| Person | has_person | Boolean (Yes / No) | Whether a person is detected |
| Count | person_count | Number | Number of people |
| Activity | activity | String | Behavior description |
| Risk | risk_level | info / warn / critical | Risk level |
| Conf | confidence | 0–100% | Model confidence |
| Summary | summary | String | Natural-language summary |

The system includes Output Contract safeguards.
If the model response is incomplete, it will be processed to ensure the Dashboard always receives valid output.
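One way such a safeguard can work is sketched below: missing or mistyped fields fall back to safe defaults so the Dashboard always receives a contract-compliant object. The defaults and coercion rules here are our assumptions based on the fields documented above, not the actual backend code:

```python
# Assumed safe defaults, one per Output Contract field.
CONTRACT_DEFAULTS = {
    "has_person": False, "person_count": 0, "activity": "unknown",
    "risk_level": "info", "confidence": 0.0, "summary": "",
}
ALLOWED_RISK = {"info", "warn", "critical"}

def enforce_contract(raw: dict) -> dict:
    """Fill missing fields with defaults and discard wrongly typed values."""
    out = {}
    for key, default in CONTRACT_DEFAULTS.items():
        value = raw.get(key, default)
        if not isinstance(value, type(default)):
            value = default          # wrong type -> fall back to the default
        out[key] = value
    if out["risk_level"] not in ALLOWED_RISK:
        out["risk_level"] = "info"   # unknown level -> lowest severity
    return out
```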


β‘’ Time Information (Times)

Calculates runtime timing information for the event monitoring module (example):

Last trigger: 2026-02-24 01:48:37 (2m ago)
Last AI call: 2026-02-24 01:51:05 (6s ago)
Event start:  2026-02-24 01:48:37 (2m ago)
| Field | Meaning |
| --- | --- |
| Last trigger | Most recent motion-trigger time |
| Last AI call | Most recent model-call time |
| Event start | Start time of the current event |

β‘£ Event Metrics (Metrics) (Example)

Person present (acc): 115.0 s
Event duration: 153.9 s
| Metric | Meaning |
| --- | --- |
| Person present (acc) | Accumulated time a person was confirmed present during the event |
| Event duration | Total duration of the current event |

Note: Only after dwell_confirmed (confirmed persistent presence) does the system begin accumulating effective person presence time.


β‘€ Raw JSON Output

Below Metrics, there is a collapsible Raw JSON section used to view the latest original JSON output from the model.

  • Collapsed by default; click Raw JSON to expand.

Useful for debugging Prompt configuration and checking field completeness.


β‘₯ History List

When the system runs for the first time, this section is empty.
After an event ends, its record is appended to log/ai_events.jsonl and displayed in chronological order on the page (example):

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ 2026-02-24 01:51:03   AI   πŸ”΄ critical                    
β”‚ Person: Yes | Count: 1 | Activity: lingering              
β”‚ Risk Level: critical | Confidence: 99%                    
β”‚ Summary: A single person is lying prone...                
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ 2026-02-24 01:50:48   AI   🟒 info                        
β”‚ Person: Yes | Count: 1 | Activity: passing                
β”‚ Risk Level: info | Confidence: 98%                        
β”‚ Summary: One person is walking through the indoor space…  
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                            ...
                            ...
                            ...

Note: The History List only displays the most recent 20 records.
To review earlier entries, open log/ai_events.jsonl directly to view the complete content.
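Since the file is standard JSON Lines (one JSON object per line), the full history is easy to read programmatically. A small sketch, assuming only the log/ai_events.jsonl path documented above:

```python
import json
from pathlib import Path

def load_recent_events(path="log/ai_events.jsonl", limit=20):
    """Return the most recent `limit` events, newest first --
    mirroring what the History List shows on the Dashboard."""
    p = Path(path)
    if not p.exists():
        return []
    events = []
    for line in p.read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if line:
            events.append(json.loads(line))  # one JSON object per line
    return events[-limit:][::-1]
```

Passing a larger limit (or no slicing at all) recovers the entries that the Dashboard no longer displays.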


4.6. Third-party Model Integration (Replaceable) βŒƒ

The current AI module is built around the vision model API provided by the Volcengine platform.
To use the AI functionality, you must first obtain model access permission and an API Key, then enter them into the Dashboard settings.

Support for additional platforms will be added in future updates.
If you wish to integrate other platforms or locally deployed vision models in the current version, modify the analyze_frame() function in ai_ark.py, and ensure the output structure complies with the Output Contract.


4.6.1. Account Registration and Login

  1. Visit the platform login page: https://console.volcengine.com/
    If the page is not in your preferred language, switch via the δΈ­ζ–‡ / EN button in the upper-right corner;

  2. Complete registration and log in to the console;

  3. Enter the Ark page;

  4. Go to Config - Model activation, and under the β€œLarge Language Model” category, locate the vision model.
    It is recommended to select Doubao-Seed-2.0-mini, then click the corresponding β€œActivate” button;

  • Reminder: Also enable the β€œFree Credits Only Mode” service on the Model activation page.
    The model provides a free quota (500,000 tokens). After activation, only the free quota is consumed.
    Once exhausted, API access will automatically be disabled.
  5. Go to the Config - API keys page in the console;

  6. Click β€œCreate API Key” and configure as needed;

  7. After creation, copy and securely store the generated API Key.

As the platform UI and free quota policies may change over time, the steps above are for reference.
However, the core process remains:

  1. Register and log in to the platform console;
  2. Activate a vision-capable model;
  3. Create an API Key;
  4. Enter the Model ID and API Key into the Dashboard.

4.6.2. Model Information

Based on development testing, Doubao-Seed-2.0-mini currently provides the best cost-performance balance.

  • Doubao-Seed-2.0-mini:
    • Lightweight model designed for low-latency and high-concurrency scenarios;
    • Supports up to 256k context length;
    • Four reasoning depth levels (minimal / low / medium / high);
    • Supports multimodal image-text understanding and function calling;
    • In non-reasoning mode, token consumption is only 1/10 of reasoning mode, offering excellent cost efficiency for simple scenarios.

After entering the model ID doubao-seed-2-0-mini-260215 and your API Key into the Dashboard and applying the settings, the AI module will function properly.


4.6.3. Replacing the Vision Model

If you wish to use other models (e.g., OpenAI Vision or a locally deployed multimodal model), modify the following files:

  • ai_ark.py

    • Modify the analyze_frame() function
    • Replace the API call logic
    • Ensure JSON output structure remains consistent
  • config_store.py (optional)

    • Add new model parameters
    • Set default values

The Dashboard and AI Status panels rely on the Output Contract for rendering.
If field names change, you must also update:

  • ai_monitor_worker.py
  • dashboard.js

⚠️ Important: Regardless of the model used, the output structure must conform to the predefined Output Contract.
Otherwise, the Dashboard will not be able to correctly parse the results.


5. Version Information & Notes βŒƒ

5.1. System Version βŒƒ

The Sentinel system consists of PC-side service program + Web Dashboard + Android CamFlow client.
The current version information is as follows:

  • Sentinel (PC + Dashboard) Version: v1.0.0-beta
  • CamFlow (Android) Version: v1.0.0-beta
  • Documentation Version: v1.0.0
  • Last Updated: 2026-02-26

5.2. Test Environment βŒƒ

The system has been tested and validated under the following environments:

  • PC Side (Server + Dashboard)

    • Operating System: Windows 10 / Windows 11
    • Python Version: Python 3.9+
    • Main Dependencies: Flask, OpenCV, NumPy, volcenginesdkarkruntime
    • Network Environment: Same local area network (LAN / same Wi-Fi)
  • Android Side (CamFlow)

    • System Version: Android 8.0+
    • Test Device: nova 8 SE Vitality Edition (HarmonyOS 3.0.0)
    • Network Environment: Same Wi-Fi / same LAN as PC
    • Required Permission: Camera (requested at first launch)

5.3. Roadmap βŒƒ

Possible future improvements include:

  • Security Mechanisms

    • Add Token / API Key authentication (for public network deployment)
  • Transmission Performance

    • Replace some HTTP polling or MJPEG mechanisms with WebSocket / WebRTC (reduce latency and resource consumption)
    • Adaptive frame rate / resolution control (dynamic tuning based on network and CPU conditions)
  • AI Integration

    • Expand pluggable model interfaces (OpenAI / local vision models / other platforms)
    • More refined event aggregation and rule engines (reduce false positives / false negatives)

5.4. Usage & License βŒƒ

Sentinel is released under the MIT License.

Copyright Β© 2026 Suzuran0y

  • This project is intended for learning, research, and technical validation purposes and is a prototype system.
  • If deploying the current version in a real production or commercial environment, you must implement additional measures:
    • Access authentication
    • Transmission encryption
    • Log desensitization and privacy compliance policies
    • Stability and resource limitation mechanisms

Note: CamFlow involves image data capture and transmission.
Ensure compliance with local laws and obtain appropriate authorization before use.