Offline, physics-first vibration intelligence system for SME industrial machinery. Detects bearing wear, misalignment, cavitation, and imbalance before catastrophic failure.
Built on AMD Ryzen AI edge hardware. Zero cloud dependency. Alerts in Hindi, Marathi, English.
```
Host Machine
└── Node A (C++ DSP)
      Subscribes: USB Audio (44.1kHz)
      Publishes:  ZMQ tcp://*:5555 [1024×64 spectrogram tensor]

Docker Network
├── Node B (Python Inference)
│     Subscribes: ZMQ :5555
│     Publishes:  ZMQ :5557 {mse, rms, severity, alert}
│     Runtime:    ONNX Runtime (CPU dev) · Vitis AI (NPU target)
│
├── Web (Node.js + React Dashboard)
│     Subscribes: ZMQ :5557
│     Serves:     http://localhost:3001
│
└── LLM (Llama 3.1 8B — Optional)
      Serves: :11434 via Ollama (local only, no cloud API)
```
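The `{mse, rms, severity, alert}` payload on `:5557` is the contract between Node B and the dashboard relay. A minimal sketch of building and validating that message — field names come from the diagram above, but the exact types and the `make_result`/`validate` helpers are illustrative assumptions, not the project's actual code:

```python
import json

# Hypothetical schema for the :5557 payload; field names follow the
# architecture diagram, types are assumptions.
REQUIRED_FIELDS = {"mse": float, "rms": float, "severity": str, "alert": bool}

def make_result(mse: float, rms: float, threshold: float = 0.180) -> str:
    """Serialize one inference result for the dashboard relay."""
    alert = mse > threshold
    severity = "anomaly" if alert else "normal"
    return json.dumps({"mse": mse, "rms": rms, "severity": severity, "alert": alert})

def validate(payload: str) -> bool:
    """Check a received message against the expected schema."""
    msg = json.loads(payload)
    return all(k in msg and isinstance(msg[k], t) for k, t in REQUIRED_FIELDS.items())
```

Keeping the schema in one place makes it easy for both the Python publisher and the Node.js relay to fail loudly on a malformed frame instead of rendering garbage.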
```
Resonance/
├── src/                        # Node A — C++ DSP Engine
│   ├── main.cpp
│   ├── ear.cpp                 # PortAudio input capture
│   ├── fft.cpp                 # FFTW3 FFT processing
│   ├── filters.cpp             # High-pass / low-pass filters
│   ├── spectrogram.cpp         # Log-magnitude spectrogram
│   ├── broadcaster.cpp         # ZMQ PUB :5555
│   └── safety.cpp              # RMS safety gate
│
├── include/resonance/          # C++ header files
│   ├── ear.hpp
│   ├── fft.hpp
│   ├── filters.hpp
│   ├── spectrogram.hpp
│   ├── broadcaster.hpp
│   └── safety.hpp
│
├── python/
│   ├── inference/
│   │   └── main.py             # Node B — ONNX inference + LLM alerts
│   ├── llm/
│   │   └── handler.py          # LLMProvider → LocalLLM → LLMHandler
│   ├── training/
│   │   ├── collect_data.py     # Healthy baseline collection
│   │   ├── dataset.py          # Spectrogram dataset loader
│   │   ├── model.py            # ConvAutoencoder architecture
│   │   ├── train.py            # Training loop
│   │   └── export_onnx.py      # PyTorch → ONNX export
│   ├── onnx/
│   │   ├── autoencoder.onnx    # Deployed model
│   │   └── model_hash.txt      # SHA-256 integrity hash
│   ├── weights/
│   │   ├── autoencoder.pth     # PyTorch checkpoint
│   │   └── norm_stats.json     # Normalization parameters
│   ├── utils/
│   │   ├── config.py           # Thresholds, ZMQ endpoints, LLM config
│   │   ├── zmq_receiver.py     # ZMQ subscriber
│   │   ├── rms_monitor.py      # Standalone RMS visualizer
│   │   └── dsp.py              # DSP utilities
│   ├── tests/
│   │   ├── mock_node_a.py      # ZMQ mock publisher for testing
│   │   ├── evaluate_model.py   # MSE threshold evaluation
│   │   └── verify_e2e.py       # End-to-end pipeline verifier
│   ├── run_inference.py        # Entry point
│   ├── requirements.txt
│   ├── Dockerfile
│   └── README.md
│
├── web/                        # Node C — React Dashboard
│   ├── app/                    # React components + hooks
│   ├── styles/                 # CSS + Tailwind
│   ├── server.js               # Node.js relay (ZMQ → Socket.IO)
│   ├── index.html
│   ├── package.json
│   ├── vite.config.ts
│   └── Dockerfile
│
├── schema/                     # Data schemas
├── docker-compose.yml
├── runall.sh                   # Start all services locally
├── CMakeLists.txt
└── README.md
```
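`model_hash.txt` stores a SHA-256 digest of the deployed ONNX file. A hedged sketch of how an integrity check against that file could look — `verify_model` and its assumed one-digest-per-line file layout are illustrative, not necessarily how the project's own check works:

```python
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    """Stream a file in chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(model_path: str, hash_path: str) -> bool:
    """Compare the deployed ONNX file against the recorded digest.
    Assumes the hash file's first whitespace-separated token is the digest."""
    expected = Path(hash_path).read_text().split()[0].strip()
    return sha256_of(model_path) == expected
```

Running this at Node B startup catches a corrupted or swapped model before it silently reports wrong MSE values.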
- Tier 1 — Raspberry Pi Zero 2W · USB Audio Adapter (C-Media CM108 compatible) · Piezo disc 35mm
- Tier 2 — AMD Ryzen AI Mini PC · XDNA NPU · 50 TOPS · Vitis AI Runtime
- Tier 3 — Ryzen 5 Edge Mini PC · Node.js dashboard · PostgreSQL archive
- Any Linux machine with Docker installed
- USB audio adapter or built-in microphone
- Node A validated on x86 — Pi Zero 2W deployment target
| Component | Spec | Cost |
|---|---|---|
| Raspberry Pi Zero 2W | Quad-core ARM 1GHz 512MB | ₹1,299 |
| USB Audio Adapter | C-Media CM108 44.1kHz | ₹299 |
| Piezo Disc 35mm | PZT ceramic contact | ₹80 |
| 1.2MΩ Bias Resistor | 1/4W carbon film | ₹2 |
| 1µF Capacitor | 25V electrolytic DC block | ₹8 |
| 2N3819 JFET Buffer | N-ch TO-92 impedance match | ₹20 |
| Total BOM | — | ₹2,491 |
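The 1 µF DC-block capacitor and the 1.2 MΩ bias resistor also define the analog front end's low-frequency corner. Assuming they form a simple first-order RC high-pass (a plausible reading of the BOM, not a schematic from the source), the corner frequency works out well below any vibration band of interest:

```python
import math

# First-order high-pass assumed to be formed by the 1 uF DC-block capacitor
# and the 1.2 MOhm bias resistor (component values from the BOM above).
R = 1.2e6   # ohms
C = 1.0e-6  # farads

f_c = 1.0 / (2.0 * math.pi * R * C)  # corner frequency in Hz, ~0.13 Hz
```

At ~0.13 Hz the RC network passes essentially the entire vibration spectrum; the 100 Hz high-pass in software (see the pipeline below) does the real low-frequency rejection.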
```bash
git clone https://github.com/HSKCTA/Resonance.git
cd Resonance
mkdir build && cd build
cmake ..
make
./resonance_node_a
```

Node A begins publishing spectrograms on ZMQ tcp://*:5555.
```bash
cd python
pip install -r requirements.txt
python run_inference.py
```

Node B subscribes to :5555 and publishes results on :5557.
```bash
cd web
npm install
npm run dev
```

Dashboard available at http://localhost:5173.
```bash
./runall.sh        # starts all 5 services
./runall.sh stop   # stops everything
```

Node A must run on the host to access the USB audio hardware:

```bash
cd build
./resonance_node_a
```

Then bring up the containers:

```bash
docker compose up --build
```

This starts Node B (inference), Web (dashboard), and the Ollama LLM container.
Dashboard available at http://localhost:3001.
Pull Llama 3.1 8B for local multilingual fault explanations:
```bash
docker exec -it resonance_ollama ollama pull llama3.1:8b
```

If the model is not pulled, the system falls back to rule-based alert text. The LLM delivers fault explanations in English, Hindi, and Marathi.
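The graceful-fallback behavior can be sketched as follows. This is an illustrative outline, assuming a `build_alert` helper and an injectable `llm` callable standing in for the Ollama client; none of these names come from the project's actual `handler.py`:

```python
def build_alert(mse: float, threshold: float = 0.180, llm=None) -> str:
    """Return alert text, preferring the local LLM but degrading
    gracefully to rule-based text when the model is unavailable."""
    if mse <= threshold:
        return "NORMAL: machine vibration within healthy baseline."
    prompt = (f"Explain a vibration anomaly (MSE={mse:.3f}) "
              f"in English, Hindi, and Marathi.")
    if llm is not None:
        try:
            return llm(prompt)  # e.g. a call into the local Ollama API
        except Exception:
            pass  # LLM container down: fall through to rule-based text
    return f"ANOMALY: reconstruction error {mse:.3f} exceeds threshold {threshold:.3f}."
```

The key design point is that an unavailable LLM never blocks or crashes the alert path — it only reduces the richness of the message.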
```
Piezo Sensor
     │
     ▼
PortAudio (44.1kHz capture)
     │
     ▼
High-Pass Filter (100Hz) + Low-Pass Filter (12kHz)
     │
     ├── RMS Amplitude → Safety Gate (ISO 10816 threshold)
     │      └── HARDWARE ALARM if RMS > threshold (bypasses AI)
     │
     ▼
FFTW3 (2048-pt FFT · 75% overlap)
     │
     ▼
Log-Magnitude Spectrogram [1024 × 64]
     │
     ▼
ZMQ PUB :5555
     │
     ▼
ConvAutoencoder (ONNX Runtime · CPU / AMD Ryzen AI NPU)
     │
     ▼
MSE Reconstruction Error
     │
     ├── MSE > 0.180 → ANOMALY DETECTED
     │      └── Llama 3.1 8B LLM → alert in Hindi / Marathi / English
     │
     └── MSE ≤ 0.180 → NORMAL
```
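The decision order in the pipeline — hardware RMS gate first, AI layer second — can be expressed compactly. A sketch, where `classify` and the `rms_limit` value are illustrative (the real gate uses an ISO 10816 threshold, not the placeholder below):

```python
def classify(rms: float, mse: float,
             rms_limit: float = 1.0, mse_threshold: float = 0.180) -> str:
    """Mirror the pipeline's decision order: the RMS safety gate fires
    before, and independently of, the AI layer. rms_limit is a placeholder,
    not the normative ISO 10816 value."""
    if rms > rms_limit:
        return "HARDWARE_ALARM"   # bypasses AI entirely
    if mse > mse_threshold:
        return "ANOMALY"
    return "NORMAL"
```

Keeping the RMS check ahead of inference means a gross over-vibration still trips an alarm even if the model, ZMQ link, or container is down.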
Industrial faults produce specific spectral signatures detectable weeks before failure:
| Fault Type | Spectral Signature | Detection Method |
|---|---|---|
| Bearing Wear | High-frequency harmonics >5kHz | Autoencoder MSE |
| Misalignment | Strong 2×/3× shaft-frequency peaks | FFT harmonic analysis |
| Looseness | Frequency sidebands | Spectral analysis |
| Imbalance | Large 1× shaft frequency peak | RMS safety gate |
Standards compliance: ISO 10816-3:2009 · ISO 13373-1:2002
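For the misalignment signature, the harmonic analysis reduces to comparing the 2× and 3× shaft-frequency peaks against the 1× peak. A simplified sketch — the `harmonic_ratios`/`looks_misaligned` helpers, the exact-bin indexing, and the 0.5 ratio limit are all illustrative assumptions, not the project's detector:

```python
def harmonic_ratios(spectrum, shaft_bin, n_harmonics=3):
    """Ratio of each kx shaft-frequency peak to the 1x peak.
    `spectrum` is a magnitude spectrum (list of floats); exact-bin lookup
    stands in for real peak picking around each harmonic."""
    base = spectrum[shaft_bin]
    return [spectrum[k * shaft_bin] / base for k in range(1, n_harmonics + 1)]

def looks_misaligned(spectrum, shaft_bin, ratio_limit=0.5):
    """Flag when the 2x or 3x harmonic rises above ratio_limit of the 1x
    peak — a classic misalignment signature (ratio_limit is illustrative)."""
    _, r2, r3 = harmonic_ratios(spectrum, shaft_bin)
    return r2 > ratio_limit or r3 > ratio_limit
```

A healthy rotor shows a dominant 1× peak with small harmonics; a misaligned one pushes energy into 2× and 3×, which this ratio test picks up.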
| Property | Value |
|---|---|
| Architecture | Convolutional Autoencoder |
| Input | 1024×64 log-magnitude spectrogram |
| Training data | Healthy vibration only (unsupervised) |
| Loss function | Mean Squared Error (MSE) |
| Anomaly threshold | 0.180 (calibrated on healthy baseline data) |
| Export format | ONNX FP32 |
| Runtime | ONNX Runtime (CPU) · Vitis AI (NPU target) |
| Inference latency | 3ms median CPU · ~1.4ms projected AMD NPU |
| Parameters | ~180K |
Min-max normalization applied before inference.
Parameters stored in python/weights/norm_stats.json.
Must be regenerated if sensor hardware changes — run collect_data.py then train.py.
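A sketch of the min-max step, assuming `norm_stats.json` stores global `min` and `max` keys — that layout and the `normalize` helper are assumptions about the file, not confirmed by the source:

```python
import json

def normalize(spectrogram, stats):
    """Min-max scale a spectrogram to [0, 1] using training-set statistics.
    The 'min'/'max' keys are an assumed layout for norm_stats.json."""
    lo, hi = stats["min"], stats["max"]
    span = (hi - lo) or 1.0  # guard against a degenerate constant input
    return [[(v - lo) / span for v in row] for row in spectrogram]

# Typical use at inference time (path from the repo layout):
# stats = json.load(open("python/weights/norm_stats.json"))
# x = normalize(spectrogram, stats)
```

Because these statistics bake in the sensor's gain and noise floor, swapping hardware without regenerating them silently shifts every MSE value — hence the retraining requirement above.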
- Trained on small healthy dataset — retrain on target machine for best results
- Single fault type detection — fault classifier roadmap Q2 2026
- Threshold calibrated manually — adaptive threshold planned
```bash
cd python
python tests/evaluate_model.py
```

Reports the MSE distribution on training data and a recommended threshold.
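One common way to turn a healthy-data MSE distribution into a threshold is a mean-plus-sigma rule. A sketch of that idea — the `recommend_threshold` name and the 3-sigma margin are illustrative; `evaluate_model.py` may use a different rule:

```python
def recommend_threshold(mse_values, margin=3.0):
    """Suggest an anomaly threshold as mean + margin * std over
    healthy-data MSEs. The 3-sigma margin is an illustrative choice."""
    n = len(mse_values)
    mean = sum(mse_values) / n
    var = sum((x - mean) ** 2 for x in mse_values) / n  # population variance
    return mean + margin * var ** 0.5
```

With healthy MSEs clustering near ~0.003 (see the E2E results below), a sigma-based rule leaves comfortable headroom below the deployed 0.180 threshold.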
| Parameter | Value |
|---|---|
| Sample rate | 44,100 Hz |
| FFT size | 2048 points |
| Overlap | 75% |
| Frequency bins | 1024 |
| Time steps | 64 |
| Output shape | [1, 1024, 64] |
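These parameters also fix the audio-buffer fill time quoted in the latency section. A worked calculation, using only the table's values (computing fill time as 64 hops at the 512-sample hop, which reproduces the ~743 ms figure):

```python
SAMPLE_RATE = 44_100   # Hz
FFT_SIZE = 2048        # points
OVERLAP = 0.75         # 75% overlap between frames
TIME_STEPS = 64        # spectrogram columns

hop = int(FFT_SIZE * (1 - OVERLAP))       # 512 samples advanced per frame
samples_needed = TIME_STEPS * hop          # 64 hops of fresh audio
fill_ms = 1000 * samples_needed / SAMPLE_RATE  # ~743 ms per spectrogram
```

This is why the pipeline cannot alert faster than roughly three-quarters of a second after an event, regardless of how fast inference runs.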
Model: ConvAutoencoder FP32 ONNX · Input: 1024×64 · Runs: 1000 · Warmup: 50
| Metric | CPU (x86 dev) | NPU Projection* |
|---|---|---|
| Mean | 7.0 ms | ~1.4 ms |
| Median (P50) | 3.0 ms | ~0.6 ms |
| P95 | 21.7 ms | ~4.3 ms |
| P99 | 27.7 ms | ~5.5 ms |
| Min | 1.3 ms | — |
| Max | 34.7 ms | — |
*NPU projection: AMD Ryzen AI XDNA (50 TOPS, Ryzen AI 9 HX 370), assuming ~5× CPU speedup for FP32 ONNX via the Vitis AI Runtime; validated benchmarks pending on target hardware.
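The table's methodology (1000 timed runs after 50 warmup runs, reporting mean and percentiles) can be reproduced with a small harness. A sketch — `benchmark` and its nearest-rank percentile picker are illustrative, not the project's benchmark script:

```python
import time

def benchmark(fn, runs=1000, warmup=50):
    """Time a callable and report mean/P50/P95/P99 in milliseconds,
    mirroring the table's methodology (1000 runs, 50 warmup)."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000)
    samples.sort()
    # Simple nearest-rank percentile on the sorted samples
    pct = lambda p: samples[min(len(samples) - 1, int(p / 100 * len(samples)))]
    return {"mean": sum(samples) / len(samples),
            "p50": pct(50), "p95": pct(95), "p99": pct(99)}
```

Warmup matters here: the first ONNX Runtime calls pay one-time graph-optimization and allocation costs that would otherwise inflate the tail percentiles.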
| Stage | Latency |
|---|---|
| FFT (64 hops · 2048-pt) | 1.5 ms |
| Spectrogram build | 0.09 ms |
| Audio buffer fill* | ~743 ms |
*Audio buffer fill requires 64 frames at 44.1kHz to construct one spectrogram. This is a physics constraint identical across all vibration monitoring systems.
Tested: mock_node_a → ZMQ :5555 → Node B (ONNX) → ZMQ :5557 → verifier
| Metric | Result |
|---|---|
| Frames tested | 10/10 successful |
| Pipeline mean (ZMQ + inference) | 30.9 ms |
| Pipeline worst case | 51.0 ms |
| Audio buffer fill | ~743 ms (physics) |
| Full sensor-to-alert | ~774 ms mean |
| MSE on healthy data | ~0.003 (consistent) |
| ZMQ transport | ✓ verified |
| LLM fallback | ✓ graceful — no crash on unavailable model |
Pipeline target: <100ms inference ✓ (30.9ms measured)
- CPU: AMD Ryzen 5 (Yoga 6 13ARE05)
- OS: Linux
- Runtime: ONNX Runtime · CPUExecutionProvider
- DSP: C++17 · FFTW3 · PortAudio
- Inference runs: 1000 · Warmup: 50
- No audio data leaves the local network
- Human voice range (80Hz–3kHz) filtered before AI layer — conversations never processed
- No cloud dependency for core inference
- Air-gapped factory deployment supported
- LLM runs locally via Ollama — no external API calls in production
- Read-only system — never sends control commands to machinery
| Tier | Hardware | BOM Cost | Deploy Price |
|---|---|---|---|
| Tier 1 — Sensor Node | Pi Zero 2W + Piezo | ₹2,491 | ₹3,499 |
| Tier 2 — NPU Zone | AMD Ryzen AI Mini PC | ₹81,296 | ₹89,999 |
| Tier 3 — Master Node | Ryzen 5 Edge Mini PC | ₹64,996 | ₹72,999 |
| 20-machine SME | 2 zones + 1 master | — | ₹3,47,977 |
SKF equivalent: ₹9,00,000 hardware + ₹2,00,000/yr cloud. Resonance: 85% cheaper. Zero cloud cost.
| Phase | Timeline | Feature |
|---|---|---|
| Q1 2026 | Now | ConvAutoencoder · batched NPU inference · 50 sensors/zone |
| Q2 2026 | 3 months | Fault type classifier — bearing vs imbalance vs looseness |
| Q3 2026 | 6 months | Remaining useful life estimator — LSTM on MSE trend |
| Q4 2026 | 12 months | Multi-factory dashboard · SCADA integration · AMD EPYC |
Hitesh Khare — Systems Engineering · C++ DSP Core
Tanmay Bhole — AI/ML Architecture · Model Training · GenAI
AMD Slingshot 2026 · Team DataNOtfOund

