Lightweight Visual Re-Identification & Template Tracker
CPU-friendly Re-ID using color histograms and template matching, fast enough for edge devices.
| Feature | ghostfinder | torchreid | deep-person-reid |
|---|---|---|---|
| CPU-only (no GPU needed) | ✅ | ❌ | ❌ |
| Edge device ready (RPi, Jetson) | ✅ | ❌ | ❌ |
| < 0.5 ms per comparison | ✅ | ❌ | ❌ |
| Template matching fallback | ✅ | ❌ | ❌ |
| Multi-feature scoring | ✅ | ❌ | ❌ |
| ROI-constrained search | ✅ | ❌ | ❌ |
| No training data needed | ✅ | ❌ | ❌ |
| Lightweight (~300 lines total) | ✅ | ❌ | ❌ |
Deep learning Re-ID solutions are powerful but require GPUs and large models. ghostfinder is designed for real-time systems on resource-constrained hardware, where you need to re-identify a target in under 1 ms using only the CPU.
Install from PyPI:

```bash
pip install ghostfinder
```

Or install from source:

```bash
git clone https://github.com/ByIbos/ghostfinder.git
cd ghostfinder
pip install -e .
```

Re-identify a lost target with `TargetReID`:

```python
import cv2
from ghostfinder import TargetReID

reid = TargetReID(similarity_threshold=0.55)

# While tracking the target:
reid.update_fingerprint(frame, (x1, y1, x2, y2))

# When target is lost and new detections appear:
best_id, score = reid.find_best_match(frame, all_boxes, all_ids)
if best_id is not None:
    print(f"Target re-identified as ID {best_id} (score: {score:.2f})")
```

Fall back to visual search with `TemplateTracker`:

```python
from ghostfinder import TemplateTracker

tracker = TemplateTracker(match_threshold=0.45, search_margin=150)

# While detector is working:
tracker.update_template(frame, (x1, y1, x2, y2))

# When YOLO/detector fails to find the target:
result = tracker.search(frame, predicted_center=(320, 240))
if result:
    center = result['center']
    score = result['score']
    print(f"Template found at {center} (confidence: {score:.0%})")
```

Combine both with a Kalman filter for layered recovery:

```python
from ghostfinder import TargetReID, TemplateTracker

reid = TargetReID()
template = TemplateTracker()

def handle_target_lost(frame, boxes, ids, kalman_prediction):
    """
    3-layer recovery strategy:
    1. Try Re-ID (color fingerprint matching)
    2. Try Template Matching (visual search)
    3. Use Kalman prediction (coast on momentum)
    """
    # Layer 1: Re-ID
    match_id, score = reid.find_best_match(frame, boxes, ids)
    if match_id is not None:
        return ("REID", match_id, score)

    # Layer 2: Template
    result = template.search(frame, predicted_center=kalman_prediction)
    if result:
        return ("TEMPLATE", result['center'], result['score'])

    # Layer 3: Kalman (handled externally)
    return ("PREDICT", kalman_prediction, 0.0)
```

`TargetReID` is a visual fingerprinting system that stores the target's color histogram, aspect ratio, and area for comparison against new detections.
| Method | Returns | Description |
|---|---|---|
| `update_fingerprint(frame, box)` | `None` | Store/update the fingerprint from the current crop |
| `compare(frame, box)` | `float` | Compare one box against the fingerprint (0–1) |
| `find_best_match(frame, boxes, ids)` | `(id, score)` | Find the best-matching detection |
| `reset()` | `None` | Clear all stored data |
| Component | Weight | Metric |
|---|---|---|
| Color similarity | 60% | HSV histogram Bhattacharyya distance |
| Shape similarity | 20% | Aspect ratio difference |
| Size similarity | 20% | Area ratio (min/max) |
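The weighted score in the table above can be sketched in plain NumPy. This is a minimal illustration, not ghostfinder's internal code: the Bhattacharyya coefficient here stands in for OpenCV's `HISTCMP_BHATTACHARYYA` distance, and the shape/size formulas are plausible assumptions based on the table.

```python
import numpy as np

def color_similarity(hist_a, hist_b):
    """Bhattacharyya coefficient between two histograms (1.0 = identical)."""
    p = hist_a / hist_a.sum()
    q = hist_b / hist_b.sum()
    return float(np.sum(np.sqrt(p * q)))

def combined_score(color_sim, box_a, box_b):
    """Weighted sum per the table above: 0.6*color + 0.2*shape + 0.2*size."""
    wa, ha = box_a[2] - box_a[0], box_a[3] - box_a[1]
    wb, hb = box_b[2] - box_b[0], box_b[3] - box_b[1]
    # Shape: agreement of aspect ratios (1.0 when identical)
    ar_a, ar_b = wa / ha, wb / hb
    shape_sim = min(ar_a, ar_b) / max(ar_a, ar_b)
    # Size: area ratio (min/max), 1.0 when areas match
    area_a, area_b = wa * ha, wb * hb
    size_sim = min(area_a, area_b) / max(area_a, area_b)
    return 0.6 * color_sim + 0.2 * shape_sim + 0.2 * size_sim
```

Identical boxes with a perfect color match score 1.0; any mismatch in hue, proportions, or scale pulls the score down toward the 0.55 threshold.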
`TemplateTracker` is a fallback tracker that stores the last known visual appearance and searches for it with `cv2.matchTemplate` within a constrained ROI.
| Method | Returns | Description |
|---|---|---|
| `update_template(frame, box)` | `None` | Store the current target appearance |
| `search(frame, predicted_center)` | `dict` or `None` | Search for the template in the frame |
| `reset()` | `None` | Clear all stored data |
On a match, `search()` returns a dict of the form:

```python
{
    'center': (cx, cy),                    # Match center point
    'box': (x1, y1, x2, y2),               # Bounding box
    'score': 0.72,                         # Correlation score (0-1)
    'search_region': (sx1, sy1, sx2, sy2)  # ROI that was searched
}
```

```
┌────────────────────────────────────────────────────┐
│                 ghostfinder Pipeline               │
│                                                    │
│  ┌── TRACKING PHASE ────────────────────────────┐  │
│  │ Target visible → update fingerprint          │  │
│  │   • HSV histogram (30×32 bins)               │  │
│  │   • Aspect ratio (w/h)                       │  │
│  │   • Area (w×h)                               │  │
│  │   • Template crop (grayscale)                │  │
│  └──────────────────────────────────────────────┘  │
│                       │                            │
│                  Target Lost!                      │
│                       │                            │
│  ┌── RECOVERY PHASE ────────────────────────────┐  │
│  │                                              │  │
│  │ Layer 1: Re-ID (TargetReID)                  │  │
│  │   Compare all detections vs fingerprint      │  │
│  │   Score = 0.6×color + 0.2×shape + 0.2×size   │  │
│  │   Match if score > threshold (0.55)          │  │
│  │                                              │  │
│  │ Layer 2: Template (TemplateTracker)          │  │
│  │   Search ROI = predicted_center ± margin     │  │
│  │   cv2.matchTemplate (TM_CCOEFF_NORMED)       │  │
│  │   Match if correlation > threshold (0.45)    │  │
│  │                                              │  │
│  └──────────────────────────────────────────────┘  │
└────────────────────────────────────────────────────┘
```
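Layer 2's ROI-constrained search can be sketched in plain NumPy. This is a didactic stand-in: ghostfinder itself uses `cv2.matchTemplate` with `TM_CCOEFF_NORMED`, which computes the same zero-mean normalized cross-correlation far faster, and the helper names here are illustrative.

```python
import numpy as np

def clip_roi(frame_shape, predicted_center, template_shape, margin=150):
    """Search region = predicted_center ± margin, clipped to the frame."""
    H, W = frame_shape[:2]
    th, tw = template_shape[:2]
    cx, cy = predicted_center
    x1, y1 = max(0, cx - margin), max(0, cy - margin)
    x2, y2 = min(W, cx + margin), min(H, cy + margin)
    if x2 - x1 < tw or y2 - y1 < th:  # region too small to hold the template
        return None
    return (x1, y1, x2, y2)

def match_template(region, template):
    """Brute-force zero-mean normalized cross-correlation (what TM_CCOEFF_NORMED computes)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_xy = -1.0, (0, 0)
    for y in range(region.shape[0] - th + 1):
        for x in range(region.shape[1] - tw + 1):
            w = region[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = t_norm * np.sqrt((wz * wz).sum())
            score = float((wz * t).sum() / denom) if denom > 0 else 0.0
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score
```

A score of 1.0 means an exact match up to brightness and contrast changes; the default threshold of 0.45 tolerates moderate appearance drift between frames.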
Benchmarked on a Raspberry Pi 4B (no GPU):
| Operation | Time | Notes |
|---|---|---|
| `update_fingerprint()` | ~0.3 ms | Per frame |
| `compare()` (single box) | ~0.4 ms | Per candidate |
| `find_best_match()` (10 boxes) | ~3 ms | Worst case |
| `search()` (template) | ~1–3 ms | Depends on ROI size |
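To sanity-check numbers like these on your own hardware, a micro-benchmark along these lines works. `time_op` is a hypothetical helper, not part of ghostfinder, and the lambda times only a histogram comparison of the same 30×32 size the fingerprint uses, not the full `compare()` call.

```python
import time
import numpy as np

def time_op(fn, *args, repeats=1000):
    """Median wall-clock time per call, in milliseconds."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - t0) * 1e3)
    return float(np.median(samples))

# Time one 30x32-bin histogram comparison (Bhattacharyya-style kernel):
h1 = np.random.rand(30, 32)
h2 = np.random.rand(30, 32)
ms = time_op(lambda a, b: np.sum(np.sqrt(a * b)), h1, h2)
print(f"~{ms:.4f} ms per histogram comparison")
```

The median is reported rather than the mean so a few slow calls (GC pauses, scheduler preemption) don't skew the result.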
- Drone Tracking – Re-acquire target after occlusion
- Surveillance – Track individuals across camera blind spots
- Robot Vision – Persistent object following
- Industrial – Track parts on conveyor belts
- Sports Analytics – Player tracking through crowds
MIT License – use it anywhere, commercially or personally.