Face recognition and people memory for AI assistants.
Give your AI assistant a real face memory. Enroll known people with reference photos, then automatically identify faces in inbound images: names, confidence scores, and bounding box coordinates, ready to inject as context into any LLM.
Built by Sam Cox as part of the OpenClaw ecosystem.
Feed any photo and get back labeled bounding boxes with confidence scores:
Works across group photos, identifying everyone it knows:
Every face is reduced to a 128-dimensional mathematical fingerprint. Different people produce measurably different encodings, which is what makes identification possible.
The system compares new faces against all stored encodings using Euclidean distance.
Confidence = 1 - distance, with a default match threshold of 0.55 (45%+ confidence).
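The matching rule above can be sketched in a few lines (illustrative only; the helper name `best_match` and the data layout are not part of the package API, and the real matcher lives inside `identify_faces`):

```python
# Sketch of the matching rule: Euclidean distance against stored
# encodings, confidence = 1 - distance, and a 0.55 distance threshold.
import numpy as np

THRESHOLD = 0.55  # maximum distance that still counts as a match

def best_match(new_encoding, known):
    """Return (name, confidence) for the closest stored encoding,
    or (None, None) if nothing falls within the threshold."""
    best_name, best_dist = None, float("inf")
    for name, encoding in known:
        dist = np.linalg.norm(new_encoding - encoding)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    if best_dist <= THRESHOLD:
        return best_name, 1.0 - best_dist
    return None, None
```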
- SQLite people database: scales from a handful of family members to thousands of faces
- Multi-encoding per person: enroll multiple photos per person for better accuracy across angles, lighting, and years
- Structured JSON output: bounding boxes, confidence scores, position descriptions, and an `llm_context` string ready to pass to any LLM
- Unknown candidate tracking: unrecognized faces are saved with cropped images for later enrollment
- 100% local: no cloud APIs, no data leaves your machine
- LLM-ready: output designed to enrich image analysis with identity context
```bash
# Install system dependencies (Ubuntu/Debian)
sudo apt-get install -y cmake python3-dev

# Install Python packages
pip install face_recognition pillow numpy
```

Note: `face_recognition` compiles `dlib` from source, which takes 5-10 minutes on first install. Be patient.
```bash
python -m sam_faces.enroll_face --name "Jane Smith" --photo jane.jpg --note "Office headshot 2026"
```

If multiple faces are detected, you'll be prompted to choose which one to enroll.
```bash
python -m sam_faces.identify_faces --photo group_photo.jpg
```

Output:

```json
{
  "face_count": 2,
  "faces": [
    {
      "name": "Jane Smith",
      "confidence": 0.94,
      "unknown": false,
      "bounding_box": {"top": 120, "right": 340, "bottom": 280, "left": 180},
      "center": [260, 200],
      "position_desc": "upper-left"
    },
    {
      "name": "Unknown",
      "confidence": null,
      "unknown": true,
      "unknown_id": "a1b2c3d4",
      "bounding_box": {"top": 80, "right": 600, "bottom": 240, "left": 450},
      "center": [525, 160],
      "position_desc": "upper-right"
    }
  ],
  "llm_context": "2 faces detected: Jane Smith (upper-left, 94% confidence); Unknown person (upper-right)."
}
```

Pass the `llm_context` string alongside the image to any vision model:
```python
from sam_faces.identify_faces import identify

result = identify("photo.jpg")
prompt = f"Describe this image. People identified: {result['llm_context']}"
# → "Describe this image. People identified: 2 faces detected: Jane Smith (upper-left, 94% confidence); Unknown person (upper-right)."
```

List enrolled people:

```bash
python -m sam_faces.face_db --list
```

List unknown candidates:

```bash
python -m sam_faces.face_db --unknowns
```

Or use the Python API directly:

```python
from sam_faces.identify_faces import identify
from sam_faces.face_db import init_db, add_person, add_encoding, list_people
import face_recognition

# Initialize DB
init_db()

# Enroll programmatically
image = face_recognition.load_image_file("photo.jpg")
encodings = face_recognition.face_encodings(image)
person_id = add_person("Jane Smith")
add_encoding(person_id, encodings[0], note="Office 2026")

# Identify
result = identify("group_photo.jpg")
print(result["llm_context"])
```

Set the `SAM_FACES_DB` environment variable to use a custom database location:

```bash
export SAM_FACES_DB=/path/to/your/people.db
```

Default: `./faces/people.db` relative to the package root.
```
people(id TEXT, name TEXT, created_at TEXT)
encodings(id TEXT, person_id TEXT, vector BLOB, note TEXT, added_at TEXT)
unknown_candidates(id TEXT, image_path TEXT, face_crop_path TEXT,
                   detected_at TEXT, resolved INTEGER, resolved_as TEXT)
```

Vectors are stored as raw float64 binary blobs (128 dimensions from dlib's face encoding model).
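The BLOB storage can be illustrated with a numpy round trip (a sketch using an in-memory SQLite database; the package's own helpers may differ in detail):

```python
# Sketch: how a 128-dim float64 encoding round-trips through a BLOB column.
import sqlite3
import numpy as np

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE encodings (id TEXT, person_id TEXT, vector BLOB)")

vec = np.random.default_rng(0).standard_normal(128)  # stand-in encoding
conn.execute("INSERT INTO encodings VALUES (?, ?, ?)",
             ("enc1", "person1", vec.tobytes()))     # float64 array -> raw bytes

blob = conn.execute("SELECT vector FROM encodings").fetchone()[0]
restored = np.frombuffer(blob, dtype=np.float64)     # raw bytes -> float64 array
```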
Enroll the same person from multiple photos to improve accuracy:
```bash
python -m sam_faces.enroll_face --name "Jane Smith" --photo jane_2020.jpg --note "2020 - longer hair"
python -m sam_faces.enroll_face --name "Jane Smith" --photo jane_2026.jpg --note "2026 - current"
```

The system matches against all encodings for a person and uses the best score. This handles aging, hairstyle changes, glasses, and different lighting conditions.
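"Uses the best score" can be pictured as taking the smallest distance across all of one person's stored encodings (a sketch; the function name is illustrative, not part of the package API):

```python
import numpy as np

def person_confidence(new_encoding, person_encodings):
    """Best confidence across all of one person's stored encodings,
    where confidence = 1 - Euclidean distance to an encoding."""
    distances = [np.linalg.norm(new_encoding - e) for e in person_encodings]
    return 1.0 - min(distances)  # best score = smallest distance
```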
When an unrecognized face is detected:
- It's saved to `unknown_candidates` in the database
- A cropped face image is saved to `faces/unknown/`
- Later, you can enroll it:

```bash
python -m sam_faces.enroll_face --name "Bob" --photo faces/unknown/unknown_photo_120_80.jpg
```
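Cropping a face out of the source image from its bounding box might look like this (a sketch using Pillow; the package's actual crop logic and filename scheme are internal):

```python
from PIL import Image

def save_face_crop(image_path, box, out_path):
    """Crop a face given a {top, right, bottom, left} bounding box
    (the same format as the JSON output) and save it to out_path."""
    img = Image.open(image_path)
    # PIL's crop takes (left, upper, right, lower)
    crop = img.crop((box["left"], box["top"], box["right"], box["bottom"]))
    crop.save(out_path)
```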
Default threshold: 0.55 (distance). Lower = stricter matching.
```bash
# Stricter (fewer false positives)
python -m sam_faces.identify_faces --photo photo.jpg --threshold 0.45

# More lenient (better recall for difficult angles)
python -m sam_faces.identify_faces --photo photo.jpg --threshold 0.65
```

- All face data stays 100% local: no API calls, no cloud uploads
- The database contains only face encodings (128-dimensional vectors), not raw photos
- Add `faces/people.db` and `faces/unknown/` to your `.gitignore`
MIT License β Copyright (c) 2026 Sam Cox
See LICENSE for details.
Built on top of Adam Geitgey's face_recognition library and dlib by Davis King.
Part of the OpenClaw AI assistant ecosystem.


