Video Player demo
Haven Safe

(🚧 Under Construction)

A lightweight FastAPI demo that serves a Tailwind-styled video player with a centered chat-style prompt area to request identification of content in the playing video. This repository is a starting point for integrating server-side video analysis (OpenCV/FFmpeg/ML) or connecting to external video-intelligence services.

  • Built with FastAPI · Jinja2 templates · Tailwind CDN
  • Client JS: small player + chat UI at /static/player.js
  • Chat POSTs to /identify (a stub in this demo) which echoes a simulated identification result

Motivation to Build:

As of the creation of this app, I work at the Museum of Ice Cream as a guest specialist, ensuring all of our guests have a good time and stay safe. One of our most common calls over the radio is a Code Adam: the museum is geared toward children and adults alike and has a fairly open layout, so these occurrences happen quite frequently. Since the museum already has security cameras, I figured that adding computer vision to quickly identify missing children in a high-traffic area such as my workplace could reduce wasted time, save staff resources, improve communication, and give parents peace of mind.

I see this as part of a larger camera-based security AI system for detecting vulnerable populations:

  • An LLM prompt option to enter a description of the missing person's clothing and other identifying items, helping the app locate which area and camera they appear on.
  • A manager can then quickly communicate with assigned staff so they can confirm the child has been found.
  • Timestamps of when a camera detects a person matching the clothing description.

--Live Demo button here--

Table of contents

Quick overview

This project serves a single web page that includes:

  • A responsive HTML5 video player (supports loading a sample video or a local file).
  • Simple client controls (Play/Pause, Mute, time display).
  • A centered chat input placed directly under the player that submits a prompt and current video time to a backend /identify endpoint.

AI Assistants:

  • Copilot: created the Python server (FastAPI)
  • Google Video Intelligence API: takes the user's prompt and analyzes the selected pre-recorded video to find the object the user is looking for

Features

  • FastAPI + Jinja2 template for server rendering
  • Tailwind CDN used for quick UI styling
  • Static assets under static/ (player JS, client logic)
  • requirements.txt lists fastapi, uvicorn, jinja2, and optional tools (dotenv, opencv-python-headless, ffmpeg-python)
  • Chat prompt posts { prompt, time } to /identify and displays the result in a small chat history area

Quickstart

  1. Create a virtual environment and install dependencies:
python3 -m venv .venv
.venv/bin/python -m pip install --upgrade pip setuptools wheel
.venv/bin/python -m pip install -r requirements.txt
  2. Run the app with Uvicorn:
.venv/bin/python -m uvicorn main:app --reload --port 8001
  3. Open http://127.0.0.1:8001 in your browser.

Usage

  • Load the page, click Load sample or select a local video file.
  • Use the controls to play/pause and mute.
  • Type a prompt in the input field directly under the player (for example: "identify cars" or "faces at current time") and press Enter or click Send.
  • The client sends the prompt and the current video time to /identify and the server's reply appears in the chat history.
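The same request the chat UI makes can be reproduced from a script. A stdlib-only sketch, assuming the { prompt, time } field names from the Features section and the default local port; the helper names are illustrative:

```python
import json
import urllib.request

def build_identify_payload(prompt: str, time: float) -> bytes:
    """Encode the { prompt, time } body the chat UI sends to /identify."""
    return json.dumps({"prompt": prompt, "time": time}).encode("utf-8")

def send_identify(prompt: str, time: float, base_url: str = "http://127.0.0.1:8001"):
    """POST the payload to a running demo server and return the parsed reply."""
    req = urllib.request.Request(
        base_url + "/identify",
        data=build_identify_payload(prompt, time),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```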

Development

  • Server entrypoint: main.py (FastAPI app and /identify endpoint)
  • Template: templates/index.html (Tailwind + chat UI)
  • Client logic: static/player.js (player controls + chat client)
  • Edit and reload: use Uvicorn's --reload option during development.

Future features

  • Live tracking that tells you which camera channel the person currently appears on
  • AI-generated descriptions of what happened to the missing individual, allowing managers to seamlessly write incident reports
  • Use ffmpeg or opencv-python to extract frames or audio snippets for server-side processing.
  • Consider running heavy ML inference as a separate worker/service and return results via async requests or a job queue.
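The worker/job-queue idea in the last bullet can be sketched with the standard library alone. A real deployment would more likely use Celery, RQ, or a separate service; the names below are illustrative:

```python
# Minimal in-process job queue: the web handler enqueues work and returns a
# job id immediately; a background worker runs the slow analysis.
import queue
import threading
import uuid

jobs: "queue.Queue[tuple[str, str, float]]" = queue.Queue()
results: dict = {}

def worker() -> None:
    """Pull (job_id, prompt, time) tuples and run the heavy analysis step."""
    while True:
        job_id, prompt, time = jobs.get()
        # Placeholder for ML inference on the frame at `time`.
        results[job_id] = f"analyzed '{prompt}' at {time:.1f}s"
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def submit(prompt: str, time: float) -> str:
    """Enqueue a job and return an id the client can poll for the result."""
    job_id = uuid.uuid4().hex
    jobs.put((job_id, prompt, time))
    return job_id
```

The /identify endpoint would then return the job id right away and expose a second route for polling results, keeping slow inference off the request path.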

Testing & QA notes

  • Manual checks:
    • Page loads, static files served correctly.
    • Sample video plays and time display updates.
    • Chat sends prompt + time and receives a response from /identify.
    • Local video file selection works via object URLs.

Edge cases:

  • Large files or long videos may need streaming-based processing rather than loading entire files into memory.
  • If you add ffmpeg-python, you will likely need the ffmpeg binary installed on the host system.
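For the large-file case, a chunked reader avoids loading the whole file into memory. A minimal stdlib sketch (chunk size is an arbitrary choice):

```python
from typing import Iterator

def iter_chunks(path: str, chunk_size: int = 1 << 20) -> Iterator[bytes]:
    """Yield a file's contents in fixed-size chunks instead of reading it whole."""
    with open(path, "rb") as fh:
        while chunk := fh.read(chunk_size):
            yield chunk
```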


Security & privacy

  • This demo is local-first and does not send video or prompts anywhere by default.
  • If you integrate cloud video-intelligence or ML services, ensure you obtain consent before transmitting user media.
  • .env is ignored by .gitignore — do not commit secrets.

Contributing

  • Open an issue to discuss major changes.
  • Create focused PRs for feature additions or bug fixes.
  • Keep the demo runnable locally without secrets.

License

MIT — add a LICENSE file if you want to make this explicit.

Acknowledgements

  • FastAPI — backend framework
  • Tailwind CSS — UI utilities via CDN
  • Sample video resources used in the demo templates

How to run (assuming dependencies are installed):

python3 -m uvicorn main:app --reload --port 8001

Open http://127.0.0.1:8001 in your browser. Use the "Load sample" button or choose a local video file.

Notes:

  • Tailwind is loaded via the Play CDN for quick development. Replace with a compiled build for production.
  • Static files are served from /static.

