EVA is a database system tailored for video analytics -- think PostgreSQL for videos. It supports a SQL-like language for querying videos, enabling tasks such as:
- examining the "emotion palette" of different actors
- finding plays that lead to a touchdown in a football game
EVA comes with a wide range of commonly used computer vision models. It is written in Python and licensed under the Apache License.
If you are wondering why you might need a video database system, start with the page on Video Database Systems. It describes how EVA lets users easily make use of deep learning models and reduce the money spent on inference over large image or video datasets.
The Getting Started page shows how you can use EVA for different computer vision tasks -- image classification, object detection, and action recognition -- and how you can easily extend EVA to support your custom deep learning model in the form of a user-defined function.
The User Guides section contains Jupyter notebooks that demonstrate how to use various features of EVA. Each notebook includes a link to Google Colab, where you can run the code yourself.
EVA offers three key advantages:
- Easily combine SQL and deep learning to build next-generation database applications: query videos in user-facing applications with a SQL-like interface for commonly used computer vision models.
- Speed up queries and save money spent on model inference: EVA comes with a collection of built-in sampling, caching, and filtering optimizations inspired by time-tested relational database systems.
- Extensible by design to support custom deep learning models: EVA has first-class support for user-defined functions that wrap around your deep learning models in PyTorch.

- EVA supports Python versions 3.7 through 3.10. To install EVA, we recommend using the pip package manager:
pip install evadb
- EVA works in Jupyter notebooks -- illustrative notebooks are available in the Tutorials folder. EVA adopts a client-server architecture and comes with a terminal-based client. To start the EVA server and the terminal-based client, use the following commands:
eva_server & # launch server
eva_client # launch client
- Load a video onto the server using the client (we use the ua_detrac.mp4 video as an example):
LOAD VIDEO "data/ua_detrac/ua_detrac.mp4" INTO MyVideo;
- That's it! You can now start running queries over the loaded video:
SELECT id, data FROM MyVideo WHERE id < 5;
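Queries can also be issued programmatically through EVA's Python client. The sketch below assumes a locally running server; the module path and connect signature are based on older EVA releases and may differ in your version.

from eva.server.db_api import connect

# Connect to a locally running EVA server (default host and port assumed).
connection = connect(host="0.0.0.0", port=5432)
cursor = connection.cursor()

# Any EVAQL statement can be passed as a string.
cursor.execute("SELECT id, data FROM MyVideo WHERE id < 5;")
response = cursor.fetch_all()
print(response)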
- Search for frames in the video that contain a car (the <@ operator checks that the list of labels on the left is contained in the detector's predicted labels):
SELECT id, data FROM MyVideo WHERE ['car'] <@ FastRCNNObjectDetector(data).labels;
- Search for frames in the video that contain a pedestrian and a car:
SELECT id, data FROM MyVideo WHERE ['pedestrian', 'car'] <@ FastRCNNObjectDetector(data).labels;
- Search for frames in the video with more than 3 cars:
SELECT id, data FROM MyVideo WHERE Array_Count(FastRCNNObjectDetector(data).labels, 'car') > 3;
- You can create a new user-defined function (UDF) that wraps around your custom vision model or an off-the-shelf model like FastRCNN:
CREATE UDF IF NOT EXISTS MyUDF
INPUT (frame NDARRAY UINT8(3, ANYDIM, ANYDIM))
OUTPUT (labels NDARRAY STR(ANYDIM), bboxes NDARRAY FLOAT32(ANYDIM, 4),
scores NDARRAY FLOAT32(ANYDIM))
TYPE Classification
IMPL 'eva/udfs/fastrcnn_object_detector.py';
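The file referenced by IMPL contains the Python class that wraps the model. Below is a minimal, hypothetical sketch of what such an implementation might look like; the base class and hook names that EVA expects differ across versions, so treat the class structure here as an assumption rather than EVA's exact UDF interface.

import torch
import torchvision

# Hypothetical UDF skeleton -- the interface EVA expects (base class,
# property, and method names) is an assumption and may differ by version.
class MyUDF:
    """Wraps a pretrained Faster R-CNN model so EVA can invoke it per frame."""

    def __init__(self):
        # Load an off-the-shelf detection model from torchvision.
        self.model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
            pretrained=True
        )
        self.model.eval()

    @property
    def name(self):
        return "MyUDF"

    def forward(self, frames):
        # frames: a batch of uint8 tensors shaped (3, H, W), matching the
        # INPUT declaration in the CREATE UDF statement.
        with torch.no_grad():
            predictions = self.model([frame / 255.0 for frame in frames])
        # Return labels, bounding boxes, and confidence scores, matching
        # the OUTPUT declaration. A real implementation would map the
        # integer label ids to class-name strings such as 'car'.
        return [
            {
                "labels": pred["labels"].numpy(),
                "bboxes": pred["boxes"].numpy(),
                "scores": pred["scores"].numpy(),
            }
            for pred in predictions
        ]

Once the file is registered with CREATE UDF, MyUDF can be called inside queries just like FastRCNNObjectDetector above.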
- You can combine multiple user-defined functions in a single query to accomplish more complicated tasks.
-- Analyse emotions of faces in a video
SELECT id, bbox, EmotionDetector(Crop(data, bbox))
FROM HAPPY JOIN LATERAL UNNEST(FaceDetector(data)) AS Face(bbox, conf)
WHERE id < 15;
Join the EVA community on Slack to ask questions and to share your ideas for improving EVA.
To file a bug or request a feature, please use GitHub issues. Pull requests are welcome. For more information on installing from source and contributing to EVA, see our contributing guidelines.
Copyright (c) 2018-2022 Georgia Tech Database Group. Licensed under the Apache License.