This Python package was initially built to improve robot-human communication, but its use cases can vary. Given an image containing a face, it extracts useful features with which robots and agents can better engage in conversation with humans.
The package contains four models that extract visual features from human faces:
1. Face detection gives you bounding boxes around the faces and their probabilities.
2. Face landmark detection gives you 68 face landmarks. This depends on (1).
3. Age/gender detection gives you estimated gender and age. This depends on (1) and (2).
4. Face recognition gives you 512-D face embedding vectors. This depends on (1) and (2).
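ArcFace-style 512-D embeddings are typically compared with cosine similarity: the closer the score is to 1, the more likely two crops show the same person. Below is a minimal, self-contained sketch of that comparison; the toy vectors are made up for illustration, and the package's actual comparison API may differ.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for 512-D face embeddings (real ones come from ArcFace).
emb_a = [0.1] * 512
emb_b = [0.1] * 256 + [-0.1] * 256

same_person = cosine_similarity(emb_a, emb_a)  # identical vectors -> 1.0
different = cosine_similarity(emb_a, emb_b)    # orthogonal here -> 0.0
```

In practice a threshold on this score (tuned on a validation set) decides whether two embeddings belong to the same identity.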
- An x86 Unix or Unix-like machine
- Python 3.7.9 environment
- (Optional) Docker Engine
- Clone this repo:

  ```
  git clone https://github.com/leolani/cltl-face-all
  ```

- Install the requirements (a virtual Python environment is highly recommended):

  ```
  pip install -r requirements.txt
  ```

- Go to the directory where this `README.md` is located and install the `cltl-face-all` repo by running:

  ```
  pip install .
  ```
- Clone this repo:

  ```
  git clone https://github.com/leolani/cltl-face-all
  ```

- Go to the directory where this `README.md` is located and build the docker image:

  ```
  docker build -t cltl-face-all .
  ```

- Run the docker container:

  ```
  docker run -p 27004:27004 -it --rm cltl-face-all /bin/bash
  ```
In your Python environment, import the module `cltl_face_all` to use the classes and functions. Below is a code snippet.

```python
from cltl_face_all.face_alignment import FaceDetection
from cltl_face_all.arcface import ArcFace
from cltl_face_all.agegender import AgeGender

# Instantiate the age/gender, face-recognition, and face-detection models on CPU.
ag = AgeGender(device='cpu')
af = ArcFace(device='cpu')
fd = FaceDetection(device='cpu', face_detector='blazeface')
```
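The detector produces bounding boxes, and the downstream models operate on the cropped face region. The snippet below sketches that cropping step on a dummy frame; the `(x1, y1, x2, y2)` box format and array shapes are illustrative assumptions, not the package's documented output format.

```python
import numpy as np

def crop_face(image, box):
    """Crop a face region from an H x W x 3 image given an (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]

# Dummy 480 x 640 RGB frame standing in for a real camera image.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
face = crop_face(frame, (100, 50, 228, 178))  # yields a 128 x 128 crop
```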
Go to the `examples` folder and take a look at some of the Jupyter notebooks.
- Watch this video for registering your face and running the webcam demo.
- Watch this video for obtaining face features from videos and running inference on images.
None of the models used were trained by me; I copied the code and binary files from already existing repos. The original sources are credited in my code.
- Create a test dataset to set a baseline.
- Currently both TensorFlow and PyTorch are used. Stick to one (preferably PyTorch) and make everything compatible with it.
- Better organize the downloading of the binary weight files. They are currently stored all over the place.
- Find a better face detector. This package supports several face detectors (e.g. sfd, blazeface, and dlib), but there may well be a better one.
- Include facial emotion detection.
- Clean and readable code.
- Better docstrings.
- GPU support.
- Create a server in docker.
- Decouple face detection (bounding box) and face landmark detection. They are technically two separate things.
- Think about extending the visual features from faces to full-sized humans (e.g. human poses).
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
- Taewoon Kim (t.kim@vu.nl)