Online annotation of valence and arousal.

MongoDB- and Flask-backed online annotation tool for valence and arousal.

The tool is released for research purposes only.

If you use this tool, please cite:

    @article{kossaifi2017afewva,
        author  = {J. Kossaifi and G. Tzimiropoulos and S. Todorovic and M. Pantic},
        journal = {Image and Vision Computing},
        title   = {AFEW-VA database for valence and arousal estimation in-the-wild},
        year    = {2017},
    }

For more information about the tool, please check the wiki.

Screenshot of the online tool:




You need MongoDB installed.

With conda:

Clone the repository:

git clone
cd affect_annotation

Create a virtual environment, activate it, and install the requirements:

conda create --name annotator python
source activate annotator
pip install -r requirements.txt

Running the annotation tool in development mode:

Initialising the database and adding a new user (with admin rights):

python init username password -a 1

Running the app in testing mode:

python runserver

This will run the app on port 55555; log in at localhost:55555/login.

Adding a new user without administration rights:

python new_user username password -a 0

Updating the users' data after adding new videos:

python update_data

More details on administering the app are in the wiki.

Saving the annotated data:

Saving the whole database using mongo:

To save the database to a file:

mongoexport -d annotations -c annotation -o '/data/savefile.json'

(here we are saving the collection annotation from the database annotations)

To import it back:

mongoimport -d annotations -c annotation --file '/data/savefile.json'
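By default, mongoexport writes one JSON document per line, so a dump like the one above can be inspected without MongoDB. A minimal sketch (the `load_dump` helper is illustrative, not part of the tool):

```python
import json

def load_dump(path):
    """Load a mongoexport dump: one JSON document per line."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]
```

For example, `len(load_dump('/data/savefile.json'))` gives the number of documents exported from the annotation collection.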

Exporting annotations to json files:

python save_annotations /path/to/save/folder annotator_username

Setting up your data for annotation

The repo contains some demonstration data. To use your own, create one folder per video in the subfolder:


Each folder should contain images in png format.

Optionally, you can provide existing annotations in JSON format. If so, put in each folder a file named after the folder, with a .json extension. For more details, check the wiki.
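The expected layout can be sanity-checked before starting the server. This is a minimal sketch (the `check_video_folder` helper is hypothetical, not part of the tool) assuming the convention above: each video folder holds PNG frames, plus an optional JSON file named after the folder.

```python
import json
from pathlib import Path

def check_video_folder(folder):
    """Return (ok, message) for one video folder: it must contain
    .png frames, and may contain an annotation file named
    <folder_name>.json."""
    folder = Path(folder)
    frames = sorted(folder.glob("*.png"))
    if not frames:
        return False, "no .png frames found"
    annotation = folder / (folder.name + ".json")
    if annotation.exists():
        # Pre-existing annotations must at least be valid JSON.
        json.loads(annotation.read_text())
    return True, "%d frames" % len(frames)
```

Running this over every subfolder of the data directory catches empty folders and malformed annotation files early.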

Deploying in production:

First install tornado:

pip install tornado

Update the SECRET_KEY in ./annotator/

app.config["SECRET_KEY"] = "your_secret_key"
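One way to generate a suitably random key is Python's standard secrets module (a sketch; any cryptographically secure random string works):

```python
import secrets

# 32 random bytes rendered as 64 hex characters, suitable for
# use as Flask's SECRET_KEY.
secret_key = secrets.token_hex(32)
print(secret_key)
```

Paste the printed value into the SECRET_KEY setting above; do not commit it to version control.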

Then use the deploy script (you may want to change the port):


Annotating your data:

When you load the /annotate page, the focus is on the "annotate all frames" button by default. Click it to annotate valence using the up and down arrow keys or the sliders. The right arrow key saves the current values and moves to the next frame. Once all frames have been annotated for valence, the video is rewound and you can annotate arousal with the same keys. You can then save the video as 'check' if you want to verify your annotations later, 'to-do' if you want to redo them, or 'done' if you are confident in the results.

Questions and contributions:


Contributions are welcome: feel free to create a pull request or open a new issue. If you have questions, you can contact me at jean [dot] kossaifi+annotator [at] gmail [.com]

