
DeepFooding

Software for the automatic labeling of human eating behaviors

The two main goals of the project

(1) Precisely and robustly track movements and actions in video recordings of dining experiences using pre-trained deep-learning algorithms. The whole pipeline will be built on the DeepLabCut software package; more information, demos, and resources are detailed below.

(2) Robustly label key eating behaviors (e.g., bite, sip) from the resulting tracked spatial coordinates. These automatic labels will then be compared with two datasets manually coded by experts.

Step 1: Discover the DeepLabCut package in depth

To get familiar with DeepLabCut, I recommend:

(1) Reading the scientific papers describing the protocol and the rationale behind the toolbox, especially the Nature Protocols paper. You can find all relevant papers (starting with "0_") in the papers directory.

(2) Checking out the documentation page of the package here, as well as the quick video detailing how to navigate the docs.

(3) Following the Course about the science of DeepLabCut and how to use it.

Also consider having a look at the DEMO Jupyter Notebooks.

Step 2: Getting started with the Living Lab dataset

Label your data - It is now time to use the videos from this project. Pick diverse videos (background, people, colors, etc.), create a DLC project, and start labeling what you want to track (e.g., fork, knife, glass, facial expressions, hands). You can use the Project Manager GUI or IPython (more functions here); a minimal sketch of the corresponding calls is shown below. You can find all relevant resources here.
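If you prefer scripting over the GUI, a minimal sketch could look like the following. It assumes DeepLabCut is installed; the video paths, project name, and experimenter name are placeholders to adapt, and the body parts to track are defined afterwards in the project's config.yaml.

```python
import deeplabcut

# Placeholder paths: point these at a diverse subset of the Living Lab videos.
videos = [
    "/data/living_lab/meal_01.mp4",
    "/data/living_lab/meal_02.mp4",
]

# create_new_project writes a config.yaml listing the body parts to label
# (e.g., mouth, fork, knife, glass, hands); edit that file before labeling.
config_path = deeplabcut.create_new_project(
    "DeepFooding", "your_name", videos, copy_videos=True
)

# Extract a representative set of frames, then open the labeling GUI.
deeplabcut.extract_frames(config_path, mode="automatic", algo="kmeans")
deeplabcut.label_frames(config_path)
```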

Train and evaluate your model - Once you label on your laptop, if you want to train on your CPU, you can use and edit as you need the JupyterNotebook in this repo. There is also the possibility to use GPUs on the cloud, by uploading your project to google drive COLAB NOTEBOOK to create a training set, train, and start evaluating. More information for this step can be found here. But in this project we will use the servers of Ecole Centrale Lyon.
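The notebook boils down to a few DeepLabCut calls; here is a hedged sketch. The config path is a placeholder, and the keyword names and iteration settings should be checked against the installed DeepLabCut version and adjusted to the available hardware.

```python
import deeplabcut

# Placeholder: path to the config.yaml of the project created above.
config_path = "/path/to/DeepFooding-project/config.yaml"

# Build the training dataset from the labeled frames, then train.
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path, shuffle=1, displayiters=100, saveiters=10000)

# Writes train/test pixel errors for the labeled frames, with optional plots.
deeplabcut.evaluate_network(config_path, plotting=True)
```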

At this stage, you need to document tracking performance across pretrained networks and data-augmentation techniques in a controlled way (watch this video for help). You can find more information about which network you should use, and why, here. Complementary resources are available here. One way to structure such a comparison is sketched below.
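One hedged way to keep the comparison controlled is to give each backbone its own shuffle, so the reported errors stay attributable to a single configuration. The backbone names and the keyword arguments (net_type, Shuffles) are assumptions to verify against the installed DeepLabCut version.

```python
import deeplabcut

config_path = "/path/to/DeepFooding-project/config.yaml"  # placeholder

# Placeholder backbones to compare; one shuffle per backbone keeps runs separable.
backbones = ["resnet_50", "resnet_101", "mobilenet_v2_1.0"]

for shuffle, net in enumerate(backbones, start=1):
    deeplabcut.create_training_dataset(config_path, Shuffles=[shuffle], net_type=net)
    deeplabcut.train_network(config_path, shuffle=shuffle)
    # evaluate_network reports train/test pixel error per shuffle for comparison.
    deeplabcut.evaluate_network(config_path, Shuffles=[shuffle], plotting=True)
```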

Step 3: Scaling your analysis to many new videos

Document and scale your analysis pipeline - Once you have established your analysis pipeline for the specific labels we have targeted together, an important step is to (1) document these choices for future projects, and (2) automate the analysis of new videos (more info here). A batch-analysis sketch is shown below.
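As a sketch, once a network is trained, applying it to a folder of new sessions can be as simple as the following; the paths are placeholders, and analyze_videos writes per-video coordinate files next to each video.

```python
from pathlib import Path

import deeplabcut

# Placeholder paths: trained project config and a folder of new meal videos.
config_path = "/path/to/DeepFooding-project/config.yaml"
new_videos = [str(p) for p in Path("/data/living_lab/new_sessions").glob("*.mp4")]

# Run the trained network on every new video; CSV output eases later analysis.
deeplabcut.analyze_videos(config_path, new_videos, videotype=".mp4", save_as_csv=True)

# Optional: overlay predictions on the videos to spot-check tracking quality.
deeplabcut.create_labeled_video(config_path, new_videos)
```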

Step 4: Making sense of pose estimation

The last but not least step is to make sense of the tracking data you have estimated so far. For example, you can start by identifying when a person takes a bite or a sip during their meal, based on the positions of their mouth, glass, and cutlery, by calculating the distances between them over time. You can then establish distance thresholds (e.g., a distance < 1 cm) to automatically code the beginning and end of each bite or sip taken during a meal. A minimal sketch of this kind of analysis is shown below.
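A minimal sketch of this idea, assuming DeepLabCut's standard .h5 output (multi-indexed columns: scorer, body part, x/y/likelihood) and that "mouth" and "glass" are among the labeled body parts. The file name and the 40-pixel threshold are placeholders; the threshold would need to be calibrated to roughly 1 cm for your camera setup.

```python
import numpy as np
import pandas as pd

# Placeholder: coordinates produced by deeplabcut.analyze_videos for one video.
df = pd.read_hdf("meal_01_DLC_tracked.h5")
scorer = df.columns.get_level_values(0)[0]

mouth = df[scorer]["mouth"][["x", "y"]].to_numpy()
glass = df[scorer]["glass"][["x", "y"]].to_numpy()

# Frame-by-frame Euclidean distance between mouth and glass, in pixels.
dist = np.linalg.norm(mouth - glass, axis=1)

# Frames where the glass is close enough to the mouth to count as a sip.
close = dist < 40  # placeholder threshold in pixels; calibrate to ~1 cm

# Turn the boolean trace into (start_frame, end_frame) pairs for each sip.
edges = np.diff(close.astype(int), prepend=0, append=0)
starts = np.where(edges == 1)[0]
ends = np.where(edges == -1)[0]
sips = list(zip(starts, ends))
print(f"Detected {len(sips)} candidate sips:", sips[:5])
```

The same pattern extends to bites by swapping in the fork or hand coordinates, and per-event durations follow directly from the (start, end) frame pairs and the video frame rate.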

A rich set of tools already exists and can help you create your own custom analysis. Check out more here.

A compilation of relevant scientific papers can also be found in the "scientific_papers" directory (all papers starting with "1_").

GitHub and Git resources

Here are resources to help you set up Git and Atom, and to help you manage Git and GitHub for the current project.
