This repository presents preliminary research on combining ear tag detection with multi-object tracking to re-identify lost tracks and assign them to specific pig identities. The work has not yet been published in any conference or journal. A presentation summarizing the initial findings is available here.
All tools in this repository must be run with the repository root as the working directory. For further information on the individual tools, refer to the corresponding documentation:
Set up the environment by first ensuring conda is installed and activated, then running:
source _setup/setup.sh
- Open this documentation if you want to qualitatively explore the distribution of ear tag labels. This is pure exploration and not a prerequisite for running any other functionality.
- Open this documentation if you want to downsample videos or mask out adjacent pens. This step is optional, but working at a lower frame rate can be convenient: video files are smaller and subsequent processing is faster. If you do downsample, do it before running anything else (even before obtaining ear tag/tracking results) and use the downsampled videos for all subsequent tasks.
- Open this documentation if you need to re-encode videos whose frames are broken. This is also best done before running anything else, so that you do not obtain tracking results on videos with broken frames that later cause errors, e.g. during visualization.
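The two optional preprocessing steps above (downsampling and re-encoding) boil down to ffmpeg invocations. The sketch below only builds the commands; the file names, the 5 fps target, and the x264/CRF settings are illustrative assumptions, not defaults of this repository:

```python
# Sketch of the two optional preprocessing steps, assuming ffmpeg is on PATH.
# File names, the 5 fps target, and the codec settings are assumptions.
import shlex


def downsample_cmd(src: str, dst: str, fps: int = 5) -> list:
    """ffmpeg command that re-times `src` to `fps` frames per second."""
    return ["ffmpeg", "-i", src, "-filter:v", f"fps={fps}", dst]


def reencode_cmd(src: str, dst: str) -> list:
    """ffmpeg command that decodes `src` leniently and re-encodes it to H.264."""
    return ["ffmpeg", "-err_detect", "ignore_err", "-i", src,
            "-c:v", "libx264", "-crf", "18", dst]


if __name__ == "__main__":
    # Print the commands; pass each list to subprocess.run(cmd, check=True)
    # once ffmpeg is installed.
    for cmd in (downsample_cmd("pen.mp4", "pen_5fps.mp4"),
                reencode_cmd("pen_5fps.mp4", "pen_5fps_fixed.mp4")):
        print(shlex.join(cmd))
```

Downsampling first and re-encoding the downsampled file keeps the expensive re-encode on the smaller video.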
The following two scripts form the actual processing pipeline:
- First, a matching has to be established between Martin's labels and the labels from our tracker. See this documentation.
- Then, we can combine tracking and ear tag information using the script in this documentation.
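Conceptually, the two pipeline steps above can be sketched as follows. This is a minimal illustration, not the repository's actual implementation: it assumes boxes are `(x1, y1, x2, y2)` tuples, pairs ear tag labels with track ids greedily by IoU within each frame, and then assigns each track the ear tag identity it was matched with most often.

```python
# Minimal sketch of the matching + merging logic, under assumed data shapes:
# `eartag_boxes` maps an ear tag label to its box in one frame, and
# `tracker_boxes` maps a track id to its box in the same frame.
from collections import Counter


def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def match_frame(eartag_boxes, tracker_boxes, min_iou=0.5):
    """Greedily pair ear tag labels with track ids by descending IoU."""
    pairs = sorted(((iou(eb, tb), tag, tid)
                    for tag, eb in eartag_boxes.items()
                    for tid, tb in tracker_boxes.items()), reverse=True)
    matched, used_tags, used_tids = [], set(), set()
    for score, tag, tid in pairs:
        if score >= min_iou and tag not in used_tags and tid not in used_tids:
            matched.append((tag, tid))
            used_tags.add(tag)
            used_tids.add(tid)
    return matched


def assign_identities(per_frame_matches):
    """Give each track the ear tag identity it was matched with most often."""
    votes = {}
    for matches in per_frame_matches:
        for tag, tid in matches:
            votes.setdefault(tid, Counter())[tag] += 1
    return {tid: c.most_common(1)[0][0] for tid, c in votes.items()}
```

A per-track majority vote like this is robust to occasional misreads of an ear tag in single frames, which is why merging over many frames is preferable to trusting any individual detection.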
To visualize quantitative results of the pipeline, use this IPython notebook. To visualize tracks that have been merged with ear tag information, use this Python script.