bee detection convnet for a raspberry pi on the side of a hive
BNN v2

unet-style image translation from an image of the hive entrance to a bitmap of bee-centre locations.

trained in a semi-supervised way on a desktop gpu and deployed to run in real time on the hive using either a raspberry pi with a neural compute stick or a jevois embedded smart camera.

see this blog post for more info.

here's an example of predicting bee positions on some held-out data. the majority of training examples had ~10 bees per image.

rgb_labels_predictions.png

the ability to locate each bee means we can summarise activity with a count. note the spike around 4pm when, at this time of year, the bees return to the hive.

counts_over_days.png
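deriving those counts is simple once per-bee labels exist: each image's count is just the number of labels recorded against it. a toy sketch with a made-up schema (the real schema lives in label_db.py):

```python
import sqlite3
from collections import Counter

# made-up schema: one row per labelled bee, keyed by image filename.
# the real schema is defined in label_db.py.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE labels (img TEXT, x INT, y INT)")
db.executemany(
    "INSERT INTO labels VALUES (?, ?, ?)",
    [("f1.jpg", 10, 20), ("f1.jpg", 30, 40), ("f2.jpg", 5, 5)],
)

# one count per image; bucketing these by capture time gives a
# counts-over-days style plot
counts = Counter(dict(db.execute("SELECT img, COUNT(*) FROM labels GROUP BY img")))
print(counts)  # Counter({'f1.jpg': 2, 'f2.jpg': 1})
```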

usage

see run_sample_training_pipeline.sh for an executable end-to-end walkthrough of these steps (using the sample data)

gathering data

the rasp_pi subdirectory includes one method of collecting images on a raspberry pi.

labelling

start by using the label_ui.py tool to manually label some images, creating a sqlite label.db

the following command starts the labelling tool for some sample data provided with this repo (already labelled by me!).

./label_ui.py \
--image-dir sample_data/training/ \
--label-db sample_data/labels.db \
--width 768 --height 1024

hints

  • left click to label the center of a bee
  • right click to remove the closest label
  • press up to toggle labels on / off. this can help in tricky cases.
  • use left / right to move between images. when labelling it's often helpful to flick quickly back and forth between images to distinguish bees from the background
  • use whatever zoom your OS provides; e.g. in ubuntu, super+up / down

you can merge the entries from a.db into b.db with merge_dbs.py

./merge_dbs.py --from-db a.db --into-db b.db
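under the hood a merge like this just copies label rows from one db into the other; a toy sketch with a hypothetical schema (merge_dbs.py and label_db.py define the real behaviour):

```python
import sqlite3

# hypothetical schema: one row per labelled bee, keyed by image filename
def make_db(rows):
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE labels (img TEXT, x INT, y INT)")
    db.executemany("INSERT INTO labels VALUES (?, ?, ?)", rows)
    return db

from_db = make_db([("f1.jpg", 1, 2), ("f2.jpg", 3, 4)])
into_db = make_db([("f2.jpg", 3, 4)])

# copy across every row the destination doesn't already have
existing = set(into_db.execute("SELECT img, x, y FROM labels"))
new_rows = [r for r in from_db.execute("SELECT img, x, y FROM labels")
            if r not in existing]
into_db.executemany("INSERT INTO labels VALUES (?, ?, ?)", new_rows)
print(sorted(into_db.execute("SELECT img FROM labels")))  # [('f1.jpg',), ('f2.jpg',)]
```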

training

before training we materialise the label.db (a database of x,y coordinates) into black-and-white bitmaps using ./materialise_label_db.py

./materialise_label_db.py \
--label-db sample_data/labels.db \
--directory sample_data/labels/ \
--width 768 --height 1024
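as a rough illustration of what materialisation produces (the real ./materialise_label_db.py may differ in blob shape and size), each labelled (x, y) becomes a small white patch on an otherwise black bitmap:

```python
# toy sketch of materialisation, not the real materialise_label_db.py;
# WIDTH / HEIGHT / RADIUS are made-up values for illustration
WIDTH, HEIGHT, RADIUS = 32, 24, 2

def materialise(labels):
    bitmap = [[0] * WIDTH for _ in range(HEIGHT)]
    for x, y in labels:
        # paint a (2*RADIUS+1)-pixel square, clipped to the image bounds
        for yy in range(max(0, y - RADIUS), min(HEIGHT, y + RADIUS + 1)):
            for xx in range(max(0, x - RADIUS), min(WIDTH, x + RADIUS + 1)):
                bitmap[yy][xx] = 255  # white pixel = "bee centre here"
    return bitmap

bitmap = materialise([(10, 10), (0, 0)])
print(bitmap[10][10], bitmap[0][0], bitmap[20][20])  # 255 255 0
```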

we can visualise the training data with data.py. this generates a number of test*.png files with the augmented input data on the left and the corresponding output labels on the right.

./data.py \
--image-dir sample_data/training/ \
--label-dir sample_data/labels/ \
--width 768 --height 1024

sample_data/test_002_001.png
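one detail worth calling out: any geometric augmentation applied to the input frame must be applied identically to the label bitmap, otherwise the labels drift off the bees. a minimal sketch, not the actual data.py code:

```python
import random

# paired augmentation: whatever flip we apply to the input frame must
# also be applied to the label bitmap
def random_flip(img, label, rng):
    if rng.random() < 0.5:
        img = [row[::-1] for row in img]      # horizontal flip of the frame...
        label = [row[::-1] for row in label]  # ...mirrored on the label bitmap
    return img, label

img = [[1, 2], [3, 4]]
label = [[0, 255], [0, 0]]
rng = random.Random(0)
for _ in range(5):
    img2, label2 = random_flip(img, label, rng)
    # the pair always stays consistent: both flipped, or neither
    assert (img2 != img) == (label2 != label)
```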

train with train.py.

--run denotes the subdirectory for checkpoints and tensorboard logs; e.g. --run r12 checkpoints under ckpts/r12/ and logs under tb/r12/.

use --help for the complete list of options, including model config, validation data and stopping conditions.

e.g. to train for a short time on sample_data, run the following... (for a more realistic result we'd want to train for many more steps on much more data)

./train.py \
--run r12 \
--steps 300 \
--train-steps 50 \
--train-image-dir sample_data/training/ \
--test-image-dir sample_data/test/ \
--label-dir sample_data/labels/ \
--width 768 --height 1024

progress can be visualised with tensorboard (serves at localhost:6006)

tensorboard --logdir tb

inference

predictions can be run with predict.py. to specify the type of output, set one of the following...

  • --output-label-db to create a label db; this can be merged with a human-labelled db using ./merge_dbs.py for semi-supervised learning
  • --export-pngs centroids to export output bitmaps equivalent to those made by ./materialise_label_db.py
  • --export-pngs predictions to export the raw model output (i.e. before connected-components post-processing)
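that connected-components step can be sketched with the stdlib only (illustrative; the repo's actual post-processing may differ): threshold the output bitmap, flood-fill each blob of bright pixels, and emit one centroid per blob.

```python
from collections import deque

def centroids(bitmap, thresh=128):
    h, w = len(bitmap), len(bitmap[0])
    seen = [[False] * w for _ in range(h)]
    out = []
    for y in range(h):
        for x in range(w):
            if bitmap[y][x] >= thresh and not seen[y][x]:
                # BFS flood-fill one blob of 4-connected bright pixels
                q, blob = deque([(x, y)]), []
                seen[y][x] = True
                while q:
                    cx, cy = q.popleft()
                    blob.append((cx, cy))
                    for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                        if 0 <= nx < w and 0 <= ny < h \
                           and bitmap[ny][nx] >= thresh and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((nx, ny))
                # one centroid per blob = one detected bee
                out.append((sum(p[0] for p in blob) / len(blob),
                            sum(p[1] for p in blob) / len(blob)))
    return out

bm = [[0] * 8 for _ in range(8)]
for x, y in [(1, 1), (2, 1), (1, 2), (2, 2), (6, 6)]:
    bm[y][x] = 255
print(centroids(bm))  # [(1.5, 1.5), (6.0, 6.0)]
```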

NOTE: given the above step only trains briefly on a small dataset, we DON'T expect this to give a great result; these instructions are included mainly to prove the plumbing works...

./predict.py \
--run r12 \
--image-dir sample_data/unlabelled \
--output-label-db sample_predictions.db \
--export-pngs predictions

output predictions can be compared to labelled data to calculate precision / recall. (we deem a detection correct if it is within a threshold distance of a label)

./compare_label_dbs.py --true-db ground_truth.db --predicted-db predictions.db
precision 0.936  recall 0.797  f1 0.861
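the matching behind these numbers can be sketched as greedy nearest-neighbour assignment (illustrative only; compare_label_dbs.py may differ, and MATCH_DIST here is a made-up threshold):

```python
import math

MATCH_DIST = 5.0  # made-up distance threshold for this sketch

def score(truth, preds):
    unmatched = list(truth)
    tp = 0
    for p in preds:
        # a prediction is a true positive if it lands within MATCH_DIST
        # of a still-unmatched ground-truth label
        best = min(unmatched, key=lambda t: math.dist(p, t), default=None)
        if best is not None and math.dist(p, best) <= MATCH_DIST:
            unmatched.remove(best)
            tp += 1
    fp, fn = len(preds) - tp, len(unmatched)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

truth = [(10, 10), (50, 50), (90, 90)]
preds = [(12, 11), (49, 52), (200, 200)]
print(tuple(round(v, 3) for v in score(truth, preds)))  # (0.667, 0.667, 0.667)
```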

running on compute stick

(note: this still doesn't work; possibly because of something in these steps, or possibly something about the tf api support of the stick. see this forum post for more info...)

some available datasets