# MakeZurich

Our project addresses the City Forest Visitors challenge using sound measurements and machine learning techniques. We have creatively interpreted "Visitors" to also mean animals, initially focusing on birds.

## Why are we doing this?

Every living thing in a forest has its own sound, which makes sound detection a promising way to identify, classify, and trace creatures. With a focus on staying within (or close to) the low-power constraints of LoRaWAN, we want to monitor the sounds of the forest over long periods of time and in remote areas.

Unlike smartphone apps, whose data depends on the user's schedule, the data produced by our sensor could cover the entire day and night. An important use case could also be monitoring the interaction between animals and humans: our dataset could easily encompass both bird and human sounds (footsteps, voices, ...) to study behavior patterns. An exemplary target group would be bird watchers. Bird watching is a popular pastime, and several apps exist for detecting and recording bird songs.

The low-cost approach facilitated by The Things Network ensures that this project could be reproduced by scientists, city planners, and amateurs alike.

## What we tried (so far…)

  • Did research into the idea, discovering numerous projects and even whole competitions dedicated to automatically detecting and classifying bird song. These led us to datasets, sample code, and pro tips.
  • Started putting together a training dataset with crowdsourced (Xeno-Canto) bird song samples from the Zürich region. Download it here
  • We use a simple feedforward neural network to identify the species in the forest and have started work on a training and classification script. Currently the code closely follows GianlucaPaolocci/Sound-classification-on-Raspberry-Pi-with-Tensorflow.
  • Training model: we run the training on our local PC, using the crowdsourced data as input to train our 3-layer neural network. Once the network is trained, only the network topology and the final set of weights need to be saved on the Raspberry Pi to do the classification.
  • Classification model: the classification network runs on the Raspberry Pi, which we install together with the detection device in the forest. As soon as the microphone registers a sound, the network takes it as input and identifies it. Using TTN, the device then sends an alert to the user, e.g. the birdwatcher, in real time. The alert contains the type of species as well as the time and location of detection.
  • Investigated the option of attaching a LoRaWAN antenna directly to the Raspberry Pi, but decided to keep an Arduino as part of our hack, with which we communicate via USB serial.
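To illustrate the training step above, here is a minimal NumPy sketch of a 3-layer feedforward network. The random features stand in for real audio features (e.g. MFCCs extracted from the Xeno-Canto recordings), and all names and sizes are illustrative, not our actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in features: in the real pipeline these would be audio features
# (e.g. MFCCs) extracted from the crowdsourced recordings.
n_samples, n_features, n_species = 200, 40, 3
X = rng.normal(size=(n_samples, n_features))
y = np.argmax(X @ rng.normal(size=(n_features, n_species)), axis=1)
Y = np.eye(n_species)[y]                       # one-hot targets

# 3-layer network: input -> hidden (tanh) -> softmax output
W1 = rng.normal(scale=0.1, size=(n_features, 32))
W2 = rng.normal(scale=0.1, size=(32, n_species))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for _ in range(500):                           # full-batch gradient descent
    H = np.tanh(X @ W1)                        # hidden activations
    P = softmax(H @ W2)                        # class probabilities
    G = (P - Y) / n_samples                    # cross-entropy gradient
    dH = (G @ W2.T) * (1 - H**2)               # backprop through tanh
    W2 -= 1.0 * (H.T @ G)
    W1 -= 1.0 * (X.T @ dH)

acc = (np.argmax(np.tanh(X @ W1) @ W2, axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")

# Only the topology and the final weights travel to the Raspberry Pi:
np.savez("model_weights.npz", W1=W1, W2=W2)
```

The point of the last line is the deployment story: the PC does the heavy training, and the Pi only needs the saved weights to classify.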

## What we didn't…

  • Looked into the possibility of doing audio classification on low-power chips (Atmega/Arduino) and discussed the idea with an expert in Arduino sound sensors (Baxter). Despite interesting libraries (walrus, Neurona) and research (Evolutionary Bits'n'Spikes from EPFL!), the low recording quality and the limited processing power and memory would severely limit our options here.
  • Discussed the power draw of the Raspberry Pi, to find out how we could activate the device on a timer (using a MOSFET) and prolong battery life (using solar power).
  • The neural network predictions can be plotted on a map using the GPS data of the sound detector. The final outcome is a distribution map of the birds or other vocal creatures. See the example forest map on Swisstopo's GeoAdmin.
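The on-device classification and alert step described under "What we tried" could look roughly like this sketch. The species list is hypothetical, the payload format is our own invention, and the serial port path is an assumption; only the 3-layer forward pass mirrors the trained network:

```python
import json
import time

import numpy as np

# Hypothetical label set -- the real one comes from the training data.
SPECIES = ["blackbird", "great_tit", "chaffinch"]

def classify(features, W1, W2):
    """Run one feature vector through the trained 3-layer network."""
    hidden = np.tanh(features @ W1)
    return int(np.argmax(hidden @ W2))

def make_alert(species_idx, lat, lon, when=None):
    """Build a compact payload the Pi writes over USB serial to the
    Arduino, which forwards it to TTN as a LoRaWAN uplink."""
    when = when or time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    return json.dumps({"sp": SPECIES[species_idx],
                       "t": when, "lat": lat, "lon": lon}) + "\n"

# On the device, the alert would go out over the serial port, e.g.:
#   import serial                                  # pyserial
#   with serial.Serial("/dev/ttyACM0", 9600) as port:
#       port.write(make_alert(idx, 47.37, 8.54).encode())
```

Because each alert already carries coordinates and a timestamp, collecting them over time gives exactly the data needed for the distribution map mentioned above.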

## Data sources

## References

More use cases, libraries, interesting datasets, and example projects can currently be found in our Slack channel #mz-forest-sound-track.

## Who are we