This repository implements a neural network block for Edge Impulse that leverages dendritic optimization. For details on how to compile and push this block to Edge Impulse, follow the instructions in the original repository. This block was created for the 2025 Edge Impulse Hackathon. A submission video describing the project is available here.
The original artificial neuron was proposed in 1943, drawing on neuroscience research dating back to the 1860s. Since then, backpropagation has been introduced, and there have been significant advances in hardware, optimizers, data curation, and architectures, yet the core building block has remained fundamentally the same. Interestingly, for 70 of the last 80 years, neuroscience continued to support this original design. Modern neuroscience, however, now understands that the perceptron misses a critical piece of biological intelligence: the decision-making performed by a neuron's dendrites.

Dendritic optimization leverages these ideas to augment artificial neurons with dendrite nodes, enabling ML practitioners to build smarter, smaller, and cheaper models on the same datasets. Experiments frequently show 10-20% lower error rates after dendritic optimization, as well as the ability to compress models by up to 90% with no loss in accuracy. Because users can develop smaller models of equal accuracy, AI can also be built with a 90% reduced carbon footprint with no impact on end users. For further details about this research, a selection of papers can be found here.
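As a rough intuition only (this is a toy sketch, not the actual dendritic-optimization algorithm described in the linked papers; every function and parameter name here is our own illustration), a dendrite node can be thought of as a small auxiliary unit whose nonlinear output modulates the neuron it is attached to:

```python
import math

def neuron(x, w, b):
    # Classic artificial neuron: weighted sum of inputs plus bias,
    # passed through a nonlinearity.
    return math.tanh(sum(xi * wi for xi, wi in zip(x, w)) + b)

def dendritic_neuron(x, w, b, w_d, b_d, alpha):
    # Toy illustration: one "dendrite node" computes its own nonlinear
    # function of the inputs, and its output (scaled by alpha) is added
    # to the soma's pre-activation. The published dendritic-optimization
    # procedure is considerably more involved; see the linked papers.
    dendrite = math.tanh(sum(xi * wi for xi, wi in zip(x, w_d)) + b_d)
    return math.tanh(sum(xi * wi for xi, wi in zip(x, w)) + alpha * dendrite + b)

# With alpha = 0 the dendrite has no effect and the unit reduces to a
# plain perceptron-style neuron.
```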
This project first explored the improvements dendritic optimization could achieve on the model in the keyword spotting tutorial, and then created a public Edge Impulse block to enable anyone to leverage this capability on their own Edge Impulse projects.
For details on our experiments, please view the W&B report of the 800 trials we ran while sweeping hyperparameters for this application.
This repository replaces the PyTorch script from the example PyTorch block with our custom script. It updates the hyperparameter settings so users can experiment with all of the hyperparameters we swept over, and it compiles the final dendritic models to ONNX so they can be used in exactly the same way as the original block's output. This is a plug-and-play Impulse Block that lets users apply dendritic optimization to any Edge Impulse project that uses audio data, and as an open-source project it allows users to make the adjustments required to support additional data formats. Additionally, working with the Edge Impulse team, this block provides a starting point for extending the default Edge Impulse NN Classifier block with dendritic optimization, empowering all Edge Impulse users to achieve improved outcomes on any project by checking a single checkbox.
train2.py is the main training script: it receives input parameters from the block, loads the dataset, trains the dendritic-optimized network, and outputs the ONNX file.
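Edge Impulse invokes the training script with the data and output directories plus one command-line flag per entry in parameters.json. A minimal sketch of how such a script might read them (the hyperparameter flag names below are illustrative assumptions, not necessarily train2.py's actual flags):

```python
import argparse

def parse_block_args(argv=None):
    # --data-directory and --out-directory are the standard flags Edge
    # Impulse passes to learning blocks; the hyperparameter flags below
    # are illustrative placeholders.
    p = argparse.ArgumentParser(description="Dendritic training block (sketch)")
    p.add_argument("--data-directory", type=str, default="data")
    p.add_argument("--out-directory", type=str, default="out")
    p.add_argument("--epochs", type=int, default=30)
    p.add_argument("--learning-rate", type=float, default=0.001)
    return p.parse_args(argv)

if __name__ == "__main__":
    args = parse_block_args(["--epochs", "50", "--learning-rate", "0.0005"])
    print(args.epochs, args.learning_rate)  # 50 0.0005
```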
parameters.json is the JSON file that instructs the Impulse Block which parameters to display to users for adjusting neural network training settings.
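Each entry in parameters.json becomes a form field in the Studio UI and is forwarded to the training script as a command-line flag. A minimal sketch of the kind of entry involved, following the Edge Impulse custom-block convention as we understand it (the exact schema may differ, and the values shown are illustrative rather than this block's actual parameters):

```json
[
    {
        "name": "Learning rate",
        "value": "0.001",
        "type": "float",
        "help": "Step size used by the optimizer during training",
        "param": "learning-rate"
    }
]
```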
No other files require review. They are either the original files from the PyTorch example block repository or additional files for users who want to run their own W&B sweeps, perform inference, or run other experiments we found useful along the way. These files are not used in the compiled block.
- We output a model file in ONNX format that Edge Impulse appears able to run, but the accuracy it reports is much lower than our measured accuracy for both dendritic and traditional models. We suspect a post-processing step between ONNX and the final deployment format is causing the discrepancy, but we would need to work with the Edge Impulse team to pinpoint the exact cause.
- Vishy is able to adjust the parameters of the dendritic trainer in his project, but when the project is cloned, the parameters do not appear to be read by the training script in the clones, so default settings are used each time.