Is it true that each individual neuron in the brain is as powerful as an entire deep artificial neural network?
Yes, depending on the type of neuron and the level of functionality required.
In *Single cortical neurons as deep artificial neural networks*, Beniaguev et al. trained deep neural networks to match the input–output complexity of a layer-5 (L5) cortical pyramidal neuron from a rat. They generated 200 hours of random excitatory and inhibitory inputs, along with the resulting voltage output, then searched over deep network architectures until they found one that could learn to closely match the pyramidal neuron's behavior. The result was a temporal convolutional network (TCN) with seven layers (depth), 128 channels per layer (width), and T = 153 ms of history. Notably, the model's accuracy was relatively insensitive to the temporal kernel sizes of the individual DNN layers, as long as the total temporal extent of the entire network was kept fixed.
The total size of that model is then `128*7=896` units and, assuming a 1-dimensional kernel of size 3 per unit, `3*896=2688` connections.
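The back-of-the-envelope count above can be sketched as follows. Note that counting 3 connections per unit is the simplified, per-channel estimate used here; a standard dense temporal convolution across all 128 input channels would be far larger (that second figure is an added comparison, not from the paper):

```python
# Rough size estimate for the TCN described above.
# Assumptions: 7 layers, 128 channels per layer, kernel size 3,
# and (as in the text) one connection per kernel tap per unit.
layers = 7
channels = 128
kernel_size = 3

units = layers * channels          # 7 * 128 = 896 "neurons"
connections = kernel_size * units  # 3 * 896 = 2688 connections
print(units, connections)          # 896 2688

# For comparison: a standard (non-depthwise) 1-D convolution, where
# each unit's kernel spans all input channels, would need
# kernel_size * channels inputs per unit.
dense_connections = layers * channels * channels * kernel_size
print(dense_connections)           # 344064
```

Either way, the point stands: matching one biological neuron took a network of hundreds of artificial units.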
Despite this size, the TCN reduced simulation time by several orders of magnitude compared with the detailed biophysical model of the neuron.
On the other hand, some neurons, such as the retinal ganglion cells (RGCs) in the human eye, perform relatively simple operations that can be approximated with top-hat or black-hat edge-detection filters, equivalent to a 3x3 kernel, i.e., 1 neuron and 9 connections. However, RGCs typically have around 100 connections, making them more robust in the case of cell death.
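To make the "1 neuron, 9 connections" picture concrete, here is a minimal sketch of a single unit applying a 3x3 center-surround kernel in the spirit of a top-hat edge detector. The kernel values are illustrative, not taken from retinal physiology:

```python
# A 3x3 center-surround kernel: positive center, negative surround,
# summing to zero so that uniform patches produce no response.
# Values are illustrative only.
kernel = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]

def respond(patch):
    """Response of one 'neuron' to a 3x3 patch:
    9 connections, one multiply-accumulate per connection."""
    return sum(k * p
               for krow, prow in zip(kernel, patch)
               for k, p in zip(krow, prow))

flat = [[1, 1, 1],
        [1, 1, 1],
        [1, 1, 1]]          # uniform patch: no edge
edge = [[0, 0, 1],
        [0, 0, 1],
        [0, 0, 1]]          # vertical edge

print(respond(flat))  # 0: flat input, no response
print(respond(edge))  # nonzero: the edge drives the unit
```

A single multiply-accumulate over 9 inputs is the entire computation, which is why such a cell is so much cheaper to approximate than a cortical pyramidal neuron.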