https://arxiv.org/abs/2311.15947
GloNets introduce a novel neural network architecture that overcomes the depth-related limitations of existing deep learning structures. Unlike architectures such as ResNet, GloNets ensure stable training and a uniform distribution of information across all network layers. A key feature of GloNets is the use of continuous early exits at every layer, creating a network whose layers can be progressively turned on or off. This flexibility allows the system's energy requirements to be balanced. Additionally, because the output is an overlay of networks, or elementary basis functions, GloNets lend themselves to a new degree of model explainability.
`\Code\PyTorch`: Implementation of GloNet in various configurations:
- `GloNet+fc`: GloNet applied to fully connected networks.
- `GloNet+CNN`: GloNet integrated with Convolutional Neural Networks.
- `GloNet+ViT`: GloNet combined with Vision Transformers.
`\Images`: Contains images demonstrating GloNet's architecture and its post-learning tunable precision.
While maintaining comparable performance, GloNet runs significantly faster than ResNetv2, primarily because it does not require Batch Normalization.
If you use this work in your research, please cite:
Di Cecco, A., Metta, C., Fantozzi, M., Morandin, F., Parton, M. (2023). GloNets: Globally Connected Neural Networks. arXiv preprint arXiv:2311.15947.
[Link to the preprint](https://arxiv.org/abs/2311.15947)
Submitted to the IDA 2024 Conference.
This project is licensed under the MIT License.
For further information or collaboration opportunities, please contact: ant.dicecco@gmail.com
Thanks to those whose work inspired this project, in particular Yoshua Bengio and Simone Scardapane.