Adaptive Sparsity Level during Training for Efficient Time Series Forecasting with Transformers
Sparse Matrix Library for GPUs, CPUs, and FPGAs via CUDA, OpenCL, and oneAPI
Official implementation of "Sparser spiking activity can be better: Feature Refine-and-Mask spiking neural network for event-based visual recognition" (Neural Networks 2023)
[Machine Learning Journal (ECML-PKDD 2022 journal track)] Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders
Event-based neural networks
[ICLR 2023] "Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!" Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Kumar Jaiswal, Zhangyang Wang
Demo code for the CVPR 2023 paper "Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers"
[Machine Learning Journal (ECML-PKDD 2022 journal track)] A Brain-inspired Algorithm for Training Highly Sparse Neural Networks
This is the repository for the SNN-22 Workshop paper on "Generalization and Memorization in Sparse Neural Networks".
[TMLR] Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks
PyTorch Implementation of TopKAST
[IJCAI 2022] "Dynamic Sparse Training for Deep Reinforcement Learning" by Ghada Sokar, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy, and Peter Stone.
Robustness of Sparse Multilayer Perceptrons for Supervised Feature Selection
[ICLR 2022] "Peek-a-Boo: What (More) is Disguised in a Randomly Weighted Neural Network, and How to Find It Efficiently", by Xiaohan Chen, Jason Zhang and Zhangyang Wang.
[ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, Tianlong Chen, Wuyang Chen, Dong Liu, Zhangyang Wang
A neural net with a terminal-based testing program.
Always sparse. Never dense. But never say never. A Sparse Training repository for the Adaptive Sparse Connectivity concept and its algorithmic instantiation, i.e. Sparse Evolutionary Training, to boost Deep Learning scalability on various aspects (e.g. memory and computational time efficiency, representation and generalization power).
Code for testing DCT plus Sparse (DCTpS) networks
Simple C++ implementation of a sparsely connected multi-layer neural network using OpenMP and CUDA for parallelization.
Implementation for the paper "SpaceNet: Make Free Space For Continual Learning" in PyTorch.
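Several of the entries above (notably the Sparse Evolutionary Training repository and the dynamic sparse training work) center on a prune-and-regrow topology update: after each training epoch, the smallest-magnitude active weights are dropped and an equal number of connections are regrown at random zero positions. The following is a minimal, hedged PyTorch sketch of that idea only, not code taken from any of the listed repositories; the function name, the `zeta` fraction, and the boolean-mask scheme are illustrative assumptions.

```python
import torch

def set_prune_and_regrow(weight: torch.Tensor, mask: torch.Tensor, zeta: float = 0.3) -> torch.Tensor:
    """One SET-style topology update (illustrative sketch, not any repo's actual API).

    Drops the fraction `zeta` of active weights with the smallest magnitude,
    then regrows the same number of connections at random inactive positions.
    Returns the updated connectivity mask (same shape/dtype as `mask`).
    """
    active = mask.bool()
    n_active = int(active.sum())
    n_drop = int(zeta * n_active)
    if n_drop == 0:
        return mask

    # Prune: remove the n_drop smallest-magnitude active weights.
    active_magnitudes = weight[active].abs()
    threshold = torch.kthvalue(active_magnitudes, n_drop).values
    keep = active & (weight.abs() > threshold)

    # Regrow: reactivate random zero positions so the total number of
    # active connections stays constant.
    n_grow = n_active - int(keep.sum())
    inactive_idx = torch.nonzero(~keep.flatten(), as_tuple=False).squeeze(1)
    grow_idx = inactive_idx[torch.randperm(inactive_idx.numel())[:n_grow]]

    new_mask = keep.flatten().clone()
    new_mask[grow_idx] = True
    return new_mask.view_as(mask).to(mask.dtype)
```

In a typical dynamic-sparse training loop, the returned mask would be reapplied to the layer after each epoch (e.g. `layer.weight.data *= new_mask`) so that pruned connections stay at zero until they are regrown.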