Infraestructuras Paralelas y Distribuidas - Parcial I - Parte Práctica
A plug-and-play JIT implementation for Marshmallow to speed up data serialization and deserialization
This repository is the official implementation of the paper "Pruning via Iterative Ranking of Sensitivity Statistics" and provides pruning/compression algorithms for deep neural networks. Among other methods, it implements structured pruning before training (with actual parameter shrinking) and unstructured pruning before and during training.
Code for paper "Locally Distributed Deep Learning Inference on Edge Device Clusters"
Re-ordering algorithm for structural sparsity in neural networks
Faster CSV for Python
Continuation methods for deep neural network optimization (homotopy and bifurcation analysis)
A library that lets you easily increase the efficiency of your deep learning models with no loss of accuracy.
Python implementation of the wavelet transform with a Gabor kernel (ported from MATLAB)
A toolkit for HPC performance evaluation.
Implementation of Selective_Backpropagation from the paper "Accelerating Deep Learning by Focusing on the Biggest Losers"
A plug-and-play PyTorch implementation of the papers "Deep Networks with Stochastic Depth" and "Drop an Octave"
Noise Contrastive Estimation for softmax outputs, written in PyTorch
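Several of the repositories above speed up training by skipping work on easy examples. As one illustration, the core idea behind selective backpropagation ("Accelerating Deep Learning by Focusing on the Biggest Losers") is to backpropagate only through the highest-loss samples in each batch. A minimal NumPy sketch of that selection step, assuming a hypothetical `keep_frac` parameter (not taken from the paper's code), might look like this:

```python
import numpy as np

def select_hardest(losses, keep_frac=0.25):
    """Return indices of the highest-loss samples in a batch.

    Sketch of the selective-backpropagation idea: only these samples
    would be passed to the backward pass. `keep_frac` is an assumed
    illustrative parameter, not the paper's interface.
    """
    k = max(1, int(len(losses) * keep_frac))
    return np.argsort(losses)[-k:]  # indices of the k largest losses

# Per-sample losses for a batch of four examples
losses = np.array([0.1, 2.5, 0.3, 1.8])
idx = select_hardest(losses, keep_frac=0.5)  # picks the two hardest samples
```

In a real training loop, the forward pass would compute `losses` for the whole batch, and only the subset indexed by `idx` would contribute gradients, cutting backward-pass cost roughly in proportion to `keep_frac`.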