Public code of the ML course
Minimal reproducibility study of https://arxiv.org/abs/1911.05248: experiments with compression of deep neural networks
Official implementation of the paper "HyperSparse Neural Networks: Shifting Exploration to Exploitation through Adaptive Regularization"
[ICML 2022] "Data-Efficient Double-Win Lottery Tickets from Robust Pre-training" by Tianlong Chen, Zhenyu Zhang, Sijia Liu, Yang Zhang, Shiyu Chang, Zhangyang Wang
3D Loss Landscapes of SoftNet (Sparse Subnetwork)
[ICCV2023 Official PyTorch code] for Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution
Code for CPAL-2024 paper "Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates"
A pure SciPy implementation of a sparse denoising autoencoder with adaptive evolutionary training. The sparse implementation makes the algorithm scalable to high-dimensional data and trainable on CPUs.
Code to reproduce the experiments of the ICLR 2024 paper "Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging"
Learning to Rearrange Voxels in Binary Segmentation Masks for Smooth Manifold Triangulation
Molecular-property prediction with sparsity
[AAMAS 2023] Code for the paper "Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning"
Code for the paper "FOCIL: Finetune-and-Freeze for Online Class-Incremental Learning by Training Randomly Pruned Sparse Experts"
[NeurIPS 2022] "Sparse Winning Tickets are Data-Efficient Image Recognizers" by Mukund Varma T, Xuxi Chen, Zhenyu Zhang, Tianlong Chen, Subhashini Venugopalan, Zhangyang Wang
A supervised autoencoder with structured sparsity for efficient and informed clinical prognosis.
Code to reproduce the experiments of the ICLR 2023 paper "How I Learned to Stop Worrying and Love Retraining"
[JCST 2023] "Inductive Lottery Ticket Learning for Graph Neural Networks" by Yongduo Sui, Xiang Wang, Tianlong Chen, Meng Wang, Xiangnan He, Tat-Seng Chua.
Simple world models lead to good abstractions, Google Cerebra internship 2020/master thesis at EPFL LCN 2021 ⬛◼️▪️🔦
WIP. Veloce is a low-code, Ray-based parallelization library for efficient, heterogeneous machine learning computation.
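Many of the repositories above (lottery tickets, iterative shrinkage, sparse model soups, dynamic sparse training) build on the same primitive: zeroing out the smallest-magnitude weights of a layer until a target sparsity is reached. A minimal NumPy sketch of that primitive, assuming nothing from any listed repo (`magnitude_prune` is a hypothetical helper named here for illustration):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries so that `sparsity`
    fraction of the weights become zero (illustrative helper)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest magnitude.
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
p = magnitude_prune(w, 0.5)
print(np.mean(p == 0.0))  # → 0.5 (half the weights zeroed)
```

Iterative schemes such as lottery-ticket rewinding repeat this step over several train/prune cycles rather than pruning once, which is why many of the papers above frame pruning as a schedule rather than a one-shot operation.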