PyTorch implementation of preconditioned stochastic gradient descent (affine group preconditioner, low-rank approximation preconditioner, and more)
Fine-tuning of diffusion models
LoRA (Low-Rank Adaptation) inspector for Stable Diffusion
VIP is a Python package for angular, reference-star, and spectral differential imaging for exoplanet/disk detection through high-contrast imaging.
Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression (CVPR 2020)
A framework based on the tensor train decomposition for working with multivariate functions and multidimensional arrays
TensorFlow implementation of preconditioned stochastic gradient descent
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
[ICLR 2022] Code for the paper "Exploring Extreme Parameter Compression for Pre-trained Language Models" (https://arxiv.org/abs/2205.10036)
PyTorch implementation of "Learning Filter Basis for Convolutional Neural Network Compression" (ICCV 2019)
Solver in the low-rank tensor train format with cross approximation approach for the multidimensional Fokker-Planck equation
MUSCO: Multi-Stage COmpression of neural networks
My experiment with multilayer NMF: a deep neural network whose first several layers use Semi-NMF as a pseudo-activation function to find the latent structure embedded in the original data in an unsupervised manner.
Lowrankdensity
Deep learning models have become state of the art for natural language processing (NLP) tasks; however, deploying these models in production systems poses significant memory constraints. Existing compression methods are either lossy or introduce significant latency. We propose a compression method that leverages low-rank matrix factorization durin…
The repository contains code to reproduce the experiments from our paper "Error Feedback Can Accurately Compress Preconditioners", linked below:
Linear Algebra project `Decomposition into Low-Rank and Sparse Matrices in Computer Vision` | Applied Sciences Faculty, UCU (2019)
Numerical experiments for the Optima-TT method from the teneva Python package. This method finds the indices of the minimum and maximum elements of a tensor given in the tensor train (TT) format.
Gaussian Mixture Model with low rank approximation
Bachelor's thesis project: application of artificial intelligence techniques for the automatic generation of scenarios from natural language.
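The projects above all build on one core idea: a matrix (or tensor) can be compressed by keeping only its dominant singular directions. A minimal sketch of that construction, using NumPy's truncated SVD (not any specific repository's API; the function name and test matrix are illustrative assumptions):

```python
import numpy as np

def low_rank_approx(A: np.ndarray, k: int) -> np.ndarray:
    """Best rank-k approximation of A in Frobenius norm,
    obtained by truncating the SVD (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Keep only the k largest singular values and their vectors.
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
# A 100x80 matrix that is exactly rank 5, plus small noise.
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
A_noisy = A + 1e-3 * rng.standard_normal(A.shape)

A5 = low_rank_approx(A_noisy, 5)
rel_err = np.linalg.norm(A_noisy - A5) / np.linalg.norm(A_noisy)
print(f"relative error of rank-5 approximation: {rel_err:.2e}")
```

The rank-5 reconstruction stores 5·(100+80) numbers instead of 100·80 while recovering the matrix almost exactly; the listed repositories apply the same principle to preconditioners, gradients, network weights, and tensor trains.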