Continual Learning code for SRe2L paper (NeurIPS 2023 spotlight)
A collection of dataset distillation papers.
Awesome Graph Condensation Papers
Dataset Distillation on 3D Point Clouds using Gradient Matching (a minimal sketch of gradient matching appears after this list)
An Efficient Dataset Condensation Plugin and Its Application to Continual Learning (NeurIPS 2023)
[ICLR 2024] "Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality" by Xuxi Chen*, Yu Yang*, Zhangyang Wang, Baharan Mirzasoleiman
Code for Backdoor Attacks Against Dataset Distillation
Official PyTorch Implementation for the "Distilling Datasets Into Less Than One Image" paper.
[ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching
Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22)
(NeurIPS 2023 spotlight) Large-scale dataset distillation/condensation; at 50 IPC (images per class) it achieves 60.8% on the original ImageNet-1K validation set, the highest reported.
[IJCAI 2024] Papers about graph reduction including graph coarsening, graph condensation, graph sparsification, graph summarization, etc.
[ICLR'22] [KDD'22] [IJCAI'24] Implementation of "Graph Condensation for Graph Neural Networks"
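Several of the entries above build on gradient matching (e.g., the 3D point-cloud repo): the synthetic set is optimized so that a network's loss gradients on synthetic data match its gradients on real data. The following is a minimal, self-contained PyTorch sketch of that objective only; the toy model, tensor sizes, and the flattened layer-wise cosine distance are illustrative simplifications, not code from any listed repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy network and CIFAR-like shapes, chosen only for illustration.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

# Learnable synthetic set: 1 image per class (1 IPC) for 10 classes.
syn_x = torch.randn(10, 3, 32, 32, requires_grad=True)
syn_y = torch.arange(10)

# A batch of "real" data (random tensors stand in for a real loader here).
real_x = torch.randn(64, 3, 32, 32)
real_y = torch.randint(0, 10, (64,))

def grad_match_loss(model, xr, yr, xs, ys):
    """Distance between the loss gradients on real vs. synthetic batches."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_real = torch.autograd.grad(F.cross_entropy(model(xr), yr), params)
    g_syn = torch.autograd.grad(F.cross_entropy(model(xs), ys), params,
                                create_graph=True)  # keep graph so the loss reaches syn_x
    # Layer-wise cosine distance between the two gradient sets (simplified
    # from the per-output-neuron grouping used in the original method).
    return sum(1 - F.cosine_similarity(a.flatten(), b.flatten(), dim=0)
               for a, b in zip(g_real, g_syn))

# One update step on the synthetic images themselves; the model is fixed here.
opt = torch.optim.SGD([syn_x], lr=0.1)
opt.zero_grad()
grad_match_loss(model, real_x, real_y, syn_x, syn_y).backward()
opt.step()
```

In full methods this inner objective is wrapped in an outer loop that periodically re-initializes or trains the network, so the synthetic set does not overfit to one set of weights.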