# Diffusion Representation Distillation

Implementation of diffusion-model-based representation distillation for computer vision tasks, extending the RepFusion approach. It provides tools for transferring knowledge from pre-trained diffusion models to downstream vision networks.

This repository extends RepFusion with experimental modifications aimed at improving distillation.

## Modifications from RepFusion

### Implementation Progress

| Modification | Status | Code Reference |
| --- | --- | --- |
| Cross-attention fusion | 🚧 Partial | `distillation.py#L45-L72` |
| Temporal feature weighting | ✅ Implemented | `distillation.py#L28-L44` |
| Modified training pipeline | 🚧 Testing | `run_classification_distill.py` |
| LayerNorm replacement | ✅ Completed | `distillation.py#L31` |
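The cross-attention fusion listed above can be illustrated with a minimal NumPy sketch: student tokens act as queries over the teacher's diffusion features, and the attended result is added back as a residual. This is only an illustration of the general technique; function names and shapes are hypothetical, not the code in `distillation.py`.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(student_feats, teacher_feats):
    """Single-head cross-attention: student tokens (queries) attend over
    teacher diffusion features (keys/values); the attended result is
    added back to the student features as a residual."""
    d_k = student_feats.shape[-1]
    scores = student_feats @ teacher_feats.T / np.sqrt(d_k)  # (N, M)
    attn = softmax(scores, axis=-1)                          # rows sum to 1
    return student_feats + attn @ teacher_feats              # (N, D)

rng = np.random.default_rng(0)
student = rng.standard_normal((4, 8))   # 4 student tokens, dim 8
teacher = rng.standard_normal((6, 8))   # 6 teacher tokens, dim 8
fused = cross_attention_fuse(student, teacher)
```

The residual connection keeps the student's own features intact while mixing in teacher information, which is a common design choice for fusion modules.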

## Installation of Modified Components

```bash
cd my_lib
pip install -e .  # Installs with entry points for modified training scripts
```

## Usage of Modified Features

```bash
run_distill --use_cross_attn --temp_weights learned
```
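The `--temp_weights learned` option corresponds to the temporal feature weighting modification: features extracted at several diffusion timesteps are combined with learned softmax weights. A minimal NumPy sketch of the idea, with hypothetical names (in the real pipeline the logits would be trainable parameters):

```python
import numpy as np

def temporal_weighted_features(feats_per_t, logits):
    """Combine features from several diffusion timesteps using softmax
    weights over the timestep axis (logits would be learned in training)."""
    w = np.exp(logits - np.max(logits))
    w = w / w.sum()                              # softmax over timesteps
    return np.tensordot(w, feats_per_t, axes=1)  # (T,) x (T, N, D) -> (N, D)

# Toy features: timestep t contributes a constant map of value t
feats = np.stack([np.full((2, 3), t, dtype=float) for t in range(4)])  # (4, 2, 3)
logits = np.zeros(4)                             # uniform weights -> plain mean
combined = temporal_weighted_features(feats, logits)  # every entry is 1.5
```

With uniform logits this reduces to a plain mean over timesteps; training would shift the logits toward the most informative noise levels.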

## Code Structure

```text
my_lib/                      # Modified components
├── models/                  # Architecture changes
│   └── distillation.py      # Core fusion logic
├── scripts/                 # Training modifications
│   └── run_classification_distill.py
└── setup.py                 # Package config

src/                         # Original RepFusion code
```

## References

- Original RepFusion framework: GitHub

## Features

- Multi-temporal feature alignment across the diffusion sampling process
- Adaptive loss weighting with temperature scaling
- Integration with the MMSegmentation framework
- Feature visualization utilities (Grad-CAM, activation maps)
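Adaptive loss weighting with temperature scaling can be sketched as a softmax over per-layer losses, where the temperature controls how sharply the weighting concentrates. This is a generic illustration under that assumption, not the repository's exact scheme:

```python
import numpy as np

def adaptive_loss_weights(losses, temperature=1.0):
    """Softmax weights over per-layer losses; a higher temperature
    flattens the weighting toward uniform, a lower one sharpens it."""
    x = np.asarray(losses, dtype=float) / temperature
    e = np.exp(x - x.max())      # stable softmax
    return e / e.sum()

layer_losses = [0.2, 1.0, 0.5]
sharp = adaptive_loss_weights(layer_losses, temperature=0.5)   # peaked weights
flat = adaptive_loss_weights(layer_losses, temperature=10.0)   # near-uniform
```

The weights always sum to one, so the temperature trades off between focusing on the hardest layer and averaging evenly across layers.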

## Installation

```bash
conda create -n diffseg python=3.8
conda activate diffseg
pip install -r requirements.txt
```

## Installation with Poetry

1. Install Poetry: https://python-poetry.org/docs/#installation
2. Clone the repo and install dependencies:
   ```bash
   git clone https://github.com/yourusername/RepfusionPlus.git
   cd RepfusionPlus
   poetry config virtualenvs.in-project true
   poetry install --with dev
   ```
3. Activate the environment:
   ```bash
   poetry shell
   ```

## Training

Configure distillation parameters in `configs/`, then run:

```bash
# Multi-GPU training
bash segmentation/tools/train_repfusion.sh <CONFIG> <NUM_GPUS>

# Single-GPU validation
bash segmentation/tools/train_repfusion_single.sh <CONFIG> <GPU_ID>
```
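The training objective in this style of distillation typically combines the downstream task loss with a feature-matching term against the frozen diffusion teacher. A hedged NumPy sketch of that combination (the repository's exact loss and the `alpha` knob here are assumptions):

```python
import numpy as np

def distillation_objective(student_feat, teacher_feat, task_loss, alpha=0.5):
    """Total training objective: task loss plus a feature-matching MSE
    term against the teacher features, balanced by alpha (hypothetical)."""
    mse = float(np.mean((student_feat - teacher_feat) ** 2))
    return task_loss + alpha * mse

s = np.zeros((4, 8))   # toy student features
t = np.ones((4, 8))    # toy teacher features (MSE = 1.0)
total = distillation_objective(s, t, task_loss=1.0, alpha=0.5)  # 1.0 + 0.5 * 1.0
```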

## Evaluation

Benchmark trained models using standard MMSegmentation protocols:

```bash
# Multi-scale testing
bash segmentation/tools/test.sh <CONFIG> <CHECKPOINT> --aug-test

# Metric analysis
python segmentation/tools/analyze_results.py <CONFIG> <PRED_DIR> <GT_DIR> --metrics mIoU mAcc
```
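For reference, mIoU and mAcc follow the standard confusion-matrix definitions used in semantic segmentation. A self-contained sketch (not the analysis script's code):

```python
import numpy as np

def miou_macc(pred, gt, num_classes):
    """Mean IoU and mean class accuracy from a confusion matrix,
    following the usual semantic-segmentation definitions."""
    mask = (gt >= 0) & (gt < num_classes)      # ignore out-of-range labels
    cm = np.bincount(num_classes * gt[mask] + pred[mask],
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(cm).astype(float)
    iou = tp / np.maximum(cm.sum(0) + cm.sum(1) - tp, 1)  # per-class IoU
    acc = tp / np.maximum(cm.sum(1), 1)                   # per-class accuracy
    return iou.mean(), acc.mean()

gt = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 0, 1, 1, 2, 1])   # one class-2 pixel mislabeled as class 1
miou, macc = miou_macc(pred, gt, num_classes=3)
```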

## Citation

```bibtex
@inproceedings{yang2023diffusion,
  title={Diffusion Model as Representation Learner},
  author={Yang, Xingyi and Wang, Xinchao},
  booktitle={ICCV},
  year={2023}
}
```
