UniReps/UniReps-resources


# UniReps Workshop

Unifying Representations in Neural Models.

NeurIPS 2023, New Orleans (USA), December 15th, 2023.

## Introduction

We have gathered here a comprehensive set of resources on the mechanisms behind, and the extent of, similarity in internal representations, covering both deep learning and neuroscience. Check it out and contribute with a pull request!

Join us on Slack!

- Educational Resources
  - Computational Neuroscience
- Conferences and Workshops
- Software Libraries
- Datasets
  - Open-Source Neuroscience Datasets

## Papers

We have gathered here a collection of relevant papers. Please note that this is a work in progress; we will add the correct venues and links as soon as possible. If you have a paper to add or want to help fill in the gaps, please submit a pull request.

### Measures

| Title | Venue |
| --- | --- |
| Similarity of Neural Network Representations Revisited | ICML 2019 |
| Generalized Shape Metrics on Neural Representations | NeurIPS 2021 |
| Grounding Representation Similarity with Statistical Testing | NeurIPS 2021 |
| SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability | NeurIPS 2017 |
| Measuring similarity for clarifying layer difference in multiplex ad hoc duplex information networks | J. Informetr. |
| Reliability of CKA as a Similarity Measure in Deep Learning | ICLR 2023 |
| Transferred Discrepancy: Quantifying the Difference Between Representations | arXiv [cs.LG] |
| Diversity creation methods: a survey and categorisation | Inf. Fusion |
| An analysis of diversity measures | Mach. Learn. |
| Understanding the Dynamics of DNNs Using Graph Modularity | arXiv [cs.CV] |
| Similarity of Neural Networks with Gradients | arXiv [cs.LG] |
| Representation Topology Divergence: A Method for Comparing Neural Network Representations | arXiv [cs.LG] |
| Inter-layer Information Similarity Assessment of Deep Neural Networks Via Topological Similarity and Persistence Analysis of Data Neighbour Dynamics | arXiv [cs.LG] |
| Understanding image representations by measuring their equivariance and equivalence | arXiv [cs.LG] |
| Insights on representational similarity in neural networks with canonical correlation | arXiv |
| Understanding metric-related pitfalls in image analysis validation | arXiv [cs.CV] |
| Revisiting Model Stitching to Compare Neural Representations | arXiv |
| Similarity and Matching of Neural Network Representations | arXiv |
| Topology of Deep Neural Networks | J. Mach. Learn. Res. |
| Representational dissimilarity metric spaces for stochastic neural networks | arXiv [cs.LG] |
| Graph-Based Similarity of Neural Network Representations | arXiv [cs.LG] |
| Adaptive Geo-Topological Independence Criterion | arXiv [stat.ML] |
| Using distance on the Riemannian manifold to compare representations in brain and in models | Neuroimage |
| Predictive Multiplicity in Classification | arXiv [cs.LG] |
| Rashomon Capacity: A Metric for Predictive Multiplicity in Classification | arXiv [cs.LG] |
| Understanding Weight Similarity of Neural Networks via Chain Normalization Rule and Hypothesis-Training-Testing | arXiv [cs.LG] |
| Grounding High Dimensional Representation Similarity by Comparing Decodability and Network Performance | OpenReview |
| Measures of Diversity in Classifier Ensembles and Their Relationship with the Ensemble Accuracy | Mach. Learn. |
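Several of the measures above, such as linear CKA from "Similarity of Neural Network Representations Revisited", reduce to a short computation on two activation matrices recorded on the same inputs. A minimal numpy sketch of linear CKA (the function name is ours; `X` and `Y` are n-by-d activation matrices from two models or layers on the same n examples):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two activation matrices."""
    # Center each feature dimension across examples
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # HSIC-based similarity: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den
```

The score lies in [0, 1] and is invariant to orthogonal transformations and isotropic rescaling of either representation, which is what makes it usable across layers of different width.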

### Alignment and zero-shot alignment

| Title | Venue |
| --- | --- |
| Relative representations enable zero-shot latent space communication | ICLR 2023 |
| ASIF: Coupled Data Turns Unimodal Models to Multimodal Without Training | NeurIPS 2023 |
| Manifold alignment using Procrustes analysis | arXiv |
| Bootstrapping Parallel Anchors for Relative Representations | arXiv [cs.LG] |
| Stop Pre-Training: Adapt Visual-Language Models to Unseen Languages | arXiv [cs.CL] |
| GeRA: Label-Efficient Geometrically Regularized Alignment | arXiv |
| Text-To-Concept (and Back) via Cross-Model Alignment | arXiv [cs.CV] |
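A recurring building block in these alignment papers is the orthogonal Procrustes problem: find the rotation that best maps one embedding space onto another, given paired anchor points. A minimal numpy sketch (the function name is ours; `A` and `B` are paired embedding matrices with one row per anchor):

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Orthogonal matrix R minimizing ||A @ R - B||_F, via SVD of A^T B."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt
```

Restricting the map to a rotation (rather than an arbitrary linear map) acts as a strong regularizer, which is why Procrustes alignment works even with few anchor pairs.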

### Contrastive learning

| Title | Venue |
| --- | --- |
| Connecting Multi-modal Contrastive Representations | NeurIPS 2023 |
| Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning | NeurIPS 2022 |
| ULIP: Learning Unified Representation of Language, Image and Point Cloud for 3D Understanding | CVPR 2023 |
| Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model | N/A |
| Identifiability Results for Multimodal Contrastive Learning | ICLR 2023 |
| UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning | N/A |
| Understanding the Behaviour of Contrastive Loss | N/A |
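Most of the multimodal papers above build on a symmetric InfoNCE objective of the kind popularized by CLIP: matched pairs sit on the diagonal of a similarity matrix, and cross-entropy is applied over both rows and columns. A minimal numpy sketch (names and the temperature value are illustrative, not taken from any specific paper above):

```python
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of matched (image, text) pairs."""
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # (n, n); matched pairs on the diagonal
    labels = np.arange(len(img))

    def xent(l):
        # Row-wise cross-entropy against the diagonal targets (stable log-softmax)
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(l)), labels].mean()

    # Average the image-to-text and text-to-image directions
    return (xent(logits) + xent(logits.T)) / 2
```

The loss is near zero when each image embedding is closest to its own caption, and grows when the pairing is scrambled, which is the property the "modality gap" papers above probe.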

### Linear mode connectivity and model merging

| Title | Venue |
| --- | --- |
| Git Re-Basin: Merging Models modulo Permutation Symmetries | ICLR 2023 |
| Model Fusion via Optimal Transport | NeurIPS 2020 |
| Going Beyond Linear Mode Connectivity: The Layerwise Linear Feature Connectivity | arXiv [cs.LG] |
| REPAIR: REnormalizing Permuted Activations for Interpolation Repair | arXiv [cs.LG] |
| Linear Mode Connectivity in Multitask and Continual Learning | arXiv |
| Optimizing Mode Connectivity via Neuron Alignment | arXiv |
| Traversing Between Modes in Function Space for Fast Ensembling | arXiv [cs.LG] |
| An Empirical Study of Multimodal Model Merging | arXiv [cs.CV] |
| Linear Mode Connectivity and the Lottery Ticket Hypothesis | arXiv |
| Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling | arXiv |
| Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs | arXiv |
| The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks | arXiv |
| Essentially No Barriers in Neural Network Energy Landscape | arXiv |
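The central quantity in these papers is the loss barrier: how much the loss rises along the straight line between two trained solutions, relative to the endpoints. A minimal numpy sketch of that evaluation (names are ours; `loss_fn` maps a flat parameter vector to a scalar loss, which is an assumption about how the model is parameterized):

```python
import numpy as np

def loss_barrier(loss_fn, w_a, w_b, n_points=25):
    """Largest excess loss along the linear path between two parameter vectors.

    Evaluates w(a) = (1 - a) * w_a + a * w_b on a grid of a in [0, 1] and
    returns the maximum of loss(w(a)) minus the linear interpolation of the
    endpoint losses. A barrier near zero means the two solutions are
    linearly mode connected.
    """
    alphas = np.linspace(0.0, 1.0, n_points)
    path = [loss_fn((1 - a) * w_a + a * w_b) for a in alphas]
    baseline = [(1 - a) * path[0] + a * path[-1] for a in alphas]
    return max(p - b for p, b in zip(path, baseline))
```

Permutation-alignment methods such as Git Re-Basin first re-order one network's neurons before interpolating; the barrier above is the quantity they aim to drive to zero.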

### Neuroscience

| Title | Venue |
| --- | --- |
| Representational similarity analysis - connecting the branches of systems neuroscience | Front. Syst. Neurosci. |
| What makes different people's representations alike: neural similarity space solves the problem of across-subject fMRI decoding | J. Cogn. Neurosci. |
| Distributed and overlapping representations of faces and objects in ventral temporal cortex | Science |
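Representational similarity analysis (RSA), introduced in the first paper above, compares two systems by correlating their representational dissimilarity matrices (RDMs) rather than their raw activations. A minimal numpy-only sketch (function names are ours; rows of each matrix are conditions or stimuli, columns are units or voxels; published RSA pipelines typically use Spearman rather than the Pearson correlation used here):

```python
import numpy as np

def rdm(acts):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activation patterns of each pair of conditions."""
    return 1.0 - np.corrcoef(acts)

def rsa_score(acts_a, acts_b):
    """Correlate the upper triangles of the two systems' RDMs."""
    iu = np.triu_indices(acts_a.shape[0], k=1)
    return np.corrcoef(rdm(acts_a)[iu], rdm(acts_b)[iu])[0, 1]
```

Because only the pattern of pairwise dissimilarities is compared, RSA needs no unit-to-unit correspondence, which is what allows comparisons between, say, a network layer and an fMRI region.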

### Other (to be sorted)

| Title | Venue |
| --- | --- |
| Domain Translation via Latent Space Mapping | IJCNN 2023 |
| Do Vision Transformers See Like Convolutional Neural Networks? | arXiv |
| Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth | arXiv |
| LiT: Zero-Shot Transfer with Locked-image text Tuning | arXiv [cs.CV] |
| On Linear Identifiability of Learned Representations | N/A |
| Towards Understanding Learning Representations: To What Extent Do Different Neural Networks Learn the Same Representation | arXiv |
| Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time | ICML 2022 |
| Editing Models with Task Arithmetic | ICLR 2023 |
| Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models | arXiv [cs.LG] |
| RNNs of RNNs: Recursive Construction of Stable Assemblies of Recurrent Neural Networks | arXiv [cs.LG] |
| Invariant Risk Minimization | arXiv [stat.ML] |
| Neural networks learn to magnify areas near decision boundaries | arXiv [cs.LG] |
| On a Novel Application of Wasserstein-Procrustes for Unsupervised Cross-Lingual Learning | arXiv [cs.CL] |
| Topology and Geometry of Half-Rectified Network Optimization | arXiv |
| Qualitatively characterizing neural network optimization problems | arXiv |
| On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | arXiv |
| Content and cluster analysis: Assessing representational similarity in neural systems | Philos. Psychol. |
| Contrastive Multiview Coding | arXiv [cs.CV] |
| Similarity of Neural Network Models: A Survey of Functional and Representational Measures | arXiv [cs.LG] |
| High-dimensional dynamics of generalization error in neural networks | arXiv [stat.ML] |
| Exact solutions to the nonlinear dynamics of learning in deep linear neural networks | arXiv |
| Disentanglement by Nonlinear ICA with General Incompressible-flow Networks (GIN) | arXiv |
| Nonlinear ICA Using Auxiliary Variables and Generalized Contrastive Learning | arXiv |
| Variational Autoencoders and Nonlinear ICA: A Unifying Framework | arXiv |
| Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks | N/A |
| Modular Networks: Learning to Decompose Neural Computation | arXiv |
| Model Ratatouille: Recycling Diverse Models for Out-of-Distribution Generalization | arXiv [cs.LG] |
| High-resolution image reconstruction with latent diffusion models from human brain activity | bioRxiv |
| Alignment with human representations supports robust few-shot learning | arXiv [cs.LG] |
| SHARCS: Shared Concept Space for Explainable Multimodal Learning | arXiv [cs.LG] |
| Prevalence of Neural Collapse during the terminal phase of deep learning training | arXiv [cs.LG] |
| Convergent Learning: Do different neural networks learn the same representations? | N/A |
| Controlling Text-to-Image Diffusion by Orthogonal Finetuning | arXiv [cs.CV] |
| CLIPMasterPrints: Fooling Contrastive Language-Image Pre-training Using Latent Variable Evolution | arXiv [cs.CV] |
| On the Symmetries of Deep Learning Models and their Internal Representations | arXiv [cs.LG] |
| Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change | N/A |
| The Effects of Randomness on the Stability of Node Embeddings | arXiv [cs.LG] |
| Representation of object similarity in human vision: psychophysics and a computational model | Vision Res. |
| Mechanistic Mode Connectivity | arXiv [cs.LG] |
| Feature learning in deep classifiers through Intermediate Neural Collapse | PMLR |
| Bootstrapping Vision-Language Learning with Decoupled Language Pre-training | arXiv [cs.CV] |
| Beyond Supervised vs. Unsupervised: Representative Benchmarking and Analysis of Image Representation Learning | arXiv |
| Clustering units in neural networks: upstream vs downstream information | arXiv [cs.LG] |
| Launch and Iterate: Reducing Prediction Churn | N/A |
| Model Stability with Continuous Data Updates | arXiv [cs.CL] |
| Anti-Distillation: Improving reproducibility of deep networks | arXiv [cs.LG] |
| On the Reproducibility of Neural Network Predictions | arXiv [cs.LG] |
| Deep Ensembles: A Loss Landscape Perspective | arXiv [stat.ML] |
| Measuring the Instability of Fine-Tuning | arXiv [cs.CL] |
| mCLIP: Multilingual CLIP via Cross-lingual Transfer | N/A |
| Learning to Decompose Visual Features with Latent Textual Prompts | arXiv [cs.CV] |
| Leveraging Task Structures for Improved Identifiability in Neural Network Representations | arXiv |
| Additive Decoders for Latent Variables Identification and Cartesian-Product Extrapolation | arXiv [cs.LG] |
| Substance or Style: What Does Your Image Embedding Know? | arXiv [cs.LG] |
| On Privileged and Convergent Bases in Neural Network Representations | arXiv [cs.LG] |
| Stitchable Neural Networks | arXiv [cs.LG] |
| A Multi-View Embedding Space for Modeling Internet Images, Tags, and their Semantics | arXiv [cs.CV] |
| Policy Stitching: Learning Transferable Robot Policies | arXiv [cs.RO] |
| Deep Incubation: Training Large Models by Divide-and-Conquering | arXiv [cs.CV] |
| Flamingo: a Visual Language Model for Few-Shot Learning | arXiv [cs.CV] |
