kwignb/NeuralTangentKernel-Papers

# Neural Tangent Kernel Papers

This list collects papers that adopt the Neural Tangent Kernel (NTK) as a main theme or core idea.
NOTE: If there are any papers I have missed, please feel free to raise an issue.
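The central object shared by the papers below is easy to state: for a model `f(x; θ)`, the empirical NTK is the Gram matrix of per-example parameter gradients, `K[i, j] = ⟨∇θ f(x_i), ∇θ f(x_j)⟩`. A minimal NumPy sketch for a toy two-layer ReLU network (the network and all names are illustrative, not taken from any listed paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(d, m):
    # Two-layer ReLU net in NTK parameterization: f(x) = a @ relu(W x) / sqrt(m)
    return rng.normal(size=(m, d)), rng.normal(size=m)

def grads(x, W, a):
    # Gradient of f(x) with respect to all parameters, flattened into one vector.
    m = a.shape[0]
    pre = W @ x                                # pre-activations, shape (m,)
    act = np.maximum(pre, 0.0)                 # relu
    dact = (pre > 0).astype(float)             # relu derivative
    dW = np.outer(a * dact, x) / np.sqrt(m)    # gradient w.r.t. W, shape (m, d)
    da = act / np.sqrt(m)                      # gradient w.r.t. a, shape (m,)
    return np.concatenate([dW.ravel(), da])

def empirical_ntk(X, W, a):
    # K[i, j] = <grad f(x_i), grad f(x_j)> -- a symmetric PSD Gram matrix.
    G = np.stack([grads(x, W, a) for x in X])  # shape (n, num_params)
    return G @ G.T                             # shape (n, n)

X = rng.normal(size=(5, 3))
W, a = init_params(d=3, m=512)
K = empirical_ntk(X, W, a)
print(K.shape)               # (5, 5)
print(np.allclose(K, K.T))   # True: the kernel matrix is symmetric
```

As a Gram matrix, `K` is positive semidefinite by construction; in the infinite-width limit it converges to the deterministic NTK studied in many of the papers below.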

## 2024

| Title | Venue | PDF | CODE |
| --- | --- | --- | --- |
| Faithful and Efficient Explanations for Neural Networks via Neural Tangent Kernel Surrogate Models | ICLR | PDF | CODE |
| PINNACLE: PINN Adaptive ColLocation and Experimental points selection | ICLR | PDF | - |
| On the Foundations of Shortcut Learning | ICLR | PDF | - |
| Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation | ICLR | PDF | - |
| Sample Relationship from Learning Dynamics Matters for Generalisation | ICLR | PDF | - |
| Robust NAS benchmark under adversarial training: assessment, theory, and beyond | ICLR | PDF | - |
| Theoretical Analysis of Robust Overfitting for Wide DNNs: An NTK Approach | ICLR | PDF | CODE |
| Heterogeneous Personalized Federated Learning by Local-Global Updates Mixing via Convergence Rate | ICLR | PDF | - |
| Neural Network-Based Score Estimation in Diffusion Models: Optimization and Generalization | ICLR | PDF | - |
| Grokking as the Transition from Lazy to Rich Training Dynamics | ICLR | PDF | - |
| Generalization of Deep ResNets in the Mean-Field Regime | ICLR | PDF | - |

## 2023

| Title | Venue | PDF | CODE |
| --- | --- | --- | --- |
| Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models | NeurIPS | PDF | CODE |
| Deep Learning with Kernels through RKHM and the Perron–Frobenius Operator | NeurIPS | PDF | - |
| A Theoretical Analysis of the Test Error of Finite-Rank Kernel Ridge Regression | NeurIPS | PDF | - |
| Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs | NeurIPS | PDF | - |
| Beyond NTK with Vanilla Gradient Descent: A Mean-Field Analysis of Neural Networks with Polynomial Width, Samples, and Time | NeurIPS | PDF | - |
| Feature-Learning Networks Are Consistent Across Widths At Realistic Scales | NeurIPS | PDF | - |
| Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks | NeurIPS | PDF | CODE |
| Spectral Evolution and Invariance in Linear-width Neural Networks | NeurIPS | PDF | - |
| Analyzing Generalization of Neural Networks through Loss Path Kernels | NeurIPS | PDF | - |
| Neural (Tangent Kernel) Collapse | NeurIPS | PDF | - |
| Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension | NeurIPS | PDF | CODE |
| A PAC-Bayesian Perspective on the Interpolating Information Criterion | NeurIPS-W | PDF | - |
| A Kernel Perspective of Skip Connections in Convolutional Networks | ICLR | PDF | - |
| Scale-invariant Bayesian Neural Networks with Connectivity Tangent Kernel | ICLR | PDF | - |
| Symmetric Pruning in Quantum Neural Networks | ICLR | PDF | - |
| The Influence of Learning Rule on Representation Dynamics in Wide Neural Networks | ICLR | PDF | - |
| Few-shot Backdoor Attacks via Neural Tangent Kernels | ICLR | PDF | - |
| Analyzing Tree Architectures in Ensembles via Neural Tangent Kernel | ICLR | PDF | - |
| Supervision Complexity and its Role in Knowledge Distillation | ICLR | PDF | - |
| NTK-SAP: Improving Neural Network Pruning By Aligning Training Dynamics | ICLR | PDF | CODE |
| Tuning Frequency Bias in Neural Network Training with Nonuniform Data | ICLR | PDF | - |
| Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth | ICLR | PDF | - |
| Characterizing the spectrum of the NTK via a power series expansion | ICLR | PDF | CODE |
| Adaptive Optimization in the $\infty$-Width Limit | ICLR | PDF | - |
| Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization | ICLR | PDF | - |
| The Onset of Variance-Limited Behavior for Networks in the Lazy and Rich Regimes | ICLR | PDF | - |
| Restricted Strong Convexity of Deep Learning Models with Smooth Activations | ICLR | PDF | - |
| Feature selection and low test error in shallow low-rotation ReLU networks | ICLR | PDF | - |
| Exploring Active 3D Object Detection from a Generalization Perspective | ICLR | PDF | CODE |
| On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks | AISTATS | PDF | - |
| Adversarial Noises Are Linearly Separable for (Nearly) Random Neural Networks | AISTATS | PDF | - |
| Regularize Implicit Neural Representation by Itself | CVPR | PDF | - |
| WIRE: Wavelet Implicit Neural Representations | CVPR | PDF | CODE |
| Regularizing Second-Order Influences for Continual Learning | CVPR | PDF | CODE |
| Multiplicative Fourier Level of Detail | CVPR | PDF | - |
| KECOR: Kernel Coding Rate Maximization for Active 3D Object Detection | ICCV | PDF | CODE |
| TKIL: Tangent Kernel Approach for Class Balanced Incremental Learning | ICCV-W | PDF | - |
| A Fast, Well-Founded Approximation to the Empirical Neural Tangent Kernel | ICML | PDF | - |
| Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels | ICML | PDF | CODE |
| Graph Neural Tangent Kernel: Convergence on Large Graphs | ICML | PDF | - |
| Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels | ICML | PDF | CODE |
| Analyzing Convergence in Quantum Neural Networks: Deviations from Neural Tangent Kernels | ICML | PDF | - |
| Benign Overfitting in Deep Neural Networks under Lazy Training | ICML | PDF | - |
| Gradient Descent in Neural Networks as Sequential Learning in Reproducing Kernel Banach Space | ICML | PDF | - |
| A Kernel-Based View of Language Model Fine-Tuning | ICML | PDF | - |
| Combinatorial Neural Bandits | ICML | PDF | - |
| What Can Be Learnt With Wide Convolutional Neural Networks? | ICML | PDF | CODE |
| Reward-Biased Maximum Likelihood Estimation for Neural Contextual Bandits | AAAI | PDF | - |
| Neural tangent kernel at initialization: linear width suffices | UAI | PDF | - |
| Kernel Ridge Regression-Based Graph Dataset Distillation | SIGKDD | PDF | CODE |
| Analyzing Deep PAC-Bayesian Learning with Neural Tangent Kernel: Convergence, Analytic Generalization Bound, and Efficient Hyperparameter Selection | TMLR | PDF | - |
| The Eigenlearning Framework: A Conservation Law Perspective on Kernel Regression and Wide Neural Networks | TMLR | PDF | CODE |
| Empirical Limitations of the NTK for Understanding Scaling Laws in Deep Learning | TMLR | PDF | - |
| Analysis of Convolutions, Non-linearity and Depth in Graph Neural Networks using Neural Tangent Kernel | TMLR | PDF | - |
| A Framework and Benchmark for Deep Batch Active Learning for Regression | JMLR | PDF | CODE |
| A Continual Learning Algorithm Based on Orthogonal Gradient Descent Beyond Neural Tangent Kernel Regime | IEEE | PDF | - |
| The Quantum Path Kernel: A Generalized Neural Tangent Kernel for Deep Quantum Machine Learning | QE | PDF | - |
| NeuralBO: A Black-box Optimization Algorithm using Deep Neural Networks | NC | PDF | - |
| Deep Learning in Random Neural Fields: Numerical Experiments via Neural Tangent Kernel | NN | PDF | CODE |
| Physics-informed radial basis network (PIRBN): A local approximating neural network for solving nonlinear partial differential equations | CMAME | PDF | - |
| A non-gradient method for solving elliptic partial differential equations with deep neural networks | JoCP | PDF | - |
| Self-Adaptive Physics-Informed Neural Networks using a Soft Attention Mechanism | JoCP | PDF | - |
| Towards a phenomenological understanding of neural networks: data | MLST | PDF | - |
| Weighted Neural Tangent Kernel: A Generalized and Improved Network-Induced Kernel | ML | PDF | CODE |
| Tensor Programs IVb: Adaptive Optimization in the ∞-Width Limit | arXiv | PDF | - |

## 2022

| Title | Venue | PDF | CODE |
| --- | --- | --- | --- |
| Generalization Properties of NAS under Activation and Skip Connection Search | NeurIPS | PDF | - |
| Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study | NeurIPS | PDF | CODE |
| Graph Neural Network Bandits | NeurIPS | PDF | - |
| Lossless Compression of Deep Neural Networks: A High-dimensional Neural Tangent Kernel Approach | NeurIPS | PDF | - |
| GraphQNTK: Quantum Neural Tangent Kernel for Graph Data | NeurIPS | PDF | CODE |
| Evolution of Neural Tangent Kernels under Benign and Adversarial Training | NeurIPS | PDF | CODE |
| TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels | NeurIPS | PDF | CODE |
| Making Look-Ahead Active Learning Strategies Feasible with Neural Tangent Kernels | NeurIPS | PDF | CODE |
| Disentangling the Predictive Variance of Deep Ensembles through the Neural Tangent Kernel | NeurIPS | PDF | CODE |
| On the Generalization Power of the Overfitted Three-Layer Neural Tangent Kernel Model | NeurIPS | PDF | - |
| What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness? | NeurIPS | PDF | - |
| On the Spectral Bias of Convolutional Neural Tangent and Gaussian Process Kernels | NeurIPS | PDF | - |
| Fast Neural Kernel Embeddings for General Activations | NeurIPS | PDF | CODE |
| Bidirectional Learning for Offline Infinite-width Model-based Optimization | NeurIPS | PDF | - |
| Infinite Recommendation Networks: A Data-Centric Approach | NeurIPS | PDF | CODE1, CODE2 |
| Distribution-Informed Neural Networks for Domain Adaptation Regression | NeurIPS | PDF | - |
| Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks | NeurIPS | PDF | - |
| Spectral Bias Outside the Training Set for Deep Networks in the Kernel Regime | NeurIPS | PDF | CODE |
| Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization) | NeurIPS | PDF | - |
| Transition to Linearity of General Neural Networks with Directed Acyclic Graph Architecture | NeurIPS | PDF | - |
| A Neural Pre-Conditioning Active Learning Algorithm to Reduce Label Complexity | NeurIPS | PDF | - |
| NFT-K: Non-Fungible Tangent Kernels | ICASSP | PDF | CODE |
| Label Propagation Across Graphs: Node Classification Using Graph Neural Tangent Kernels | ICASSP | PDF | - |
| A Neural Tangent Kernel Perspective of Infinite Tree Ensembles | ICLR | PDF | - |
| Neural Networks as Kernel Learners: The Silent Alignment Effect | ICLR | PDF | - |
| Towards Deepening Graph Neural Networks: A GNTK-based Optimization Perspective | ICLR | PDF | - |
| Overcoming The Spectral Bias of Neural Value Approximation | ICLR | PDF | CODE |
| Efficient Computation of Deep Nonlinear Infinite-Width Neural Networks that Learn Features | ICLR | PDF | CODE |
| Learning Neural Contextual Bandits Through Perturbed Rewards | ICLR | PDF | - |
| Learning Curves for Continual Learning in Neural Networks: Self-knowledge Transfer and Forgetting | ICLR | PDF | - |
| The Spectral Bias of Polynomial Neural Networks | ICLR | PDF | - |
| On Feature Learning in Neural Networks with Global Convergence Guarantees | ICLR | PDF | - |
| Implicit Bias of MSE Gradient Optimization in Underparameterized Neural Networks | ICLR | PDF | - |
| Eigenspace Restructuring: A Principle of Space and Frequency in Neural Networks | COLT | PDF | - |
| Neural Networks can Learn Representations with Gradient Descent | COLT | PDF | - |
| Neural Contextual Bandits without Regret | AISTATS | PDF | - |
| Finding Dynamics Preserving Adversarial Winning Tickets | AISTATS | PDF | - |
| Embedded Ensembles: Infinite Width Limit and Operating Regimes | AISTATS | PDF | - |
| Global Convergence of MAML and Theory-Inspired Neural Architecture Search for Few-Shot Learning | CVPR | PDF | CODE |
| Demystifying the Neural Tangent Kernel from a Practical Perspective: Can it be trusted for Neural Architecture Search without training? | CVPR | PDF | CODE |
| A Structured Dictionary Perspective on Implicit Neural Representations | CVPR | PDF | CODE |
| NL-FFC: Non-Local Fast Fourier Convolution for Image Super Resolution | CVPR-W | PDF | CODE |
| Intrinsic Neural Fields: Learning Functions on Manifolds | ECCV | PDF | - |
| Random Gegenbauer Features for Scalable Kernel Methods | ICML | PDF | - |
| Fast Finite Width Neural Tangent Kernel | ICML | PDF | CODE |
| A Neural Tangent Kernel Perspective of GANs | ICML | PDF | CODE |
| Neural Tangent Kernel Empowered Federated Learning | ICML | PDF | - |
| Reverse Engineering the Neural Tangent Kernel | ICML | PDF | CODE |
| How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective | ICML | PDF | CODE |
| Bounding the Width of Neural Networks via Coupled Initialization – A Worst Case Analysis – | ICML | PDF | - |
| Leverage Score Sampling for Tensor Product Matrices in Input Sparsity Time | ICML | PDF | - |
| Lazy Estimation of Variable Importance for Large Neural Networks | ICML | PDF | - |
| DAVINZ: Data Valuation using Deep Neural Networks at Initialization | ICML | PDF | - |
| Neural Tangent Kernel Beyond the Infinite-Width Limit: Effects of Depth and Initialization | ICML | PDF | CODE |
| NeuralEF: Deconstructing Kernels by Deep Neural Networks | ICML | PDF | CODE |
| Feature Learning and Signal Propagation in Deep Neural Networks | ICML | PDF | - |
| More Than a Toy: Random Matrix Models Predict How Real-World Neural Representations Generalize | ICML | PDF | CODE |
| Fast Graph Neural Tangent Kernel via Kronecker Sketching | AAAI | PDF | - |
| Rethinking Influence Functions of Neural Networks in the Over-parameterized Regime | AAAI | PDF | - |
| On the Empirical Neural Tangent Kernel of Standard Finite-Width Convolutional Neural Network Architectures | UAI | PDF | - |
| Feature Learning and Random Features in Standard Finite-Width Convolutional Neural Networks: An Empirical Study | UAI | PDF | - |
| Out of Distribution Detection via Neural Network Anchoring | ACML | PDF | CODE |
| Learning Neural Ranking Models Online from Implicit User Feedback | WWW | PDF | - |
| Trust Your Robots! Predictive Uncertainty Estimation of Neural Networks with Sparse Gaussian Processes | CoRL | PDF | - |
| When and why PINNs fail to train: A neural tangent kernel perspective | CP | PDF | CODE |
| How Neural Architectures Affect Deep Learning for Communication Networks? | ICC | PDF | - |
| Loss landscapes and optimization in over-parameterized non-linear systems and neural networks | ACHA | PDF | - |
| Feature Purification: How Adversarial Training Performs Robust Deep Learning | FOCS | PDF | - |
| Kernel-Based Smoothness Analysis of Residual Networks | MSML | PDF | - |
| Analyzing Finite Neural Networks: Can We Trust Neural Tangent Kernel Theory? | MSML | PDF | - |
| The Training Response Law Explains How Deep Neural Networks Learn | IoP | PDF | - |
| Simple, Fast, and Flexible Framework for Matrix Completion with Infinite Width Neural Networks | PNAS | PDF | CODE |
| Representation Learning via Quantum Neural Tangent Kernels | PRX Quantum | PDF | - |
| TorchNTK: A Library for Calculation of Neural Tangent Kernels of PyTorch Models | arXiv | PDF | CODE |
| Neural Tangent Kernel Analysis of Shallow α-Stable ReLU Neural Networks | arXiv | PDF | - |
| Neural Tangent Kernel: A Survey | arXiv | PDF | - |

## 2021

| Title | Venue | PDF | CODE |
| --- | --- | --- | --- |
| Neural Tangent Kernel Maximum Mean Discrepancy | NeurIPS | PDF | - |
| DNN-based Topology Optimisation: Spatial Invariance and Neural Tangent Kernel | NeurIPS | PDF | - |
| Stability & Generalisation of Gradient Descent for Shallow Neural Networks without the Neural Tangent Kernel | NeurIPS | PDF | - |
| Scaling Neural Tangent Kernels via Sketching and Random Features | NeurIPS | PDF | - |
| Dataset Distillation with Infinitely Wide Convolutional Networks | NeurIPS | PDF | - |
| On the Equivalence between Neural Network and Support Vector Machine | NeurIPS | PDF | CODE |
| Local Signal Adaptivity: Provable Feature Learning in Neural Networks Beyond Kernels | NeurIPS | PDF | CODE |
| Explicit Loss Asymptotics in the Gradient Descent Training of Neural Networks | NeurIPS | PDF | - |
| Kernelized Heterogeneous Risk Minimization | NeurIPS | PDF | CODE |
| An Empirical Study of Neural Kernel Bandits | NeurIPS-W | PDF | - |
| The Curse of Depth in Kernel Regime | NeurIPS-W | PDF | - |
| Wearing a MASK: Compressed Representations of Variable-Length Sequences Using Recurrent Neural Tangent Kernels | ICASSP | PDF | CODE |
| The Dynamics of Gradient Descent for Overparametrized Neural Networks | L4DC | PDF | - |
| The Recurrent Neural Tangent Kernel | ICLR | PDF | - |
| Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS | ICLR | PDF | - |
| Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime | ICLR | PDF | - |
| Meta-Learning with Neural Tangent Kernels | ICLR | PDF | - |
| How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks | ICLR | PDF | - |
| Deep Networks and the Multiple Manifold Problem | ICLR | PDF | - |
| Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective | ICLR | PDF | CODE |
| Neural Thompson Sampling | ICLR | PDF | - |
| Deep Equals Shallow for ReLU Networks in Kernel Regimes | ICLR | PDF | - |
| A Recipe for Global Convergence Guarantee in Deep Neural Networks | AAAI | PDF | - |
| A Deep Conditioning Treatment of Neural Networks | ALT | PDF | - |
| Nonparametric Regression with Shallow Overparameterized Neural Networks Trained by GD with Early Stopping | COLT | PDF | - |
| Learning with invariances in random features and kernel models | COLT | PDF | - |
| Implicit Regularization via Neural Feature Alignment | AISTATS | PDF | CODE |
| Regularization Matters: A Nonparametric Perspective on Overparametrized Neural Network | AISTATS | PDF | - |
| One-pass Stochastic Gradient Descent in Overparametrized Two-layer Neural Networks | AISTATS | PDF | - |
| Fast Adaptation with Linearized Neural Networks | AISTATS | PDF | CODE |
| Fast Learning in Reproducing Kernel Kreĭn Spaces via Signed Measures | AISTATS | PDF | - |
| Stable ResNet | AISTATS | PDF | - |
| A Dynamical View on Optimization Algorithms of Overparameterized Neural Networks | AISTATS | PDF | - |
| Can We Characterize Tasks Without Labels or Features? | CVPR | PDF | CODE |
| The Neural Tangent Link Between CNN Denoisers and Non-Local Filters | CVPR | PDF | CODE |
| Nerfies: Deformable Neural Radiance Fields | ICCV | PDF | CODE |
| Kernel Methods in Hyperbolic Spaces | ICCV | PDF | - |
| Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks | ICML | PDF | - |
| On the Generalization Power of Overfitted Two-Layer Neural Tangent Kernel Models | ICML | PDF | - |
| Tensor Programs IIb: Architectural Universality of Neural Tangent Kernel Training Dynamics | ICML | PDF | - |
| Tensor Programs IV: Feature Learning in Infinite-Width Neural Networks | ICML | PDF | CODE |
| FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Convergence Analysis | ICML | PDF | - |
| On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent | ICML | PDF | - |
| Feature Learning in Infinite-Width Neural Networks | ICML | PDF | CODE |
| On Monotonic Linear Interpolation of Neural Network Parameters | ICML | PDF | - |
| Uniform Convergence, Adversarial Spheres and a Simple Remedy | ICML | PDF | - |
| Quantifying the Benefit of Using Differentiable Learning over Tangent Kernels | ICML | PDF | - |
| Efficient Statistical Tests: A Neural Tangent Kernel Approach | ICML | PDF | - |
| Neural Tangent Generalization Attacks | ICML | PDF | CODE |
| On the Random Conjugate Kernel and Neural Tangent Kernel | ICML | PDF | - |
| Generalization Guarantees for Neural Architecture Search with Train-Validation Split | ICML | PDF | - |
| Tilting the playing field: Dynamical loss functions for machine learning | ICML | PDF | CODE |
| PHEW: Constructing Sparse Networks that Learn Fast and Generalize Well Without Training Data | ICML | PDF | - |
| On the Neural Tangent Kernel of Deep Networks with Orthogonal Initialization | IJCAI | PDF | CODE |
| Towards Understanding the Spectral Bias of Deep Learning | IJCAI | PDF | - |
| On Random Kernels of Residual Architectures | UAI | PDF | - |
| How Shrinking Gradient Noise Helps the Performance of Neural Networks | ICBD | PDF | - |
| Unsupervised Shape Completion via Deep Prior in the Neural Tangent Kernel Perspective | ACM TOG | PDF | - |
| Benefits of Jointly Training Autoencoders: An Improved Neural Tangent Kernel Analysis | TIT | PDF | - |
| Reinforcement Learning via Gaussian Processes with Neural Network Dual Kernels | CoG | PDF | - |
| Kernel-Based Smoothness Analysis of Residual Networks | MSML | PDF | - |
| Mathematical Models of Overparameterized Neural Networks | IEEE | PDF | - |
| A Feature Fusion Based Indicator for Training-Free Neural Architecture Search | IEEE | PDF | - |
| Pathological spectra of the Fisher information metric and its variants in deep neural networks | NC | PDF | - |
| Linearized two-layers neural networks in high dimension | Ann. Statist. | PDF | - |
| Geometric compression of invariant manifolds in neural nets | J. Stat. Mech. | PDF | CODE |
| A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks | arXiv | PDF | - |
| Learning with Neural Tangent Kernels in Near Input Sparsity Time | arXiv | PDF | - |
| Spectral Analysis of the Neural Tangent Kernel for Deep Residual Networks | arXiv | PDF | - |
| Properties of the After Kernel | arXiv | PDF | CODE |

## 2020

| Title | Venue | PDF | CODE |
| --- | --- | --- | --- |
| Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations | ECCV | PDF | - |
| Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? — A Neural Tangent Kernel Perspective | NeurIPS | PDF | - |
| Label-Aware Neural Tangent Kernel: Toward Better Generalization and Local Elasticity | NeurIPS | PDF | CODE |
| Finite Versus Infinite Neural Networks: an Empirical Study | NeurIPS | PDF | - |
| On the linearity of large non-linear models: when and why the tangent kernel is constant | NeurIPS | PDF | - |
| On the Similarity between the Laplace and Neural Tangent Kernels | NeurIPS | PDF | - |
| A Generalized Neural Tangent Kernel Analysis for Two-layer Neural Networks | NeurIPS | PDF | - |
| Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics | NeurIPS | PDF | - |
| Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains | NeurIPS | PDF | CODE |
| Network size and weights size for memorization with two-layers neural networks | NeurIPS | PDF | - |
| Neural Networks Learning and Memorization with (almost) no Over-Parameterization | NeurIPS | PDF | - |
| Towards Understanding Hierarchical Learning: Benefits of Neural Representations | NeurIPS | PDF | - |
| Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher | NeurIPS | PDF | - |
| On Infinite-Width Hypernetworks | NeurIPS | PDF | - |
| Predicting Training Time Without Training | NeurIPS | PDF | - |
| Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel | NeurIPS | PDF | - |
| Spectra of the Conjugate Kernel and Neural Tangent Kernel for Linear-Width Neural Networks | NeurIPS | PDF | - |
| Kernel and Rich Regimes in Overparametrized Models | COLT | PDF | - |
| Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK | COLT | PDF | - |
| Finite Depth and Width Corrections to the Neural Tangent Kernel | ICLR | PDF | - |
| Neural tangent kernels, transportation mappings, and universal approximation | ICLR | PDF | - |
| Neural Tangents: Fast and Easy Infinite Neural Networks in Python | ICLR | PDF | CODE |
| Picking Winning Tickets Before Training by Preserving Gradient Flow | ICLR | PDF | CODE |
| Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory | ICLR | PDF | - |
| Simple and Effective Regularization Methods for Training on Noisily Labeled Data with Generalization Guarantee | ICLR | PDF | - |
| The asymptotic spectrum of the Hessian of DNN throughout training | ICLR | PDF | - |
| Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks | ICLR | PDF | CODE |
| Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks | ICLR | PDF | - |
| Asymptotics of Wide Networks from Feynman Diagrams | ICLR | PDF | - |
| The equivalence between Stein variational gradient descent and black-box variational inference | ICLR-W | PDF | - |
| Neural Kernels Without Tangents | ICML | PDF | CODE |
| The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization | ICML | PDF | - |
| Dynamics of Deep Neural Networks and Neural Tangent Hierarchy | ICML | PDF | - |
| Disentangling Trainability and Generalization in Deep Neural Networks | ICML | PDF | - |
| Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks | ICML | PDF | CODE |
| Finding trainable sparse networks through Neural Tangent Transfer | ICML | PDF | CODE |
| Associative Memory in Iterated Overparameterized Sigmoid Autoencoders | ICML | PDF | - |
| Neural Contextual Bandits with UCB-based Exploration | ICML | PDF | - |
| Optimization Theory for ReLU Neural Networks Trained with Normalization Layers | ICML | PDF | - |
| Towards a General Theory of Infinite-Width Limits of Neural Classifiers | ICML | PDF | - |
| Generalisation guarantees for continual learning with orthogonal gradient descent | ICML-W | PDF | CODE |
| Neural Spectrum Alignment: Empirical Study | ICANN | PDF | - |
| A type of generalization error induced by initialization in deep neural networks | MSML | PDF | - |
| Disentangling feature and lazy training in deep neural networks | J. Stat. Mech. | PDF | CODE |
| Scaling description of generalization with number of parameters in deep learning | J. Stat. Mech. | PDF | CODE |
| Any Target Function Exists in a Neighborhood of Any Sufficiently Wide Random Network: A Geometrical Perspective | NC | PDF | - |
| Kolmogorov Width Decay and Poor Approximation in Machine Learning: Shallow Neural Networks, Random Feature Models and Neural Tangent Kernels | RMS | PDF | - |
| On the infinite width limit of neural networks with a standard parameterization | arXiv | PDF | CODE |
| On the Empirical Neural Tangent Kernel of Standard Finite-Width Convolutional Neural Network Architectures | arXiv | PDF | - |
| Infinite-Width Neural Networks for Any Architecture: Reference Implementations | arXiv | PDF | CODE |
| Every Model Learned by Gradient Descent Is Approximately a Kernel Machine | arXiv | PDF | - |
| Analyzing Finite Neural Networks: Can We Trust Neural Tangent Kernel Theory? | arXiv | PDF | - |
| Scalable Neural Tangent Kernel of Recurrent Architectures | arXiv | PDF | CODE |
| Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning | arXiv | PDF | - |

## 2019

| Title | Venue | PDF | CODE |
| --- | --- | --- | --- |
| Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel | NeurIPS | PDF | - |
| Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent | NeurIPS | PDF | CODE |
| On Exact Computation with an Infinitely Wide Neural Net | NeurIPS | PDF | CODE |
| Graph Neural Tangent Kernel: Fusing Graph Neural Networks with Graph Kernels | NeurIPS | PDF | CODE |
| On the Inductive Bias of Neural Tangent Kernels | NeurIPS | PDF | CODE |
| Convergence of Adversarial Training in Overparametrized Neural Networks | NeurIPS | PDF | - |
| Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks | NeurIPS | PDF | - |
| Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers | NeurIPS | PDF | - |
| Limitations of Lazy Training of Two-layers Neural Networks | NeurIPS | PDF | - |
| The Convergence Rate of Neural Networks for Learned Functions of Different Frequencies | NeurIPS | PDF | CODE |
| On Lazy Training in Differentiable Programming | NeurIPS | PDF | - |
| Information in Infinite Ensembles of Infinitely-Wide Neural Networks | AABI | PDF | - |
| Scaling Limits of Wide Neural Networks with Weight Sharing: Gaussian Process Behavior, Gradient Independence, and Neural Tangent Kernel Derivation | arXiv | PDF | - |
| Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems | arXiv | PDF | - |
| Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems | arXiv | PDF | - |
| Mean-field Behaviour of Neural Tangent Kernel for Deep Neural Networks | arXiv | PDF | - |
| Order and Chaos: NTK views on DNN Normalization, Checkerboard and Boundary Artifacts | arXiv | PDF | - |
| A Fine-Grained Spectral Perspective on Neural Networks | arXiv | PDF | CODE |
| Enhanced Convolutional Neural Tangent Kernels | arXiv | PDF | - |

## 2018

| Title | Venue | PDF | CODE |
| --- | --- | --- | --- |
| Neural Tangent Kernel: Convergence and Generalization in Neural Networks | NeurIPS | PDF | - |
