NEAT Optimizer

NEAT: Nash-Equilibrium Adaptive Training Optimizer for Deep Learning.

A PyTorch optimizer that combines second-order curvature estimation with adaptive learning rates for efficient deep learning training.

Features

  • 🚀 Adaptive Learning: Dynamic learning rate adjustment based on gradient statistics
  • 📊 Second-Order Optimization: Incorporates curvature information via Laplacian estimation (see the sketch after this list)
  • 💪 Stable Training: Improved convergence and stability compared to standard optimizers
  • 🔧 Easy Integration: Drop-in replacement for PyTorch optimizers
  • ⚡ Efficient: Minimal computational overhead
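
The exact update rule lives in neat/optimizer.py; the snippet below is only a conceptual sketch of how a Laplacian-style curvature penalty could enter an Adam-like step. All names (sketch_step, laplacian_est, the decoupled weight decay) are illustrative assumptions, not the actual NEAT internals.

# Conceptual sketch only -- not the actual NEAT implementation.
# p, grad, exp_avg, exp_avg_sq, laplacian_est are torch.Tensor objects of the same shape.
def sketch_step(p, grad, exp_avg, exp_avg_sq, laplacian_est,
                lr=1e-3, betas=(0.9, 0.999), eps=1e-8,
                weight_decay=0.01, laplacian_weight=0.01):
    # Decoupled (AdamW-style) weight decay, assumed purely for illustration
    p.mul_(1 - lr * weight_decay)
    # Exponential moving averages of the gradient and its square, as in Adam
    exp_avg.mul_(betas[0]).add_(grad, alpha=1 - betas[0])
    exp_avg_sq.mul_(betas[1]).addcmul_(grad, grad, value=1 - betas[1])
    # Hypothetical curvature term: shrink the step where the Laplacian estimate is large
    denom = exp_avg_sq.sqrt().add_(eps).add_(laplacian_weight * laplacian_est.abs())
    # Bias correction omitted to keep the sketch short
    p.addcdiv_(exp_avg, denom, value=-lr)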

Installation

From PyPI (once published)

pip install neat-optimizer

From Source

git clone https://github.com/ItCodinTime/neat-optimizer.git
cd neat-optimizer
pip install -e .

Quick Start

import torch
import torch.nn as nn
from neat import NEATOptimizer

# Create your model
model = nn.Sequential(
    nn.Linear(10, 20),
    nn.ReLU(),
    nn.Linear(20, 1)
)

# Initialize NEAT optimizer
optimizer = NEATOptimizer(
    model.parameters(),
    lr=1e-3,
    betas=(0.9, 0.999),
    weight_decay=0.01,
    laplacian_weight=0.01,
)

# Training loop (num_epochs, dataloader, and compute_loss are placeholders for your own setup)
for epoch in range(num_epochs):
    for batch in dataloader:
        optimizer.zero_grad()
        loss = compute_loss(model, batch)
        loss.backward()
        optimizer.step()
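
For a fully self-contained run, the loop above can be fleshed out with a synthetic regression dataset. Everything here besides NEATOptimizer itself (whose constructor arguments are taken from the example above) is ordinary PyTorch.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from neat import NEATOptimizer

# Synthetic regression data: 10 features -> 1 target
X = torch.randn(512, 10)
y = X.sum(dim=1, keepdim=True) + 0.1 * torch.randn(512, 1)
dataloader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 1))
criterion = nn.MSELoss()
optimizer = NEATOptimizer(model.parameters(), lr=1e-3)

for epoch in range(5):
    for inputs, targets in dataloader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")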

Usage

Basic Usage

The NEAT optimizer can be used as a drop-in replacement for other PyTorch optimizers:

from neat import NEATOptimizer

optimizer = NEATOptimizer(model.parameters(), lr=1e-3)

Advanced Configuration

optimizer = NEATOptimizer(
    model.parameters(),
    lr=1e-3,                    # Learning rate
    betas=(0.9, 0.999),         # Coefficients for gradient averaging
    eps=1e-8,                   # Term for numerical stability
    weight_decay=0.01,          # L2 regularization
    laplacian_weight=0.01,      # Weight for Laplacian regularization
)
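
Because NEATOptimizer is presented as a drop-in replacement for torch.optim optimizers, the standard state_dict checkpointing pattern should apply unchanged; this is an assumption based on that claim rather than a documented guarantee.

# Save model and optimizer state (assumes the standard torch.optim interface)
torch.save({
    "model": model.state_dict(),
    "optimizer": optimizer.state_dict(),
}, "checkpoint.pt")

# Resume later
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optimizer"])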

Example: Training a Transformer

See the examples/train_transformer.py file for a complete example of training a transformer model with the NEAT optimizer.

from neat import NEATOptimizer
import torch.nn as nn

model = MyTransformerModel()        # your transformer model
criterion = nn.CrossEntropyLoss()   # or any loss suited to your task
optimizer = NEATOptimizer(
    model.parameters(),
    lr=1e-3,
    betas=(0.9, 0.999),
    weight_decay=0.01,
)

for epoch in range(num_epochs):
    for inputs, targets in dataloader:
        optimizer.zero_grad()
        output = model(inputs)
        loss = criterion(output, targets)
        loss.backward()
        optimizer.step()
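
For longer transformer runs it is common to pair the optimizer with a learning-rate scheduler and gradient clipping. Schedulers only require the standard torch.optim.Optimizer interface, so the following (continuing the example above) should work with NEATOptimizer, though that interoperability is assumed rather than documented here.

from torch.optim.lr_scheduler import CosineAnnealingLR
from torch.nn.utils import clip_grad_norm_

scheduler = CosineAnnealingLR(optimizer, T_max=num_epochs)

for epoch in range(num_epochs):
    for inputs, targets in dataloader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        clip_grad_norm_(model.parameters(), max_norm=1.0)  # cap the gradient norm
        optimizer.step()
    scheduler.step()  # anneal the learning rate once per epoch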

Parameters

  • lr (float, default: 1e-3): Learning rate
  • betas (Tuple[float, float], default: (0.9, 0.999)): Coefficients for computing running averages of the gradient and its square
  • eps (float, default: 1e-8): Term added to denominator for numerical stability
  • weight_decay (float, default: 0): Weight decay coefficient (L2 penalty)
  • laplacian_weight (float, default: 0.01): Weight for Laplacian regularization term
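
Like the built-in PyTorch optimizers, a drop-in replacement should also accept per-parameter-group settings. The example below is hypothetical (it assumes NEATOptimizer inherits the standard torch.optim parameter-group handling) and disables weight decay for bias parameters.

# Assumes standard torch.optim parameter-group handling
decay, no_decay = [], []
for name, param in model.named_parameters():
    (no_decay if name.endswith("bias") else decay).append(param)

optimizer = NEATOptimizer(
    [
        {"params": decay, "weight_decay": 0.01},
        {"params": no_decay, "weight_decay": 0.0},
    ],
    lr=1e-3,
)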

Development

Running Tests

pip install -e ".[dev]"
pytest tests/
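
A typical test in tests/test_optimizer.py would check that a single optimization step actually moves the parameters. The sketch below is a hypothetical example of such a test, not a copy of the repository's test suite.

import torch
import torch.nn as nn
from neat import NEATOptimizer

def test_step_changes_parameters():
    model = nn.Linear(4, 1)
    optimizer = NEATOptimizer(model.parameters(), lr=1e-2)
    before = [p.detach().clone() for p in model.parameters()]

    # One backward/step cycle on a toy objective
    loss = model(torch.randn(8, 4)).pow(2).mean()
    loss.backward()
    optimizer.step()

    after = list(model.parameters())
    assert any(not torch.equal(b, a) for b, a in zip(before, after))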

Code Formatting

black neat/ tests/ examples/
flake8 neat/ tests/ examples/

Project Structure

neat-optimizer/
├── neat/
│   ├── __init__.py
│   ├── optimizer.py        # Main NEAT optimizer implementation
│   ├── laplacian.py        # Laplacian estimation utilities
│   └── utils.py            # Helper functions
├── examples/
│   └── train_transformer.py # Example training script
├── tests/
│   └── test_optimizer.py   # Unit tests
├── setup.py                # Package setup
├── pyproject.toml          # Project configuration
└── README.md               # This file

License

This project is licensed under the MIT License - see the LICENSE file for details.

Citation

If you use the NEAT optimizer in your research, please cite:

@software{neat_optimizer,
  title={NEAT: Nash-Equilibrium Adaptive Training Optimizer},
  author={Your Name},
  year={2025},
  url={https://github.com/ItCodinTime/neat-optimizer}
}

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Acknowledgments

  • Inspired by modern optimization techniques in deep learning
  • Built on top of PyTorch
