An interactive web-based platform for learning neural network fundamentals through real-time simulations with auto-generated PyTorch code.
| Feature | Description |
|---|---|
| Neural Network Visualizer | Interactive SVG network with real-time training, neuron inspection, and weight visualization |
| Training Charts | Live loss curves and accuracy metrics with PyTorch code annotations |
| XOR Playground | Decision boundary heatmap showing non-linear classification in action |
| PyTorch Concepts | Interactive guide to Tensors, Layers, Activations, Backprop, Optimizers, and Loss Functions |
| Training Pipeline | Step-by-step walkthrough of the complete PyTorch training workflow |
- Configurable architecture: 2 to 6 layers, 1 to 8 neurons per hidden layer
- Real-time training with live weight updates and neuron activation visualization
- Forward pass animation: watch data flow layer by layer
- Click any neuron to inspect its pre-activation (z), activation (a), bias (b), and the formula that combines them
- Weight visualization: color (blue/red) and thickness encode weight sign and magnitude
- Learning rate slider (0.001 to 1.0)
- Activation functions: ReLU, Sigmoid, Tanh (switchable in real time)
- Architecture modification: add/remove layers and neurons dynamically (see the sketch after this list)
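As a rough sketch of how such a configuration could map to PyTorch, here is an `nn.Sequential` builder parameterized by layer sizes and activation. The `build_mlp` helper and its arguments are illustrative, not the app's actual code generator:

```python
import torch.nn as nn

# Hypothetical sketch: build an MLP matching the visualizer's settings.
# `hidden_sizes` and `activation` are illustrative names, not the app's API.
ACTIVATIONS = {"relu": nn.ReLU, "sigmoid": nn.Sigmoid, "tanh": nn.Tanh}

def build_mlp(in_features=2, hidden_sizes=(4, 4), out_features=1, activation="relu"):
    act = ACTIVATIONS[activation]
    layers = []
    prev = in_features
    for size in hidden_sizes:
        layers += [nn.Linear(prev, size), act()]
        prev = size
    layers += [nn.Linear(prev, out_features), nn.Sigmoid()]  # sigmoid output for XOR
    return nn.Sequential(*layers)

model = build_mlp(hidden_sizes=(4, 4), activation="tanh")  # switch activations freely
```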
Every change to the network architecture automatically generates valid PyTorch code:
```python
import torch
import torch.nn as nn

class XORNet(nn.Module):
    def __init__(self):
        super(XORNet, self).__init__()
        self.fc1 = nn.Linear(2, 4)
        self.fc2 = nn.Linear(4, 4)
        self.fc3 = nn.Linear(4, 1)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        return torch.sigmoid(self.fc3(x))
```

- Loss curve (MSE) with area chart visualization
- Accuracy tracking with percentage display
- PyTorch code annotations on every chart
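Outside the app, the same metrics can be reproduced with a short plain-PyTorch loop. This is a minimal sketch assuming the generated `XORNet` above; the platform's internal trainer may differ in details such as epoch count and learning rate:

```python
import torch

# XOR dataset: inputs and targets
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

model = XORNet()
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(2000):
    optimizer.zero_grad()
    pred = model(X)
    loss = criterion(pred, y)  # the value plotted on the loss curve
    loss.backward()
    optimizer.step()
    # fraction of correct predictions at a 0.5 threshold (the accuracy metric)
    accuracy = ((pred > 0.5).float() == y).float().mean()
```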
- 2D heatmap that updates during training
- Watch the network learn non-linear classification
- Truth table comparison (target vs. predicted)
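A rough sketch of how the heatmap values and truth-table comparison can be computed from a trained model (the 50×50 grid resolution is an illustrative assumption, not the app's internals):

```python
import torch

# Assumes `model` is a trained XORNet (see the training loop above).
# Evaluate the model over a grid on [0, 1]^2 to get heatmap values.
xs = torch.linspace(0, 1, 50)
grid = torch.cartesian_prod(xs, xs)     # (2500, 2) grid points
with torch.no_grad():
    heat = model(grid).reshape(50, 50)  # one probability per grid cell

# Truth table: target vs. predicted
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    with torch.no_grad():
        p = model(torch.tensor([[float(a), float(b)]])).item()
    print(f"{a} XOR {b} = {a ^ b}, predicted {p:.3f}")
```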
- Neural Network Paper: full paper on the simulation platform with equations, figures, and references
- Knight Chess AI Paper: analysis of the chess AI with minimax, alpha-beta pruning, and complexity analysis
| Technology | Purpose |
|---|---|
| Next.js | React framework with SSR and routing |
| TypeScript | Type-safe development |
| Tailwind CSS | Utility-first styling |
| | Smooth animations and transitions |
| | Training metrics visualization |
| | Edge deployment and hosting |
- Node.js 18+
- npm or yarn
```bash
# Clone the repository
git clone https://github.com/romizone/pytorch.git

# Navigate to the project
cd pytorch

# Install dependencies
npm install

# Start the development server
npm run dev
```

Open http://localhost:3000 in your browser.
```bash
npm run build
npm start
```

```
pytorch/
├── src/
│   ├── app/
│   │   ├── page.tsx          # Main simulation page
│   │   ├── layout.tsx        # Root layout
│   │   ├── globals.css       # Global styles
│   │   ├── paper/
│   │   │   └── page.tsx      # Neural Network paper (arXiv style)
│   │   └── knight-chess/
│   │       └── page.tsx      # Knight Chess AI paper (arXiv style)
│   └── components/
│       ├── NeuralNetworkVisualizer.tsx   # Core network simulation
│       ├── TrainingChart.tsx             # Loss & accuracy charts
│       ├── PyTorchConcepts.tsx           # Interactive concept explorer
│       ├── XORPlayground.tsx             # XOR decision boundary
│       └── TrainingPipeline.tsx          # Training workflow guide
├── package.json
├── tsconfig.json
├── tailwind.config.ts
└── next.config.ts
```
arXiv:2602.09847v1 [cs.LG] · 14 Feb 2026
Full academic paper analyzing the simulation platform:
- System architecture and component design
- Neural network engine (forward prop, backprop, gradient descent)
- 10 numbered equations, 3 figures, 3 tables
- 10 academic references
Read Paper →
arXiv:2602.10234v1 [cs.AI] · 14 Feb 2026
Analysis of the Knight Chess game AI:
- Game design: 8×9 board, 5 knights, 3,136 starting positions
- AI engine: Minimax + Alpha-Beta pruning (3 difficulty levels)
- Complexity analysis: branching factor ~42, game tree ~10¹³⁰
- 10 equations, 3 figures, 7 tables, 12 references
Read Paper →
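For orientation, minimax with alpha-beta pruning has the general shape sketched below; the `moves`, `apply`, and `evaluate` hooks are hypothetical placeholders standing in for the game's actual move generator and evaluation function:

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply, evaluate):
    """Generic minimax with alpha-beta pruning; game logic is injected via hooks."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for m in legal:
            value = max(value, alphabeta(apply(state, m), depth - 1,
                                         alpha, beta, False, moves, apply, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:   # beta cutoff: opponent will avoid this branch
                break
        return value
    value = math.inf
    for m in legal:
        value = min(value, alphabeta(apply(state, m), depth - 1,
                                     alpha, beta, True, moves, apply, evaluate))
        beta = min(beta, value)
        if alpha >= beta:       # alpha cutoff
            break
    return value
```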
```
z_j^(l) = Σ_i w_ij · a_i^(l-1) + b_j^(l)
a_j^(l) = σ(z_j^(l))
δ_out = (ŷ - y) · σ'(z)
w ← w - η · δ · a_prev
```
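These equations map almost line-for-line onto code. Below is a minimal NumPy sketch of one gradient-descent step for a single sigmoid layer; variable names mirror the symbols above, and this is not the platform's actual engine code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(W, b, a_prev, y, eta=0.1):
    """One gradient-descent step for a sigmoid output layer."""
    z = W @ a_prev + b             # z = Σ w · a_prev + b
    a = sigmoid(z)                 # a = σ(z)
    delta = (a - y) * a * (1 - a)  # δ = (ŷ - y) · σ'(z), since σ'(z) = a(1 - a)
    W -= eta * np.outer(delta, a_prev)  # w ← w - η · δ · a_prev
    b -= eta * delta
    return W, b
```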
| Function | Formula | Best For |
|---|---|---|
| ReLU | max(0, x) | Hidden layers (fastest convergence) |
| Sigmoid | 1/(1+e⁻ˣ) | Output layer (binary classification) |
| Tanh | (eˣ-e⁻ˣ)/(eˣ+e⁻ˣ) | Hidden layers (zero-centered output) |
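For experimentation, the three activations as plain NumPy functions, equivalent to the formulas in the table:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)        # max(0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # 1 / (1 + e^-x)

def tanh(x):
    return np.tanh(x)                # (e^x - e^-x) / (e^x + e^-x)
```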
Contributions are welcome! Here's how:
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is open source and available under the MIT License.