The Ultimate Cross-Language Neural Network Framework
Train in Ruby. Deploy in Python. Or vice versa. Your choice.
GRNexus is not just another neural network framework. It's a revolutionary cross-language AI platform that breaks the barriers between Ruby and Python, combining the elegance of high-level languages with the raw power of native C acceleration.
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Ruby │ ←──→ │ .nexus │ ←──→ │ Python │
│ Elegance │ │ Format │ │ Power │
└─────────────┘ └─────────────┘ └─────────────┘
↓ ↓ ↓
└────────────────────┴─────────────────────┘
│
┌───────▼────────┐
│ Native C Core │
│ 10-100x Faster│
└────────────────┘
| Feature | Description | Status |
|---|---|---|
| 🚀 Blazing Fast | Native C implementation (10-100x faster) | ✅ Production Ready |
| 🔄 Cross-Language | Ruby ↔ Python model compatibility | ✅ 100% Compatible |
| 📝 Text AI | Complete NLP pipeline (tokenization, embeddings, TF-IDF) | ✅ Full Suite |
| 🔢 Numeric Ops | 40+ operations (stats, normalization, time series) | ✅ Comprehensive |
| 🎯 35+ Activations | GELU, Swish, Mish, Snake, and more | ✅ State-of-the-art |
| 🏗️ 12+ Layers | Dense, Conv2D, LSTM, GRU, BatchNorm, Dropout | ✅ Production Grade |
| 🎓 Smart Training | EarlyStopping, ModelCheckpoint, ReduceLR | ✅ Intelligent |
| 🔍 Model Inspector | Analyze models without loading | ✅ Unique Feature |
| 🌍 Cross-Platform | Windows, macOS, Linux | ✅ Universal |
| 📦 Zero Dependencies | Pure Ruby/Python + C (no TensorFlow/PyTorch) | ✅ Lightweight |
- **Cross-Language Model Compatibility**
- Save models in Ruby, load in Python (and vice versa)
- Universal .nexus format with metadata
- Automatic architecture reconstruction
- BatchNorm statistics preserved correctly
- **Complete Text Processing**
- Vocabulary management
- TF-IDF vectorization
- Word embeddings with Xavier initialization
- Document similarity (see the sketch after this list)
- Sentiment analysis ready
- Improved EmbeddingLayer for NLP tasks
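A minimal sketch of TF-IDF-style document similarity using the Vocabulary and TextVectorizer classes shown later in this README (the cosine helper is plain Python written here, not part of GRNexus, and it assumes vectorize() returns one numeric weight per vocabulary term):
from lib.grnexus_text_proccessing import Vocabulary, TextVectorizer

docs = [
    "ruby and python share one model format",
    "the model format works in ruby and python",
    "time series forecasting with moving averages"
]
vocab = Vocabulary(docs, max_vocab_size=100)
vectorizer = TextVectorizer(vocab)
vectors = [vectorizer.vectorize(d) for d in docs]

# Cosine similarity between two document vectors (helper defined here)
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(f"doc 0 vs doc 1: {cosine(vectors[0], vectors[1]):.3f}")  # similar wording
print(f"doc 0 vs doc 2: {cosine(vectors[0], vectors[2]):.3f}")  # different topic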
- **Advanced Numeric Processing**
- Statistical operations (mean, std, variance)
- Normalization (Z-score, MinMax)
- Time series (moving average, differences, integration)
- Array operations (concatenate, power, modulo)
- **Model Inspection**
- Analyze models without loading
- View architecture, parameters, training history
- Cross-language metadata
- **Smart Training**
- Intelligent callbacks
- Automatic learning rate adjustment
- Early stopping
- Best model checkpointing
- **Enhanced Layer Support**
- FlattenLayer now handles 3D tensors (batch × sequence × features)
- EmbeddingLayer with Xavier initialization
- Better text and sequence processing
- Full support for NLP architectures
# Clone the repository
git clone https://github.com/grcodedigitalsolutions/GRNexus.git
cd GRNexus
# That's it! No dependencies to install 🎉
# Windows
windows_run.bat
# macOS
chmod +x mac.sh && ./mac.sh
# Linux
chmod +x linux.sh && ./linux.sh
Ruby:
require_relative 'ruby/grnexus'
# XOR dataset
x_train = [[0, 0], [0, 1], [1, 0], [1, 1]]
y_train = [[0], [1], [1], [0]]
# Build model
model = GRNexus::NeuralNetwork.new(loss: 'mse', learning_rate: 0.5)
model.add(GRNEXUSLayer::DenseLayer.new(units: 4, input_dim: 2, activation: GRNEXUSActivations::Tanh.new))
model.add(GRNEXUSLayer::DenseLayer.new(units: 1, input_dim: 4, activation: GRNEXUSActivations::Sigmoid.new))
# Train
model.train(x_train, y_train, epochs: 1000, batch_size: 4)
# Save (works in Python too!)
model.save('xor_model.nexus')
# Predict
puts model.predict([[0, 0]]) # => ~0.0
puts model.predict([[1, 1]]) # => ~0.0
puts model.predict([[0, 1]]) # => ~1.0
Python:
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer
from lib.grnexus_activations import Tanh, Sigmoid
# XOR dataset
x_train = [[0, 0], [0, 1], [1, 0], [1, 1]]
y_train = [[0], [1], [1], [0]]
# Build model
model = NeuralNetwork(loss='mse', learning_rate=0.5)
model.add(DenseLayer(4, 2, activation=Tanh()))
model.add(DenseLayer(1, 4, activation=Sigmoid()))
# Train
model.train(x_train, y_train, epochs=1000, batch_size=4)
# Save (works in Ruby too!)
model.save('xor_model.nexus')
# Predict
print(model.predict([[0, 0]])) # => ~0.0
print(model.predict([[1, 1]])) # => ~0.0
print(model.predict([[0, 1]])) # => ~1.0
Python - Simple Sentiment Analysis:
from grnexus import NeuralNetwork
from lib.grnexus_text_proccessing import Vocabulary, TextVectorizer
from lib.grnexus_layers import DenseLayer, DropoutLayer
from lib.grnexus_activations import ReLU, Tanh
from lib.grnexus_normalization import Softmax
# Training data
texts = [
"I love this product it's excellent",
"terrible product very bad quality",
"amazing quality exceeded expectations",
"worst purchase ever disappointed",
"highly recommend great value",
"waste of money poor quality"
]
labels = [[1, 0], [0, 1], [1, 0], [0, 1], [1, 0], [0, 1]] # [positive, negative]
# Create vocabulary and vectorize
vocab = Vocabulary(texts, max_vocab_size=100)
vectorizer = TextVectorizer(vocab)
x_train = [vectorizer.vectorize(text) for text in texts]
# Build sentiment analyzer
model = NeuralNetwork(loss='cross_entropy', learning_rate=0.05, name='sentiment_analyzer')
model.add(DenseLayer(32, vocab.size, activation=ReLU()))
model.add(DropoutLayer(rate=0.3))
model.add(DenseLayer(16, 32, activation=Tanh()))
model.add(DenseLayer(2, 16, activation=Softmax()))
# Train
model.train(x_train, labels, epochs=100, batch_size=2, verbose=True)
# Test predictions
test_text = "excellent product very good"
test_vector = vectorizer.vectorize(test_text)
prediction = model.predict([test_vector])[0]
sentiment = "POSITIVE" if prediction[0] > prediction[1] else "NEGATIVE"
confidence = max(prediction) * 100
print(f"Text: '{test_text}'")
print(f"Sentiment: {sentiment} ({confidence:.2f}% confidence)")
# Save for Ruby
model.save('models/sentiment_analyzer.nexus')
Ruby - Same Sentiment Analysis:
require_relative 'ruby/grnexus'
# Training data
texts = [
"I love this product it's excellent",
"terrible product very bad quality",
"amazing quality exceeded expectations",
"worst purchase ever disappointed",
"highly recommend great value",
"waste of money poor quality"
]
labels = [[1, 0], [0, 1], [1, 0], [0, 1], [1, 0], [0, 1]] # [positive, negative]
# Create vocabulary and vectorize
vocab = GRNexusTextProcessing::Vocabulary.new(texts, max_vocab_size: 100)
vectorizer = GRNexusTextProcessing::TextVectorizer.new(vocab)
x_train = texts.map { |text| vectorizer.vectorize(text) }
# Build sentiment analyzer
model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.05, name: 'sentiment_analyzer')
model.add(GRNEXUSLayer::DenseLayer.new(units: 32, input_dim: vocab.size, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.3))
model.add(GRNEXUSLayer::DenseLayer.new(units: 16, input_dim: 32, activation: GRNEXUSActivations::Tanh.new))
model.add(GRNEXUSLayer::DenseLayer.new(units: 2, input_dim: 16, activation: GRNEXUSNormalization::Softmax.new))
# Train
model.train(x_train, labels, epochs: 100, batch_size: 2, verbose: true)
# Test predictions
test_text = "excellent product very good"
test_vector = vectorizer.vectorize(test_text)
prediction = model.predict([test_vector])[0]
sentiment = prediction[0] > prediction[1] ? "POSITIVE" : "NEGATIVE"
confidence = prediction.max * 100
puts "Text: '#{test_text}'"
puts "Sentiment: #{sentiment} (#{confidence.round(2)}% confidence)"
# Save for Python
model.save('models/sentiment_analyzer.nexus')
Advanced: Sentiment Analysis with Embeddings:
from grnexus import NeuralNetwork
from lib.grnexus_text_proccessing import Vocabulary, TextEmbeddings
from lib.grnexus_layers import EmbeddingLayer, DenseLayer, DropoutLayer, FlattenLayer
from lib.grnexus_activations import ReLU
from lib.grnexus_normalization import Softmax
# Larger dataset
texts = [
"This movie is absolutely fantastic and amazing",
"Terrible film waste of time and money",
"Great acting superb storyline loved it",
"Boring predictable disappointing experience",
# ... more training data
]
labels = [[1, 0], [0, 1], [1, 0], [0, 1]] # [positive, negative]
# Create vocabulary
vocab = Vocabulary(texts, max_vocab_size=5000)
# Normalize texts to sequences of indices
max_length = 20
x_train = [vocab.normalize_text(text, max_length=max_length) for text in texts]
# Build model with embedding layer
model = NeuralNetwork(loss='cross_entropy', learning_rate=0.001, name='sentiment_embeddings')
# Embedding layer converts word indices to dense vectors
model.add(EmbeddingLayer(
vocab_size=vocab.size,
embedding_dim=128,
input_length=max_length
))
# Flatten embeddings
model.add(FlattenLayer()) # Output: max_length * embedding_dim
# Dense layers
model.add(DenseLayer(64, max_length * 128, activation=ReLU()))
model.add(DropoutLayer(rate=0.5))
model.add(DenseLayer(32, 64, activation=ReLU()))
model.add(DenseLayer(2, 32, activation=Softmax()))
# Train
model.train(x_train, labels, epochs=50, batch_size=16, verbose=True)
# Predict
test_text = "amazing movie highly recommended"
test_seq = vocab.normalize_text(test_text, max_length=max_length)
prediction = model.predict([test_seq])[0]
print(f"Sentiment: {'POSITIVE' if prediction[0] > prediction[1] else 'NEGATIVE'}")
print(f"Confidence: {max(prediction)*100:.2f}%")
model.save('sentiment_embeddings.nexus')
Ruby - Sentiment with Embeddings:
require_relative 'ruby/grnexus'
# Larger dataset
texts = [
"This movie is absolutely fantastic and amazing",
"Terrible film waste of time and money",
"Great acting superb storyline loved it",
"Boring predictable disappointing experience"
]
labels = [[1, 0], [0, 1], [1, 0], [0, 1]] # [positive, negative]
# Create vocabulary
vocab = GRNexusTextProcessing::Vocabulary.new(texts, max_vocab_size: 5000)
# Normalize texts to sequences of indices
max_length = 20
x_train = texts.map { |text| vocab.normalize_text(text, max_length: max_length) }
# Build model with embedding layer
model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.001, name: 'sentiment_embeddings')
# Embedding layer converts word indices to dense vectors
model.add(GRNEXUSLayer::EmbeddingLayer.new(
vocab_size: vocab.size,
embedding_dim: 128,
input_length: max_length
))
# Flatten embeddings
model.add(GRNEXUSLayer::FlattenLayer.new) # Output: max_length * embedding_dim
# Dense layers
model.add(GRNEXUSLayer::DenseLayer.new(units: 64, input_dim: max_length * 128, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.5))
model.add(GRNEXUSLayer::DenseLayer.new(units: 32, input_dim: 64, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DenseLayer.new(units: 2, input_dim: 32, activation: GRNEXUSNormalization::Softmax.new))
# Train
model.train(x_train, labels, epochs: 50, batch_size: 16, verbose: true)
# Predict
test_text = "amazing movie highly recommended"
test_seq = vocab.normalize_text(test_text, max_length: max_length)
prediction = model.predict([test_seq])[0]
sentiment = prediction[0] > prediction[1] ? "POSITIVE" : "NEGATIVE"
puts "Sentiment: #{sentiment}"
puts "Confidence: #{(prediction.max * 100).round(2)}%"
model.save('sentiment_embeddings.nexus')
Load and use cross-language:
# Load Python model in Ruby
model = GRNexus::NeuralNetwork.load('sentiment_embeddings.nexus')
# => Loading model: GRNexus v1.0 (created in Python)
# Use it immediately!
prediction = model.predict(test_data)
puts "Sentiment: #{prediction[0] > prediction[1] ? 'POSITIVE' : 'NEGATIVE'}"
# Load Ruby model in Python
model = NeuralNetwork.load('sentiment_embeddings.nexus')
# => Loading model: GRNexus v1.0 (created in Ruby)
# Use it immediately!
prediction = model.predict(test_data)
print(f"Sentiment: {'POSITIVE' if prediction[0] > prediction[1] else 'NEGATIVE'}")
This is where GRNexus truly shines. Train in one language, deploy in another:
# Team A: Ruby developers train a model
model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.1)
model.add(GRNEXUSLayer::DenseLayer.new(units: 128, input_dim: 50, activation: GRNEXUSActivations::GELU.new))
model.add(GRNEXUSLayer::BatchNormLayer.new)
model.add(GRNEXUSLayer::DenseLayer.new(units: 10, input_dim: 128, activation: GRNEXUSNormalization::Softmax.new))
model.train(x_train, y_train, epochs: 50)
model.save('shared_model.nexus')
# Team B: Python developers use it
model = NeuralNetwork.load('shared_model.nexus')
# => Loading model: GRNexus v1.0 (created in Ruby)
# Total params: 6,538
# Layers: 3
# Continue training with new data
model.train(new_x, new_y, epochs=20)
# Deploy in production
predictions = model.predict(production_data)
Supported paths:
- ✅ Ruby → Python
- ✅ Python → Ruby
- ✅ Ruby → Ruby (obviously)
- ✅ Python → Python (obviously)
- ✅ Relative paths: ../models/model.nexus
- ✅ Absolute paths: /home/user/models/model.nexus
- ✅ Windows paths: C:\Models\model.nexus
Python - Deep Network:
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer, BatchNormLayer, DropoutLayer
from lib.grnexus_activations import GELU, Swish, Mish, SELU, Linear
# Create a state-of-the-art deep network
model = NeuralNetwork(
loss='mse',
optimizer='adam',
learning_rate=0.001,
name='deep_network'
)
# Layer 1: GELU activation (used in GPT, BERT)
model.add(DenseLayer(
units=128,
input_dim=20,
activation=GELU()
))
model.add(BatchNormLayer())
# Layer 2: Swish activation (Google's discovery)
model.add(DenseLayer(
units=96,
input_dim=128,
activation=Swish()
))
model.add(DropoutLayer(rate=0.2))
# Layer 3: Mish activation (state-of-the-art)
model.add(DenseLayer(
units=64,
input_dim=96,
activation=Mish()
))
model.add(BatchNormLayer())
# Layer 4: SELU (self-normalizing)
model.add(DenseLayer(
units=32,
input_dim=64,
activation=SELU()
))
# Output layer
model.add(DenseLayer(
units=5,
input_dim=32,
activation=Linear()
))
# View architecture
model.summary()
# Train
history = model.train(x_train, y_train, epochs=50, batch_size=32, verbose=True)
# Save
model.save('models/deep_network.nexus')
Ruby - Same Deep Network:
require_relative 'ruby/grnexus'
# Create a state-of-the-art deep network
model = GRNexus::NeuralNetwork.new(
loss: 'mse',
optimizer: 'adam',
learning_rate: 0.001,
name: 'deep_network'
)
# Layer 1: GELU activation (used in GPT, BERT)
model.add(GRNEXUSLayer::DenseLayer.new(
units: 128,
input_dim: 20,
activation: GRNEXUSActivations::GELU.new
))
model.add(GRNEXUSLayer::BatchNormLayer.new)
# Layer 2: Swish activation (Google's discovery)
model.add(GRNEXUSLayer::DenseLayer.new(
units: 96,
input_dim: 128,
activation: GRNEXUSActivations::Swish.new
))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.2))
# Layer 3: Mish activation (state-of-the-art)
model.add(GRNEXUSLayer::DenseLayer.new(
units: 64,
input_dim: 96,
activation: GRNEXUSActivations::Mish.new
))
model.add(GRNEXUSLayer::BatchNormLayer.new)
# Layer 4: SELU (self-normalizing)
model.add(GRNEXUSLayer::DenseLayer.new(
units: 32,
input_dim: 64,
activation: GRNEXUSActivations::SELU.new
))
# Output layer
model.add(GRNEXUSLayer::DenseLayer.new(
units: 5,
input_dim: 32,
activation: GRNEXUSActivations::Linear.new
))
# View architecture
model.summary
# ================================================================================
# Model: deep_network
# ================================================================================
# Output Shape Param #
# --------------------------------------------------------------------------------
# DenseLayer (GELU) (1) (None, 128) 2688
# BatchNormLayer (2) (None, 128) 2
# DenseLayer (Swish) (3) (None, 96) 12384
# DropoutLayer (4) (None, 96) 0
# DenseLayer (Mish) (5) (None, 64) 6208
# BatchNormLayer (6) (None, 64) 2
# DenseLayer (SELU) (7) (None, 32) 2080
# DenseLayer (Linear) (8) (None, 5) 165
# ================================================================================
# Total params: 23,529
# Trainable params: 23,529
# Non-trainable params: 0
# ================================================================================
# Train
history = model.train(x_train, y_train, epochs: 50, batch_size: 32, verbose: true)
# Save
model.save('models/deep_network.nexus')
Python - Time Series Forecasting:
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer
from lib.grnexus_activations import Tanh, ReLU
from lib.grnexus_numeric_proccessing import MovingAverage, ZScoreNormalize
import math
import random
# Generate time series
time_series = [math.sin(i * 0.05) * 10 + random.random() * 2 for i in range(200)]
# Preprocess
ma = MovingAverage(window_size=5)
smoothed = ma.process(time_series)
zscore = ZScoreNormalize()
normalized = zscore.process(smoothed)
# Create sliding windows
window_size = 10
x_train = []
y_train = []
for i in range(len(normalized) - window_size - 1):
    x_train.append(normalized[i:i+window_size])
    y_train.append([normalized[i + window_size]])
# Build model
ts_model = NeuralNetwork(loss='mse', learning_rate=0.01)
ts_model.add(DenseLayer(64, window_size, activation=Tanh()))
ts_model.add(DenseLayer(32, 64, activation=ReLU()))
ts_model.add(DenseLayer(1, 32))
ts_model.train(x_train, y_train, epochs=50, batch_size=16)
ts_model.save('time_series_model.nexus')
# Make predictions
future_window = normalized[-window_size:]
prediction = ts_model.predict([future_window])[0]
print(f"Next value prediction: {prediction[0]:.4f}")
Ruby - Same Time Series Forecasting:
require_relative 'ruby/grnexus'
# Generate time series
time_series = (0..199).map { |i| Math.sin(i * 0.05) * 10 + rand * 2 }
# Preprocess
ma = GRNEXUSNumericProcessing::MovingAverage.new(window_size: 5)
smoothed = ma.process(time_series)
zscore = GRNEXUSNumericProcessing::ZScoreNormalize.new
normalized = zscore.process(smoothed)
# Create sliding windows
window_size = 10
x_train = []
y_train = []
(0...(normalized.length - window_size - 1)).each do |i|
x_train << normalized[i, window_size]
y_train << [normalized[i + window_size]]
end
# Build model
ts_model = GRNexus::NeuralNetwork.new(loss: 'mse', learning_rate: 0.01)
ts_model.add(GRNEXUSLayer::DenseLayer.new(units: 64, input_dim: window_size, activation: GRNEXUSActivations::Tanh.new))
ts_model.add(GRNEXUSLayer::DenseLayer.new(units: 32, input_dim: 64, activation: GRNEXUSActivations::ReLU.new))
ts_model.add(GRNEXUSLayer::DenseLayer.new(units: 1, input_dim: 32))
ts_model.train(x_train, y_train, epochs: 50, batch_size: 16)
ts_model.save('time_series_model.nexus')
# Make predictions
future_window = normalized[-window_size..-1]
prediction = ts_model.predict([future_window])[0]
puts "Next value prediction: #{prediction[0].round(4)}"
Python - Image Classification with CNN:
from grnexus import NeuralNetwork
from lib.grnexus_layers import *
from lib.grnexus_activations import ReLU
from lib.grnexus_normalization import Softmax
# Build CNN for MNIST-like image classification (28x28 grayscale)
model = NeuralNetwork(
loss='cross_entropy',
optimizer='adam',
learning_rate=0.001,
name='mnist_classifier'
)
# First convolutional block
model.add(Conv2DLayer(
filters=32,
kernel_size=3,
input_shape=(28, 28, 1), # 28x28 grayscale images
activation=ReLU(),
padding='same'
))
model.add(MaxPoolingLayer(pool_size=2, stride=2)) # Output: 14x14x32
# Second convolutional block
model.add(Conv2DLayer(
filters=64,
kernel_size=3,
activation=ReLU(),
padding='same'
))
model.add(MaxPoolingLayer(pool_size=2, stride=2)) # Output: 7x7x64
# Third convolutional block (optional, for deeper networks)
model.add(Conv2DLayer(
filters=128,
kernel_size=3,
activation=ReLU(),
padding='same'
))
# Flatten and dense layers
model.add(FlattenLayer()) # Flatten to 1D: 7x7x128 = 6272
model.add(DenseLayer(
units=256,
input_dim=6272,
activation=ReLU()
))
model.add(DropoutLayer(rate=0.5)) # Regularization
model.add(DenseLayer(
units=10,
input_dim=256,
activation=Softmax() # 10 classes (digits 0-9)
))
# View architecture
model.summary()
# Prepare data (example with random data)
import random
x_train = [[[random.random() for _ in range(28)] for _ in range(28)] for _ in range(1000)]
y_train = []
for _ in range(1000):
    label = random.randint(0, 9)
    y_train.append([1.0 if i == label else 0.0 for i in range(10)])
# Train
history = model.train(
x_train, y_train,
epochs=20,
batch_size=32,
verbose=True
)
# Save model
model.save('models/mnist_cnn.nexus')
# Evaluate
x_test = [[[random.random() for _ in range(28)] for _ in range(28)] for _ in range(200)]
y_test = []
for _ in range(200):
    label = random.randint(0, 9)
    y_test.append([1.0 if i == label else 0.0 for i in range(10)])
loss, accuracy = model.evaluate(x_test, y_test)
print(f"Test Accuracy: {accuracy:.2f}%")
# Predict single image
single_image = [[random.random() for _ in range(28)] for _ in range(28)]
prediction = model.predict([single_image])[0]
predicted_digit = prediction.index(max(prediction))
print(f"Predicted digit: {predicted_digit} (confidence: {max(prediction)*100:.2f}%)")
Ruby - Same CNN Architecture:
require_relative 'ruby/grnexus'
# Build CNN for MNIST-like image classification (28x28 grayscale)
model = GRNexus::NeuralNetwork.new(
loss: 'cross_entropy',
optimizer: 'adam',
learning_rate: 0.001,
name: 'mnist_classifier'
)
# First convolutional block
model.add(GRNEXUSLayer::Conv2DLayer.new(
filters: 32,
kernel_size: 3,
input_shape: [28, 28, 1], # 28x28 grayscale images
activation: GRNEXUSActivations::ReLU.new,
padding: 'same'
))
model.add(GRNEXUSLayer::MaxPoolingLayer.new(pool_size: 2, stride: 2)) # Output: 14x14x32
# Second convolutional block
model.add(GRNEXUSLayer::Conv2DLayer.new(
filters: 64,
kernel_size: 3,
activation: GRNEXUSActivations::ReLU.new,
padding: 'same'
))
model.add(GRNEXUSLayer::MaxPoolingLayer.new(pool_size: 2, stride: 2)) # Output: 7x7x64
# Third convolutional block (optional, for deeper networks)
model.add(GRNEXUSLayer::Conv2DLayer.new(
filters: 128,
kernel_size: 3,
activation: GRNEXUSActivations::ReLU.new,
padding: 'same'
))
# Flatten and dense layers
model.add(GRNEXUSLayer::FlattenLayer.new) # Flatten to 1D: 7x7x128 = 6272
model.add(GRNEXUSLayer::DenseLayer.new(
units: 256,
input_dim: 6272,
activation: GRNEXUSActivations::ReLU.new
))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.5)) # Regularization
model.add(GRNEXUSLayer::DenseLayer.new(
units: 10,
input_dim: 256,
activation: GRNEXUSNormalization::Softmax.new # 10 classes (digits 0-9)
))
# View architecture
model.summary
# Prepare data (example with random data)
x_train = Array.new(1000) { Array.new(28) { Array.new(28) { rand } } }
y_train = Array.new(1000) do
label = rand(10)
Array.new(10) { |i| i == label ? 1.0 : 0.0 }
end
# Train
history = model.train(
x_train, y_train,
epochs: 20,
batch_size: 32,
verbose: true
)
# Save model (compatible with Python!)
model.save('models/mnist_cnn.nexus')
# Evaluate
x_test = Array.new(200) { Array.new(28) { Array.new(28) { rand } } }
y_test = Array.new(200) do
label = rand(10)
Array.new(10) { |i| i == label ? 1.0 : 0.0 }
end
loss, accuracy = model.evaluate(x_test, y_test)
puts "Test Accuracy: #{accuracy.round(2)}%"
# Predict single image
single_image = Array.new(28) { Array.new(28) { rand } }
prediction = model.predict([single_image])[0]
predicted_digit = prediction.index(prediction.max)
confidence = prediction.max * 100
puts "Predicted digit: #{predicted_digit} (confidence: #{confidence.round(2)}%)"
RGB Image Classification (Color Images):
# For RGB images (e.g., 32x32x3 CIFAR-10 style)
model = NeuralNetwork(loss='cross_entropy', learning_rate=0.001)
# Input: 32x32x3 (RGB)
model.add(Conv2DLayer(filters=32, kernel_size=3, input_shape=(32, 32, 3), activation=ReLU()))
model.add(Conv2DLayer(filters=32, kernel_size=3, activation=ReLU()))
model.add(MaxPoolingLayer(pool_size=2))
model.add(DropoutLayer(rate=0.25))
model.add(Conv2DLayer(filters=64, kernel_size=3, activation=ReLU()))
model.add(Conv2DLayer(filters=64, kernel_size=3, activation=ReLU()))
model.add(MaxPoolingLayer(pool_size=2))
model.add(DropoutLayer(rate=0.25))
model.add(FlattenLayer())
model.add(DenseLayer(512, activation=ReLU()))
model.add(DropoutLayer(rate=0.5))
model.add(DenseLayer(10, activation=Softmax()))
# Train on RGB images
model.train(rgb_images, labels, epochs=50, batch_size=64)
Ruby - RGB Image Classification:
# For RGB images (e.g., 32x32x3 CIFAR-10 style)
model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.001)
# Input: 32x32x3 (RGB)
model.add(GRNEXUSLayer::Conv2DLayer.new(filters: 32, kernel_size: 3, input_shape: [32, 32, 3], activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::Conv2DLayer.new(filters: 32, kernel_size: 3, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::MaxPoolingLayer.new(pool_size: 2))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.25))
model.add(GRNEXUSLayer::Conv2DLayer.new(filters: 64, kernel_size: 3, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::Conv2DLayer.new(filters: 64, kernel_size: 3, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::MaxPoolingLayer.new(pool_size: 2))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.25))
model.add(GRNEXUSLayer::FlattenLayer.new)
model.add(GRNEXUSLayer::DenseLayer.new(units: 512, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.5))
model.add(GRNEXUSLayer::DenseLayer.new(units: 10, activation: GRNEXUSNormalization::Softmax.new))
# Train on RGB images
model.train(rgb_images, labels, epochs: 50, batch_size: 64)
Python - LSTM for Sequence Prediction:
from grnexus import NeuralNetwork
from lib.grnexus_layers import LSTMLayer, DenseLayer
from lib.grnexus_activations import Tanh, Sigmoid
from lib.grnexus_normalization import Softmax
# Build LSTM for sequence classification
model = NeuralNetwork(
loss='cross_entropy',
optimizer='adam',
learning_rate=0.001,
name='lstm_classifier'
)
# LSTM layers
model.add(LSTMLayer(
units=128,
return_sequences=True # Return full sequence
))
model.add(LSTMLayer(
units=64,
return_sequences=False # Return only last output
))
# Dense layers
model.add(DenseLayer(32, 64, activation=Tanh()))
model.add(DenseLayer(10, 32, activation=Softmax()))
# Train on sequences
# x_sequences shape: (samples, timesteps, features)
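# Illustrative input construction (an assumption for this sketch, not data the
# framework provides): x_sequences is a nested list of shape
# (samples, timesteps, features) and y_labels are one-hot vectors matching the
# 10-unit Softmax output above.
import random
samples, timesteps, features = 100, 15, 8
x_sequences = [[[random.random() for _ in range(features)]
                for _ in range(timesteps)]
               for _ in range(samples)]
y_labels = []
for _ in range(samples):
    label = random.randint(0, 9)
    y_labels.append([1.0 if i == label else 0.0 for i in range(10)])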
model.train(x_sequences, y_labels, epochs=30, batch_size=32)
model.save('lstm_model.nexus')
Ruby - LSTM for Sequence Prediction:
require_relative 'ruby/grnexus'
# Build LSTM for sequence classification
model = GRNexus::NeuralNetwork.new(
loss: 'cross_entropy',
optimizer: 'adam',
learning_rate: 0.001,
name: 'lstm_classifier'
)
# LSTM layers
model.add(GRNEXUSLayer::LSTMLayer.new(
units: 128,
return_sequences: true # Return full sequence
))
model.add(GRNEXUSLayer::LSTMLayer.new(
units: 64,
return_sequences: false # Return only last output
))
# Dense layers
model.add(GRNEXUSLayer::DenseLayer.new(units: 32, input_dim: 64, activation: GRNEXUSActivations::Tanh.new))
model.add(GRNEXUSLayer::DenseLayer.new(units: 10, input_dim: 32, activation: GRNEXUSNormalization::Softmax.new))
# Train on sequences
# x_sequences shape: (samples, timesteps, features)
model.train(x_sequences, y_labels, epochs: 30, batch_size: 32)
model.save('lstm_model.nexus')
GRU Alternative (Faster than LSTM):
# Python - GRU is faster and often performs similarly to LSTM
model = NeuralNetwork(loss='cross_entropy', learning_rate=0.001)
model.add(GRULayer(units=128, return_sequences=True))
model.add(GRULayer(units=64, return_sequences=False))
model.add(DenseLayer(10, 64, activation=Softmax()))
# Ruby - GRU is faster and often performs similarly to LSTM
model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.001)
model.add(GRNEXUSLayer::GRULayer.new(units: 128, return_sequences: true))
model.add(GRNEXUSLayer::GRULayer.new(units: 64, return_sequences: false))
model.add(GRNEXUSLayer::DenseLayer.new(units: 10, input_dim: 64, activation: GRNEXUSNormalization::Softmax.new))
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer, BatchNormLayer, DropoutLayer
from lib.grnexus_activations import ReLU
from lib.grnexus_normalization import Softmax
from lib.grnexus_callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
# Build model
model = NeuralNetwork(loss='cross_entropy', learning_rate=0.1, name='smart_model')
model.add(DenseLayer(64, 15, activation=ReLU()))
model.add(BatchNormLayer())
model.add(DropoutLayer(rate=0.3))
model.add(DenseLayer(32, 64, activation=ReLU()))
model.add(DenseLayer(4, 32, activation=Softmax()))
# Configure intelligent callbacks
callbacks = [
# Stop training if validation loss doesn't improve for 5 epochs
EarlyStopping(
monitor='val_loss',
patience=5,
verbose=True,
restore_best_weights=True
),
# Reduce learning rate when validation loss plateaus
ReduceLROnPlateau(
monitor='val_loss',
factor=0.5,
patience=3,
min_lr=0.0001,
verbose=True
),
# Save best model automatically
ModelCheckpoint(
filepath='models/best_model.nexus',
monitor='val_loss',
save_best_only=True,
verbose=True
)
]
# Train with intelligence
history = model.train(
x_train, y_train,
epochs=100,
batch_size=32,
validation_data=(x_val, y_val),
callbacks=callbacks,
verbose=True
)
# Output:
# Epoch 1/100 - Loss: 1.3862 - Accuracy: 25.00% - Val Loss: 1.3521 - Val Accuracy: 30.00%
# Epoch 1: val_loss improved to 1.3521, saving model to models/best_model.nexus
# ...
# Epoch 8: Reducing learning rate from 0.1 to 0.05
# ...
# Epoch 15: Reducing learning rate from 0.05 to 0.025
# ...
# Early stopping triggered at epoch 22
# Restoring best weights from epoch 17
print(f"Best validation loss: {min(history['val_loss'])}")
print(f"Training stopped at epoch: {len(history['loss'])}")
GRNexus now includes 5 classical ML algorithms with the same cross-language compatibility:
- K-Nearest Neighbors (KNN) - Classification based on proximity
- K-Means Clustering - Unsupervised grouping
- Linear Regression - Continuous value prediction
- Logistic Regression - Binary/multi-class classification
- Gaussian Naive Bayes - Probabilistic classification
Key Features:
- ✅ Native C implementation (fast!)
- ✅ Save/Load with the .lnexus format (distinct from the neural networks' .nexus format)
- ✅ Ruby ↔ Python compatibility
- ✅ Model inspection with inspect() / __repr__()
- ✅ Production-ready
Python Example:
from grnexus import KNeighborsClassifier
# Training data
x_train = [
[1.0, 2.0], [2.0, 3.0], [3.0, 4.0], # Class 0
[8.0, 8.0], [9.0, 9.0], [10.0, 10.0] # Class 1
]
y_train = [0, 0, 0, 1, 1, 1]
# Create and train KNN
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(x_train, y_train)
# Predict
test_points = [[2.5, 3.5], [9.0, 8.5]]
predictions = knn.predict(test_points)
print(f"Predictions: {predictions}") # [0, 1]
# Get prediction probabilities
probabilities = knn.predict_proba(test_points)
print(f"Probabilities: {probabilities}")
# Save model
knn.save('knn_model.lnexus')
# Load model
knn_loaded = KNeighborsClassifier.load('knn_model.lnexus')
print(knn_loaded) # Model info
Ruby Example:
require_relative 'ruby/grnexus'
# Training data
x_train = [
[1.0, 2.0], [2.0, 3.0], [3.0, 4.0], # Class 0
[8.0, 8.0], [9.0, 9.0], [10.0, 10.0] # Class 1
]
y_train = [0, 0, 0, 1, 1, 1]
# Create and train KNN
knn = GRNEXUSMachineLearning::KNeighborsClassifier.new(n_neighbors: 3)
knn.fit(x_train, y_train)
# Predict
test_points = [[2.5, 3.5], [9.0, 8.5]]
predictions = knn.predict(test_points)
puts "Predictions: #{predictions.inspect}" # [0, 1]
# Get prediction probabilities
probabilities = knn.predict_proba(test_points)
puts "Probabilities: #{probabilities.inspect}"
# Save model
knn.save('knn_model.lnexus')
# Load model
knn_loaded = GRNEXUSMachineLearning::KNeighborsClassifier.load('knn_model.lnexus')
puts knn_loaded.inspect # Model info
Python Example:
from grnexus import KMeans
# Data points to cluster
data = [
[1.0, 2.0], [1.5, 1.8], [2.0, 2.5], # Cluster 0
[8.0, 8.0], [8.5, 8.2], [9.0, 9.0], # Cluster 1
[15.0, 15.0], [15.5, 14.8], [16.0, 15.5] # Cluster 2
]
# Create and fit K-Means
kmeans = KMeans(n_clusters=3, max_iters=100)
kmeans.fit(data)
# Predict cluster assignments
new_points = [[2.0, 2.0], [8.5, 8.5], [15.0, 15.0]]
clusters = kmeans.predict(new_points)
print(f"Cluster assignments: {clusters}") # [0, 1, 2]
# Get cluster centers
centers = kmeans.cluster_centers_
print(f"Cluster centers: {centers}")
# Save and load
kmeans.save('kmeans_model.lnexus')
kmeans_loaded = KMeans.load('kmeans_model.lnexus')
Ruby Example:
require_relative 'ruby/grnexus'
# Data points to cluster
data = [
[1.0, 2.0], [1.5, 1.8], [2.0, 2.5], # Cluster 0
[8.0, 8.0], [8.5, 8.2], [9.0, 9.0], # Cluster 1
[15.0, 15.0], [15.5, 14.8], [16.0, 15.5] # Cluster 2
]
# Create and fit K-Means
kmeans = GRNEXUSMachineLearning::KMeans.new(n_clusters: 3, max_iters: 100)
kmeans.fit(data)
# Predict cluster assignments
new_points = [[2.0, 2.0], [8.5, 8.5], [15.0, 15.0]]
clusters = kmeans.predict(new_points)
puts "Cluster assignments: #{clusters.inspect}" # [0, 1, 2]
# Get cluster centers
centers = kmeans.cluster_centers
puts "Cluster centers: #{centers.inspect}"
# Save and load
kmeans.save('kmeans_model.lnexus')
kmeans_loaded = GRNEXUSMachineLearning::KMeans.load('kmeans_model.lnexus')
Python Example:
from grnexus import LinearRegression
# Training data (house prices example)
x_train = [
[1200, 3], # [square_feet, bedrooms]
[1500, 3],
[1800, 4],
[2000, 4],
[2200, 5]
]
y_train = [200000, 250000, 300000, 350000, 400000] # prices
# Create and train
lr = LinearRegression()
lr.fit(x_train, y_train)
# Predict
new_houses = [[1600, 3], [2100, 4]]
predictions = lr.predict(new_houses)
print(f"Predicted prices: {predictions}")
# Get model coefficients
print(f"Coefficients: {lr.coef_}")
print(f"Intercept: {lr.intercept_}")
# Calculate R² score
r2 = lr.score(x_train, y_train)
print(f"R² score: {r2:.4f}")
# Save and load
lr.save('linear_regression.lnexus')
lr_loaded = LinearRegression.load('linear_regression.lnexus')
Ruby Example:
require_relative 'ruby/grnexus'
# Training data (house prices example)
x_train = [
[1200, 3], # [square_feet, bedrooms]
[1500, 3],
[1800, 4],
[2000, 4],
[2200, 5]
]
y_train = [200000, 250000, 300000, 350000, 400000] # prices
# Create and train
lr = GRNEXUSMachineLearning::LinearRegression.new
lr.fit(x_train, y_train)
# Predict
new_houses = [[1600, 3], [2100, 4]]
predictions = lr.predict(new_houses)
puts "Predicted prices: #{predictions.inspect}"
# Get model coefficients
puts "Coefficients: #{lr.coef.inspect}"
puts "Intercept: #{lr.intercept}"
# Calculate R² score
r2 = lr.score(x_train, y_train)
puts "R² score: #{r2.round(4)}"
# Save and load
lr.save('linear_regression.lnexus')
lr_loaded = GRNEXUSMachineLearning::LinearRegression.load('linear_regression.lnexus')
Python Example:
from grnexus import LogisticRegression
# Binary classification data
x_train = [
[1.0, 2.0], [2.0, 3.0], [3.0, 4.0], # Class 0
[8.0, 8.0], [9.0, 9.0], [10.0, 10.0] # Class 1
]
y_train = [0, 0, 0, 1, 1, 1]
# Create and train
logreg = LogisticRegression(learning_rate=0.1, max_iters=1000)
logreg.fit(x_train, y_train)
# Predict
test_points = [[2.5, 3.5], [9.0, 8.5]]
predictions = logreg.predict(test_points)
print(f"Predictions: {predictions}") # [0, 1]
# Get probabilities
probabilities = logreg.predict_proba(test_points)
print(f"Probabilities: {probabilities}")
# Save and load
logreg.save('logistic_regression.lnexus')
logreg_loaded = LogisticRegression.load('logistic_regression.lnexus')
Ruby Example:
require_relative 'ruby/grnexus'
# Binary classification data
x_train = [
[1.0, 2.0], [2.0, 3.0], [3.0, 4.0], # Class 0
[8.0, 8.0], [9.0, 9.0], [10.0, 10.0] # Class 1
]
y_train = [0, 0, 0, 1, 1, 1]
# Create and train
logreg = GRNEXUSMachineLearning::LogisticRegression.new(learning_rate: 0.1, max_iters: 1000)
logreg.fit(x_train, y_train)
# Predict
test_points = [[2.5, 3.5], [9.0, 8.5]]
predictions = logreg.predict(test_points)
puts "Predictions: #{predictions.inspect}" # [0, 1]
# Get probabilities
probabilities = logreg.predict_proba(test_points)
puts "Probabilities: #{probabilities.inspect}"
# Save and load
logreg.save('logistic_regression.lnexus')
logreg_loaded = GRNEXUSMachineLearning::LogisticRegression.load('logistic_regression.lnexus')
Python Example:
from grnexus import GaussianNB
# Training data
x_train = [
[1.0, 2.0], [1.5, 1.8], [2.0, 2.5], # Class 0
[8.0, 8.0], [8.5, 8.2], [9.0, 9.0] # Class 1
]
y_train = [0, 0, 0, 1, 1, 1]
# Create and train
gnb = GaussianNB()
gnb.fit(x_train, y_train)
# Predict
test_points = [[2.0, 2.0], [8.5, 8.5]]
predictions = gnb.predict(test_points)
print(f"Predictions: {predictions}") # [0, 1]
# Get probabilities
probabilities = gnb.predict_proba(test_points)
print(f"Probabilities: {probabilities}")
# Save and load
gnb.save('naive_bayes.lnexus')
gnb_loaded = GaussianNB.load('naive_bayes.lnexus')
Ruby Example:
require_relative 'ruby/grnexus'
# Training data
x_train = [
[1.0, 2.0], [1.5, 1.8], [2.0, 2.5], # Class 0
[8.0, 8.0], [8.5, 8.2], [9.0, 9.0] # Class 1
]
y_train = [0, 0, 0, 1, 1, 1]
# Create and train
gnb = GRNEXUSMachineLearning::GaussianNB.new
gnb.fit(x_train, y_train)
# Predict
test_points = [[2.0, 2.0], [8.5, 8.5]]
predictions = gnb.predict(test_points)
puts "Predictions: #{predictions.inspect}" # [0, 1]
# Get probabilities
probabilities = gnb.predict_proba(test_points)
puts "Probabilities: #{probabilities.inspect}"
# Save and load
gnb.save('naive_bayes.lnexus')
gnb_loaded = GRNEXUSMachineLearning::GaussianNB.load('naive_bayes.lnexus')
Just like neural networks, classical ML models are fully compatible across languages:
# Train in Python
from grnexus import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(x_train, y_train)
knn.save('shared_knn.lnexus')
# Load and use in Ruby
knn = GRNEXUSMachineLearning::KNeighborsClassifier.load('shared_knn.lnexus')
predictions = knn.predict(test_data)
puts "Predictions from Python model: #{predictions.inspect}"
Important Notes:
- Neural networks use the .nexus format
- Classical ML models use the .lnexus format
- Both formats are cross-language compatible
- Attempting to load the wrong format raises a clear error message (a small guard sketch follows below)
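A small guard sketch in plain Python, written here for illustration (the exact exception GRNexus raises on a format mismatch is not shown in this README, and KNeighborsClassifier simply stands in for whichever classical model class the file was saved with):
from grnexus import NeuralNetwork, KNeighborsClassifier

def load_any(path):
    # Route by extension so each loader only ever sees its own format
    if path.endswith('.lnexus'):
        return KNeighborsClassifier.load(path)
    if path.endswith('.nexus'):
        return NeuralNetwork.load(path)
    raise ValueError(f"Not a GRNexus model file: {path}")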
One of GRNexus's unique features: inspect models without loading them into memory.
# Inspect any .nexus model file
GRNexus::NeuralNetwork.inspect_model('models/production_model.nexus')
Output:
================================================================================
MODEL INSPECTION: models/production_model.nexus
================================================================================
Framework: GRNexus
Version: 2.0
Language: Python
Name: sentiment_analyzer
Created: 2025-11-24T15:30:45
Loss Function: cross_entropy
Optimizer: adam
Learning Rate: 0.001
Metadata:
Total Parameters: 11,847
Trainable Parameters: 11,847
Layers Count: 9
Architecture:
--------------------------------------------------------------------------------
Layer 1: DenseLayer
Units: 128
Activation: GELU
Trainable: true
Layer 2: BatchNormLayer
Trainable: true
Layer 3: DropoutLayer
Trainable: false
Layer 4: DenseLayer
Units: 64
Activation: Swish
Trainable: true
Layer 5: BatchNormLayer
Trainable: true
Layer 6: DenseLayer
Units: 32
Activation: Mish
Trainable: true
Layer 7: DenseLayer
Units: 16
Activation: ReLU
Trainable: true
Layer 8: DropoutLayer
Trainable: false
Layer 9: DenseLayer
Units: 2
Activation: Softmax
Trainable: true
Training History:
Epochs trained: 50
Final loss: 0.1234
Final accuracy: 95.67%
================================================================================
Use cases:
- 🔍 Quick model analysis without loading
- 📊 Compare multiple models (see the sketch after this list)
- 🐛 Debug architecture issues
- 📝 Generate model documentation
- 🔄 Verify cross-language compatibility
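A quick way to put the "compare multiple models" use case into practice with the same inspect_model call shown above (Python shown here; the Ruby call follows the same pattern, and the file names are placeholders):
from grnexus import NeuralNetwork

for path in ['models/sentiment_analyzer.nexus', 'models/deep_network.nexus']:
    # Prints each model's report without loading it as a live model
    NeuralNetwork.inspect_model(path)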
GRNexus comes with 6 complete test suites covering every feature:
# Windows
windows_run.bat
# macOS
chmod +x mac.sh && ./mac.sh
# Linux
chmod +x linux.sh && ./linux.sh
| Test Suite | Command | What It Tests |
|---|---|---|
| Ruby Advanced | `ruby ruby/test/test_advanced_complete.rb` | Text generation, sentiment analysis, deep networks, callbacks |
| Ruby Architectures | `ruby ruby/test/test_complex_architectures.rb` | Complex architectures, all activations, numeric ops |
| Ruby ← Python | `ruby ruby/test/test_load_python_models.rb` | Loading Python models in Ruby, cross-language compatibility |
| Python Advanced | `python python/test/test_advanced_complete.py` | Text generation, sentiment analysis, deep networks, callbacks |
| Python Architectures | `python python/test/test_complex_architectures.py` | Complex architectures, all activations, numeric ops |
| Python ← Ruby | `python python/test/test_load_ruby_models.py` | Loading Ruby models in Python, cross-language compatibility |
✅ Text Processing (NLP)
├─ Vocabulary creation
├─ Tokenization
├─ TF-IDF vectorization
├─ Text embeddings
└─ Document similarity
✅ Numeric Processing
├─ Statistical operations (mean, std, variance)
├─ Normalization (Z-score, MinMax)
├─ Time series (moving average, differences)
└─ Array operations (40+ functions)
✅ Neural Networks
├─ 35+ activation functions
├─ 12+ layer types
├─ Multiple loss functions
├─ Multiple optimizers
└─ Batch training
✅ Cross-Language
├─ Ruby → Python model loading
├─ Python → Ruby model loading
├─ Continue training across languages
└─ Model inspection
✅ Smart Training
├─ EarlyStopping callback
├─ ReduceLROnPlateau callback
├─ ModelCheckpoint callback
└─ Custom callbacks
✅ Model Management
├─ Save/Load models
├─ Model inspection
├─ Architecture summary
└─ Parameter counting
Python - Shared Layers with Multiple Outputs:
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer, DropoutLayer
from lib.grnexus_activations import ReLU
from lib.grnexus_normalization import Softmax
# Build a model with shared feature extraction
# Task 1: Sentiment classification (positive/negative)
# Task 2: Topic classification (tech/sports/politics)
model = NeuralNetwork(loss='cross_entropy', learning_rate=0.001)
# Shared layers (feature extraction)
model.add(DenseLayer(128, 100, activation=ReLU()))
model.add(DropoutLayer(rate=0.3))
model.add(DenseLayer(64, 128, activation=ReLU()))
# Task-specific output layers can be added separately
# For multi-task, train on combined loss
model.add(DenseLayer(5, 64, activation=Softmax())) # Combined output
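# One possible way to build the 5-dimensional labels for this simplified
# multi-task setup (an assumption for illustration, not a GRNexus API):
# concatenate a 2-class sentiment one-hot with a 3-class topic one-hot,
# matching the 5-unit combined output above.
def combined_label(sentiment_idx, topic_idx):
    sentiment = [1.0 if i == sentiment_idx else 0.0 for i in range(2)]
    topic = [1.0 if i == topic_idx else 0.0 for i in range(3)]
    return sentiment + topic
# e.g. positive (0) + tech (0) -> [1.0, 0.0, 1.0, 0.0, 0.0]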
model.train(x_train, y_train, epochs=50, batch_size=32)
Ruby - Shared Layers with Multiple Outputs:
require_relative 'ruby/grnexus'
# Build a model with shared feature extraction
# Task 1: Sentiment classification (positive/negative)
# Task 2: Topic classification (tech/sports/politics)
model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.001)
# Shared layers (feature extraction)
model.add(GRNEXUSLayer::DenseLayer.new(units: 128, input_dim: 100, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.3))
model.add(GRNEXUSLayer::DenseLayer.new(units: 64, input_dim: 128, activation: GRNEXUSActivations::ReLU.new))
# Task-specific output layers can be added separately
# For multi-task, train on combined loss
model.add(GRNEXUSLayer::DenseLayer.new(units: 5, input_dim: 64, activation: GRNEXUSNormalization::Softmax.new)) # Combined output
model.train(x_train, y_train, epochs: 50, batch_size: 32)
Python - Feature Extraction:
# Step 1: Train base model on large dataset
base_model = NeuralNetwork(loss='cross_entropy', learning_rate=0.001)
base_model.add(DenseLayer(256, 1000, activation=ReLU()))
base_model.add(DenseLayer(128, 256, activation=ReLU()))
base_model.add(DenseLayer(64, 128, activation=ReLU()))
base_model.add(DenseLayer(10, 64, activation=Softmax()))
base_model.train(large_dataset_x, large_dataset_y, epochs=100)
base_model.save('base_model.nexus')
# Step 2: Load and fine-tune on specific task
transfer_model = NeuralNetwork.load('base_model.nexus')
# Continue training with smaller learning rate
transfer_model.learning_rate = 0.0001
transfer_model.train(specific_task_x, specific_task_y, epochs=20)
Ruby - Transfer Learning:
# Step 1: Train base model on large dataset
base_model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.001)
base_model.add(GRNEXUSLayer::DenseLayer.new(units: 256, input_dim: 1000, activation: GRNEXUSActivations::ReLU.new))
base_model.add(GRNEXUSLayer::DenseLayer.new(units: 128, input_dim: 256, activation: GRNEXUSActivations::ReLU.new))
base_model.add(GRNEXUSLayer::DenseLayer.new(units: 64, input_dim: 128, activation: GRNEXUSActivations::ReLU.new))
base_model.add(GRNEXUSLayer::DenseLayer.new(units: 10, input_dim: 64, activation: GRNEXUSNormalization::Softmax.new))
base_model.train(large_dataset_x, large_dataset_y, epochs: 100)
base_model.save('base_model.nexus')
# Step 2: Load and fine-tune on specific task
transfer_model = GRNexus::NeuralNetwork.load('base_model.nexus')
# Continue training with smaller learning rate
transfer_model.learning_rate = 0.0001
transfer_model.train(specific_task_x, specific_task_y, epochs: 20)
Python - Model Ensemble:
# Train multiple models with different architectures
models = []
# Model 1: Deep network
model1 = NeuralNetwork(loss='cross_entropy', learning_rate=0.001)
model1.add(DenseLayer(128, 50, activation=ReLU()))
model1.add(DenseLayer(64, 128, activation=ReLU()))
model1.add(DenseLayer(10, 64, activation=Softmax()))
model1.train(x_train, y_train, epochs=50)
models.append(model1)
# Model 2: Wide network
model2 = NeuralNetwork(loss='cross_entropy', learning_rate=0.001)
model2.add(DenseLayer(256, 50, activation=ReLU()))
model2.add(DenseLayer(10, 256, activation=Softmax()))
model2.train(x_train, y_train, epochs=50)
models.append(model2)
# Model 3: Different activation
model3 = NeuralNetwork(loss='cross_entropy', learning_rate=0.001)
model3.add(DenseLayer(128, 50, activation=GELU()))
model3.add(DenseLayer(64, 128, activation=Swish()))
model3.add(DenseLayer(10, 64, activation=Softmax()))
model3.train(x_train, y_train, epochs=50)
models.append(model3)
# Ensemble prediction (voting)
def ensemble_predict(models, x):
    predictions = [model.predict(x) for model in models]
    # Average predictions
    ensemble_pred = [[sum(p[i][j] for p in predictions) / len(predictions)
                      for j in range(len(predictions[0][i]))]
                     for i in range(len(predictions[0]))]
    return ensemble_pred
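# Helper written here (not a GRNexus API): turn the averaged probabilities into
# voted class indices, mirroring prediction.index(max(prediction)) used earlier.
def ensemble_classes(ensemble_pred):
    return [row.index(max(row)) for row in ensemble_pred]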
# Use ensemble
test_predictions = ensemble_predict(models, x_test)
Ruby - Model Ensemble:
require_relative 'ruby/grnexus'
# Train multiple models with different architectures
models = []
# Model 1: Deep network
model1 = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.001)
model1.add(GRNEXUSLayer::DenseLayer.new(units: 128, input_dim: 50, activation: GRNEXUSActivations::ReLU.new))
model1.add(GRNEXUSLayer::DenseLayer.new(units: 64, input_dim: 128, activation: GRNEXUSActivations::ReLU.new))
model1.add(GRNEXUSLayer::DenseLayer.new(units: 10, input_dim: 64, activation: GRNEXUSNormalization::Softmax.new))
model1.train(x_train, y_train, epochs: 50)
models << model1
# Model 2: Wide network
model2 = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.001)
model2.add(GRNEXUSLayer::DenseLayer.new(units: 256, input_dim: 50, activation: GRNEXUSActivations::ReLU.new))
model2.add(GRNEXUSLayer::DenseLayer.new(units: 10, input_dim: 256, activation: GRNEXUSNormalization::Softmax.new))
model2.train(x_train, y_train, epochs: 50)
models << model2
# Model 3: Different activation
model3 = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.001)
model3.add(GRNEXUSLayer::DenseLayer.new(units: 128, input_dim: 50, activation: GRNEXUSActivations::GELU.new))
model3.add(GRNEXUSLayer::DenseLayer.new(units: 64, input_dim: 128, activation: GRNEXUSActivations::Swish.new))
model3.add(GRNEXUSLayer::DenseLayer.new(units: 10, input_dim: 64, activation: GRNEXUSNormalization::Softmax.new))
model3.train(x_train, y_train, epochs: 50)
models << model3
# Ensemble prediction (voting)
def ensemble_predict(models, x)
predictions = models.map { |model| model.predict(x) }
# Average predictions
ensemble_pred = []
predictions[0].length.times do |i|
sample_pred = []
predictions[0][i].length.times do |j|
avg = predictions.map { |p| p[i][j] }.sum / predictions.length.to_f
sample_pred << avg
end
ensemble_pred << sample_pred
end
ensemble_pred
end
# Use ensemble
test_predictions = ensemble_predict(models, x_test)
puts "Ensemble predictions: #{test_predictions.length} samples"
Python - Grid Search Pattern:
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer, DropoutLayer
from lib.grnexus_activations import ReLU
from lib.grnexus_normalization import Softmax
# Define hyperparameter grid
learning_rates = [0.001, 0.01, 0.1]
dropout_rates = [0.2, 0.3, 0.5]
hidden_units = [64, 128, 256]
best_accuracy = 0
best_params = {}
# Grid search
for lr in learning_rates:
    for dropout in dropout_rates:
        for units in hidden_units:
            print(f"Testing: lr={lr}, dropout={dropout}, units={units}")
            model = NeuralNetwork(loss='cross_entropy', learning_rate=lr)
            model.add(DenseLayer(units, 50, activation=ReLU()))
            model.add(DropoutLayer(rate=dropout))
            model.add(DenseLayer(10, units, activation=Softmax()))
            model.train(x_train, y_train, epochs=20, batch_size=32, verbose=False)
            loss, accuracy = model.evaluate(x_val, y_val)
            if accuracy > best_accuracy:
                best_accuracy = accuracy
                best_params = {'lr': lr, 'dropout': dropout, 'units': units}
                model.save('best_model.nexus')
print(f"Best params: {best_params}")
print(f"Best accuracy: {best_accuracy:.2f}%")
Ruby - Grid Search Pattern:
require_relative 'ruby/grnexus'
# Define hyperparameter grid
learning_rates = [0.001, 0.01, 0.1]
dropout_rates = [0.2, 0.3, 0.5]
hidden_units = [64, 128, 256]
best_accuracy = 0
best_params = {}
# Grid search
learning_rates.each do |lr|
dropout_rates.each do |dropout|
hidden_units.each do |units|
puts "Testing: lr=#{lr}, dropout=#{dropout}, units=#{units}"
model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: lr)
model.add(GRNEXUSLayer::DenseLayer.new(units: units, input_dim: 50, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: dropout))
model.add(GRNEXUSLayer::DenseLayer.new(units: 10, input_dim: units, activation: GRNEXUSNormalization::Softmax.new))
model.train(x_train, y_train, epochs: 20, batch_size: 32, verbose: false)
loss, accuracy = model.evaluate(x_val, y_val)
if accuracy > best_accuracy
best_accuracy = accuracy
best_params = {lr: lr, dropout: dropout, units: units}
model.save('best_model.nexus')
end
end
end
end
puts "Best params: #{best_params.inspect}"
puts "Best accuracy: #{best_accuracy.round(2)}%"
1. Data Preparation:
Python:
# Always normalize/standardize your data
from lib.grnexus_numeric_proccessing import ZScoreNormalize
normalizer = ZScoreNormalize()
x_train_normalized = [normalizer.process(sample) for sample in x_train]
Ruby:
# Always normalize/standardize your data
normalizer = GRNEXUSNumericProcessing::ZScoreNormalize.new
x_train_normalized = x_train.map { |sample| normalizer.process(sample) }
2. Train/Validation/Test Split:
Python:
# Split data properly
train_size = int(0.7 * len(data))
val_size = int(0.15 * len(data))
x_train = data[:train_size]
x_val = data[train_size:train_size+val_size]
x_test = data[train_size+val_size:]
Ruby:
# Split data properly
train_size = (0.7 * data.length).to_i
val_size = (0.15 * data.length).to_i
x_train = data[0...train_size]
x_val = data[train_size...(train_size + val_size)]
x_test = data[(train_size + val_size)..-1]
3. Use Callbacks:
Python:
from lib.grnexus_callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
callbacks = [
EarlyStopping(patience=10, restore_best_weights=True),
ReduceLROnPlateau(factor=0.5, patience=5),
ModelCheckpoint('best_model.nexus', save_best_only=True)
]
model.train(x_train, y_train, validation_data=(x_val, y_val), callbacks=callbacks)
Ruby:
early_stop = GRNEXUSCallbacks::EarlyStopping.new(patience: 10, restore_best_weights: true)
lr_reduce = GRNEXUSCallbacks::ReduceLROnPlateau.new(factor: 0.5, patience: 5)
checkpoint = GRNEXUSCallbacks::ModelCheckpoint.new(
filepath: 'best_model.nexus',
save_best_only: true
)
callbacks = [early_stop, lr_reduce, checkpoint]
model.train(x_train, y_train, validation_data: [x_val, y_val], callbacks: callbacks)
4. Regularization:
Python:
# Use dropout and batch normalization
model.add(DenseLayer(128, 64, activation=ReLU()))
model.add(BatchNormLayer())
model.add(DropoutLayer(rate=0.3))
Ruby:
# Use dropout and batch normalization
model.add(GRNEXUSLayer::DenseLayer.new(units: 128, input_dim: 64, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::BatchNormLayer.new)
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.3))
5. Monitor Training:
Python:
# Always use validation data and verbose mode during development
history = model.train(
x_train, y_train,
validation_data=(x_val, y_val),
epochs=100,
batch_size=32,
verbose=True
)
# Plot training history (if using matplotlib)
import matplotlib.pyplot as plt
plt.plot(history['loss'], label='Training Loss')
plt.plot(history['val_loss'], label='Validation Loss')
plt.legend()
plt.show()
Ruby:
# Always use validation data and verbose mode during development
history = model.train(
x_train, y_train,
validation_data: [x_val, y_val],
epochs: 100,
batch_size: 32,
verbose: true
)
# Access training history
puts "Final training loss: #{history['loss'].last.round(4)}"
puts "Final validation loss: #{history['val_loss'].last.round(4)}"
6. Save Checkpoints:
Python:
# Save models at different stages
model.save('model_epoch_10.nexus')
# Continue training
model.train(x_train, y_train, epochs=10)
model.save('model_epoch_20.nexus')
Ruby:
# Save models at different stages
model.save('model_epoch_10.nexus')
# Continue training
model.train(x_train, y_train, epochs: 10)
model.save('model_epoch_20.nexus')
7. Cross-Language Development:
# Python team: Train and save
model.train(x_train, y_train, epochs=50)
model.save('shared_model.nexus')
# Ruby team: Load and deploy
model = GRNexus::NeuralNetwork.load('shared_model.nexus')
predictions = model.predict(production_data)
# Ruby
model = GRNexus::NeuralNetwork.new(
loss: 'cross_entropy', # or 'mse'
optimizer: 'sgd', # or 'adam', 'rmsprop'
learning_rate: 0.01,
name: 'my_model'
)
model.add(layer) # Add layer
model.train(x, y, epochs:, batch_size:) # Train
model.predict(x) # Predict
model.evaluate(x_test, y_test) # Evaluate
model.save('model.nexus') # Save
model = GRNexus::NeuralNetwork.load('model.nexus') # Load
GRNexus::NeuralNetwork.inspect_model('model.nexus') # Inspect
model.summary # View architecture
# Python
model = NeuralNetwork(
loss='cross_entropy', # or 'mse'
optimizer='sgd', # or 'adam', 'rmsprop'
learning_rate=0.01,
name='my_model'
)
model.add(layer) # Add layer
model.train(x, y, epochs=, batch_size=) # Train
model.predict(x) # Predict
model.evaluate(x_test, y_test) # Evaluate
model.save('model.nexus') # Save
model = NeuralNetwork.load('model.nexus') # Load
NeuralNetwork.inspect_model('model.nexus') # Inspect
model.summary() # View architecture
# Ruby
vocab = GRNexusTextProcessing::Vocabulary.new(documents, max_vocab_size: 1000)
indices = vocab.normalize_text(text, max_length: 20)
text = vocab.denormalize_indices(indices)
vectorizer = GRNexusTextProcessing::TextVectorizer.new(vocab)
vector = vectorizer.vectorize(text)
embeddings = GRNexusTextProcessing::TextEmbeddings.new(vocab, embedding_dim: 100)
similar_indices, similarities = embeddings.find_similar(token_idx, top_k: 10)
# Python
vocab = Vocabulary(documents, max_vocab_size=1000)
indices = vocab.normalize_text(text, max_length=20)
text = vocab.denormalize_indices(indices)
vectorizer = TextVectorizer(vocab)
vector = vectorizer.vectorize(text)
embeddings = TextEmbeddings(vocab, embedding_dim=100)
similar_indices, similarities = embeddings.find_similar(token_idx, top_k=10)
# Ruby
# Statistical operations
mean = GRNEXUSNumericProcessing::MeanArray.new.process(data)
std = GRNEXUSNumericProcessing::StdArray.new.process(data)
# Normalization
zscore = GRNEXUSNumericProcessing::ZScoreNormalize.new
normalized = zscore.process(data)
minmax = GRNEXUSNumericProcessing::MinMaxNormalize.new(min_range: 0.0, max_range: 1.0)
normalized = minmax.process(data)
# Time series
ma = GRNEXUSNumericProcessing::MovingAverage.new(window_size: 5)
smoothed = ma.process(time_series)
diff = GRNEXUSNumericProcessing::FiniteDifference.new
differences = diff.process(data)
# Python
# Statistical operations
mean = MeanArray().process(data)
std = StdArray().process(data)
# Normalization
zscore = ZScoreNormalize()
normalized = zscore.process(data)
minmax = MinMaxNormalize(min_range=0.0, max_range=1.0)
normalized = minmax.process(data)
# Time series
ma = MovingAverage(window_size=5)
smoothed = ma.process(time_series)
diff = FiniteDifference()
differences = diff.process(data)
require_relative 'ruby/grnexus'
model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.01)
# 1. DenseLayer (Fully Connected)
model.add(GRNEXUSLayer::DenseLayer.new(
units: 128,
input_dim: 64,
activation: GRNEXUSActivations::ReLU.new
))
# 2. ActivationLayer (Standalone)
model.add(GRNEXUSLayer::ActivationLayer.new(
GRNEXUSActivations::Tanh.new
))
# 3. DropoutLayer (Regularization)
model.add(GRNEXUSLayer::DropoutLayer.new(
rate: 0.5 # Drop 50% of neurons during training
))
# 4. BatchNormLayer (Normalization)
model.add(GRNEXUSLayer::BatchNormLayer.new(
epsilon: 1e-5,
momentum: 0.1
))
# 5. Conv2DLayer (Convolutional)
model.add(GRNEXUSLayer::Conv2DLayer.new(
filters: 32,
kernel_size: 3,
stride: 1,
padding: 'same',
activation: GRNEXUSActivations::ReLU.new
))
# 6. MaxPoolingLayer (Downsampling)
model.add(GRNEXUSLayer::MaxPoolingLayer.new(
pool_size: 2,
stride: 2
))
# 7. LSTMLayer (Recurrent)
model.add(GRNEXUSLayer::LSTMLayer.new(
units: 64,
return_sequences: true
))
# 8. GRULayer (Recurrent)
model.add(GRNEXUSLayer::GRULayer.new(
units: 64,
return_sequences: false
))
# 9. EmbeddingLayer (Word Embeddings)
model.add(GRNEXUSLayer::EmbeddingLayer.new(
vocab_size: 10000,
embedding_dim: 128
))
# 10. FlattenLayer (Reshape to 1D)
model.add(GRNEXUSLayer::FlattenLayer.new)
# 11. ReshapeLayer (Custom Shape)
model.add(GRNEXUSLayer::ReshapeLayer.new(
target_shape: [28, 28, 1]
))
# 12. SoftmaxLayer (Probability Distribution)
model.add(GRNEXUSLayer::SoftmaxLayer.new)

from grnexus import NeuralNetwork
from lib.grnexus_layers import *
from lib.grnexus_activations import *
model = NeuralNetwork(loss='cross_entropy', learning_rate=0.01)
# 1. DenseLayer (Fully Connected)
model.add(DenseLayer(
units=128,
input_dim=64,
activation=ReLU()
))
# 2. ActivationLayer (Standalone)
model.add(ActivationLayer(Tanh()))
# 3. DropoutLayer (Regularization)
model.add(DropoutLayer(
rate=0.5 # Drop 50% of neurons during training
))
# 4. BatchNormLayer (Normalization)
model.add(BatchNormLayer(
epsilon=1e-5,
momentum=0.1
))
# 5. Conv2DLayer (Convolutional)
model.add(Conv2DLayer(
filters=32,
kernel_size=3,
stride=1,
padding='same',
activation=ReLU()
))
# 6. MaxPoolingLayer (Downsampling)
model.add(MaxPoolingLayer(
pool_size=2,
stride=2
))
# 7. LSTMLayer (Recurrent)
model.add(LSTMLayer(
units=64,
return_sequences=True
))
# 8. GRULayer (Recurrent)
model.add(GRULayer(
units=64,
return_sequences=False
))
# 9. EmbeddingLayer (Word Embeddings)
model.add(EmbeddingLayer(
vocab_size=10000,
embedding_dim=128
))
# 10. FlattenLayer (Reshape to 1D)
model.add(FlattenLayer())
# 11. ReshapeLayer (Custom Shape)
model.add(ReshapeLayer(
target_shape=(28, 28, 1)
))
# 12. SoftmaxLayer (Probability Distribution)
model.add(SoftmaxLayer())

require_relative 'ruby/grnexus'
# ============================================================================
# BASIC ACTIVATIONS
# ============================================================================
# Linear (Identity)
GRNEXUSActivations::Linear.new
# Step (Binary)
GRNEXUSActivations::Step.new
# Sigmoid (0 to 1)
GRNEXUSActivations::Sigmoid.new
# Tanh (-1 to 1)
GRNEXUSActivations::Tanh.new
# ReLU (Rectified Linear Unit)
GRNEXUSActivations::ReLU.new
# ============================================================================
# MODERN ACTIVATIONS (State-of-the-art)
# ============================================================================
# GELU (Gaussian Error Linear Unit) - Used in GPT, BERT
GRNEXUSActivations::GELU.new
# Swish (Self-Gated) - Google's discovery
GRNEXUSActivations::Swish.new
# Mish (Self-Regularized) - State-of-the-art
GRNEXUSActivations::Mish.new
# LiSHT (Linearly Scaled Hyperbolic Tangent)
GRNEXUSActivations::LiSHT.new
# SiLU (Sigmoid Linear Unit) - Same as Swish
GRNEXUSActivations::SiLU.new
# ============================================================================
# PARAMETRIC ACTIVATIONS
# ============================================================================
# LeakyReLU (Leaky Rectified Linear Unit)
GRNEXUSActivations::LeakyReLU.new(alpha: 0.01)
# PReLU (Parametric ReLU)
GRNEXUSActivations::PReLU.new(alpha: 0.25)
# ELU (Exponential Linear Unit)
GRNEXUSActivations::ELU.new(alpha: 1.0)
# SELU (Scaled Exponential Linear Unit) - Self-normalizing
GRNEXUSActivations::SELU.new
# CELU (Continuously Differentiable ELU)
GRNEXUSActivations::CELU.new(alpha: 1.0)
# ============================================================================
# SPECIALIZED ACTIVATIONS
# ============================================================================
# Maxout
GRNEXUSActivations::Maxout.new
# Minout
GRNEXUSActivations::Minout.new
# GLU (Gated Linear Unit)
GRNEXUSActivations::GLU.new
# ARelu (Adaptive ReLU)
GRNEXUSActivations::ARelu.new
# FReLU (Funnel ReLU)
GRNEXUSActivations::FReLU.new
# BReLU (Bounded ReLU)
GRNEXUSActivations::BReLU.new
# ============================================================================
# SHRINKAGE ACTIVATIONS
# ============================================================================
# HardShrink
GRNEXUSActivations::HardShrink.new(lambda: 0.5)
# SoftShrink
GRNEXUSActivations::SoftShrink.new(lambda: 0.5)
# TanhShrink
GRNEXUSActivations::TanhShrink.new
# ============================================================================
# SMOOTH ACTIVATIONS
# ============================================================================
# Softplus (Smooth ReLU)
GRNEXUSActivations::Softplus.new
# Softsign
GRNEXUSActivations::Softsign.new
# HardSigmoid
GRNEXUSActivations::HardSigmoid.new
# HardTanh
GRNEXUSActivations::HardTanh.new
# ============================================================================
# ADVANCED ACTIVATIONS
# ============================================================================
# Snake (Periodic)
GRNEXUSActivations::Snake.new(frequency: 1.0)
# SnakeBeta (Learnable Periodic)
GRNEXUSActivations::SnakeBeta.new(alpha: 1.0, beta: 1.0)
# ============================================================================
# VARIANT ACTIVATIONS
# ============================================================================
# ThresholdedReLU
GRNEXUSActivations::ThresholdedReLU.new(theta: 1.0)
# ReLU6 (Bounded ReLU)
GRNEXUSActivations::ReLU6.new
# HardSwish (Mobile-optimized)
GRNEXUSActivations::HardSwish.new
# ISRU (Inverse Square Root Unit)
GRNEXUSActivations::ISRU.new(alpha: 1.0)
# ISRLU (Inverse Square Root Linear Unit)
GRNEXUSActivations::ISRLU.new(alpha: 1.0)
# ============================================================================
# SQUARED ACTIVATIONS
# ============================================================================
# ReLUSquared
GRNEXUSActivations::ReLUSquared.new
# SquaredReLU
GRNEXUSActivations::SquaredReLU.new
# ============================================================================
# NORMALIZATION (Often used as output activations)
# ============================================================================
# Softmax (Probability distribution)
GRNEXUSNormalization::Softmax.new

from lib.grnexus_activations import *
from lib.grnexus_normalization import Softmax
# ============================================================================
# BASIC ACTIVATIONS
# ============================================================================
Linear() # Identity
Step() # Binary
Sigmoid() # 0 to 1
Tanh() # -1 to 1
ReLU() # Rectified Linear Unit
# ============================================================================
# MODERN ACTIVATIONS (State-of-the-art)
# ============================================================================
GELU() # Gaussian Error Linear Unit - Used in GPT, BERT
Swish() # Self-Gated - Google's discovery
Mish() # Self-Regularized - State-of-the-art
LiSHT() # Linearly Scaled Hyperbolic Tangent
SiLU() # Sigmoid Linear Unit - Same as Swish
# ============================================================================
# PARAMETRIC ACTIVATIONS
# ============================================================================
LeakyReLU(alpha=0.01) # Leaky Rectified Linear Unit
PReLU(alpha=0.25) # Parametric ReLU
ELU(alpha=1.0) # Exponential Linear Unit
SELU() # Scaled ELU - Self-normalizing
CELU(alpha=1.0) # Continuously Differentiable ELU
# ============================================================================
# SPECIALIZED ACTIVATIONS
# ============================================================================
Maxout() # Maximum of inputs
Minout() # Minimum of inputs
GLU() # Gated Linear Unit
ARelu() # Adaptive ReLU
FReLU() # Funnel ReLU
BReLU() # Bounded ReLU
# ============================================================================
# SHRINKAGE ACTIVATIONS
# ============================================================================
HardShrink(lambda_=0.5) # Hard shrinkage
SoftShrink(lambda_=0.5) # Soft shrinkage
TanhShrink() # Tanh shrinkage
# ============================================================================
# SMOOTH ACTIVATIONS
# ============================================================================
Softplus() # Smooth ReLU
Softsign() # Smooth sign
HardSigmoid() # Piecewise linear sigmoid
HardTanh() # Piecewise linear tanh
# ============================================================================
# ADVANCED ACTIVATIONS
# ============================================================================
Snake(frequency=1.0) # Periodic activation
SnakeBeta(alpha=1.0, beta=1.0) # Learnable periodic
# ============================================================================
# VARIANT ACTIVATIONS
# ============================================================================
ThresholdedReLU(theta=1.0) # ReLU with threshold
ReLU6() # Bounded ReLU (0 to 6)
HardSwish() # Mobile-optimized Swish
ISRU(alpha=1.0) # Inverse Square Root Unit
ISRLU(alpha=1.0) # Inverse Square Root Linear Unit
# ============================================================================
# SQUARED ACTIVATIONS
# ============================================================================
ReLUSquared() # ReLU then square
SquaredReLU() # Square then ReLU
# ============================================================================
# NORMALIZATION (Often used as output activations)
# ============================================================================
Softmax()                      # Probability distribution

| Activation | Range | Use Case | Pros | Cons |
|---|---|---|---|---|
| ReLU | [0, ∞) | General purpose | Fast, simple | Dead neurons |
| GELU | (-∞, ∞) | Transformers, NLP | State-of-the-art | Slower |
| Swish | (-∞, ∞) | Deep networks | Smooth, self-gated | Computationally expensive |
| Mish | (-∞, ∞) | Image classification | Best accuracy | Most expensive |
| Tanh | (-1, 1) | RNNs, small networks | Zero-centered | Vanishing gradient |
| Sigmoid | (0, 1) | Binary classification | Probabilistic | Vanishing gradient |
| LeakyReLU | (-∞, ∞) | Deep networks | No dead neurons | Needs tuning |
| SELU | (-∞, ∞) | Self-normalizing nets | Auto-normalization | Specific initialization |
| ELU | (-α, ∞) | Deep networks | Smooth, negative values | Slower than ReLU |
| Softmax | (0, 1) | Multi-class output | Probability distribution | Only for output layer |
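As a quick example, the guidance in this table might translate into a model like the sketch below. The layer sizes are illustrative, and the wildcard imports simply follow the pattern used elsewhere in this README; adjust everything to your own data.

```python
from grnexus import NeuralNetwork
from lib.grnexus_layers import *
from lib.grnexus_activations import *

# GELU in the hidden layers (the transformer-style pick from the table);
# LeakyReLU deeper in to avoid dead neurons; Softmax only at the output.
model = NeuralNetwork(loss='cross_entropy', learning_rate=0.01)
model.add(DenseLayer(units=128, input_dim=64, activation=GELU()))
model.add(DenseLayer(units=64, input_dim=128, activation=LeakyReLU(alpha=0.01)))
model.add(DenseLayer(units=10, input_dim=64, activation=Linear()))
model.add(SoftmaxLayer())
```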
| Layer | Description | Parameters | Use Case |
|---|---|---|---|
| DenseLayer | Fully connected with Xavier/He init | `units`, `input_dim`, `activation` | Standard networks |
| ActivationLayer | Standalone activation | `activation` | Flexible activation placement |
| DropoutLayer | Regularization (auto train/test mode) | `rate` | Prevent overfitting |
| BatchNormLayer | Batch normalization + running stats | `epsilon`, `momentum` | Stable training, faster convergence |
| Conv2DLayer | 2D convolution | `filters`, `kernel_size`, `stride` | Image processing, CNNs |
| MaxPoolingLayer | Spatial downsampling | `pool_size`, `stride` | Reduce spatial dimensions |
| LSTMLayer | Long Short-Term Memory | `units`, `return_sequences` | Sequence modeling, time series |
| GRULayer | Gated Recurrent Unit | `units`, `return_sequences` | Faster alternative to LSTM |
| SoftmaxLayer | Probability distribution | - | Multi-class classification |
| EmbeddingLayer | Word embeddings | `vocab_size`, `embedding_dim` | NLP, text processing |
| FlattenLayer | Reshape to 1D | - | CNN to Dense transition |
| ReshapeLayer | Arbitrary reshaping | `target_shape` | Flexible architecture design |
Example: Building a CNN:

from grnexus import NeuralNetwork
from lib.grnexus_layers import *
from lib.grnexus_activations import *
from lib.grnexus_normalization import Softmax

model = NeuralNetwork(loss='cross_entropy', learning_rate=0.001)
model.add(Conv2DLayer(filters=32, kernel_size=3, input_shape=(28, 28, 1)))
model.add(MaxPoolingLayer(pool_size=2))
model.add(Conv2DLayer(filters=64, kernel_size=3))
model.add(MaxPoolingLayer(pool_size=2))
model.add(FlattenLayer())
model.add(DenseLayer(128, activation=ReLU()))
model.add(DropoutLayer(rate=0.5))
model.add(DenseLayer(10, activation=Softmax()))

GRNexus's native C core delivers 10-100x speedup over pure Python/Ruby:
| Operation | Pure Python/Ruby | GRNexus (C) | Speedup | Notes |
|---|---|---|---|---|
| Activation (1M ops) | 850ms | 8ms | 106x ⚡ | GELU, Swish, Mish |
| Dense Forward Pass | 320ms | 12ms | 27x ⚡ | Matrix multiplication |
| Batch Normalization | 180ms | 6ms | 30x ⚡ | Running stats |
| Text Vectorization | 450ms | 15ms | 30x ⚡ | TF-IDF computation |
| Numeric Statistics | 120ms | 4ms | 30x ⚡ | Mean, std, variance |
| Dropout (training) | 95ms | 3ms | 32x ⚡ | Random masking |
| Model Save/Load | 250ms | 45ms | 5.5x ⚡ | Compression + serialization |
Real-world training comparison (a sketch of this setup follows the list below):
- Dataset: 10,000 samples, 50 features, 10 classes
- Architecture: 3 hidden layers (128, 64, 32 units)
- Epochs: 100
- Pure Python: ~45 minutes
- GRNexus: ~2.5 minutes (18x faster!)
Why so fast?
- ✅ Native C implementation for compute-intensive operations
- ✅ Optimized memory management
- ✅ Efficient matrix operations
- ✅ Zero Python/Ruby overhead in hot paths
- ✅ Compiled with -O3 optimization
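For context, here is a rough sketch of the benchmark model described above. This is not the original benchmark script: the synthetic data, batch size, and learning rate are placeholder assumptions.

```python
import random
from grnexus import NeuralNetwork
from lib.grnexus_layers import *
from lib.grnexus_activations import *

# Synthetic stand-in for the benchmark data: 10,000 samples, 50 features, 10 classes
x = [[random.random() for _ in range(50)] for _ in range(10_000)]
y = [[1.0 if j == i % 10 else 0.0 for j in range(10)] for i in range(10_000)]

# Three hidden layers (128, 64, 32 units), as in the comparison above
model = NeuralNetwork(loss='cross_entropy', learning_rate=0.01)
model.add(DenseLayer(units=128, input_dim=50, activation=ReLU()))
model.add(DenseLayer(units=64, input_dim=128, activation=ReLU()))
model.add(DenseLayer(units=32, input_dim=64, activation=ReLU()))
model.add(DenseLayer(units=10, input_dim=32, activation=Linear()))
model.add(SoftmaxLayer())

model.train(x, y, epochs=100, batch_size=32)
```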
- XOR Problem (`ruby/example_xor.rb`, `python/example_xor.py`)
  - Classic neural network introduction
  - Perfect for beginners
- Advanced Demos (`ruby/test/advanced test/`, `python/test/advanced test/`)
  - Digit Recognition (GTK3 interactive app) - Draw and recognize handwritten digits
  - Sentiment Analysis (3 variants) - Simple, Embeddings, and Sequence-based
  - 3D Image Classifier - RGB image processing with tensors
  - Complete production-ready examples
- Text Generation (`ruby/test/test_advanced_complete.rb`)
  - Next-word prediction
  - Vocabulary management
  - Sequence modeling
- Sentiment Analysis (`python/test/test_advanced_complete.py`)
  - Binary classification
  - Text vectorization
  - Real-world NLP
- Time Series Prediction (`ruby/test/test_load_python_models.rb`)
  - Sliding window approach
  - Numeric preprocessing
  - Forecasting (see the sliding-window sketch after this list)
- Deep Networks (all test files)
  - Modern activations (GELU, Swish, Mish)
  - Batch normalization
  - Dropout regularization
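The time-series example is built on a sliding window; here is a minimal Python sketch of that idea. The toy data, window size, and hyperparameters are illustrative assumptions, not the contents of the Ruby test file.

```python
import math
from grnexus import NeuralNetwork
from lib.grnexus_layers import *
from lib.grnexus_activations import *

series = [math.sin(i / 5.0) for i in range(200)]  # toy series

# Sliding window: the previous `window` points predict the next one
window = 8
x = [series[i:i + window] for i in range(len(series) - window)]
y = [[series[i + window]] for i in range(len(series) - window)]

model = NeuralNetwork(loss='mse', learning_rate=0.05)
model.add(DenseLayer(units=16, input_dim=window, activation=Tanh()))
model.add(DenseLayer(units=1, input_dim=16, activation=Linear()))
model.train(x, y, epochs=200, batch_size=16)

next_value = model.predict([series[-window:]])  # one-step forecast
```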
GRNexus/
├── README.md # You are here!
├── CAMBIOS_IMPLEMENTADOS.md # Changelog (Spanish)
├── docs/
│ ├── es/ # Spanish documentation
│ ├── fr/ # French documentation
│ └── pt/ # Portuguese documentation
├── ruby/
│ ├── grnexus.rb # Main Ruby API
│ ├── lib/ # Ruby modules
│ ├── example_xor.rb # Quick start example
│ └── test/ # Complete test suites
└── python/
├── grnexus.py # Main Python API
├── lib/ # Python modules
├── example_xor.py # Quick start example
└── test/ # Complete test suites
- GPU acceleration (CUDA support)
- Transformer layers (attention mechanism)
- Model quantization (INT8, FP16)
- ONNX export support
- Web deployment (WASM)
- Distributed training
- AutoML capabilities
- Model compression
- Mobile deployment (iOS, Android)
- Real-time inference API
| Feature | TensorFlow | PyTorch | GRNexus |
|---|---|---|---|
| Cross-Language | ❌ | ❌ | ✅ Ruby ↔ Python |
| Zero Dependencies | ❌ | ❌ | ✅ Pure + C |
| Model Inspection | ❌ | ❌ | ✅ Without loading |
| Learning Curve | Steep | Moderate | Gentle |
| File Size | ~500MB | ~800MB | <5MB |
| Setup Time | 10-30 min | 10-30 min | 30 seconds |
| Production Ready | ✅ | ✅ | ✅ |
| Performance | Excellent | Excellent | Very Good |
| Text Processing | External | External | ✅ Built-in |
| Numeric Ops | External | External | ✅ Built-in |
Perfect for:
- 🎓 Learning neural networks from scratch
- 🚀 Rapid prototyping
- 🔬 Research and experimentation
- 📱 Embedded systems (low memory)
- 🌐 Cross-language teams
- 🎯 Production deployments (small-medium scale)
Not ideal for:
- 🖼️ Large-scale image processing (use TensorFlow/PyTorch)
- 🎮 Real-time video processing
- 🌍 Distributed training across clusters
- 🔥 Cutting-edge research (transformers, diffusion models)
We welcome contributions! GRNexus is GPL-3.0 licensed and open source.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- 📝 Documentation improvements
- 🌍 Translations (more languages)
- 🧪 More test cases
- 🐛 Bug reports and fixes
- ⚡ Performance optimizations
- 🎨 Example projects
- 📊 Benchmarks
GNU General Public License v3.0
This means you can:
- ✅ Use commercially
- ✅ Modify
- ✅ Distribute
- ✅ Use privately
But you must:
- ⚠️ Disclose source
- ⚠️ License under GPL-3.0
- ⚠️ State changes
See LICENSE for full details.
GRNexus stands on the shoulders of giants:
- Inspiration: TensorFlow, PyTorch, Keras
- Activations: Research papers from Google, OpenAI, DeepMind
- Architecture: Modern deep learning best practices
- Community: Ruby and Python communities
- 🐛 Issues: GitHub Issues
- 💬 Discussions: GitHub Discussions
- 📧 Email: support@grcodedigitalsolutions.com
- 🌐 Website: grcodedigitalsolutions.com
If you find GRNexus useful, please consider giving it a star! ⭐
It helps others discover the project and motivates us to keep improving it.
git clone https://github.com/grcodedigitalsolutions/GRNexus.git
Made with ⚡ and ❤️ by GR Code Digital Solutions
Copyright © 2024-2025 GR Code Digital Solutions. Licensed under GPL-3.0.
Neural Networks • Cross-Language • Production Ready
