Note: For the Turkish version of this document, refer to README_TR.md.
- Project Summary
- Project File Structure
- Data Generation Functions
- Neural Network Layers
- Activation Functions
- Regularization Layers
- Dense Layer
- Loss Functions
- Neural Network Class
- Optimizers
- Learning Rate Scheduler
- Utilities
- Example Usage
NeuroFox is a neural network application that bundles various neural network components and optimization techniques. It demonstrates performance analysis on classification tasks (binary and multi-class) using various activation functions.
NeuroFox/
│
├── assets/
│   ├── linear_activation.png            # Graph of the linear activation function
│   ├── logo.png                         # Project logo
│   ├── relu_activation.png              # Graph of the ReLU activation function
│   ├── sigmoid_activation.png           # Graph of the sigmoid activation function
│   └── softmax_activation.jpg           # Graph of the softmax activation function
│
├── data/
│   ├── __init__.py                      # File defining the data module
│   └── data.py                          # File containing data generation functions
│
├── layers/
│   ├── __init__.py                      # File defining the layers module
│   ├── dense.py                         # File containing the Dense layer class
│   ├── dropout.py                       # File containing the Dropout regularization layer
│   ├── layer.py                         # File containing the base layer class
│   └── activations/                     # Activation functions
│       ├── __init__.py                  # File defining the activations module
│       ├── linear.py                    # File containing the linear activation function
│       ├── relu.py                      # File containing the ReLU activation function
│       ├── sigmoid.py                   # File containing the sigmoid activation function
│       └── softmax.py                   # File containing the softmax activation function
│
├── losses/
│   ├── __init__.py                      # File defining the losses module
│   ├── binary_crossentropy.py           # File containing the binary cross-entropy loss function
│   ├── binary_focal_loss.py             # File containing the binary focal loss function
│   └── categorical_crossentropy.py      # File containing the categorical cross-entropy loss function
│
├── neural_network/
│   ├── __init__.py                      # File defining the neural_network module
│   └── neural_network.py                # File defining the neural network structure
│
├── optimizers/
│   ├── __init__.py                      # File defining the optimizers module
│   ├── adagrad_optimizer.py             # File containing the Adagrad optimization algorithm
│   ├── adam_optimizer.py                # File containing the Adam optimization algorithm
│   ├── learning_rate_scheduler.py       # File containing the learning rate scheduler
│   ├── rmsprop_optimizer.py             # File containing the RMSprop optimization algorithm
│   └── sgd_optimizer.py                 # File containing the Stochastic Gradient Descent (SGD) optimization algorithm
│
├── utils/
│   ├── __init__.py                      # File defining the utils module
│   ├── binary_classification.py         # Tools for generating binary classification data
│   ├── model_utils.py                   # Various utility functions related to models
│   ├── one_hot_encoding.py              # One-hot encoding function
│   ├── standard_scaler.py               # Function for standardizing data
│   └── train_test_split.py              # Function for splitting data into training and testing sets
│
├── binary_classification_model.py       # Example of a binary classification model
├── iris_dataset_model.py                # Example model with the IRIS dataset
├── xor_model.py                         # Example model with the XOR dataset
└── README.md                            # General information about the project, installation, and usage instructions
- assets/: Visual files related to the project, including activation function graphs.
- data/: Functions for generating and testing training data.
- layers/: Neural network layers and activation functions.
- losses/: Loss functions and their implementations.
- neural_network/: Building blocks of the neural network model.
- optimizers/: Optimization algorithms and learning rate schedulers.
- utils/: Functions for data processing, model management, and other utilities.
- README.md: General project information, installation instructions, usage details, and examples.
Generates XOR data for binary classification tasks.
- Usage:
X, y = create_xor_data(1000)
- Parameters:
num_samples (int): Number of data points to generate.
- Returns:
X: Input features
y: Labels
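For intuition, here is a minimal NumPy sketch of how XOR-style data could be generated. The function name `make_xor` and its details are illustrative assumptions, not necessarily what data.py does.

```python
import numpy as np

def make_xor(num_samples=1000, seed=0):
    """Illustrative sketch: 2-D points labeled by the XOR of which half they fall in."""
    rng = np.random.default_rng(seed)
    X = rng.random((num_samples, 2))                     # uniform points in the unit square
    y = ((X[:, 0] > 0.5) ^ (X[:, 1] > 0.5)).astype(int)  # 1 when exactly one coordinate > 0.5
    return X, y.reshape(-1, 1)

X, y = make_xor(1000)
```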
Generates binary classification data with an option to add noise.
- Usage:
X, y = create_binary_classification_data(num_samples=1000, noise=0.1)
- Parameters:
num_samples (int): Number of data points to generate.
noise (float): Amount of random noise added to the data.
- Returns:
X: Input features
y: Labels
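A common way to produce such data is two Gaussian clusters whose spread grows with the noise argument. The sketch below is an assumption for illustration, not the library's exact implementation.

```python
import numpy as np

def make_binary_blobs(num_samples=1000, noise=0.1, seed=0):
    """Illustrative sketch: two Gaussian clusters, one per class."""
    rng = np.random.default_rng(seed)
    n = num_samples // 2
    class0 = rng.normal(loc=-1.0, scale=0.5 + noise, size=(n, 2))  # cluster around (-1, -1)
    class1 = rng.normal(loc=+1.0, scale=0.5 + noise, size=(n, 2))  # cluster around (+1, +1)
    X = np.vstack([class0, class1])
    y = np.concatenate([np.zeros(n), np.ones(n)]).reshape(-1, 1)
    return X, y
```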
Base class for all layers within the neural network.
Applies the Softmax activation function to the input.
Applies the Sigmoid activation function to the input.
Applies the ReLU activation function to the input.
Applies the linear activation function to the input (no change).
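For reference, the four activations compute the following element-wise operations; this is a NumPy sketch, while the classes under layers/activations/ presumably also implement the corresponding gradients.

```python
import numpy as np

def linear(x):
    return x                                  # identity: output equals input

def relu(x):
    return np.maximum(0, x)                   # zero out negative values

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))           # squash values into (0, 1)

def softmax(x):
    shifted = x - np.max(x, axis=-1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(shifted)
    return e / np.sum(e, axis=-1, keepdims=True)     # each row sums to 1
```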
Applies dropout regularization.
- Usage:
dropout_layer = Dropout(rate=0.5)
- Parameters:
rate (float): The proportion of input units to drop.
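Dropout is usually implemented in the "inverted" form, where kept activations are rescaled during training so no adjustment is needed at inference time. The class below is a sketch and may differ from dropout.py.

```python
import numpy as np

class DropoutSketch:
    """Inverted dropout: drop a fraction `rate` of units and rescale the rest."""
    def __init__(self, rate=0.5):
        self.rate = rate

    def forward(self, inputs, training=True):
        if not training or self.rate == 0.0:
            return inputs                                  # dropout is a no-op at inference
        keep_prob = 1.0 - self.rate
        self.mask = (np.random.rand(*inputs.shape) < keep_prob) / keep_prob
        return inputs * self.mask

    def backward(self, dvalues):
        return dvalues * self.mask                         # gradients flow only through kept units
```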
A fully connected layer in the neural network.
- Usage:
dense_layer = Dense(input_size=128, output_size=64)
- Parameters:
input_size (int): Number of input features.
output_size (int): Number of output features.
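Conceptually the layer stores a weight matrix and a bias vector and computes output = inputs @ W + b. The sketch below, including its backward pass, is illustrative rather than a copy of dense.py.

```python
import numpy as np

class DenseSketch:
    """Fully connected layer: output = inputs @ weights + biases."""
    def __init__(self, input_size, output_size):
        self.weights = 0.01 * np.random.randn(input_size, output_size)
        self.biases = np.zeros((1, output_size))

    def forward(self, inputs):
        self.inputs = inputs
        return inputs @ self.weights + self.biases

    def backward(self, dvalues):
        self.dweights = self.inputs.T @ dvalues            # gradient w.r.t. weights
        self.dbiases = dvalues.sum(axis=0, keepdims=True)  # gradient w.r.t. biases
        return dvalues @ self.weights.T                    # gradient passed to the previous layer
```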
Calculates binary cross-entropy loss.
- Formula:
$$L = -\frac{1}{N}\sum_{i=1}^{N} [y_i \log(\hat{y}_i) + (1-y_i) \log(1-\hat{y}_i)]$$
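In NumPy the formula translates roughly to the following; predictions are clipped to avoid log(0), which binary_crossentropy.py may handle differently.

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1 - eps)   # keep log() finite
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
```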
Calculates categorical cross-entropy loss.
- Formula:
$$L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{C} y_{ij} \log(\hat{y}_{ij})$$
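With one-hot labels and softmax outputs, a NumPy sketch of the averaged form above looks like this:

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    """y_true: one-hot labels, y_pred: softmax probabilities, both shaped (N, C)."""
    y_pred = np.clip(y_pred, eps, 1 - eps)              # keep log() finite
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))
```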
Calculates binary focal loss, often used to address class imbalance.
- Formula:
$$\text{FL}(p_t) = -\alpha_t (1 - p_t)^\gamma \log(p_t)$$
- Parameters:
gamma (float): Focusing parameter.
alpha (float): Balancing factor.
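Plugging the formula into NumPy for the binary case gives roughly the following; here p_t is the predicted probability of the true class, and the defaults are common choices rather than binary_focal_loss.py's exact ones.

```python
import numpy as np

def binary_focal_loss(y_true, y_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1 - eps)
    p_t = np.where(y_true == 1, y_pred, 1 - y_pred)      # probability assigned to the true class
    alpha_t = np.where(y_true == 1, alpha, 1 - alpha)    # class-balancing weight
    return -np.mean(alpha_t * (1 - p_t) ** gamma * np.log(p_t))
```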
The main class for defining and training neural networks.
- Usage:
nn = NeuralNetwork()
nn.add_layer(Dense(128, 64))
nn.add_layer(ActivationReLU())
nn.compile(loss=BinaryCrossentropy(), optimizer=AdamOptimizer(learning_rate=0.001))
nn.fit(X_train, y_train, epochs=10, batch_size=32)
The Adam optimization algorithm.
- Usage:
optimizer = AdamOptimizer(learning_rate=0.001)
- Parameters:
learning_rate (float): Learning rate for the optimizer.
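For reference, a single Adam parameter update looks like this; the sketch uses the usual default betas, and adam_optimizer.py's internal bookkeeping may differ.

```python
import numpy as np

def adam_step(param, grad, state, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; `state` carries the running moments and step count."""
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad        # first moment (mean of gradients)
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2   # second moment (uncentered variance)
    m_hat = state["m"] / (1 - beta1 ** state["t"])              # bias correction
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return param - lr * m_hat / (np.sqrt(v_hat) + eps)

state = {"t": 0, "m": 0.0, "v": 0.0}                            # one state dict per parameter
```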
Stochastic Gradient Descent optimizer.
- Usage:
optimizer = SGDOptimizer(learning_rate=0.01)
- Parameters:
learning_rate (float): Learning rate for the optimizer.
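The plain update rule is a one-liner; sgd_optimizer.py may support extras such as momentum, which this sketch omits.

```python
def sgd_step(param, grad, lr=0.01):
    """Vanilla SGD: move against the gradient, scaled by the learning rate."""
    return param - lr * grad
```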
Adjusts the learning rate during training.
- Usage:
scheduler = LearningRateScheduler(schedule=lambda epoch: 0.001 * 0.95 ** epoch)
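With the example schedule above, the learning rate decays by 5% per epoch:

```python
schedule = lambda epoch: 0.001 * 0.95 ** epoch
print(schedule(0), schedule(1), schedule(2))   # roughly 0.001, 0.00095, 0.0009025
```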
Splits data into training and testing sets.
- Usage:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
- Parameters:
X (array): Features.
y (array): Labels.
test_size (float): Proportion of the dataset to include in the test split.
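A typical implementation shuffles the sample indices and reserves the tail as the test set; the sketch below is illustrative, not necessarily train_test_split.py verbatim.

```python
import numpy as np

def split_sketch(X, y, test_size=0.2, seed=0):
    """Shuffle indices, then reserve the last `test_size` fraction for testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(len(X) * (1 - test_size))
    return X[idx[:cut]], X[idx[cut:]], y[idx[:cut]], y[idx[cut:]]
```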
Binary classification example (binary_classification_model.py):
from neural_network import NeuralNetwork
from layers import Dense, ActivationReLU
from losses import BinaryCrossentropy
from optimizers import AdamOptimizer
from utils import create_binary_classification_data, train_test_split
# Generate data
X, y = create_binary_classification_data()
# Split data
X_train, X_test, y_train, y_test = train_test_split(X, y)
# Create and train model
model = NeuralNetwork()
model.add_layer(Dense(input_size=2, output_size=8))
model.add_layer(ActivationReLU())
model.add_layer(Dense(input_size=8, output_size=1))
model.compile(loss=BinaryCrossentropy(), optimizer=AdamOptimizer())
model.fit(X_train, y_train, epochs=10)
# Evaluate model
accuracy = model.evaluate(X_test, y_test)
print(f"Accuracy: {accuracy * 100:.2f}%")

Iris dataset example (iris_dataset_model.py):
from neural_network import NeuralNetwork
from layers import Dense, ActivationReLU, ActivationSoftmax
from losses import CategoricalCrossentropy
from optimizers import AdamOptimizer
from utils import load_iris_data, train_test_split
# Load data
X, y = load_iris_data()
# Split data
X_train, X_test, y_train, y_test = train_test_split(X, y)
# Create and train model
model = NeuralNetwork()
model.add_layer(Dense(input_size=4, output_size=32))
model.add_layer(ActivationReLU())
model.add_layer(Dense(input_size=32, output_size=3))
model.add_layer(ActivationSoftmax())
model.compile(loss=CategoricalCrossentropy(), optimizer=AdamOptimizer())
model.fit(X_train, y_train, epochs=10)
# Evaluate model
accuracy = model.evaluate(X_test, y_test)
print(f"Accuracy: {accuracy * 100:.2f}%")

XOR example (xor_model.py):
from neural_network import NeuralNetwork
from layers import Dense, ActivationReLU
from losses import BinaryCrossentropy
from optimizers import AdamOptimizer
from utils import create_xor_data, train_test_split
# Generate data
X, y = create_xor_data(1000)
# Split data
X_train, X_test, y_train, y_test = train_test_split(X, y)
# Create and train model
model = NeuralNetwork()
model.add_layer(Dense(input_size=2, output_size=8))
model.add_layer(ActivationReLU())
model.add_layer(Dense(input_size=8, output_size=1))
model.compile(loss=BinaryCrossentropy(), optimizer=AdamOptimizer())
model.fit(X_train, y_train, epochs=10)
# Evaluate model
accuracy = model.evaluate(X_test, y_test)
print(f"Accuracy: {accuracy * 100:.2f}%")