
Algorithms and Tools for Neural Network Tuning and Evaluation: A Comparative Analysis

This repository contains the complete diploma project by Dmitry Shushkevich, submitted to the Faculty of Applied Mathematics and Informatics at Belarusian State University in 2025. The research focuses on algorithms and tools for tuning neural networks and evaluating their efficiency, featuring a comparative analysis of implementations in Wolfram Mathematica and Python.

📜 Project Overview

The core objective of this project is to investigate, compare, and practically implement algorithms for configuring, training, and evaluating neural networks. The study is demonstrated on a function approximation task using noisy data, with implementations in two distinct environments:

  1. Wolfram Mathematica: Leveraging its integrated suite of tools for neural networks.
  2. Python: Utilizing the powerful ecosystem of TensorFlow, Keras, and the Optuna hyperparameter optimization framework.

The work provides a detailed analysis of the advantages, limitations, and overall efficiency of each platform in solving applied machine learning problems.
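To make the approximation task concrete, here is a minimal sketch of the kind of data-generation step it involves. The target function, sampling interval, and noise scales below are illustrative assumptions; the actual definitions live in the project notebooks.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical non-linear target; the notebooks define the actual function.
def f(x):
    return np.sin(3 * x) + 0.5 * x**2

# Sample inputs and corrupt the targets with the two noise types studied.
x = rng.uniform(-2.0, 2.0, size=1000)
y_uniform = f(x) + rng.uniform(-0.5, 0.5, size=x.shape)  # uniform noise
y_normal = f(x) + rng.normal(0.0, 0.3, size=x.shape)     # normal (Gaussian) noise
```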

✨ Key Features

  • Comparative Analysis: A head-to-head comparison of Wolfram Mathematica and Python for building and fine-tuning neural networks.
  • Function Approximation: Neural networks are trained to approximate a complex, non-linear function from data corrupted with both uniform and normal noise.
  • Automated Hyperparameter Tuning: The project employs automated tools like Optuna (in Python) and a custom regression-based approach (in Mathematica) to find optimal hyperparameters, including hidden layer size, epochs, and optimizer type (a sketch of the Optuna search follows this list).
  • In-Depth Evaluation: Models are rigorously evaluated using Mean Squared Error (MSE), with results visualized through plots and heatmaps.
  • Comprehensive Documentation: The repository includes the full diploma text, presentation slides, and source code.
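To illustrate the Optuna-based search mentioned above, the sketch below shows one plausible shape of the objective: it samples the hidden layer size, epoch count, and optimizer, trains a small Keras model, and returns the validation MSE. The search ranges, network shape, and synthetic data are assumptions, not the notebook's exact code.

```python
import numpy as np
import optuna
from tensorflow import keras

# Illustrative noisy data; the notebook generates its own dataset.
rng = np.random.default_rng(0)
x_train = rng.uniform(-2.0, 2.0, size=(800, 1))
y_train = np.sin(3 * x_train) + rng.normal(0.0, 0.3, size=x_train.shape)
x_val = rng.uniform(-2.0, 2.0, size=(200, 1))
y_val = np.sin(3 * x_val) + rng.normal(0.0, 0.3, size=x_val.shape)

def objective(trial):
    # Search space mirrors the hyperparameters listed above; ranges are assumptions.
    hidden_size = trial.suggest_int("hidden_size", 16, 256)
    epochs = trial.suggest_int("epochs", 50, 350)
    optimizer = trial.suggest_categorical("optimizer", ["adam", "rmsprop"])

    model = keras.Sequential([
        keras.layers.Input(shape=(1,)),
        keras.layers.Dense(hidden_size, activation="relu"),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer=optimizer, loss="mse")
    model.fit(x_train, y_train, epochs=epochs, verbose=0)

    # Optuna minimizes the returned validation MSE.
    return model.evaluate(x_val, y_val, verbose=0)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print(study.best_params, study.best_value)
```

After the search, study.best_params holds the winning hyperparameter combination, which is how summary tables like those in the Results Summary section can be assembled.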

🛠️ Technologies Used

  • Wolfram Mathematica: For symbolic computation, data visualization, and its built-in neural network framework.
  • Python: The primary programming language for the open-source implementation.
  • TensorFlow & Keras: For building, training, and evaluating the neural network models.
  • Optuna: An advanced hyperparameter optimization framework used to automate model tuning.
  • NumPy & Pandas: For data manipulation and analysis.
  • Matplotlib & Seaborn: For data visualization, including plots and heatmaps.

🚀 Getting Started

Prerequisites

  • For Mathematica: A working installation of Wolfram Mathematica (Version 12.0 or newer is recommended).
  • For Python: Python 3.8+ and the dependencies listed in src/python/requirements.txt.

Installation

  1. Clone the repository:
    git clone https://github.com/DzimaSh/neural-network-analysis-wolfram-python.git
    cd neural-network-analysis-wolfram-python
  2. Install Python dependencies:
    pip install -r src/python/requirements.txt

Usage

  • Wolfram Mathematica:

    1. Open the notebook src/mathematica/shushkevich_diploma.nb.
    2. Run the cells sequentially to reproduce the data generation, model training, and analysis.
  • Python (Jupyter Notebook):

    1. Navigate to the notebooks directory.
    2. Launch Jupyter Notebook: jupyter notebook.
    3. Open shushkevich_diploma.ipynb and run the cells. The notebook contains the full pipeline, from data generation to hyperparameter tuning and final visualization.

📊 Results Summary

The study successfully demonstrated that both Wolfram Mathematica and Python are highly capable platforms for this task. Automated hyperparameter tuning proved crucial for achieving optimal performance.

Here's a summary of the best-performing models from the Python implementation using Optuna:

| Dataset + Optimizer | Hidden Size | Epochs | Optimizer | Predicted MSE | Real MSE |
| --- | --- | --- | --- | --- | --- |
| Uniform + ADAM | 249 | 350 | adam | 0.162611 | 0.155225 |
| Uniform + RMSProp | 173 | 333 | rmsprop | 0.144401 | 0.240935 |
| Normal + ADAM | 249 | 330 | adam | 0.355868 | 0.353068 |
| Normal + RMSProp | 118 | 350 | rmsprop | 0.332923 | 0.381010 |

Here is a summary of the best-performing models from the Wolfram Mathematica implementation:

| Model | Hidden Size | Epochs | Predicted MSE | Real MSE |
| --- | --- | --- | --- | --- |
| Uniform + ADAM | 255 | 260 | 0.245427 | 0.252199 |
| Uniform + RMSProp | 180 | 240 | 0.174243 | 0.164981 |
| Normal + ADAM | 255 | 280 | 0.397965 | 0.411665 |
| Normal + RMSProp | 255 | 200 | 0.308476 | 0.298536 |
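As a hedged illustration of how such results can be visualized, the snippet below loads the Python table above into a pandas DataFrame and renders the two MSE columns as a seaborn heatmap; the styling choices are assumptions rather than the notebook's exact plotting code.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Best-performing Python/Optuna models (values taken from the table above).
results = pd.DataFrame(
    {
        "Predicted MSE": [0.162611, 0.144401, 0.355868, 0.332923],
        "Real MSE": [0.155225, 0.240935, 0.353068, 0.381010],
    },
    index=["Uniform + ADAM", "Uniform + RMSProp", "Normal + ADAM", "Normal + RMSProp"],
)

# Annotated heatmap comparing predicted vs. real MSE per model.
sns.heatmap(results, annot=True, fmt=".4f", cmap="viridis")
plt.title("Predicted vs. real MSE per model")
plt.tight_layout()
plt.show()
```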

🤝 Contributing

Contributions, issues, and feature requests are welcome! Feel free to check the issues page.

📝 License

This project is licensed under the MIT License. See the LICENSE file for details.
