Lightweight Computational Offloading using Machine Learning

Introduction

Computational offloading enhances the performance of resource-constrained devices, such as smartphones and IoT devices, by transferring resource-intensive tasks to cloud or edge servers. This reduces device stress, conserves battery life, and optimizes memory usage. Traditional offloading methods, like MMQ and FIFO, are limited by high computational costs. This paper introduces lightweight ML and DL models optimized for resource-constrained devices, achieving superior performance compared to traditional approaches. Key findings include maximum accuracies of 99.83% (Random Forest), 99.84% (Decision Tree), and 99.70% (DNN). Quantization significantly reduced model sizes (DNN to 20.56 KB, Random Forest to 1756.12 KB, and Decision Tree to 41.83 KB) while maintaining high accuracy: 99.6% (DNN), 99.69% (Random Forest), and 99.85% (Decision Tree). Extended simulations demonstrated improved processing times, resource utilization, energy efficiency, and scalability compared to traditional methods. This paper provides a scalable and efficient framework for real-time applications while minimizing network and server strain, contributing to dynamic resource management for next-generation IoT and mobile devices through intelligent ML- and DL-based offloading strategies.

Architecture

Implementation

The implementation uses machine learning models to decide whether a task should be processed locally on the device or offloaded to a more powerful server.

The system monitors network parameters such as latency and bandwidth, and uses machine learning models, including Random Forest, Decision Trees, and Deep Neural Networks (DNNs), to analyze these parameters and predict the best offloading strategy. The models are optimized for resource-constrained devices using techniques such as quantization, which reduces their size and computational overhead.
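
As a concrete illustration of the quantization step, the sketch below shrinks a small Keras DNN with TensorFlow Lite post-training dynamic-range quantization. The layer sizes, feature count, and file name are illustrative assumptions, not the exact configuration used in the paper.

    import tensorflow as tf

    # Illustrative DNN over network-parameter features (e.g., latency,
    # bandwidth, CPU load, task size); the architecture is an assumption,
    # not the paper's exact model.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(4,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # 1 = offload, 0 = local
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(X_train, y_train, epochs=...)  # train on the offloading dataset

    # Post-training dynamic-range quantization with TensorFlow Lite,
    # which shrinks the serialized model while preserving accuracy.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()

    with open("offload_dnn_quantized.tflite", "wb") as f:
        f.write(tflite_model)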

Based on the model's prediction, the system offloads the task either to a cloud server for high-capacity workloads or to an edge device for low-latency, real-time workloads. The task is then processed and the results are sent back to the originating device.
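
A minimal sketch of this decision loop, assuming a Random Forest classifier trained on labelled samples of network and task parameters; the feature set, class labels, and the run_locally/send_to_edge/send_to_cloud helpers are hypothetical placeholders for the device, edge, and cloud execution paths.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical class labels for the offloading decision.
    LOCAL, EDGE, CLOUD = 0, 1, 2

    # Hypothetical feature vectors: (latency_ms, bandwidth_mbps,
    # cpu_load, task_size_kb), one label per sample.
    X_train = np.array([
        [12.0,  95.0, 0.20,  40.0],   # fast link, small task -> edge
        [180.0,  4.0, 0.35,  30.0],   # slow link             -> local
        [25.0,  60.0, 0.90, 900.0],   # heavy task, busy CPU  -> cloud
        # ... the real system would train on the full labelled dataset ...
    ])
    y_train = np.array([EDGE, LOCAL, CLOUD])

    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    clf.fit(X_train, y_train)

    # Hypothetical execution paths; real implementations would invoke the
    # device runtime, an edge node, or a cloud endpoint.
    def run_locally(task):   return f"local:{task}"
    def send_to_edge(task):  return f"edge:{task}"
    def send_to_cloud(task): return f"cloud:{task}"

    def dispatch(task, features):
        """Route one task according to the classifier's prediction."""
        decision = clf.predict(np.asarray(features).reshape(1, -1))[0]
        if decision == LOCAL:
            return run_locally(task)
        if decision == EDGE:
            return send_to_edge(task)
        return send_to_cloud(task)

    # Routes the task according to the learned policy.
    print(dispatch("resize_image", [15.0, 80.0, 0.30, 55.0]))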

Running the project on a local system

To run the Computational Offloading project on your local system, follow these steps:

  1. Clone the Repository: Clone the repository to your local machine using the following commands:

    git clone https://github.com/Zephyrus02/Computational-Offloading.git
    cd Computational-Offloading
  2. Install Dependencies: Install the required dependencies using pip.

    pip install -r requirements.txt
  3. Run the Jupyter Notebooks: Launch Jupyter Notebook to run and interact with the project code.

    jupyter notebook

    Or open the project in VS Code:

    code .
