AliAminian/server-response-time-predictor

Server response time predictor

This is an example showing how Spark ML can be used to predict the response time of a service in a server-side application. The number of parameters that model a service for our ML-based module can differ, and the important parameters can be emphasized with suitable coefficients.
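The idea of modeling a service with weighted parameters can be sketched as a simple linear model. The feature names, coefficients, and values below are purely illustrative assumptions (the repository's actual features and trained model are not shown here); in practice the coefficients would come from a Spark ML regression fit rather than being hand-picked.

```python
# Hypothetical coefficients for a service's response-time model.
# Names and weights are illustrative, not taken from the repository;
# a trained Spark ML regressor would learn these from data.
COEFFICIENTS = {
    "request_size_kb": 0.8,     # larger payloads cost more
    "concurrent_requests": 2.5,  # contention is weighted heavily
    "cpu_load": 1.5,             # machine utilization
    "db_queries": 3.0,           # I/O-bound work dominates
}
INTERCEPT_MS = 12.0  # assumed baseline latency in milliseconds

def predict_response_time_ms(features: dict) -> float:
    """Linear model: intercept + sum of coefficient * feature value."""
    return INTERCEPT_MS + sum(
        COEFFICIENTS[name] * value for name, value in features.items()
    )

sample = {
    "request_size_kb": 10.0,
    "concurrent_requests": 4,
    "cpu_load": 0.6,
    "db_queries": 2,
}
print(round(predict_response_time_ms(sample), 2))  # → 36.9
```

Raising a coefficient "intensifies" that parameter's influence on the predicted response time, which is the role the coefficients play in the description above.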

Ideas

  1. In a client/server scenario where the server-side application exposes different services, the server's response time for those services can be vital to clients. This module can predict the response time of a particular service. Indeed, response time is an important factor that helps administrators make suitable decisions in different real-world situations, keeping their servers responsive and available.
  2. Many multi-instance server-side applications choose the number of instances based on the hardware of the machine in the operational environment (e.g., RAM, CPU cores, etc.). In practice, developers and testers may not be able to pre-define a constant number of instances for such applications across different machine configurations. Based on this module, we could build an intelligent component that automatically tunes the number of instances by predicting the response time of each instance of the server application.
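The auto-tuning idea in point 2 can be sketched as a search for the smallest instance count whose predicted response time meets a latency target. The latency function and all numbers here are hypothetical stand-ins for the trained model's predictions, not the repository's actual logic.

```python
def predict_latency_ms(instances: int, load_rps: float) -> float:
    """Hypothetical latency model: latency grows with per-instance load.
    Stands in for predictions from a trained Spark ML model."""
    per_instance_load = load_rps / instances
    return 10.0 + 5.0 * per_instance_load  # assumed baseline + load cost

def tune_instance_count(load_rps: float, target_ms: float,
                        max_instances: int = 32) -> int:
    """Pick the smallest instance count whose predicted latency
    meets the target; fall back to the hardware-imposed maximum."""
    for n in range(1, max_instances + 1):
        if predict_latency_ms(n, load_rps) <= target_ms:
            return n
    return max_instances

# At 100 requests/sec with a 60 ms target, ten instances suffice
# under this toy model.
print(tune_instance_count(load_rps=100.0, target_ms=60.0))  # → 10
```

The same loop could cap `max_instances` using the machine's RAM and CPU cores, so the tuning adapts to each operational environment as described above.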

How the module could help

[Figure: module architecture diagram]
