
Loadshed Middleware - Proportional Request Rejection Based on Probabilistic CPU Load #899

Merged
merged 3 commits into gofiber:main
Jan 22, 2024

Conversation

Behzad-Khokher
Member

Loadshed middleware works by intentionally shedding or dropping some of the load (e.g., incoming requests) so the system can maintain acceptable performance and avoid potential failures.

The unique aspect of my implementation is Proportional Request Rejection Based on Probabilistic CPU Load. This means that as the CPU load increases, the probability of rejecting a request rises proportionally.

The formula used for this proportional rejection is:
rejectionProbability := (cpuUsage - cfg.LowerThreshold*100) / (cfg.UpperThreshold - cfg.LowerThreshold)

A: cpuUsage - cfg.LowerThreshold*100: calculates how far the current CPU usage exceeds the lower threshold (the threshold, a fraction, is scaled by 100 to match the percentage units of cpuUsage)
B: cfg.UpperThreshold - cfg.LowerThreshold: calculates the range between the lower and upper thresholds
C: A/B: the division scales the excess as a fraction of the range between the lower and upper thresholds

The result is the rejectionProbability. Holding everything else constant, as cpuUsage increases, rejectionProbability increases, and it can be used to reject requests probabilistically.

If the CPU usage is below the LowerThreshold, no requests are rejected. If the CPU usage is above the LowerThreshold, every request has a probability of being rejected that depends on how close the CPU usage is to the UpperThreshold. Once usage exceeds the UpperThreshold, all requests are rejected.
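As a minimal sketch (not the PR's actual code), the decision could look like the following. It assumes cpuUsage is reported as a percentage (0-100) and the thresholds are configured as fractions (0-1), which are the units implied by the *100 in the formula; under that reading, rejectionProbability comes out on a 0-100 scale and is compared against a random draw on the same scale.

```go
package main

import (
	"fmt"
	"math/rand"
)

// shouldReject decides whether a request should be shed, given the current
// CPU usage (percentage, 0-100) and the configured thresholds (fractions, 0-1).
func shouldReject(cpuUsage, lowerThreshold, upperThreshold float64) bool {
	switch {
	case cpuUsage < lowerThreshold*100:
		// Below the lower threshold: never reject.
		return false
	case cpuUsage > upperThreshold*100:
		// Above the upper threshold: always reject.
		return true
	default:
		// In between: reject with a probability that grows linearly
		// as CPU usage approaches the upper threshold.
		rejectionProbability := (cpuUsage - lowerThreshold*100) / (upperThreshold - lowerThreshold)
		return rand.Float64()*100 < rejectionProbability
	}
}

func main() {
	// With LowerThreshold=0.80 and UpperThreshold=0.95, 90% CPU usage
	// yields a rejection probability of (90-80)/(0.95-0.80) ≈ 66.7.
	fmt.Println(shouldReject(90, 0.80, 0.95))
}
```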

The loadshed middleware is designed to be extensible to other metrics; currently it sheds load based on CPU load only.
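For illustration only, the same decision could be wired into a standalone Fiber handler roughly as below. The function name newLoadShedder, the per-request CPU sampling via gopsutil, and the 503 response are assumptions made for this sketch, not the middleware's actual API.

```go
package main

import (
	"log"
	"math/rand"
	"time"

	"github.com/gofiber/fiber/v2"
	"github.com/shirou/gopsutil/v3/cpu"
)

// newLoadShedder returns a handler that sheds requests proportionally to CPU
// load between the two thresholds (fractions, 0-1).
func newLoadShedder(lower, upper float64) fiber.Handler {
	return func(c *fiber.Ctx) error {
		// Sample CPU usage as a percentage over a short interval.
		percentages, err := cpu.Percent(100*time.Millisecond, false)
		if err != nil || len(percentages) == 0 {
			return c.Next() // fail open if the CPU metric is unavailable
		}
		usage := percentages[0]
		switch {
		case usage < lower*100:
			return c.Next()
		case usage > upper*100:
			return c.SendStatus(fiber.StatusServiceUnavailable)
		default:
			rejectionProbability := (usage - lower*100) / (upper - lower)
			if rand.Float64()*100 < rejectionProbability {
				return c.SendStatus(fiber.StatusServiceUnavailable)
			}
			return c.Next()
		}
	}
}

func main() {
	app := fiber.New()
	app.Use(newLoadShedder(0.80, 0.95))
	app.Get("/", func(c *fiber.Ctx) error { return c.SendString("ok") })
	log.Fatal(app.Listen(":3000"))
}
```

A production middleware would more likely sample CPU usage on an interval in a background goroutine rather than block each request on cpu.Percent.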

@ReneWerner87
Member

Please take a look at the readme in the root and expand it as well as the workflows

Review threads on loadshed/README.md and loadshed/cpu.go (resolved)
@ReneWerner87
Member

Please take a look at the readme in the root and expand it as well as the workflows

@Behzad-Khokher don't forget this part

@Behzad-Khokher
Member Author

Really hope this gets merged! I was a bit new to the workflows and had no idea that those were also supposed to be implemented. @ReneWerner87 thanks for pointing me in the right direction 😄 !

@ReneWerner87
Member

ReneWerner87 commented Jan 15, 2024

@Behzad-Khokher pls also add the new middleware to

  • dependabot.yml
  • govulncheck.yml
  • gosec.yml

@ReneWerner87
Member

@Behzad-Khokher can you please refresh with master? I have rebuilt the 2 workflows so that they go through the middlewares dynamically, and no one has to extend the workflows anymore

@Behzad-Khokher
Member Author

@ReneWerner87 I have refreshed it with master to ensure the feature branch is up-to-date. I have also updated dependabot.yml. It seems that I now don't have to make any changes to govulncheck.yml and gosec.yml, as those have been updated to go through the middlewares dynamically.

@ReneWerner87
Member

ReneWerner87 commented Jan 21, 2024

Thx, I will check it again tomorrow morning

@ReneWerner87 left a comment
Member

LGTM

@ReneWerner87
Member

@gofiber/maintainers you can check briefly, then I merge and release

@ReneWerner87 merged commit a3ce566 into gofiber:main on Jan 22, 2024
53 checks passed
Labels
✏️ Feature: New feature or request