Thoughts on a plugin architecture #96

Closed
ollym opened this issue Mar 7, 2024 · 5 comments

ollym (Contributor) commented Mar 7, 2024

We're exploring a way to limit the memory usage of individual workers, similar to how you've implemented timeouts: where a timeout allots a fixed number of seconds a request is allowed to run before the worker is killed, we want to do the same for memory usage.

I wonder if there's any interest in implementing a lightweight plugin system that the timeout implementation could be refactored to use, and then other features like our "memory limiter" could be developed on the same set of rails without us having to fork or monkey patch the code base.

casperisfine (Contributor) commented Mar 7, 2024

> other features like our "memory limiter" could be developed on the same set of rails without us having to fork or monkey patch the code base.

Have you seen: #92

I'm not convinced a plugin system is needed for that, but also perhaps I don't quite understand what you have in mind?

Also, the soft timeout is a bit inadequate right now; I need to make some changes to it.

ollym (Contributor, Author) commented Mar 7, 2024

Yes, we're aware of after_request_complete, but just as with the timeout implementation it wouldn't work on its own, because we wanted to kill a worker during the request if it went above a certain threshold, as you have with the timeout thread.

Maybe you could add something like an around_request event that yields to perform the request, letting us spawn a thread that monitors memory usage while the request is running and kills the worker (like the timeout does) if it goes above a threshold.

Does that make sense?

EDIT: Meant to say around_request 🤦 not after_request
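
For illustration, here is a rough sketch of how such an around_request hook could be used. The hook is hypothetical and does not exist in Pitchfork today; memory_exhausted? and the one-second polling interval are placeholders:

# Hypothetical config: around_request is the hook proposed above.
# A watchdog thread runs only for the duration of a single request and
# hard-exits the worker if memory grows past the limit.
around_request do |server, worker, &request|
  watchdog = Thread.new do
    loop do
      exit!(42) if memory_exhausted? # user-supplied memory check
      sleep 1                        # arbitrary polling interval
    end
  end
  begin
    request.call # run process_client as usual
  ensure
    watchdog.kill # stop the watchdog once the request is done
  end
end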

casperisfine (Contributor) commented:

> we wanted to kill a worker during the request if it went above a certain threshold,

I don't understand why you'd want to do that, but if you really do, you don't need anything special from Pitchfork: you can just spawn that monitoring thread from after_worker_fork. So I don't see any need for a plugin system here.

ollym (Contributor, Author) commented Mar 7, 2024

Sorry @casperisfine, I'm not sure how after_worker_fork helps. Looking at the current code:

request_env = process_client(client, worker, prepare_timeout(worker))
@after_request_complete&.call(self, worker, request_env)
worker.increment_requests_count

What we need is a callback before process_client is called that lets us start the memory monitor thread, just as you're doing with the timeout via prepare_timeout(worker).

Maybe something simple like a before_request hook:

@before_request&.call(self, worker)
request_env = process_client(client, worker, prepare_timeout(worker))
@after_request_complete&.call(self, worker, request_env)
worker.increment_requests_count

Will you accept a PR for this?
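
To make the proposal concrete, here is a sketch of how the (hypothetical) before_request hook could be paired with the existing after_request_complete hook to run a per-request memory watchdog; memory_exhausted? is a user-supplied check and the exit status is arbitrary:

# Sketch only: before_request is the hook proposed above and does not exist yet.
# Each worker is its own process and handles one request at a time, so a single
# captured local is enough to track the current watchdog thread.
watchdog = nil

before_request do |server, worker|
  watchdog = Thread.new do
    loop do
      exit!(42) if memory_exhausted? # kill the worker mid-request
      sleep 1
    end
  end
end

after_request_complete do |server, worker, env|
  watchdog&.kill # stop monitoring once the request has finished
end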

casperisfine (Contributor) commented:

My point is you don't need to start that memory monitor thread on each request. Actually it would be really bad for performance.

You can just do something like:

after_worker_fork do
  # Spawn the monitor thread once per worker instead of once per request.
  Thread.new do
    loop do
      exit!(42) if memory_exhausted? # memory_exhausted? is your own check
      sleep 1
    end
  end
end

But again, I don't see why you wouldn't just do that on after_request_complete so you let the request terminate first.
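
For completeness, the after_request_complete variant would check memory between requests rather than during them, letting the in-flight request finish before the worker exits. A minimal sketch, assuming a Linux /proc-based RSS check; the 512 MiB limit and 4 KiB page size are example values:

# Check memory after each request; the master forks a replacement worker.
memory_limit = 512 * 1024 * 1024 # 512 MiB, example value

rss_bytes = -> {
  # Second field of /proc/self/statm is the resident set size in pages (Linux only).
  File.read("/proc/self/statm").split[1].to_i * 4096 # assumes 4 KiB pages
}

after_request_complete do |server, worker, env|
  exit!(42) if rss_bytes.call > memory_limit
end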
