
Memory leak problem #19

Closed
dsnipe opened this issue Jul 31, 2017 · 12 comments

Comments


dsnipe commented Jul 31, 2017

Hey!
Thanks for a great library.
We use it in production under high load, but after a month or so the server ran out of memory. We traced the problem back to this library.
Our usage looks like this (it's a Plug):

# SKIP
  defp check_rate(conn, options) do
    interval_milliseconds = options[:interval_seconds] * 1000
    max_requests = options[:max_requests]
    ExRated.check_rate(bucket_name(conn), interval_milliseconds, max_requests)
  end

  # Bucket name should be a combination of ip address and request path, like so:
  #
  # "127.0.0.1:/api/v1/authorizations"
  defp bucket_name(conn) do
    path = Enum.join(conn.path_info, "/")
    ip   = conn.remote_ip |> Tuple.to_list |> Enum.join(".")
    "#{ip}:#{path}"
  end
# SKIP

And the configuration: config :ex_rated, :timeout, 600_000
In my controller: plug MyApp.Plug.RateLimit, max_requests: 5, interval_seconds: 60
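
For completeness, here's roughly how the plug itself is wired up (a minimal sketch of the rest of the module; it assumes ExRated.check_rate/3 returns {:ok, count} or {:error, limit}, with check_rate/2 and bucket_name/1 being the private helpers shown above):

```elixir
defmodule MyApp.Plug.RateLimit do
  import Plug.Conn

  def init(options), do: options

  def call(conn, options) do
    case check_rate(conn, options) do
      # Under the limit: let the request through.
      {:ok, _count} ->
        conn

      # Over the limit: reply 429 and stop the plug pipeline.
      {:error, _limit} ->
        conn
        |> send_resp(429, "Too Many Requests")
        |> halt()
    end
  end

  # check_rate/2 and bucket_name/1 are the private helpers shown above.
end
```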

Do you have any ideas about what the problem might be? Of course I'll keep investigating, and if I find a solution I'll open a PR.

grempe (Owner) commented Aug 1, 2017 via email

@sergiotapia

@dsnipe This uses ETS to store bucket information. Could it be that you're never purging stale bucket data?


ryanwinchester commented Dec 25, 2017

@dsnipe did you ever solve this? I wanted to use this in a production application as well, but now I'm scared to.

grempe (Owner) commented Dec 27, 2017

I'll mark this as help wanted, since I don't currently have the time to investigate. I suspect there is an edge case at play here, though, as this lib has been used by others in production for a couple of years now with no similar reports.

@dsnipe Can you provide more information as to how you determined that it was this lib that was seemingly at fault for your server memory issue?

There is an automatic pruning process for old ETS records, which should keep the ETS table from growing with stale buckets.
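
For reference, that pruning is driven by the application config, along these lines (a sketch using the :timeout and :cleanup_rate keys from the README; the values are only illustrative, not a fix for this issue):

```elixir
# config/config.exs
config :ex_rated,
  timeout: 90_000,      # ms a bucket may sit idle before it is considered stale
  cleanup_rate: 60_000  # ms between sweeps of the pruning process
```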

@sergiotapia

@grempe There's also the issue where adding ex_rated to a Phoenix project essentially makes the application single-threaded, since every request goes through the ex_rated GenServer. It becomes a bottleneck.

@jaimeiniesta (Contributor)

@sergiotapia can you please elaborate further in a separate issue? If it makes every request pass through the ex_rated bottleneck, then that's serious.



sergiotapia commented Jan 2, 2018

@jaimeiniesta It makes every single request go through one GenServer process. It is really bad. I didn't notice this issue myself, but Chris McCord mentioned it to me and said it was a major problem: it turns a Phoenix app into a single-threaded app.

He then pointed me to this article: https://dockyard.com/blog/2017/05/19/optimizing-elixir-and-phoenix-with-ets
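
The gist of that article, for anyone following along: keep a process as the ETS table owner, but let callers read and write the table directly so the hot path never does a GenServer call. A generic sketch of that pattern (not ex_rated's internals):

```elixir
defmodule RateCounter do
  use GenServer

  @table :rate_counter

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @impl true
  def init(_opts) do
    # Public table with write_concurrency so callers can bump counters themselves.
    :ets.new(@table, [:named_table, :public, :set, {:write_concurrency, true}])
    {:ok, %{}}
  end

  # Runs in the caller's process; no message is sent to the GenServer,
  # so concurrent requests don't serialize on a single mailbox.
  def hit(bucket) do
    :ets.update_counter(@table, bucket, {2, 1}, {bucket, 0})
  end
end
```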

@jaimeiniesta (Contributor)

But you mean every single request that uses ExRated, right? Or every single request? It's bad in both cases, but not as bad in the first.

dsnipe (Author) commented Jan 5, 2018

It turned out that ex_rated wasn't the problem. I'm closing the issue.

dsnipe (Author) commented Jan 5, 2018

Thank you, everyone, and sorry for the false alarm.

dsnipe closed this as completed Jan 5, 2018
@sergiotapia

@jaimeiniesta Yes, every single request that uses ex_rated.
