
Auto-scaling Ring Buffer - does it also downscale automatically? #50

Closed
patrick-othmer opened this issue Jan 6, 2020 · 13 comments
Labels: help wanted (Extra attention is needed), question (Further information is requested), waiting for response (waiting for the response from commenter)

Comments

@patrick-othmer

First of all: many thanks for this library!

What is your question about gnet?
I have a question about the ring buffer: does the ring buffer also downscale automatically? I didn't find anything in the code, but maybe I missed it.

I have 40k open websocket sessions; if I now reduce them to 20k, the memory consumption of the ring buffers stays at the same level, even after 30 minutes.

Thank you in advance for your answer.

patrick-othmer added the help wanted (Extra attention is needed) and question (Further information is requested) labels on Jan 6, 2020
@panjf2000 (Owner) commented Jan 7, 2020

That is odd. Based on your description, this has nothing to do with the size of each ring buffer but with their number: gnet recycles ring buffers after connections are closed, so I would expect it to free some memory.

Could you describe in more detail what you mean by "I have 40k open websocket sessions; if I now reduce them to 20k"? It could be a bug in gnet and I want to confirm it.

panjf2000 added the waiting for response (waiting for the response from commenter) label on Jan 7, 2020
@panjf2000 (Owner)

Besides, which version of gnet are you using?

@patrick-othmer (Author)

Hi @panjf2000,

First of all, thank you for your response. I use version v1.0.0-rc.4. I close the connections, and gnet also sees that, since it triggers OnClosed.

I benchmark my service with Docker containers: the service with gnet runs on the host, and I spawn several containers that connect to it. Every connection is registered via OnOpened and every disconnect via OnClosed. Then I stop half of the containers to get down to 20k connections.
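
For reference, counting connections this way looks roughly like the sketch below (assuming the gnet v1 EventServer callbacks; the server type, counter field, and listen address are illustrative and not part of gnet):

```go
package main

import (
	"log"
	"sync/atomic"

	"github.com/panjf2000/gnet"
)

// countingServer tracks live connections so the benchmark can check that
// every OnOpened is matched by an OnClosed. The counter is illustrative and
// not something gnet provides itself.
type countingServer struct {
	gnet.EventServer
	connections int64
}

func (s *countingServer) OnOpened(c gnet.Conn) (out []byte, action gnet.Action) {
	atomic.AddInt64(&s.connections, 1) // one more live connection
	return
}

func (s *countingServer) OnClosed(c gnet.Conn, err error) (action gnet.Action) {
	atomic.AddInt64(&s.connections, -1) // connection gone
	return
}

func main() {
	// The address is arbitrary for this sketch.
	log.Fatal(gnet.Serve(&countingServer{}, "tcp://:9000"))
}
```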

@panjf2000 (Owner) commented Jan 7, 2020

Could you use the pprof tool to find out which part occupies most of the memory? That way, we will have an overall view of the memory usage of your gnet server.
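
In case it helps, one common way to get that view is to expose the standard net/http/pprof endpoints next to the server and grab heap profiles. This is a generic Go sketch (the port is arbitrary), not something specific to gnet:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Serve the pprof endpoints on a side port while the actual server runs.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... start the gnet server here ...

	// Heap profiles can then be inspected or compared from a shell, e.g.:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	//   go tool pprof -base before.heap after.heap   # diff two saved profiles
	select {}
}
```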

@patrick-othmer (Author)

At startup:
[screenshot: 0_connections]

After opening 20k connections that each send a text message every second:
[screenshot: 20k_connections]

After closing 10k connections, so 10k connections remain:
[screenshot: 10k_connections]

@panjf2000 (Owner) commented Jan 7, 2020

It seems that the ring buffers weren't GCed after the connections were closed, but I can't tell why that was happening, because the ring buffers are put back into a sync.Pool, which should be collected at the next GC cycle. I will investigate further to figure out the root cause.
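
For context, the recycling being described is roughly the standard sync.Pool idiom; the sketch below is a simplified illustration of the idea, not gnet's actual code (the type and helper names are made up):

```go
package main

import (
	"fmt"
	"sync"
)

// ringBuffer stands in for gnet's per-connection buffer type (illustrative only).
type ringBuffer struct {
	buf []byte
}

// Closed connections put their buffer back into the pool; new connections take
// one out instead of allocating a fresh buffer.
var ringBufferPool = sync.Pool{
	New: func() interface{} { return &ringBuffer{buf: make([]byte, 0, 1024)} },
}

func getRingBuffer() *ringBuffer { return ringBufferPool.Get().(*ringBuffer) }

func putRingBuffer(rb *ringBuffer) {
	rb.buf = rb.buf[:0] // reset before handing it back for reuse
	ringBufferPool.Put(rb)
}

func main() {
	rb := getRingBuffer()         // "connection opened"
	rb.buf = append(rb.buf, 'x')  // some traffic
	putRingBuffer(rb)             // "connection closed"
	fmt.Println("buffer recycled")
}
```

Anything that is only referenced by the pool becomes eligible for collection at a later GC cycle, which is why memory from closed connections should eventually drop.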

@panjf2000 (Owner)

Or could you add some logs to see whether the releasing code has been executed?

@patrick-othmer (Author)

Added a breakpoint, and it does get triggered:
[screenshot: Bildschirmfoto_2020-01-07_11-22-10]

@panjf2000 (Owner) commented Jan 7, 2020

Then I have no idea why those ring buffers from closed connections weren't GCed; that makes no sense. I will have to take some time to investigate it.

@patrick-othmer (Author)

Hi @panjf2000,

I ran the test for a longer time and it looks like it's working.

Here's what I did:

  1. Started the server
  2. Established 40k connections
  3. 82 MB memory consumption
  4. Disconnected all connections
  5. 82 MB memory consumption
  6. 5 minutes later: 166 MB memory consumption with the ring buffer
  7. 30 minutes later:
    [screenshot: Bildschirmfoto_2020-01-07_16-41-56]

Looks like the Go GC sometimes just takes longer.

Please excuse the unnecessary confusion and the extra work.

@panjf2000 (Owner)

> Please excuse the unnecessary confusion and the extra work.

Don't be sorry about that; feel free to open an issue whenever you encounter a problem using gnet 😄, it is my duty to help gnet users solve problems.

Also glad to hear that the GC works, but I still doubt that the Go GC would take that long (30 minutes) to collect the garbage, because the runtime forces a GC cycle when the last one finished more than 2 minutes ago, so it is still odd. On the other hand, it could also be GC mutator assist, which can stall things for quite a while; maybe that is it.
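
As a side note, one way to tell whether memory is only waiting on the runtime rather than being leaked is to read runtime.MemStats before and after forcing a collection; a small diagnostic sketch, purely illustrative and not part of gnet:

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
	"time"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("heap in use: %d KiB, last GC: %s\n",
		m.HeapInuse/1024, time.Unix(0, int64(m.LastGC)))

	// Force a GC and ask the runtime to return as much memory as possible
	// to the OS. If the numbers drop sharply afterwards, the memory was only
	// pending collection/scavenging rather than leaked.
	debug.FreeOSMemory()

	runtime.ReadMemStats(&m)
	fmt.Printf("heap in use after FreeOSMemory: %d KiB\n", m.HeapInuse/1024)
}
```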

Anyway, I will keep an eye on the GC behaviour in gnet. Thanks for your findings on this.

@kpfaulkner

Hi @patrick-othmer

I have just discovered gnet myself and am starting some work with WebSockets. Is the work you've done publicly available (e.g. on GitHub) or is it closed source? I would definitely like to see what you've done (it would save me from re-inventing the wheel).

Thanks

Ken

@patrick-othmer (Author)

Hi @kpfaulkner,

Sorry, it's closed source, but the gnet examples may help you: https://github.com/gnet-io/gnet-examples

Regards,

Patrick
