Documentation on library beyond blog post? #80
I have to echo that it would be interesting to read more about how/if the pools are eventually deallocated. We've been very happy so far with the performance of this library, and are using it to support very fast generation of binary blobs of varying sizes - most under 1 MB, but some in the 10s of MB. We have a memory utilization graph of one of our services like this. The spikes are particularly large requests. An instance of our service jumping up to 2 GB RAM is fine, but it seems weird that the RAM would hang around for an hour if the steady state doesn't require it.

I wonder if there might be a good strategy for more control over shrinking the buffers after particularly big requests - 99% of our requests don't need a large buffer, but for that 1%, it's nice. I am aware of the aggressive setting, but I don't want to completely deallocate everything all the time either. I'm not entirely sure why the memory shrank when it did - the yellow request shrank basically right away, the pink one almost exactly an hour later, and the teal one after about 80 minutes. This is with the latest .NET Core 3.1 on Linux/Docker in Kubernetes.
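One knob that may help with this scenario - an assumption on my part, based on the manager's public surface rather than any documentation - is capping the free pool sizes, so that oversized buffers returned after the 1% of big requests are dropped for the GC instead of being pooled. All of the numeric values below are illustrative, not recommendations:

```csharp
using Microsoft.IO;

// Sketch only: sizes are illustrative, not tuned recommendations.
var manager = new RecyclableMemoryStreamManager(
    blockSize: 128 * 1024,               // size of each pooled small block
    largeBufferMultiple: 1024 * 1024,    // large buffers come in 1 MB multiples
    maximumBufferSize: 64 * 1024 * 1024  // anything bigger is never pooled
);

// Cap how many bytes each pool may retain when buffers are returned.
// Buffers returned beyond these limits should be left for the GC instead
// of being held long after a large spike.
manager.MaximumFreeSmallPoolBytes = 32L * 1024 * 1024;
manager.MaximumFreeLargePoolBytes = 64L * 1024 * 1024;

using (var stream = manager.GetStream("BlobGeneration"))
{
    // generate the binary blob into 'stream' here
}
```

With limits like these, steady-state requests still hit the pool, while the occasional 10s-of-MB buffer is simply not kept.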
In my experience, aggressive buffer return did not return memory to Windows. I have taken to manually managing the memory pool, since I know which specific code will need it and when. When the task is over, I try to replace the manager so the GC can collect the old one, but only if the memory in use is 0. It works, but is probably not feasible in most projects. Still, the documentation on this library is extremely lacking.
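The "replace the manager so the GC can collect it" approach described above might look roughly like the following. `PoolHolder` and its members are made up for this sketch; only `RecyclableMemoryStreamManager` itself comes from the library:

```csharp
using System.IO;
using System.Threading;
using Microsoft.IO;

// Hypothetical wrapper illustrating the manager-swap technique.
static class PoolHolder
{
    private static RecyclableMemoryStreamManager _manager =
        new RecyclableMemoryStreamManager();
    private static int _streamsInUse;

    public static MemoryStream GetStream()
    {
        Interlocked.Increment(ref _streamsInUse);
        return _manager.GetStream();
    }

    public static void ReturnStream(MemoryStream stream)
    {
        stream.Dispose();
        Interlocked.Decrement(ref _streamsInUse);
    }

    // After the big task finishes, drop the old manager (and its pools)
    // so the GC can reclaim them - but only when no stream is still
    // borrowing one of its buffers.
    public static void ResetIfIdle()
    {
        if (Volatile.Read(ref _streamsInUse) == 0)
            _manager = new RecyclableMemoryStreamManager();
    }
}
```

As the comment above notes, this trades pool reuse for reclaimability and relies on correct in-use accounting, which is why it may not be feasible in most projects.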
I would be willing to do a PR to write and add such docs if @benmwatson or someone else would be willing to chat about it on Skype or something. Docs are tough on open source - I really appreciate that this library exists at all. Thanks very much to the maintainers!!
I literally have updating this documentation in my to-do list app, and I have started it, but I keep postponing it. Some newer features in particular need explanation to use effectively. Give me a week. I'll ping this thread again once I have a PR.
Please consider opening the PR as-is and giving it a WIP label. I'll take a look and give as good feedback as I can. Thanks!
No PR yet, but you can see the branch so far: https://github.com/microsoft/Microsoft.IO.RecyclableMemoryStream/tree/docupdate For now, I'm keeping it all in readme.md. If it really grows too large, I can split it up, but I'm hoping to keep it simple. |
Hey, this is a great start - I added some questions to your commit. No reason to break it up as part of this first pass. Thank you!!!
@nycdotnet and others, please see this PR: #82 |
This has been addressed in PR #82 |
Nice job - thank you!
The work I am doing for my project involves decompressing a file (e.g. a 10 MB file into a 40 MB stream) and then running those streams through a patch program, which in turn may output another 40 MB stream that is fed back into the patch program many times. In practice this means I am using 5 or 6 40 MB streams within a few seconds.
I've found this library significantly reduces memory usage, but I can't really figure out what the options do. The only documentation is a blog post, and it doesn't really explain what any of the options actually do (I don't deal with a lot of memory-related things).

I have also found that the memory allocated for the pools doesn't seem to be returned, or returnable, unless I'm missing something. For example, the app seems to allocate about 600 MB of data (on top of 200 MB idle), but after the task ends the app still sits at 800 MB used. I understand you want to keep these pools around and allocated, but is there a way to get rid of them? I only use them for one specific task, so once that task has finished, keeping them around is not beneficial. The documentation has nothing that covers this kind of scenario.

The lack of IntelliSense documentation makes using this library extremely difficult, as I have almost no idea what some of the options do.
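For the one-off-task scenario described above, one workaround (my assumption, not documented behavior) is to scope the manager to the task itself and drop all references to it afterward, so the pools become garbage once the last stream is disposed. `RunPatchPipeline` is a hypothetical name for this sketch:

```csharp
using System.IO;
using Microsoft.IO;

// Hypothetical task wrapper; only the manager and GetStream come from
// the library.
static void RunPatchPipeline(Stream compressedInput)
{
    // A manager used only for this task. Its pools live only as long
    // as this reference does.
    var manager = new RecyclableMemoryStreamManager();

    using (MemoryStream decompressed = manager.GetStream("decompress"))
    {
        // The ~10 MB input decompresses into this ~40 MB stream; each
        // patch pass then gets its own ~40 MB stream from the same
        // manager, so the 5-6 streams all share one pool.
    }

    // When this method returns, nothing references 'manager' any more,
    // so the buffers it pooled become eligible for collection instead
    // of sitting resident for the life of the process.
}
```

This keeps the reuse benefit while the task runs, without keeping hundreds of MB pooled once it is done.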