
Memory usage #48

Closed
cgrantcal opened this issue Mar 4, 2019 · 7 comments
Labels
Help Wanted (Extra attention is needed)

Comments

@cgrantcal

Hey,

I am running some tests and I notice that with more requests more memory is allocated, as expected. However, I see that this memory is never de-allocated.

My question is: is there a way for this memory to be de-allocated once the requests have been serviced and queuing has reduced? I appreciate that this would not be optimal for performance. The host this is running on has limited resources, so having a way to limit memory usage during low-traffic periods would be helpful.

Any ideas you think could help are appreciated.

Regards

Conor

@cgrantcal
Author

Note:
I have tested adding -D OATPP_DISABLE_POOL_ALLOCATIONS.

 * Define this to disable memory-pool allocations.
 * This will make oatpp::base::memory::MemoryPool, method obtain and free call new and delete directly

I see no change to the memory behaviour
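
For reference, a minimal standalone sketch (hypothetical, not the actual oatpp MemoryPool) of the difference that define is documented to make: with pooling, freed entries stay on the pool's free list instead of going back to the allocator.

// Hypothetical illustration only -- not oatpp source code.
// Shows why pooled allocation keeps memory resident after load drops.
#include <cstddef>
#include <vector>

class TinyPool {
public:
  explicit TinyPool(std::size_t entrySize) : m_entrySize(entrySize) {}

  void* obtain() {
#ifdef OATPP_DISABLE_POOL_ALLOCATIONS
    return ::operator new(m_entrySize);   // allocate directly, nothing is pooled
#else
    if (!m_freeList.empty()) {            // reuse an entry the pool already holds
      void* entry = m_freeList.back();
      m_freeList.pop_back();
      return entry;
    }
    return ::operator new(m_entrySize);
#endif
  }

  void free(void* entry) {
#ifdef OATPP_DISABLE_POOL_ALLOCATIONS
    ::operator delete(entry);             // memory is returned to the allocator
#else
    m_freeList.push_back(entry);          // memory stays reserved by the pool
#endif
  }

private:
  std::size_t m_entrySize;
  std::vector<void*> m_freeList;
};

With OATPP_DISABLE_POOL_ALLOCATIONS defined, obtain and free fall back to plain new/delete, which is what the quoted comment describes.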

Thanks

Conor

@lganzzzo
Member

lganzzzo commented Mar 4, 2019

Hello @cgrantcal,

OATPP_DISABLE_POOL_ALLOCATIONS is the right guess.

Note that you have to rebuild the oatpp module (liboatpp) in order for this to work.

Please let me know how your progress goes.

Regards,
Leonid

@lganzzzo lganzzzo added the Help Wanted (Extra attention is needed) label Mar 4, 2019
@lganzzzo
Member

lganzzzo commented Mar 4, 2019

@cgrantcal
Update.

I've added a fix for disabling pool allocations. See #49.
Also, I've made it possible to configure oatpp compiler options from CMake.

So to build oatpp with OATPP_DISABLE_POOL_ALLOCATIONS:
(from oatpp dir)

$ cd build/
$ cmake -DOATPP_DISABLE_POOL_ALLOCATIONS=ON ..
$ make
$ make install

To make sure that oatpp is built with the right options, you can log the configuration by calling oatpp::base::Environment::printCompilationConfig();
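
For example, a minimal sketch of where such a log could go (the include path assumes oatpp's usual core/base layout):

#include "oatpp/core/base/Environment.hpp"

int main() {
  oatpp::base::Environment::init();

  // Prints the compile-time configuration of the liboatpp the app is linked
  // against, so you can verify OATPP_DISABLE_POOL_ALLOCATIONS=ON took effect.
  oatpp::base::Environment::printCompilationConfig();

  // ... set up components and run the server here ...

  oatpp::base::Environment::destroy();
  return 0;
}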

So now it should deallocate more memory once load is dropped and connections are closed.

Please let me know if you have more questions.

Best Regards,
Leonid

@cgrantcal
Author

Hey,

I can see that with that flag enabled:

############################################################################
## oatpp module compilation config:

OATPP_DISABLE_ENV_OBJECT_COUNTERS=OFF
OATPP_DISABLE_POOL_ALLOCATIONS=ON
OATPP_THREAD_HARDWARE_CONCURRENCY=AUTO
OATPP_THREAD_DISTRIBUTED_MEM_POOL_SHARDS_COUNT=10
OATPP_ASYNC_EXECUTOR_THREAD_NUM_DEFAULT=2

I can still see that the memory usage never decreases
[attached screenshot: memory]

In your tests, how long does it take before the memory is de-allocated?

Thanks

Conor

@lganzzzo
Member

lganzzzo commented Mar 5, 2019

Hey @cgrantcal ,

Looks like not all pools are disabled. I think it might be a bug.

Can you please specify what load you are running against your service:

  • type of request
  • concurrency level
  • requests per second

Thanks,
Leonid

@cgrantcal
Author

Hey,

I am making simple POST and GET requests.
There are about 5 clients hooked up, making about 100 requests a second each for 10 seconds.

Hope that helps.

Thanks

Conor

@lganzzzo
Member

lganzzzo commented Mar 6, 2019

Hey @cgrantcal ,

I've conducted some tests and results are as follows:

Currently, the best results in terms of memory can be obtained with oatpp built as:

cmake -DOATPP_DISABLE_POOL_ALLOCATIONS=ON -DOATPP_THREAD_DISTRIBUTED_MEM_POOL_SHARDS_COUNT=1 -DCMAKE_BUILD_TYPE=Release ..

In this setup I get:

  • 352 KB on service start.
  • 13.2 MB during the load: wrk -t1 -c500 -d10s "http://127.0.0.1:8000/"
  • 5.4 MB when load is dropped

Also, memory consumption will grow a bit as the load is repeated, up to some point (due to memory fragmentation).

I'll try to make some memory optimizations in the future, but for now it is what it is.

Best Regards,
Leonid

@lganzzzo lganzzzo closed this as completed Mar 9, 2019