Memory consumed totally 3-4 times as data size #7056
Comments
There is a similar issue which focuses on the querynode: #6745
Currently, invoking cgo functionality requires a data copy. Changing the format of the internal data to Arrow will solve the problem of copying the data between cgo and Go.
An MEP for this is in progress. Please check #7210
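The copy-vs-share distinction discussed above can be illustrated with a small Python sketch. Here `ctypes` stands in for the cgo boundary; this is an analogy only, not Milvus code: extracting bytes duplicates the buffer (as the current cgo path does), while a zero-copy view shares the underlying memory (the goal of an Arrow-style shared layout).

```python
import ctypes

# A C-owned buffer, standing in for data on the C++ side of the cgo boundary.
buf = ctypes.create_string_buffer(b"abcd")

copied = bytes(buf.raw)   # crossing the boundary with a copy: memory is duplicated
view = memoryview(buf)    # zero-copy view: shares the same underlying memory

buf[0] = b"X"             # mutate the "C side"

assert copied[0:1] == b"a"        # the copy did not see the change
assert bytes(view[0:1]) == b"X"   # the view did
```

With a shared columnar layout, both sides read the same buffer, so memory usage stays close to 1x the data size instead of a multiple of it.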
assign @cydrain |
Why are we copying the data? From a design perspective, data should not be resident in Go, should it?
I will check if there is a memory leak.
/assign
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Let's keep it.
Steps/Code to reproduce:
In tests20/python_client, run `pytest test_e2e.py --host x.x.x.x` after updating nb=100,000
or:
1.1 After deployment, Milvus consumes about 380MB of memory in total.
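To put the title's "3-4 times as data size" claim in context, a rough sketch of the raw payload inserted by the repro above. The vector dimension is an assumption (128-dim float32, not stated in this report), so treat the result as an order-of-magnitude estimate:

```python
nb = 100_000          # entity count from the repro step above
dim = 128             # assumed vector dimension (not stated in the report)
bytes_per_float = 4   # float32

raw_mb = nb * dim * bytes_per_float / (1024 * 1024)
print(f"raw vector payload ~ {raw_mb:.1f} MB")  # ~ 48.8 MB under these assumptions
```

Comparing this estimate against the observed memory growth is how a "memory / data size" multiple like the one in the title would be derived.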
Expected result:
Memory is consumed smoothly.
Actual results:
Total memory consumed grew from 380MB to 1480MB and settles at 750MB after step#7. Looking into the different pods:
querynode: 50MB to 222MB, settling at 200MB after step#7
datanode: 50MB to 470MB, settling at 152MB after step#7
indexnode: 50MB to 286MB, settling at 150MB after step#7
proxy: 60MB to 336MB, settling at 75MB after step#7 (I think 75MB is acceptable after step#7)
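For quick comparison, the per-pod figures reported above can be reduced to peak growth and retained memory (all numbers are taken directly from the report; only the subtraction is new):

```python
# Memory figures (MB) reported above: (baseline, peak, after step#7).
pods = {
    "total":     (380, 1480, 750),
    "querynode": ( 50,  222, 200),
    "datanode":  ( 50,  470, 152),
    "indexnode": ( 50,  286, 150),
    "proxy":     ( 60,  336,  75),
}

for name, (base, peak, steady) in pods.items():
    # Growth over baseline at peak, and memory still held after step#7.
    print(f"{name}: peak growth {peak - base} MB, retained {steady - base} MB")
```

This makes the shape of the problem visible at a glance: ~1100MB of peak growth overall, with ~370MB still retained after the workload finishes, most of it in querynode.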
Environment:
pymilvus 2.0.0rc3.dev16
pymilvus-orm 2.0.0rc3.dev15
Configuration file:
Additional context (memory screenshots):
after deployment with no workload
when there is workload
after step#7