Aerospike pods consume a lot of RAM during migration, which is not released after migration #41
That's a very stale KB article. The issue with threads loading entire partitions into memory was addressed back in Aerospike 3.x. I've requested this be reviewed internally.
Any ideas what could be the problem here? We are still experiencing this. Should we update to the latest version?
Could you provide the output of:
If the migrations were from decreasing the cluster size, it is possible that the primary index added one or more stages. The primary index is allocated in 1 GiB stages (by default). Primary index memory isn't ever freed back to the OS; the free space in the stages is managed by Aerospike.
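A quick way to see this accounting is to look at the index-related namespace statistics. This is a minimal sketch, assuming a namespace named `test` (adjust to your namespace) and that these stat names are present in your server version:

```sh
# Per-node primary-index memory as Aerospike itself accounts for it:
asadm -e "show statistics namespace like index"

# Or query a single node directly and filter the index-related stats:
asinfo -v "namespace/test" | tr ';' '\n' | grep -i index
```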
@kportertx thanks for the response! Right now there are no migrations happening, and the output of
Oh, right, this is Aerospike Community so |
Platform: GKE
Aerospike container version: aerospike/aerospike-server:5.5.0.7
Aerospike pods consume a lot of RAM during migration, which is not released after migration (the graph below is `container memory usage` / `container memory limit`, where the limit is `5Gi`):

![image](https://user-images.githubusercontent.com/29513074/186672316-d70ff548-2d9f-466b-8c13-ac1b51dd9f9a.png)

we run a 3-pod Aerospike Community Edition cluster:
pod resources:
some of the Aerospike Helm values:
at the start of the migration, running `kubectl top -n aerospike pod aerospike-v2-aerospike-0` shows:

and at the end of the migration it shows:
output of `asadm -e "info"` after migration:

I have seen this post which explains why it consumes this much memory, but I don't see why this memory is not released after migration is done.
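One way to narrow this down is to compare Aerospike's own memory accounting with the container RSS. A minimal sketch, assuming the same pod name as above and a namespace named `test`:

```sh
# Aerospike's own per-namespace memory accounting (index/data/sindex):
asadm -e "show statistics namespace like memory"

# Cluster summary (same command used above):
asadm -e "info"

# Container-level RSS as Kubernetes sees it:
kubectl top -n aerospike pod aerospike-v2-aerospike-0
```

If Aerospike's counters stay well below the container RSS, that is consistent with the earlier comment: primary-index stages are not returned to the OS, so the process keeps the memory even though Aerospike treats it as free.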