
Update seednode config to increase resource limits for v1.2 #3545

Merged
merged 2 commits into from Nov 4, 2019

Conversation

@wiz
Member

wiz commented Nov 2, 2019

Seednodes required more RAM after v1.2 upgrade:
[Screenshot: seednode memory usage chart, 2019-11-02 21:18]

wiz added 2 commits Nov 2, 2019
@wiz wiz requested a review from ripcurlx as a code owner Nov 2, 2019
Member

ripcurlx left a comment

utACK

@ripcurlx

Member

ripcurlx commented Nov 4, 2019

@chimp1984 @sqrrm Do you have an idea why this changed so drastically with the 1.2+ update?

@ripcurlx ripcurlx merged commit fe4059a into bisq-network:master Nov 4, 2019
1 check passed
continuous-integration/travis-ci/pr The Travis CI build passed
@devinbileck

Member

devinbileck commented Nov 4, 2019

Sorry if I missed any discussion, but what is the reasoning for this and what is the chart illustrating? Based on that chart, it doesn't look like all the memory is being consumed?

If this increase is truly required, then we will need to increase the minimum requirements: https://docs.bisq.network/exchange/howto/run-seednode.html#system-requirements-for-hosting-machine

@wiz wiz deleted the wiz:update-seednode-config branch Nov 4, 2019
@wiz

Member Author

wiz commented Nov 4, 2019

@devinbileck After upgrading to v1.2, my seednode got an 80% memory utilization warning and crashed from OOM. I had to reduce max connections from 100 to 50 and increase RAM from 4GB to 8GB.

@devinbileck

Member

devinbileck commented Nov 5, 2019

Ah ok. I believe other seed nodes are using more modest parameters (30 max connections), which is why I assume you are the only/first to encounter this.

In any case, I think we should either (1) adjust these "default" parameters (Xms, Xmx, maxmemory, maxconnections) to be more modest and capable of meeting the stated minimum requirements, or (2) increase the minimum requirements and ensure all seednode operators update their configuration.
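For illustration, a hedged sketch of what adjusted seednode launch parameters might look like. This is not the actual diff from this PR: `-Xms`/`-Xmx` are standard JVM heap flags and `maxConnections` is a Bisq option, but the jar name and values here are assumptions chosen to match the limits discussed above (8 GB host, 50 connections), with headroom left for the OS.

```shell
#!/bin/sh
# Hypothetical seednode launch script, not the config changed in this PR.
# -Xms/-Xmx: initial/maximum JVM heap size (standard java flags).
# --maxConnections: Bisq's P2P connection limit, reduced from 100 to 50.
java -Xms2g -Xmx6g \
     -jar bisq-seednode.jar \
     --maxConnections=50
```

Whichever option is chosen, the key point stands: the heap flags, `maxConnections`, and the documented minimum host requirements need to agree with each other, so a change to one should be reflected in the others.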

@chimp1984

Contributor

chimp1984 commented Nov 5, 2019

@wiz In 1.2.0/1.2.1 we had an issue with the trade statistics that consumed a lot of resources, as each node was delivered 7-9 MB of data. That was fixed in the branch I provided, and memory consumption should now be back to normal. I am not aware of any other reason why v1.2 should consume more RAM than before.

There was also the issue that risq (as discussed with @bodymindarts and you) made requests which caused seeds to respond with about 10 MB of data (all account age witness data and all trade stats), but those were only about 10 requests a day, so that should not have caused permanent memory/CPU pressure. It could have been the cause of an OOM error, though. I have not measured the performance cost here, but I know from BSQ blocks that serving so much data is very heavy (about 30 seconds for 12 MB of data).

Looking at https://monitor.bisq.network/d/qclhStdWz/server-metrics?orgId=1&from=now-90d&to=now&var-Server=fl3mmribyxgrv63c

Looking at the 90-day history, it seems memory consumption rose around September 15th, which was around the 1.1.6 release. Different seeds show different patterns, but all had lower memory consumption back in August, so some change in that time frame has caused more pressure. 1.1.6 introduced trader chat, which produces more mailbox messages; there are currently about 600, which is rather high, but that is probably caused by the update mess....

We should monitor that data more closely and alert devs in case we see increases....

I will try to look into the code changes in 1.1.6 that might be responsible for this.

@chimp1984

Contributor

chimp1984 commented Nov 5, 2019

I compared 1.1.5 with 1.1.6 and did not see anything suspicious in the p2p module. I will try to find a good profiler to find out more....

@wiz wiz referenced this pull request Nov 10, 2019
@ripcurlx ripcurlx referenced this pull request Nov 11, 2019