Increasing and high Server Load Average #4282
Comments
I also noticed increased database load after the upgrade. Is this related to the new persistent federation queue? Edit: setting …
Just want to +1 this - I've seen on average a doubling of the server load metric since the update was applied.
I have been able to reduce the database load by setting …
Yes, this is most likely because of the new federation queue. Previously, outgoing activities were handled entirely in memory in the Lemmy process, but now they get written to the db and then read again. #4285 should help by batching these db queries.
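To illustrate what batching means here (a generic sketch, not Lemmy's actual schema or the code in #4285): instead of one round trip per activity, several rows get written in a single statement.

```sql
-- Hypothetical table, for illustration only; not Lemmy's real schema.
CREATE TABLE IF NOT EXISTS outgoing_activity (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL
);

-- Unbatched: one round trip per activity.
INSERT INTO outgoing_activity (payload) VALUES ('{"type": "Like"}');
INSERT INTO outgoing_activity (payload) VALUES ('{"type": "Create"}');

-- Batched: the same rows in a single statement and round trip.
INSERT INTO outgoing_activity (payload)
VALUES ('{"type": "Like"}'),
       ('{"type": "Create"}');
```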
Thanks! Adding …
Please follow these steps to get info about database performance: …
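The actual steps from this comment weren't captured in this excerpt; one common way to collect this kind of information is PostgreSQL's pg_stat_statements extension, roughly like this (standard PostgreSQL commands, not necessarily the steps the maintainer had in mind):

```sql
-- Requires shared_preload_libraries = 'pg_stat_statements' in postgresql.conf,
-- then enable the extension in the lemmy database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Show the queries that consume the most total execution time (PostgreSQL 13+ column names):
SELECT calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       rows,
       left(query, 120)                   AS query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
```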
Also, in general a higher server load floor on 0.19 is expected and not really an issue. The baseline server usage is higher (especially for small instances), but it scales better to higher federation loads.
My issue isn't so much the increased CPU load as the increased I/O load, which is tanking performance on other services I host, as well as the increased memory usage from the extra PostgreSQL activity (which filled my host's swap until I set a limit on the connection pool size). I hope #4285 will resolve my issues.
It won't. Do you have synchronous_commit=off set?
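For reference, a minimal sketch of turning this setting off at runtime with standard PostgreSQL commands (it can also be set in postgresql.conf); note it trades a small crash-recovery durability window for less commit I/O:

```sql
-- Turn off synchronous_commit cluster-wide.
ALTER SYSTEM SET synchronous_commit = off;

-- Reload the configuration without restarting PostgreSQL.
SELECT pg_reload_conf();

-- Verify the new setting.
SHOW synchronous_commit;
```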
I don't think so; my postgres command is …
I'll try that, though.
Postgres memory usage is down again after setting …
I am following your comments, and I'll be sure to wait for a solution.
@arifwn You can control postgres with the … If you tell postgres to use 3 GB and 1 CPU (for example), it won't use all your resources. It may use memory if it's not used by something else, and then release it the second something else needs it. That's normal. That being said, I also reduced the pool size to 30 but didn't really notice a difference. The postgres settings made the major difference.
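A sketch of what capping postgres resources can look like in docker-compose.yml, assuming the Compose deploy.resources syntax; the service name, image, and numbers are illustrative, not taken from the linked gist:

```yaml
services:
  postgres:
    image: postgres:16-alpine   # illustrative; keep the image from your existing compose file
    deploy:
      resources:
        limits:
          cpus: "1"     # cap postgres at one CPU
          memory: 3G    # and 3 GB of RAM, as in the example above
```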
I'll close this since it seems like the same thing as #4334, and that one has more detail (I don't see any info here that's not present there).
@linux-cultist My postgres config: https://gist.github.com/arifwn/1c86fe79708dfe3bd43ecabaafc73320 The VPS has 4.5 GB of RAM and postgres is configured to use 2 GB (or did I configure it wrong?). Unless I set lemmy's database.pool_size to 30 (instead of leaving it at the default value, which was 95 back in 0.19.0), after two days either lemmy or postgres got OOM-killed because the RAM was exhausted. I haven't tried again on 0.19.1.
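For context, the pool size discussed here lives under the database section of lemmy.hjson; a minimal sketch, assuming the standard Lemmy config layout:

```hjson
{
  database: {
    # Lower the connection pool from the 0.19.0 default of 95 (mentioned above) to 30.
    pool_size: 30
  }
}
```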
Requirements
Summary
After upgrading my Lemmy instance to version 0.19.0 I noticed a rapid increase in the load average on the server, which remains high (around 1.30).
Steps to Reproduce
1. docker compose down
2. Updated the docker-compose.yml file
3. docker compose up -d
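The same upgrade sequence as shell commands, with the image edit shown as a comment (the image name and tag are illustrative):

```sh
# Stop the running stack
docker compose down

# Edit docker-compose.yml to point the lemmy services at the new release,
# e.g. image: dessalines/lemmy:0.19.0

# Recreate the containers in the background
docker compose up -d
```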
Technical Details
The server OS is Ubuntu 22.04.3 LTS.
Some files: …
Version
0.19.0
Lemmy Instance URL
https://community.nicfab.it