Is your feature request related to a problem? Please describe.
A performance improvement is seen when using hugepages, but planning for how many to use is difficult.
Describe the solution you'd like
There are many articles about the performance improvement from using huge_pages with PostgreSQL.
Trying to estimate and plan how many to allocate is difficult. We use the pgtune website to help us estimate and template the configs for different-sized instances of our database, and it would be great if it could also give us a guideline for how many hugepages to reserve. (It appears to be shared memory, plus connections, plus some other factors to consider.)
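As a rough illustration of the kind of guideline being asked for, here is a back-of-the-envelope sketch: take shared_buffers, pad it for the rest of the shared memory segment, and round up to whole huge pages. The 10% overhead figure and the example shared_buffers value are assumptions for illustration, not PostgreSQL-documented constants.

```shell
# Rough sketch: estimate vm.nr_hugepages for a PostgreSQL instance.
# Assumes 2 MiB huge pages (check "Hugepagesize" in /proc/meminfo) and
# that shared_buffers dominates the shared memory segment; the 10%
# overhead for WAL buffers, lock tables, etc. is an assumption.
shared_buffers_mb=8192          # from postgresql.conf, in MiB
hugepage_size_mb=2              # from /proc/meminfo
overhead_pct=10                 # assumed padding for other shared memory

total_mb=$(( shared_buffers_mb + shared_buffers_mb * overhead_pct / 100 ))
nr_hugepages=$(( (total_mb + hugepage_size_mb - 1) / hugepage_size_mb ))
echo "vm.nr_hugepages = $nr_hugepages"
```

With these example numbers that comes out to 4506 pages, which would go in sysctl as `vm.nr_hugepages`.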
Describe alternatives you've considered
I have looked at individual systems to see what the tuning should be, but they are often used differently, so the same number does not come up.
I do understand this is not part of the postgresql.conf file, and might be out of the scope of this tool.
I am not sure hardware info is enough to tune huge_pages, because it also depends on the size of the database and the SQL queries. Can you give examples of systems that tune huge_pages for PostgreSQL, so I can check them?
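For reference, PostgreSQL 15 and later can report the required count directly, which sidesteps most of the estimation: with the server stopped, the shared_memory_size_in_huge_pages parameter can be read out via `postgres -C` (this sketch assumes `$PGDATA` points at the cluster's data directory):

```shell
# Ask PostgreSQL (15+) how many huge pages its shared memory
# segment would need, given the current configuration.
postgres -D "$PGDATA" -C shared_memory_size_in_huge_pages
```

The printed number is suitable as a value for the vm.nr_hugepages sysctl.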
Example: https://www.percona.com/blog/why-linux-hugepages-are-super-important-for-database-servers-a-case-with-postgresql/
Another example: https://www.enterprisedb.com/blog/improving-postgresql-performance-without-making-changes-postgresql