ceph: raise mon_max_pg_per_osd from 300 to 600
Allow for creating more pools.
In e529256, we raised
mon_max_pg_per_osd to 300 to prevent pool creation
from failing. It turns out that this wasn't quite enough.
After extensive discussion, we are now raising it to 600
to allow a few more RBD pools to be created.
Here is some background:

When pools are created with target_size_ratio, they may
initially be created with more PGs, so that the PG count
per OSD roughly matches the target_size_ratio share of
mon_target_pg_per_osd, which defaults to 100.
So the default pools that ocs-operator creates with a
target_size_ratio of .49 will get 128 PGs, resulting in
some 43 PGs per OSD (replica 3). If additional pools are
created later with a target_size_ratio as well, they might
likewise get more than the default 32 PGs.
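
To make that arithmetic concrete, here is a rough sketch of
how the autoscaler sizes such a pool. This is a simplified
model, not Ceph's actual implementation; the 9-OSD cluster
size is an assumption, chosen because it reproduces the
128 PG / ~43 PGs-per-OSD figures above.

// Simplified model of the PG autoscaler's initial sizing
// (not Ceph's real code; 9 OSDs is an assumed cluster size).
package main

import (
	"fmt"
	"math"
)

// nearestPowerOfTwo rounds n to the closest power of two,
// as the autoscaler does for PG counts.
func nearestPowerOfTwo(n float64) int {
	return int(math.Pow(2, math.Round(math.Log2(n))))
}

func main() {
	const (
		targetSizeRatio = 0.49 // ratio on the ocs-operator default pools
		targetPGPerOSD  = 100  // mon_target_pg_per_osd default
		numOSDs         = 9    // assumption: 9-OSD cluster
		replicaSize     = 3    // replica 3
	)

	// This pool's share of the cluster-wide PG budget, divided by
	// the replication factor, rounded to a power of two.
	raw := targetSizeRatio * targetPGPerOSD * numOSDs / replicaSize // 147
	pgs := nearestPowerOfTwo(raw)                                   // 128

	// PG replicas that actually land on each OSD.
	perOSD := float64(pgs*replicaSize) / numOSDs // ~42.7

	fmt.Printf("pool PGs: %d, PGs per OSD: %.0f\n", pgs, perOSD)
}

Running this prints "pool PGs: 128, PGs per OSD: 43",
matching the numbers quoted above.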

The PG autoscaler can later scale the pools' PG counts
down to get closer to the mon_target_pg_per_osd target.
But for a certain period of time, we need more headroom
to be able to create additional RBD pools.
Raising the limit to 600 keeps us on the safe side.
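
For a sense of the headroom this buys, a quick calculation
under the same assumptions as above (9 OSDs, replica 3,
128-PG pools): each such pool costs roughly 43 PG replicas
per OSD, so the old cap leaves room for about 7 of them
while the new cap allows about 14.

// Hypothetical headroom check under the same assumptions:
// each 128-PG replica-3 pool on 9 OSDs costs ~43 PG replicas per OSD.
package main

import "fmt"

func main() {
	const pgsPerOSDPerPool = 128 * 3 / 9.0 // ~42.7 PG replicas per OSD

	for _, limit := range []int{300, 600} {
		fmt.Printf("mon_max_pg_per_osd=%d fits ~%d such pools\n",
			limit, int(float64(limit)/pgsPerOSDPerPool))
	}
}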

Signed-off-by: Michael Adam <obnox@redhat.com>
obnoxxx committed Feb 9, 2021
1 parent 3d0de03 commit a5db977
Showing 1 changed file with 1 addition and 1 deletion:
controllers/storagecluster/reconcile.go
@@ -47,7 +47,7 @@ const (
 mon_osd_full_ratio = .85
 mon_osd_backfillfull_ratio = .8
 mon_osd_nearfull_ratio = .75
-mon_max_pg_per_osd = 300
+mon_max_pg_per_osd = 600
 [osd]
 osd_memory_target_cgroup_limit_ratio = 0.5
 `
