Adjust mon_max_pg_per_osd for Rook-Ceph
This is a temporary workaround for a race condition with the Ceph PG
autoscaler. In cases where the autoscaler only detects our default RBD
and CephFS pools it will set the PG count for them to 128 instead of the
usual 32. In small clusters, this prevents the creation of additional
pools, e.g. for RGW. Setting mon_max_pg_per_osd to a higher value helps
account for the temporary state where a few pools are using a high
number of PGs.
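
For example (illustrative numbers, not taken from this commit): on a
3-OSD cluster where the autoscaler has raised the two default pools to
128 PGs each at 3x replication, (2 * 128 * 3) / 3 = 256 PG replicas
land on each OSD, which already exceeds Ceph's default
mon_max_pg_per_osd of 250 and blocks further pool creation until the
autoscaler rescales the pools down.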

Signed-off-by: Jose A. Rivera <jarrpa@redhat.com>
jarrpa committed Jan 12, 2021
1 parent 3e01144 commit e529256
Showing 1 changed file with 1 addition and 0 deletions.

controllers/storagecluster/reconcile.go
@@ -47,6 +47,7 @@ const (
 mon_osd_full_ratio = .85
 mon_osd_backfillfull_ratio = .8
 mon_osd_nearfull_ratio = .75
+mon_max_pg_per_osd = 300
 [osd]
 osd_memory_target_cgroup_limit_ratio = 0.5
 `
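
In a Rook-Ceph deployment, overrides like this are typically delivered
to the Ceph daemons through the rook-config-override ConfigMap, which
Rook mounts as ceph.conf override settings. A minimal sketch of the
resulting object is shown below; the namespace and the exact set of
surrounding options are assumptions for illustration, not part of this
commit:

apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: openshift-storage   # assumption: typical OCS namespace
data:
  config: |
    [global]
    mon_max_pg_per_osd = 300

Because the override applies cluster-wide, the raised limit covers the
transient window where the autoscaler has over-provisioned PGs on the
existing pools, without permanently changing per-pool PG counts.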
