What did you do to encounter the bug?
Steps to reproduce the behavior:
- I have a successfully running 3-node replica set (PSS).
- While converting PSS to PSA by adding arbiters: 1 in cr.yaml (MongoDB version 5.0.6), the reconfig fails with the error:
"Reconfig attempted to install a config that would change the implicit default write concern. Use the setDefaultRWConcern command to set a cluster-wide write concern and try the reconfig again."
- Setting the default read concern to local and the default write concern to 1 using the admin command below, then reapplying the cr.yaml, results in the error shown after it (a verification sketch follows the error output):
db.adminCommand({
  "setDefaultRWConcern": 1,
  "defaultWriteConcern": { "w": 1 },
  "defaultReadConcern": { "level": "local" }
})
Error:
NOT EQUAL :
V1 : 1
V2 : majority
REASON: Type() has different result : int32, string
NOT EQUAL :
V1 : map[w:1 wtimeout:0]
V2 : map[w:majority wtimeout:0]
REASON: Map field is different for key w : 1, majority
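For reference, the cluster-wide default that the agent is comparing against (w: 1 vs. w: majority above) can be checked directly on the running member before reapplying the CR. A minimal sketch, using the pod, container, and admin user names from the cr.yaml below; the password is a placeholder:
# Check the cluster-wide read/write concern defaults the agent compares against.
kubectl exec -it arbiter-mongodb-0 -c mongod -- \
  mongo --username mongodba --password '<password>' --authenticationDatabase admin \
  --eval 'db.adminCommand({ getDefaultRWConcern: 1 })'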
What did you expect?
The arbiter should be added to the replica set during the PSS to PSA conversion without any issues.
What happened instead?
The arbiter pod is not added, and the existing mongo pod fails its readiness probe, staying at 2/3 containers ready (inspection commands follow the pod listing below):
$ kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
arbiter-mongodb-0   2/3     Running   0          29m
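The probe failure can be inspected with standard kubectl commands (a sketch; the container names are taken from the cr.yaml below):
# Show readiness probe events for the stuck member.
kubectl describe pod arbiter-mongodb-0

# Agent and mongod output around the failed reconfig.
kubectl logs arbiter-mongodb-0 -c mongodb-agent --tail=100
kubectl logs arbiter-mongodb-0 -c mongod --tail=100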
Operator Information
- Operator Version - 0.7.3
- MongoDB Image used - 5.0.6
If possible, please include:
cr.yaml
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: arbiter-mongodb
spec:
  members: 3
  type: ReplicaSet
  arbiters: 1
  version: "5.0.6"
  featureCompatibilityVersion: "5.0"
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: mongodba
      db: admin
      passwordSecretRef: # a reference to the secret that will be used to generate the user's password
        name: dbauser-scrt
      roles:
        - name: root
          db: admin
        - name: role-restore
          db: admin
      scramCredentialsSecretName: mongodba-scram
  additionalMongodConfig:
    storage.wiredTiger.engineConfig.journalCompressor: zlib
  # use specific storage class, CPU & Memory - start
  # persistent: true
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data-volume
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: mongo-stg-class
            resources:
              requests:
                storage: 6Gi
        - metadata:
            name: logs-volume
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: mongo-stg-class
            resources:
              requests:
                storage: 3Gi
      template:
        spec:
          initContainers:
          containers:
            - name: "mongodb-agent"
              resources:
                requests:
                  cpu: 200m
                  memory: 200Mi
                limits:
                  cpu: 250m
                  memory: 250Mi
            - name: "mongod"
              resources:
                requests:
                  cpu: 1 # 500m
                  memory: 1Gi # 500Mi
                limits:
                  cpu: 2
                  memory: 2Gi
  # use specific storage class, CPU & Memory - end
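For completeness, this is roughly how the change was applied and watched; a sketch that assumes the MongoDBCommunity resource surfaces its state under .status.phase (the other names match the manifest above):
# Re-apply the updated CR and watch the operator's reaction.
kubectl apply -f cr.yaml
kubectl get pods -w

# Phase reported by the operator (assumed to be exposed at .status.phase).
kubectl get mongodbcommunity arbiter-mongodb -o jsonpath='{.status.phase}'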
Logs - arbiter-mongodb-0
[2022-03-24T06:31:57.868+0000] [.info] [src/director/director.go:planAndExecute:563] <arbiter-mongodb-0> [06:31:57.868] Step=WaitPrimary as part of Move=WaitPrimary in plan failed : <arbiter-mongodb-0> [06:31:57.868] Postcondition not yet met for step WaitPrimary because
['currentState.IsPrimary' = false].
Recomputing a plan...
[2022-03-24T06:31:58.204+0000] [.info] [src/config/config.go:ReadClusterConfig:437] [06:31:58.204] Retrieving cluster config from /var/lib/automation/config/cluster-config.json...
[2022-03-24T06:31:58.204+0000] [.info] [main/components/agent.go:LoadClusterConfig:270] [06:31:58.204] clusterConfig unchanged
[(local=false) using desired auth key
NOT EQUAL :
V1 : 1
V2 : majority
REASON: Type() has different result : int32, string
NOT EQUAL :
V1 : map[w:1 wtimeout:0]
V2 : map[w:majority wtimeout:0]
REASON: Map field is different for key w : 1, majority
[2022-03-24T06:31:59.160+0000] [.info] [src/mongoctl/processctl.go:Update:3307] <arbiter-mongodb-0> [06:31:59.160] <DB_WRITE> Updated with query map[] and update [{$set [{agentFeatures [StateCache]} {nextVersion 2}]}] and upsert=true on local.clustermanager
[2022-03-24T06:31:59.256+0000] [.info] [src/config/config.go:ReadClusterConfig:437] [06:31:59.256] Retrieving cluster config from /var/lib/automation/config/cluster-config.json...
[2022-03-24T06:31:59.257+0000] [.info] [main/components/agent.go:LoadClusterConfig:270] [06:31:59.256] clusterConfig unchanged
[2022-03-24T06:32:00.057+0000] [.info] [src/director/director.go:computePlan:279] <arbiter-mongodb-0> [06:32:00.057] ... process has a plan : WaitPrimary,UpdateDefaultRWConcern,WaitRsConfCommitted
[2022-03-24T06:32:00.057+0000] [.info] [src/director/director.go:tracef:793] <arbiter-mongodb-0> [06:32:00.057] Running step: 'WaitPrimary' of move 'WaitPrimary'
[2022-03-24T06:32:00.057+0000] [.info] [src/director/director.go:tracef:793] <arbiter-mongodb-0> [06:32:00.057] because
[All the following are true:
['currentState.NeedToStepDownCurrentPrimary' = false]
['currentState.Up' = true]
]