
Unable to deploy tidb operator v1.4.0-beta.3 with cluster-permission-pv set to false #4797

Closed
lalitkfk opened this issue Dec 6, 2022 · 3 comments · Fixed by #4837

Comments

@lalitkfk (Contributor) commented Dec 6, 2022

Bug Report

What version of Kubernetes are you using?
1.20.7
What version of TiDB Operator are you using?
v1.4.0-beta.3
What storage classes exist in the Kubernetes cluster and what are used for PD/TiKV pods?
NAME            PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-storage   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  130d

What's the status of the TiDB cluster pods?
The TiDB cluster is not deployed yet.

What did you do?
Deployed tidb-operator v1.4.0-beta.3 with cluster-scoped set to false and cluster-permission-pv set to false:

        - -cluster-scoped=false
        - -cluster-permission-node=true
        - -cluster-permission-pv=false
        - -cluster-permission-sc=true

What did you expect to see?
The tidb-operator pod should have come up and stayed in Running status.

What did you see instead?
The tidb-operator pod came up but went into Error status.
It seems podVolModifier tries to use the PVLister, but the PVLister is never initialized when the cluster-permission-pv flag is set to false. This code path should be disabled when the flag is not set.

I see the following error in the logs:

E1206 14:14:02.464456 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 660 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x30f3040, 0x4f243d0)
k8s.io/apimachinery@v0.19.16/pkg/util/runtime/runtime.go:74 +0xa6
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
k8s.io/apimachinery@v0.19.16/pkg/util/runtime/runtime.go:48 +0x86
panic(0x30f3040, 0x4f243d0)
runtime/panic.go:965 +0x1b9
github.com/pingcap/tidb-operator/pkg/manager/volumes.(*podVolModifier).getBoundPVFromPVC(...)
github.com/pingcap/tidb-operator/pkg/manager/volumes/pod_vol_modifier.go:303
github.com/pingcap/tidb-operator/pkg/manager/volumes.(*podVolModifier).NewActualVolumeOfPod(0xc0005d9c00, 0xc000df13b0, 0x1, 0x1, 0xc000f5b180, 0x9, 0xc001127800, 0x7fd836d6af18, 0x30, 0xc0014b0f00)
github.com/pingcap/tidb-operator/pkg/manager/volumes/pod_vol_modifier.go:363 +0xa4
github.com/pingcap/tidb-operator/pkg/manager/volumes.(*podVolModifier).GetActualVolumes(0xc0005d9c00, 0xc00085ebb8, 0xc000df13b0, 0x1, 0x1, 0x8010102, 0x1, 0x7fd80fadc308, 0x20, 0x20)
github.com/pingcap/tidb-operator/pkg/manager/volumes/pod_vol_modifier.go:339 +0x139
github.com/pingcap/tidb-operator/pkg/manager/volumes.observeVolumeStatus(0x3b784c8, 0xc0005d9c00, 0xc001521360, 0x3, 0x4, 0xc000df13b0, 0x1, 0x1, 0x0)
github.com/pingcap/tidb-operator/pkg/manager/volumes/sync_volume_status.go:78 +0xce
github.com/pingcap/tidb-operator/pkg/manager/volumes.SyncVolumeStatus(0x3b784c8, 0xc0005d9c00, 0x3b421b0, 0xc00063f420, 0xc000760000, 0x34ff3a2, 0x2, 0x0, 0x0)
github.com/pingcap/tidb-operator/pkg/manager/volumes/sync_volume_status.go:49 +0x42c
github.com/pingcap/tidb-operator/pkg/manager/member.(*pdMemberManager).syncTidbClusterStatus(0xc000b98c60, 0xc000760000, 0xc00022e500, 0x0, 0x0)
github.com/pingcap/tidb-operator/pkg/manager/member/pd_member_manager.go:424 +0xe36
github.com/pingcap/tidb-operator/pkg/manager/member.(*pdMemberManager).syncPDStatefulSetForTidbCluster(0xc000b98c60, 0xc000760000, 0x0, 0x0)
github.com/pingcap/tidb-operator/pkg/manager/member/pd_member_manager.go:204 +0x3d8
github.com/pingcap/tidb-operator/pkg/manager/member.(*pdMemberManager).Sync(0xc000b98c60, 0xc000760000, 0xc000760000, 0x0)
github.com/pingcap/tidb-operator/pkg/manager/member/pd_member_manager.go:105 +0x155
github.com/pingcap/tidb-operator/pkg/controller/tidbcluster.(*defaultTidbClusterControl).updateTidbCluster(0xc00023ec60, 0xc000760000, 0x1, 0xc0002e3880)
github.com/pingcap/tidb-operator/pkg/controller/tidbcluster/tidb_cluster_control.go:182 +0x372
github.com/pingcap/tidb-operator/pkg/controller/tidbcluster.(*defaultTidbClusterControl).UpdateTidbCluster(0xc00023ec60, 0xc000760000, 0x0, 0x0)
github.com/pingcap/tidb-operator/pkg/controller/tidbcluster/tidb_cluster_control.go:114 +0xcf
github.com/pingcap/tidb-operator/pkg/controller/tidbcluster.(*Controller).syncTidbCluster(...)
github.com/pingcap/tidb-operator/pkg/controller/tidbcluster/tidb_cluster_controller.go:166
github.com/pingcap/tidb-operator/pkg/controller/tidbcluster.(*Controller).sync(0xc001180e40, 0xc000de4030, 0x18, 0x0, 0x0)
github.com/pingcap/tidb-operator/pkg/controller/tidbcluster/tidb_cluster_controller.go:162 +0x1ed
github.com/pingcap/tidb-operator/pkg/controller/tidbcluster.(*Controller).processNextWorkItem(0xc001180e40, 0x203000)
github.com/pingcap/tidb-operator/pkg/controller/tidbcluster/tidb_cluster_controller.go:129 +0xfa
github.com/pingcap/tidb-operator/pkg/controller/tidbcluster.(*Controller).worker(...)
github.com/pingcap/tidb-operator/pkg/controller/tidbcluster/tidb_cluster_controller.go:117
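For context, here is a minimal sketch of why this panics (not the actual tidb-operator source; the struct shapes are simplified from the stack trace above, assuming the standard k8s.io/client-go lister types): with -cluster-permission-pv=false the PV lister is never constructed, so the first method call on the nil lister crashes.

package volumes

import (
    corev1 "k8s.io/api/core/v1"
    corelisters "k8s.io/client-go/listers/core/v1"
)

type dependencies struct {
    // Only initialized when the operator has PV permissions
    // (-cluster-permission-pv=true); otherwise it stays nil.
    PVLister corelisters.PersistentVolumeLister
}

type podVolModifier struct {
    deps *dependencies
}

// getBoundPVFromPVC resolves the PV bound to a PVC. Calling Get on the
// nil lister is what produces the "invalid memory address or nil pointer
// dereference" panic shown above.
func (m *podVolModifier) getBoundPVFromPVC(pvc *corev1.PersistentVolumeClaim) (*corev1.PersistentVolume, error) {
    return m.deps.PVLister.Get(pvc.Spec.VolumeName)
}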

@lalitkfk (Contributor, Author) commented Dec 6, 2022

This issue is critical for us, since in production we run tidb-operator with cluster-scoped set to false and cluster-permission-pv set to false.

@csuzhangxc (Member) commented

If podVolModifier does nothing when -cluster-permission-pv=false, would that satisfy your requirements?

We could add an if c.deps.PVLister != nil check, as we do in some other places.
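
Continuing the sketch above (illustrative only; the actual fix landed via #4837 and may differ in detail), the guarded version could look like:

// Guarded version: bail out instead of dereferencing the nil lister when
// the operator runs with -cluster-permission-pv=false.
func (m *podVolModifier) getBoundPVFromPVC(pvc *corev1.PersistentVolumeClaim) (*corev1.PersistentVolume, error) {
    if m.deps.PVLister == nil {
        // No PV permissions: report no bound PV so the volume status
        // sync can proceed without panicking.
        return nil, nil
    }
    return m.deps.PVLister.Get(pvc.Spec.VolumeName)
}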

@lalitkfk (Contributor, Author) commented Dec 9, 2022

Hi @csuzhangxc, I think that should work.
