Getting a cluster-scoped resource with the dynamic client errors when a namespace is set #111248

Open
0xff-dev opened this issue Jul 19, 2022 · 9 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@0xff-dev
Contributor

What happened?

ns := os.Getenv("ns")
crs, err := dc.Resource(schema.GroupVersionResource{Group: "rbac.authorization.k8s.io", Version: "v1", Resource: "clusterroles"}).Namespace(ns).List(context.TODO(), v1.ListOptions{})

The namespace is read from an environment variable. When it is unset, the call returns all ClusterRoles; but when the `ns` environment variable is set, the call fails with the error `the server could not find the requested resource`.
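
What is likely happening on the wire, based on how the dynamic client builds request paths: setting a namespace inserts a `namespaces/<ns>` path segment even for a cluster-scoped resource, and the API server has no such route, so it answers 404:

GET /apis/rbac.authorization.k8s.io/v1/clusterroles                     <- ns unset: 200, all ClusterRoles
GET /apis/rbac.authorization.k8s.io/v1/namespaces/default/clusterroles  <- ns=default: 404 Not Found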

What did you expect to happen?

When getting ClusterRoles with kubectl, the list is returned normally even when a namespace argument is added. Should the dynamic client likewise detect that the resource is cluster-scoped and ignore the namespace? (A client-side workaround sketch follows the kubectl output below.)

[root@root ~]# kubectl get clusterrole -ndefault
NAME                                                                   CREATED AT
admin                                                                  2021-10-09T05:31:00Z
aggregate-olm-edit                                                     2021-10-09T05:35:40Z
aggregate-olm-view                                                     2021-10-09T05:35:40Z

How can we reproduce it (as minimally and precisely as possible)?

package main

import (
	"context"
	"flag"
	"fmt"
	"log"
	"os"
	"path/filepath"

	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

var (
	kubeConfig *string
	cfg        *rest.Config
	err        error
)

func init() {
	if home := homedir.HomeDir(); home != "" {
		kubeConfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
	} else {
		kubeConfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
	}
	flag.Parse()

	// *kubeConfig = clusterConfigPath
	log.Printf("kubeconfig[%s]", *kubeConfig)

	cfg, err = clientcmd.BuildConfigFromFlags("", *kubeConfig)
	if err != nil {
		log.Fatalf("build config from flags error: %s", err)
	}
}

func main() {
	dc, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ns := os.Getenv("ns")
	log.Println("ns: ", ns)

	log.Println("try to get clusterroles...")
	crs, err := dc.Resource(schema.GroupVersionResource{Group: "rbac.authorization.k8s.io", Version: "v1", Resource: "clusterroles"}).Namespace(ns).List(context.TODO(), v1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}

	for _, cr := range crs.Items {
		log.Printf("name: %s\n", cr.GetName())
	}

	pods, err := dc.Resource(schema.GroupVersionResource{Group: "", Version: "v1", Resource: "pods"}).Namespace(ns).List(context.TODO(), v1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}

	for _, p := range pods.Items {
		fmt.Printf("[POD] name: %s -- namespace: %s\n", p.GetName(), p.GetNamespace())
	}
}
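
To reproduce, assuming the kubeconfig at the default path: with `ns` unset the program lists every ClusterRole and then every pod across all namespaces (an empty string passed to Namespace() omits the namespace segment, so the request is cluster-wide), while `ns=default go run main.go` fails at the clusterroles List call with `the server could not find the requested resource` before the pod listing is reached.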

Anything else we need to know?

k8s version: 1.20
client-go version: v0.24.3
I haven't read kubectl's code, so I don't know how it handles this case.

Kubernetes version

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.8", GitCommit:"76863b6264ea4ca3f9545e7597f82900f04b1eba", GitTreeState:"clean", BuildDate:"2021-06-28T08:04:52Z", GoVersion:"go1.15.13", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.8", GitCommit:"76863b6264ea4ca3f9545e7597f82900f04b1eba", GitTreeState:"clean", BuildDate:"2021-06-28T07:46:57Z", GoVersion:"go1.15.13", Compiler:"gc", Platform:"linux/amd64"}

Cloud provider

OS version

# On Linux:
$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

$ uname -a
Linux 21v-tc-zhangsh-team-1 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

Install tools

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, ...) and versions (if applicable)

@0xff-dev 0xff-dev added the kind/bug Categorizes issue or PR as related to a bug. label Jul 19, 2022
@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jul 19, 2022
@0xff-dev
Contributor Author

/sig api-machinery

@k8s-ci-robot k8s-ci-robot added sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jul 19, 2022
@0xff-dev
Contributor Author

/assign

@leilajal
Contributor

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jul 19, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 17, 2022
@vaibhav2107
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 21, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 19, 2023
@vaibhav2107
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 29, 2023
@k8s-triage-robot

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. and removed triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Mar 28, 2024
@seans3
Contributor

seans3 commented Mar 28, 2024

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Mar 28, 2024