Flaky behaviour using custom slice types and listType marker #435

Closed
avorima opened this issue Apr 30, 2020 · 23 comments · Fixed by #692

Comments

@avorima

avorima commented Apr 30, 2020

controller-gen will sometimes fail to create the CRD manifests when a custom slice type (e.g. type Foos []Foo) is used together with the listType and listMapKey markers.

To reproduce this create a sample project with kubebuilder v2.3.1:

kubebuilder init --repo temp
kubebuilder create api --group ship --version v1beta1 --kind Frigate

and then add the following to the api/v1beta1/frigate_types.go file:

// FrigateStatus defines the observed state of Frigate
type FrigateStatus struct {
        // +kubebuilder:validation:Type=array
        // +optional
        // +listType=map
        // +listMapKey=type
        Conditions Conditions `json:"conditions,omitempty"`
}

type Conditions []Condition

type Condition struct {
        Type   string `json:"type,omitempty"`
        Status string `json:"status,omitempty"`
}

Running make manifests will then on occasion yield the following errors:

/tmp/test/api/v1beta1/frigate_types.go:35:2: must apply listType to an array
/tmp/test/api/v1beta1/frigate_types.go:35:2: must apply listMapKey to an associative-list

If the +kubebuilder:validation:Type=array marker is above the type definition it fails consistently.
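That is, this placement (a reconstruction from the sentence above, not code copied from the report) fails on every run:

// +kubebuilder:validation:Type=array
type Conditions []Condition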

I have verified that this occurs with both v0.2.5 and v0.3.0.

The issue seems to stem from the fact that both the listType and kubebuilder:validation:Type markers implement ApplyFirst to ensure they are applied before all other markers. But since applyMarkers loops over a map, the order is random.
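To illustrate the mechanism, here is a minimal standalone Go sketch (not the actual controller-tools code): when two markers both claim to apply first and the applier iterates over a map, Go's randomized map iteration order picks the winner, whereas sorting the keys first would make the order deterministic.

package main

import (
	"fmt"
	"sort"
)

func main() {
	// Stand-ins for the two markers that both implement ApplyFirst.
	markers := map[string]bool{
		"kubebuilder:validation:Type": true,
		"listType":                    true,
	}

	// Go deliberately randomizes map iteration order, so which marker is
	// applied first changes between runs -- the source of the flakiness.
	for name := range markers {
		fmt.Println("apply:", name)
	}

	// A deterministic alternative: sort the keys before applying.
	names := make([]string, 0, len(markers))
	for name := range markers {
		names = append(names, name)
	}
	sort.Strings(names)
	for _, name := range names {
		fmt.Println("apply (stable):", name)
	}
}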

avorima changed the title from “Flaky behaviour using custom slice types. listType and listMapKey” to “Flaky behaviour using custom slice types, listType and listMapKey” on May 5, 2020
avorima changed the title from “Flaky behaviour using custom slice types, listType and listMapKey” to “Flaky behaviour using custom slice types and listType marker” on May 6, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Aug 4, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Sep 3, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.


@zx8

zx8 commented Sep 15, 2021

/reopen

@k8s-ci-robot
Contributor

@zx8: You can't reopen an issue/PR unless you authored it or you are a collaborator.


@avorima
Author

avorima commented Sep 16, 2021

This issue is still present in 0.6.0.

/reopen

@k8s-ci-robot
Contributor

@avorima: Reopened this issue.


@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.


@avorima
Author

avorima commented Oct 19, 2021

/reopen

@k8s-ci-robot
Contributor

@avorima: Reopened this issue.


k8s-ci-robot reopened this on Oct 19, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.


@avorima
Author

avorima commented Nov 18, 2021

/reopen
/remove-lifecycle-rotten

@k8s-ci-robot
Contributor

@avorima: Reopened this issue.


k8s-ci-robot reopened this on Nov 18, 2021
@avorima
Author

avorima commented Nov 18, 2021

/remove-lifecycle-rotten

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.


@chrischdi
Member

chrischdi commented Jun 14, 2022

/reopen

The issue is still reproducible on the latest commit (529c857f74b225412836567200cd0ae003fbb3b3).

In my case I just want to add the two markers +listType=map and +listMapKey=id so the slice can be used with Server-Side Apply.

In fact, when using a custom slice type it is not even possible to add the two markers.

The following variant is simply ignored by controller-gen:

type BarSpec struct {
	Foos Foos `json:"foos,omitempty"`
}

// +listType=map
// +listMapKey=id
type Foos []Foo

type Foo struct {
	ID    string `json:"id"`
	Other string `json:"other"`
}

The following variant leads to an error similar to the one in the first comment when running make manifests:

type BarSpec struct {
	// +listType=map
	// +listMapKey=id
	Foos Foos `json:"foos,omitempty"`
}

type Foos []Foo

type Foo struct {
	ID    string `json:"id"`
	Other string `json:"other"`
}

The error:

❯ make manifests
/tmp/test-ct/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/tmp/test-ct/api/v1beta1/bar_types.go:30:2: must apply listType to an array, found
/tmp/test-ct/api/v1beta1/bar_types.go:30:2: must apply listMapKey to an array, found
Error: not all generators ran successfully
run `controller-gen rbac:roleName=manager-role crd webhook paths=./... output:crd:artifacts:config=config/crd/bases -w` to see all available markers, or `controller-gen rbac:roleName=manager-role crd webhook paths=./... output:crd:artifacts:config=config/crd/bases -h` for usage
make: *** [manifests] Error 1

The only working version currently is:

type BarSpec struct {
	// +listType=map
	// +listMapKey=id
	Foos []Foo `json:"foos,omitempty"`
}

type Foo struct {
	ID    string `json:"id"`
	Other string `json:"other"`
}
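
For reference, once the markers are picked up, the generated schema for foos should look roughly like this (a sketch of the expected shape using the standard x-kubernetes-list-type and x-kubernetes-list-map-keys OpenAPI extensions, not verbatim controller-gen output):

foos:
  items:
    properties:
      id:
        type: string
      other:
        type: string
    required:
    - id
    - other
    type: object
  type: array
  x-kubernetes-list-map-keys:
  - id
  x-kubernetes-list-type: map

This is the shape that lets Server-Side Apply merge list entries by their id key instead of replacing the whole list.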

@chrischdi
Member

/remove-lifecycle rotten

k8s-ci-robot removed the lifecycle/rotten label on Jun 14, 2022
@chrischdi
Member

/reopen

@k8s-ci-robot
Contributor

@chrischdi: Reopened this issue.

