Produce darwin/arm64 binaries for v3 #4612

Closed
camilamacedo86 opened this issue Apr 28, 2022 · 16 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.
needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@camilamacedo86
Member

Is your feature request related to a problem? Please describe.
We cannot upgrade the stable go/v3 plugin in kubebuilder to kustomize v4, but we would like it to support darwin/arm64. Therefore, we are asking for v3 binaries for this architecture.

Describe the solution you'd like

Be able to use the install.sh script to also install darwin/arm64 binaries for v3
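
For reference, this is the documented install flow in question; the script detects the host OS and architecture, so (as far as I understand) it only needs a matching darwin/arm64 asset to exist on the v3 release page:

```sh
# Documented install flow: the script detects the host OS and architecture
# and downloads a matching release binary into the current directory.
# Today this fails on Apple Silicon for v3, because no darwin/arm64
# asset exists on the v3 release pages.
curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
```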

@camilamacedo86 camilamacedo86 added the kind/feature Categorizes issue or PR as related to a new feature. label Apr 28, 2022
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Apr 28, 2022
@k8s-ci-robot
Contributor

@camilamacedo86: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@KnVerey
Contributor

KnVerey commented May 4, 2022

Can you please provide more information on why you cannot upgrade to v4, and whether or not this is permanent? v4 is more than a year old now, and we do not currently have a long-term support policy for v3.

@camilamacedo86
Member Author

Hi @KnVerey,

From v3 to v4 there is a MAJOR version bump, which means breaking changes. (That said, I know you tried to keep it as backwards compatible as possible.)

Kubebuilder has stable plugins that use v3; we will provide a new alpha plugin that uses v4, so people can begin to upgrade and experiment with v4 and its new features.

But we do not wish to drop support for the stable plugins (which scaffold using kustomize v3) yet, and we want users running on Apple Silicon to still be able to use the scaffolds done with v3 and the stable versions.

To do that, we would like to have v3 binaries for this architecture. Since producing them is not a huge effort, it would be lovely if you could accept this request. I am preparing the PR for it.

@camilamacedo86
Member Author

camilamacedo86 commented May 6, 2022

Hi @KnVerey,

I was looking at this and checked the configuration from the latest commit used to build the latest v3 release, which already lists both architectures:

- amd64
- arm64

So shouldn't this asset already be generated? Why is it not on the release page? https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize%2Fv3.10.0

Would it only be a matter of adding a new Cloud Build trigger?

OR

Is the problem that the latest releases were not generated by pushing a new tag to the repo? Note that the latest v3 tag still points at https://github.com/kubernetes-sigs/kustomize/blob/v3.3.1/releasing/cloudbuild.sh (which does not include the changes to cloudbuild.sh).
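
(For anyone who wants to verify: the assets actually attached to that release can be listed with the GitHub CLI; a convenience sketch, assuming `gh` is installed.)

```sh
# Show the kustomize/v3.10.0 release, including its attached assets;
# no darwin_arm64 binary appears in the list.
gh release view kustomize/v3.10.0 --repo kubernetes-sigs/kustomize
```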

@KnVerey
Contributor

KnVerey commented May 6, 2022

I think the reason it doesn't already exist is that goreleaser itself didn't support that architecture at the time of the release in question. darwin/arm64 support first appeared in goreleaser v0.156.0, and the last v3 release used v0.155.0. We started producing darwin/arm64 binaries with kustomize v4.2, when we upgraded to goreleaser v0.172.1.

@camilamacedo86
Member Author

Hi @KnVerey,

Could we not update the goreleaser?

@KnVerey
Contributor

KnVerey commented May 6, 2022

Yes, in theory we could make some new commits to the release branch and create a new tag. But the effort/risk isn't nothing, since we've never actually tried to do this before. For one thing, I recently discovered that the cloud build wasn't using the specified tag, so we'd need to cherry-pick a version of this change. There were major internal/dependency changes between v3 and v4, and dependencies could be another source of surprises since we do not vendor. Other similar dragons could be lurking, which makes me quite unenthusiastic about attempting this.
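
For illustration only, that process would look roughly like this; a sketch, with the branch name, commit, and tag as placeholders rather than a validated plan:

```sh
# Hypothetical re-release flow for a v3 patch (all names are placeholders):
git checkout release-kustomize-v3.10        # assumed release branch name
git cherry-pick <sha-of-cloudbuild-tag-fix> # the Cloud Build fix mentioned above
# ...bump goreleaser to >= v0.156.0 and add darwin/arm64 to its config...
git tag kustomize/v3.10.1                   # hypothetical new patch tag
git push upstream release-kustomize-v3.10 kustomize/v3.10.1
```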

From v3 to v4 there is a MAJOR version bump, which means breaking changes. (That said, I know you tried to keep it as backwards compatible as possible.)

Yes, there were a few. I wasn't yet heavily involved in the project, but I know we had to drop support for some remote URL formats and changed underscored flags to use dashes. Is your project definitely affected by the specific changes that were made?

@camilamacedo86
Member Author

camilamacedo86 commented May 8, 2022

Hi @KnVerey,

We have a proposed solution for moving forward with kustomize v4. See kubernetes-sigs/kubebuilder#2583 (there you can check why we cannot simply provide kustomize v4 with the current stable plugin used to scaffold projects).

So, to allow Apple Silicon users to use the current, stable default implementation (which is the goal of this request), we would also like the kustomize v3 binary for this architecture.

This way, we move forward with kustomize v4 while still allowing users to use the current stable implementation. Note that we will need to support the current implementation for a long period, so having the kustomize v3 binary for this architecture would be very helpful for us.

Which do you think is the easier way to move forward?

  • Could we simply upgrade goreleaser on the release-3 branch and add the architecture, to see if we can move forward? (See the sketch after this list.)
  • Or should we instead fix the kustomize release process, since it has not been generating the binaries from the tags?
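
For the first option, a local dry run could de-risk the attempt before touching the release pipeline; a sketch, assuming the release branch name and a 2022-era goreleaser, and simplifying past the fact that the real release goes through releasing/cloudbuild.sh:

```sh
# Hypothetical local dry run on the v3 release branch (branch name assumed):
git checkout release-kustomize-v3.10
cd kustomize
# with goreleaser >= v0.156.0 installed and darwin/arm64 added to the config:
goreleaser build --snapshot --rm-dist
ls dist/   # check whether a darwin_arm64 binary was produced
```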

@gquillar

gquillar commented May 9, 2022

Hi @KnVerey, @camilamacedo86, we have the same issue for the ppc64le and s390x architectures. We want to support operator-sdk on those architectures, and operator-sdk uses kustomize v3.8.7. We had planned to update kustomize to v4.5.2 to get those architectures supported (operator-framework/operator-sdk#5674), but Camila pointed out the breaking-change issue.
We can consider using the new kustomize/v2-alpha plugin proposed in kubernetes-sigs/kubebuilder#2583, but we would definitely prefer to have a kustomize v3 binary supporting ppc64le/s390x.

@natasha41575
Contributor

@camilamacedo86 @gquillar Is it an option for you to build these binaries yourself from the release branches' source code? Given our limited resources, we don't have a precedent for supporting old versions and generally only cherry pick commits for security-related issues.
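
For illustration, self-building from a release tag would look roughly like this; a sketch, assuming the v3 source still compiles with a Go toolchain new enough (1.16+) to target darwin/arm64, which kustomize v3 itself predates:

```sh
# Cross-compile the kustomize CLI from a v3 tag (toolchain caveat above):
git clone https://github.com/kubernetes-sigs/kustomize.git
cd kustomize
git checkout kustomize/v3.10.0
cd kustomize   # the CLI's Go module lives in the kustomize/ subdirectory
CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 go build -o kustomize_darwin_arm64 .
```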

@camilamacedo86
Member Author

camilamacedo86 commented May 11, 2022

Hi @natasha41575, @KnVerey,

Thank you for your time.

a) Currently, the releases have not been generated from the tags (so it is tough to know what version we would be releasing ourselves). Would it be possible to fix this?
b) What would be the steps to build the binary? Could you provide the steps after checking out the tag? What commands are required? That would be very helpful.

Again, thank you all for the support and attention.

@camilamacedo86
Member Author

Hi @natasha41575, @KnVerey,

Could you please lend a hand with this one?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 27, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 26, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the triage robot's /close not-planned command above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Oct 26, 2022