Status of project and documentation #42

Closed
darwin67 opened this issue Jun 25, 2019 · 31 comments
Labels
kind/support Categorizes issue or PR as a support question. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@darwin67

darwin67 commented Jun 25, 2019

Following the announcement of v1.15.0, my understanding is that future development of cloud providers is being moved out of the Kubernetes core.

The existing in-tree code, including AWS, has been moved to https://github.com/kubernetes/legacy-cloud-providers.
That pretty much breaks the link in the note, and it also seems the in-tree providers will be completely removed within the next couple of releases.
We're currently on 1.14.3, have tested 1.15.0, and can verify that the in-tree provider still works.

But the point I wanted to make is that this repo doesn't seem to have much activity compared to the other major cloud providers.
You also don't have any sample manifests for deploying the cloud-controller-manager, and I haven't had any luck getting the AWS provider to work as an external cloud provider since I first attempted it on 1.13.
Asking on the Kubernetes Slack hasn't worked well so far either; I suspect either no one has gotten this working, or simply no one cares.
Googling and reading the docs for the CCM and the other cloud providers also got me nowhere close to a working example of this external provider.

So these are my requests:

  1. Please clarify the status of, and any plans for, this project.
  2. Please update the documentation and provide a close-to-working sample manifest for deploying this provider.

Activity in this repo for the past 6+ months has mostly been cosmetic changes; I haven't seen anything related to feature updates, bug fixes, or even documentation.

Please forgive me if my tone is off-putting; I might just be paranoid for no good reason. But if you can at least clarify point 1 for me, that will help me decide how to proceed: either fork this repo and work on it on my own, or take some other path.

FYI:
We don't use EKS and have no plans to do so in the future.

Thanks!

/triage support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Jun 25, 2019
@selslack

This repo is kinda dead, yea.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 10, 2019
@selslack

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 21, 2019
@particledecay

How fitting that @selslack comments that this repo is kinda dead, and the next comment is the bot adding a stale label lol...

Is there any update on this? I've had the same trouble as @darwin67 getting the external cloud provider working on k8s 1.17, since in-tree is deprecated now. The Azure and OpenStack cloud provider repos actually have documentation on getting those working, but nothing for this one.

Has anyone out there gotten this project working as an external cloud provider in recent versions of Kubernetes?
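For reference, the common prerequisite for any external cloud provider (per the generic cloud-controller-manager docs) is to start the kubelets with --cloud-provider=external and then run the provider's cloud-controller-manager inside the cluster. Below is a minimal sketch of the kubelet side, assuming a kubeadm-based cluster and the v1beta2 kubeadm config API; adjust for your own setup.

```yaml
# Hypothetical kubeadm fragment: the cloud-provider=external setting is the
# essential part; the rest of the cluster configuration is omitted.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    # The kubelet taints the node with node.cloudprovider.kubernetes.io/uninitialized
    # until an external cloud-controller-manager initializes it.
    cloud-provider: external
```

Without this, the kubelet either keeps using the in-tree provider or runs with no provider at all, and the external controller manager never takes over node initialization.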

@brookssw

Would love to see action/support, or at least a definitive response from those who manage this repo. I was able to get the cloud controller and EBS driver working after hammering my head against it for a while and building the cloud controller image myself, but it was far from pleasant, and the lack of support/responsiveness makes me fear for the future of this configuration. Is Amazon abandoning Kubernetes, trying to force everyone onto EKS, or something else entirely?

@nckturner
Contributor

This repository is the right location for the external cloud controller manager, and I'll be investing much more time in it this year. At some point, likely this year, we will migrate the source for the AWS cloud provider from upstream to this repo. At that point, development will shift from upstream to here. For now, we are importing the upstream cloud provider and relying on bug fixes upstream. That being said, significant work needs to be done this year on testing and documentation in this repository to make it usable, and that's one of my highest-priority goals.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 12, 2020
@ncdc
Member

ncdc commented Apr 13, 2020

@nckturner 👋! I'm wondering if you have any more updates since your last comment a few months ago? Thanks!

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 13, 2020
@StrongMonkey

Same here, looking for clear documentation. Has anyone figured out how to deploy this as a Kubernetes DaemonSet as instructed in https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/#examples?
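In case it helps anyone following that page, here is a minimal, untested sketch of what such a DaemonSet could look like. The flags (--cloud-provider=aws, --leader-elect, --use-service-account-credentials) and the uninitialized-node toleration follow the generic CCM docs; the image name, entrypoint, and the cloud-controller-manager service account / RBAC are placeholders you would need to supply yourself (e.g. a self-built image, as discussed further down in this thread).

```yaml
# Hypothetical sketch only: image, entrypoint and RBAC wiring are assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: aws-cloud-controller-manager
  namespace: kube-system
  labels:
    k8s-app: aws-cloud-controller-manager
spec:
  selector:
    matchLabels:
      k8s-app: aws-cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: aws-cloud-controller-manager
    spec:
      serviceAccountName: cloud-controller-manager  # needs ClusterRole bindings for the CCM controllers
      hostNetwork: true
      nodeSelector:
        node-role.kubernetes.io/master: ""          # run only on control-plane nodes
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: node.cloudprovider.kubernetes.io/uninitialized
          value: "true"
          effect: NoSchedule                        # must be schedulable before nodes are initialized
      containers:
        - name: aws-cloud-controller-manager
          image: example.com/aws-cloud-controller-manager:self-built  # placeholder: build and push your own
          args:                                     # assumes the image's entrypoint is the CCM binary
            - --cloud-provider=aws
            - --leader-elect=true
            - --use-service-account-credentials=true
```

The uninitialized toleration is the important detail: kubelets started with --cloud-provider=external taint their nodes until a cloud-controller-manager initializes them, so the CCM pods themselves must tolerate that taint in order to be scheduled at all.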

@nckturner
Contributor

Hey, thanks for your interest! We are working on investing in documentation and publishing container images, but we're always looking for help! If you're interested in contributing, please let us (myself, @andrewsykim, and @justinsb) know as we build out the documentation!

@andrewsykim
Member

/assign

@andrewsykim
Member

You also don't have any sample manifests for deploying the cloud-controller-manager, and I haven't had any luck getting the AWS provider to work as an external cloud provider since I first attempted it on 1.13.

@darwin67 regarding sample manifests, we added some in #93. Until we have a public image repo, you'll have to build the image yourself, though.

@darwin67
Author

darwin67 commented May 1, 2020

@andrewsykim thanks for the update. Great to see you joining as the owner, and I hope this project will be getting more updates.
I'm no longer at the company where I filed this request, so I don't really have the k8s clusters to provide feedback anymore, but I'm happy to keep the issue open until the request is resolved.

@andrewsykim
Member

Looking for some feedback on what the documentation for this project should look like; please comment on #102 if you have thoughts/opinions.

@sargun

sargun commented May 28, 2020

Is there any plan to hoist the legacy code from https://github.com/kubernetes/legacy-cloud-providers into this repo, so that the code can be edited in a central place? Alternatively, would you be unhappy if someone else did that, @andrewsykim? I realize it's "ugly", but it seems like it'd unblock some contributions?

@andrewsykim
Member

andrewsykim commented May 28, 2020

I may have missed some context here. Currently the "central place" is https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/legacy-cloud-providers/aws. That is consumed by the in-tree provider and eventually here via k8s.io/legacy-cloud-providers. This ensures we only have to maintain one provider at the moment. In the near future we will cut the tie to legacy-cloud-providers, port the provider into this repo, and develop it here. But that can only happen once the in-tree providers are removed.

Are you proposing to fork the current provider into this repo and develop the two separately?

@sargun

sargun commented May 28, 2020

@andrewsykim correct. The fact that any functionality change has to be made in that repo, and then this repo has to be updated, is messy. It also makes maintaining our own patch set more difficult, as that other repo has a bunch of unrelated stuff.

IMHO, it would be easier to declare bankruptcy on the existing repo, say it's EOL, have people move to binaries from this repo, and hoist the relevant AWS code into this repo. There are still aspects of legacy-cloud-providers we might want to use, like configuration, but I don't see any reason to keep the AWS-specific functionality there.

@andrewsykim
Member

andrewsykim commented May 28, 2020

We need to be careful about breaking existing behavior. If we branch off, we could lose bug fixes or accidentally break compatibility for users migrating from in-tree to out-of-tree.

I would be in favor of starting a v2 provider on a clean slate and redesigning it from the ground up (i.e. enabled with --cloud-provider=aws/v2). It would only be supported for new clusters. We can take the good parts of the existing provider and replace the bad parts. Do folks have an appetite for this, as opposed to building on top of the existing provider?

@sargun

sargun commented May 28, 2020

I would much rather see an incremental approach to a v2. We have immediate interest in features such as being able to make the node name the i-, or adding EC2 health-check info to the node conditions; we're not looking to scorch the earth.

I can put together a PR proposal, if you want. As far as I know, this project has no official releases yet. We could do this and release a 0.0.1-alpha, or similar.

@andrewsykim
Member

I would much rather see an incremental approach to a v2. We have immediate interest in features such as being able to make the node name the i-, or adding EC2 health-check info to the node conditions; we're not looking to scorch the earth.

This is totally fair, but many of the common feature requests from users, like the node name change, are very difficult to implement without breaking existing clusters. The migration semantics get complicated very quickly. Starting on a clean slate here could possibly be less work overall.

I can put together a PR proposal, if you want. As far as I know, this project has no official releases yet. We could do this and release a 0.0.1-alpha, or similar.

Sure, I would be open to this and we can continue discussions there. Worth noting that we will likely cut an alpha version soon; we were just blocked on getting our GCR registry set up for a while (kubernetes/k8s.io#859).

@TBBle

TBBle commented Jun 13, 2020

Would it make sense to slurp over the existing code from https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/legacy-cloud-providers/aws, and keep a v1 branch from which changes made here are replicated over to there until such time as "there" is removed? That would allow migrating discussion and feature development for the AWS cloud provider here, since https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/legacy-cloud-providers has a pretty clear "Do not add new features here" note, which we're gleefully ignoring for things like kubernetes/kubernetes#79523. Edit: Ignore this, I just saw #61, which moved the other way.

It's not terribly clear to me what the timeline is for removing in-tree provider support, but clearly this project needs to be up and running, and probably in wide use, before that can happen for the in-tree AWS support.

I guess it depends on how different you envisage v1 and v2 being, from either a UX or code-design standpoint: whether cloud-provider-aws v1 and v2 ever need to be co-developed, or whether that division can be "in-tree"/"out-of-tree" forever.

@andrewsykim
Member

I guess it depends on how different you envisage v1 and v2 being, from either a UX or code-design standpoint: whether cloud-provider-aws v1 and v2 ever need to be co-developed, or whether that division can be "in-tree"/"out-of-tree" forever.

My thinking here is: v1 (the current implementation) runs both in-tree and out-of-tree with almost identical behavior. v2 can be a complete rewrite from scratch where we take the good from v1 and redo the bad.

@andrewsykim
Member

andrewsykim commented Jun 15, 2020

FYI folks, we cut the first alpha release: https://github.com/kubernetes/cloud-provider-aws/releases/tag/v1.18.0-alpha.0

Please try it out and provide feedback; an example manifest is linked in the release notes.

@TBBle

TBBle commented Jun 24, 2020

Another relevant question: where should AWS cloud provider issues be lodged? The code lives in https://github.com/kubernetes/kubernetes/ but the code ownership and future publishing vest here (I guess?). I'm noticing bug reports in both trackers, sometimes for the same issue.

@sargun sargun mentioned this issue Jun 25, 2020
@sargun

sargun commented Jun 25, 2020

@andrewsykim See here: #111

I still do not think "starting from scratch" is a great idea....

@andrewsykim
Member

I still do not think "starting from scratch" is a great idea....

Starting a v2 provider from scratch wouldn't mean we abandon the existing one. There are some feature requests for the legacy provider, like the node and ELB name changes, that are just too difficult to implement without breaking existing clusters. We can maintain both providers for the foreseeable future.

@nckturner
Contributor

@TBBle I think either works. Using this repo would probably make them easier to find and would fit better with the future goals of the project, but I doubt we will be able to prevent others from filing issues at k/k, so we'll have to be cognizant of both.

@sargun I appreciate your dilemma. I'm open to all options, but we really do have to be careful about breaking existing users. That being said, we need a way to allow contributions that doesn't cause excessive friction. I'm guessing you've submitted your patches upstream at some point and they stagnated; could you link any PRs you have open? If not, let's at least start by opening PRs against k/k so we can discuss them and decide between a v2, copying code over into this repo, or merging into upstream.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 22, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 21, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
