
Add support for on-premise Kubernetes environment #469

Closed
wants to merge 12 commits into from

Conversation

davidshtian

Is this a bug fix or adding new feature?
This adds a feature: support for deploying the Amazon EFS CSI Driver in an on-premise Kubernetes environment. It is related to issue #468.

What is this PR about? / Why do we need it?
On-premise Kubernetes users would also like to use Amazon EFS as a shared file system, but the driver currently has several issues when deployed outside AWS, such as its dependency on the EC2 instance metadata service.

What testing is done?
I've tested this in an actual on-premise Kubernetes environment.

Decouple from the metadata service and use environment variables instead.
Check whether it is running in an on-premise environment; otherwise fall back to ECS.
Fix syntax error.
Add comments on configuring on-premise environments via environment variables.
Add a doc link to the ON-PREMISE.md instructions.
Restore default settings.
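As a rough illustration of the environment-variable approach these commits describe, here is a minimal Go sketch. The variable names (EFS_CSI_INSTANCE_ID, AWS_AVAILABILITY_ZONE) are assumptions for the sketch, not necessarily the names used in this PR:

```go
package metadata

import (
	"fmt"
	"os"
)

// Metadata holds the values the driver normally obtains from the EC2
// instance metadata service.
type Metadata struct {
	InstanceID       string
	Region           string
	AvailabilityZone string
}

// fromEnv builds Metadata from environment variables so the driver can run
// on nodes with no access to EC2 instance metadata (for example on-premise).
// The variable names here are illustrative, not necessarily those in this PR.
func fromEnv() (*Metadata, error) {
	m := &Metadata{
		InstanceID:       os.Getenv("EFS_CSI_INSTANCE_ID"),
		Region:           os.Getenv("AWS_REGION"),
		AvailabilityZone: os.Getenv("AWS_AVAILABILITY_ZONE"),
	}
	if m.Region == "" {
		return nil, fmt.Errorf("AWS_REGION must be set when instance metadata is unavailable")
	}
	return m, nil
}
```

In a deployment this would typically be wired up by setting the environment variables on the driver's DaemonSet containers.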
@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jun 2, 2021
@k8s-ci-robot
Contributor

Welcome @davidshtian!

It looks like this is your first PR to kubernetes-sigs/aws-efs-csi-driver 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/aws-efs-csi-driver has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @davidshtian. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jun 2, 2021
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: davidshtian
To complete the pull request process, please assign justinsb after the PR has been reviewed.
You can assign the PR to them by writing /assign @justinsb in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Jun 2, 2021
@wongma7
Contributor

wongma7 commented Jun 2, 2021

/ok-to-test

Over at https://github.com/kubernetes-sigs/aws-ebs-csi-driver we have been working on decoupling the driver from instance metadata as well, and I was planning to port the solution here. The solution over there is to check, in order:

  1. EC2 instance metadata
  2. Kubernetes API (fall back to this because the Node object has the instance ID, region, and zone in spec.providerID)

Do your Nodes have spec.providerID set? If so, the solution could work here too.

Still, there is value in adding environment variable overrides in case spec.providerID is not set or Kubernetes API access is blocked from the CSI node pods.

The order could be something like:

  1. Environment variable override (this PR)
  2. ECS/Fargate instance metadata
  3. EC2 instance metadata
  4. Kubernetes API (the Node object has the instance ID, region, and zone in spec.providerID)

Let me know what you think.
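For illustration, here is a minimal Go sketch of what such a fallback chain could look like (the Source type, the chaining helper, and the providerID parsing are assumptions made for this sketch, not the actual aws-ebs-csi-driver or aws-efs-csi-driver code):

```go
package metadata

import (
	"fmt"
	"strings"
)

// Metadata mirrors the struct from the sketch above: the values the driver
// otherwise reads from EC2 instance metadata.
type Metadata struct {
	InstanceID       string
	Region           string
	AvailabilityZone string
}

// Source is any function that can produce Metadata, e.g. environment
// variables, the ECS/Fargate task metadata endpoint, EC2 IMDS, or the
// Kubernetes API via the Node's spec.providerID.
type Source func() (*Metadata, error)

// getMetadata tries each source in order and returns the first that
// succeeds, matching the proposed order: env override, ECS/Fargate, EC2,
// Kubernetes API.
func getMetadata(sources ...Source) (*Metadata, error) {
	var lastErr error
	for _, source := range sources {
		m, err := source()
		if err == nil {
			return m, nil
		}
		lastErr = err
	}
	return nil, fmt.Errorf("could not retrieve metadata from any source: %w", lastErr)
}

// parseProviderID extracts the availability zone and instance ID from a
// providerID of the form "aws:///<availability-zone>/<instance-id>", which
// is how the AWS cloud provider populates spec.providerID. On unmanaged
// on-premise nodes this field is typically empty, which is why an
// environment-variable override earlier in the chain is still useful.
func parseProviderID(providerID string) (zone, instanceID string, err error) {
	parts := strings.Split(strings.TrimPrefix(providerID, "aws:///"), "/")
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", fmt.Errorf("unexpected providerID format: %q", providerID)
	}
	return parts[0], parts[1], nil
}
```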

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jun 2, 2021
@k8s-ci-robot
Contributor

Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA.

It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot k8s-ci-robot added cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. and removed cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Jun 3, 2021
@davidshtian
Author

> (quoting @wongma7's comment above on the proposed metadata lookup order)

Hi~ The check order looks great! I've confirmed that spec.providerID is not set in the on-premise environment; it appears to be set by a Cloud Controller Manager. Thanks~

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Jun 4, 2021
@davidshtian
Author

/assign @justinsb

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 8, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 8, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to this:

> (quoting the @k8s-triage-robot /close comment above)

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@CNLHC

CNLHC commented May 21, 2022

> (quoting @wongma7's comment above on the proposed metadata lookup order)

Hello @wongma7
It seems this feature has not made progress yet. Are you still working on it? If you are too busy, I would be glad to sign the CLA and develop it.

@jonathanrainer
Contributor

> (quoting @wongma7's proposed lookup order and @CNLHC's question above)

Hi, I have a PR open for this in #681. Would very much appreciate a review :)
