
[Feature] Add --publish/--add-port flag for port mappings #32

Merged
merged 4 commits on May 9, 2019

Conversation

andyz-dev
Contributor

@andyz-dev andyz-dev commented May 7, 2019

Add support for the --publish option (--add-port is an alias for this option).

With these patches, we can create a cluster with the following command line:

$) bin/k3d create -n test --publish 8080:8080 --publish 9090:30090/udp --workers 1

$) docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8e8fd4582cfa rancher/k3s:v0.5.0 "/bin/k3s agent" 6 seconds ago Up 5 seconds 0.0.0.0:8081->8080/tcp, 0.0.0.0:9091->30090/udp k3d-test-worker-0
c08480a9efef rancher/k3s:v0.5.0 "/bin/k3s server --h…" 7 seconds ago Up 6 seconds 0.0.0.0:6443->6443/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:9090->30090/udp k3d-test-server

I have tested using this feature to deploy both Kubernetes pods using hostPort and k8s services using node port.

This class holds the parsed results of the --publish options. Its
methods help create clones of the class with mutations applied.

Currently, there are two methods: Offset() changes the host ports by a
fixed amount. Addport() adds one additional port to the class.
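For illustration, here is a minimal Go sketch of the structure this commit message describes; the type name, fields, and method signatures are assumptions for readability, not the exact code added in this PR.

```go
// Minimal illustrative sketch only: names, fields, and signatures are
// assumptions, not the actual code from this PR.
package ports

import "strconv"

// PublishedPorts holds the parsed results of the --publish options,
// keyed by host port, with "containerPort/protocol" as the value.
type PublishedPorts struct {
	bindings map[int]string
}

// Offset returns a clone with every host port shifted by a fixed amount,
// leaving the original value untouched.
func (p PublishedPorts) Offset(offset int) PublishedPorts {
	clone := PublishedPorts{bindings: make(map[int]string, len(p.bindings))}
	for hostPort, target := range p.bindings {
		clone.bindings[hostPort+offset] = target
	}
	return clone
}

// AddPort returns a clone with one additional host->container mapping.
func (p PublishedPorts) AddPort(hostPort, containerPort int, protocol string) PublishedPorts {
	clone := PublishedPorts{bindings: make(map[int]string, len(p.bindings)+1)}
	for hp, target := range p.bindings {
		clone.bindings[hp] = target
	}
	clone.bindings[hostPort] = strconv.Itoa(containerPort) + "/" + protocol
	return clone
}
```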
@iwilltry42 iwilltry42 changed the title Publish [Feature] Add --publish/--add-port flag for port mappings May 7, 2019
@iwilltry42 iwilltry42 added the enhancement label May 7, 2019
@iwilltry42 iwilltry42 self-requested a review May 7, 2019 20:24
@goffinf

goffinf commented May 7, 2019

Looks good. Did you decide not to incorporate the concept of ‘role’ for now?

I see you have tested host port and nodeport, but is it possible to define an Ingress too (I don’t tend to use the other two approaches much)?

I’m interested in using k3d as a local development environment for my team, and for that to work I would like to simulate our hosted environment as closely as possible so we can use similar (or same) objects and automation.

@andyz-dev
Contributor Author

Looks good.

Thanks for looking it over and for the review comments.

Did you decide not to incorporate the concept of ‘role’ for now?

Yes, I kept it out of this pull request to keep the code simpler and easier to review.

Beyond this pull request, I like the 'role' concept in general and believe we should be able to add support for it without many changes to the current code, once it is fully specced out.

I see you have tested host port and nodeport, but is it possible to define an Ingress too (I don’t tend to use the other two approaches much)?

My immediate use case requires host port support, which is why it was tested first.

I think k3s has picked Traefik as the ingress controller. I am currently not a Traefik user. I'd appreciate feedback from current Traefik (or other ingress controller) users on what the gaps are, if any.

I believe the basics of delivering TCP/UDP traffic into the k3s cluster are here. More testing would be very helpful.

I’m interested in using k3d as a local development environment for my team, and for that to work I would like to simulate our hosted environment as closely as possible so we can use similar (or same) objects and automation.

It is good to keep the bar high for k3d / k3s :-). What's still missing for your team besides ingress support? If it is a long list, what are the top three items on the list?

@iwilltry42
Member

Thanks for the PR @andyz-dev, on first view it looks very clean 👍
I didn't check it in detail yet, but I also worked on a solution in parallel.

Just like @goffinf , I was wondering about the node roles, since I think that some people might set up fairly large clusters with k3d and then the port mapping might become an issue, where sometimes you wouldn't even need to open the port on each node.
BTW: Good job with the offset! I was reading it and was like "damn right, that's all not completely straight forward" 😆

My draft includes regex checking of the passed portmap string and then splitting it into the different parts.
The syntax of a portmap would be an extended docker syntax: ip:hostPort:containerPort/protocol@node, where only containerPort is mandatory and @node can be repeated multiple times.
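(For illustration, a hedged Go sketch of how such a regex check could look; the pattern and function names are assumptions, not necessarily the draft being referred to.)

```go
// Sketch only: parses "ip:hostPort:containerPort/protocol@node", where only
// containerPort is mandatory and "@node" may be repeated; names are illustrative.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var portmapRE = regexp.MustCompile(
	`^(?:(?P<ip>\d{1,3}(?:\.\d{1,3}){3}):)?` + // optional host IP
		`(?:(?P<hostPort>\d+):)?` + // optional host port
		`(?P<containerPort>\d+)` + // mandatory container port
		`(?:/(?P<protocol>tcp|udp))?` + // optional protocol, tcp by default
		`(?P<nodes>(?:@[\w.-]+)*)$`) // zero or more @node suffixes

func parsePortmap(spec string) (map[string]string, []string, error) {
	match := portmapRE.FindStringSubmatch(spec)
	if match == nil {
		return nil, nil, fmt.Errorf("invalid portmap %q", spec)
	}
	parts := map[string]string{}
	for i, name := range portmapRE.SubexpNames() {
		if name != "" {
			parts[name] = match[i]
		}
	}
	var nodes []string
	if parts["nodes"] != "" {
		nodes = strings.Split(strings.TrimPrefix(parts["nodes"], "@"), "@")
	}
	return parts, nodes, nil
}

func main() {
	parts, nodes, _ := parsePortmap("0.0.0.0:8080:80/tcp@k3d-test-worker-0")
	fmt.Println(parts["hostPort"], parts["containerPort"], parts["protocol"], nodes)
}
```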
Would you rather like me to update this PR or create a follow-up PR once yours got merged?

@andyz-dev
Contributor Author

Thanks for the PR @andyz-dev, on first view it looks very clean 👍
I didn't check it in detail yet, but I also worked on a solution in parallel.

Thank you for the review.

Just like @goffinf , I was wondering about the node roles, since I think that some people might set up fairly large clusters with k3d and then the port mapping might become an issue, where sometimes you wouldn't even need to open the port on each node.

Agreed. I did not include the "node roles" support in order to keep the pull request easier to review.
Using "offset" as the default is the "least typing" option for developers who want to quickly bring up a (small) cluster. I suspect this is the common use case for k3d. I agree with you that node roles will support larger clusters better.

BTW: Good job with the offset! I was reading it and was like "damn right, that's all not completely straight forward" 😆

My draft includes regex checking of the passed portmap string and then splitting it into the different parts.
The syntax of a portmap would be an extended docker syntax: ip:hostPort:containerPort/protocol@node, where only containerPort is mandatory and @node can be repeated multiple times.

Sounds reasonable to me.

Would you rather like me to update this PR or create a follow-up PR once yours got merged?

I think it is better to merge this PR -- it is already more complex than most of the pull requests for this project. Then we can work on follow-up PRs against it. Even if we end up making a release before your PR is merged in, it is still OK, since your PR can be viewed as adding capabilities and being backward compatible. WDYT?

@iwilltry42
Member

I think it is better to merge this PR -- it is already more complex than most of the pull requests for this project. Then we can work on follow-up PRs against it. Even if we end up making a release before your PR is merged in, it is still OK, since your PR can be viewed as adding capabilities and being backward compatible.

Totally fine for me 👍 Let me do a full review and maybe wait for another review, then we can merge it 👍

cli/container.go: review comment (outdated, resolved)
Member

@iwilltry42 iwilltry42 left a comment


LGTM!
Should we already add a deprecation warning to the --port/-p flag, saying that it's going to change to have the functionality of --publish in the next major release? 🤔

@andyz-dev
Contributor Author

LGTM!
Should we already add a deprecation warning to the --port/-p flag, saying that it's going to change to have the functionality of --publish in the next major release? 🤔

I think it is better to add it in the same PR as the --api-port change. In that PR, we should also mention that '-p' will become the shorthand for --publish.

Collaborator

@zeerorg zeerorg left a comment


Kubectl wasn't able to connect to the cluster after I started a server.
Error: Unable to connect to the server: EOF

Maybe because --https-listen-port is not forwarded correctly.

@iwilltry42
Member

Kubectl wasn't able to connect to the cluster after I started a server.
Error: Unable to connect to the server: EOF

Maybe because --https-listen-port is not forwarded correctly.

Might that be because the kubeconfig file has port 6443 in it, but you forwarded to 7666? 🤔

@zeerorg
Collaborator

zeerorg commented May 8, 2019

Might that be because the kubeconfig file has port 6443 in it, but you forwarded to 7666? 🤔

The kubeconfig points to port 7666.

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1RENDQWN5Z0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFqTVJBd0RnWURWUVFLRXdkck0zTXQKYjNKbk1ROHdEUVlEVlFRREV3WnJNM010WTJFd0hoY05NVGt3TlRBNE1EY3lPREV3V2hjTk1qa3dOVEExTURjeQpPREV3V2pBak1SQXdEZ1lEVlFRS0V3ZHJNM010YjNKbk1ROHdEUVlEVlFRREV3WnJNM010WTJFd2dnRWlNQTBHCkNTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFETkRMVVZiQ2FjSVQ5ZXdJTVVvZVNGaHREUjFpYi8KRkR6dUlyL2IzY0Z5U2FEQkE3NGFSYnBzZXpYRFVhYUM2aUxpZzg2WmVXN1UwbGphQTlNam5qLzc0UnZMcXdjZgo4RU9hU05oMklxbEUzQkJkUUN1TTQ0aytqOVhITjdPdTNwNGJ4MlRkZXVwSHdSWkY1SGFvSXdBYTV3OUVDV2lJCkVhdFRaVDJVdjNaaW52TFgzaG1HSytVbVhocE9IREJLREJtZHJVakpRRE9ML3V2OWpFOGtwb25rSHA3UEF6Z08KeVcwSG5uREREZXJqbnJDL3Qra0M1WFVQMnpYc3FvNTI3NmYrc3pDcUYrWitYcTBCdUJQL0ppais2bnhXSTBFaQplUHBMVGgzRDdEU0VWZUVEZENjMWVQeXhXZTRnb0dBNjVVQk1QQnFaRjVkdXhwMkpKV0YyWkw5ekFnTUJBQUdqCkl6QWhNQTRHQTFVZER3RUIvd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01BMEdDU3FHU0liM0RRRUIKQ3dVQUE0SUJBUUFobUFlMEI1aUtOdDJscUszbmg5TVZnMEFzeTJBS0ZnRDVGQlFBNUhGK3RhcEFMV1R2bUhXMQpTSVdzWHo2dmFxWG0xcExwbmRISXZVK3BldXpiK1VkUEwzclhsSEpBSCtKYUtuR1FhNUxYVG5QWFhnQ3M4N21tCmszS3hzNitPV1VHV0NQWjNoOHlWYndBSGFFTXk5YmFuNDRTSXR4eDlEbmNJeG5manJBSTd0TUlhQkp1WUtrZFgKVkFCbldtR01YNXF5cE9YWXZOcUlwYm9Ld0ZGT2Z0ZXh1bEhiSDNaUkxIdVNheWRkZ241dGlVWDBHaG94T0kzdQpGeHcxTC91Z212QW9CRGYvQ3Vsd0tnUFJhQlFXTW43aHprWmwvT3YzR1NWSVNUdlVlcjhyemRyRklDNkZPNzF6CmZuREZmdEgyTjhBT212M2QyeEx6Qk5CNVBRUHFHMHowCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://localhost:7666
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: 1f0cafca279d38b8da3810be999f59f0
    username: admin

@iwilltry42
Member

What's the command you used to create the cluster, @zeerorg?

@andyz-dev
Contributor Author

andyz-dev commented May 8, 2019

Seems like https://github.com/rancher/k3d/pull/32/files#diff-794246841b704a4567b6a716f52c9134R153 is the culprit

@zeerorg, I see the problem now. It is my bad that I did not test the --port option with a different port.

In my mind, I see the need for host ports to be different to accommodate running multiple clusters. It is not clear to me why the target port has to be the same as the host port. Would it be more straightforward to have the target port always running on the standard port? Multiple clusters run on different networks anyway, right?

Good catch. It is indeed a bug that needs to be fixed. Thanks for finding it.

cli/container.go: review comment (outdated, resolved)
Inspired by the docker CLI, --publish takes the same input as the docker CLI and provides similar functionality. For the k3s cluster server node, it behaves the same as the docker CLI: it maps the k3d server ports to the host ports.

Handling for worker nodes will be added in the subsequent patches.

This option can be used multiple times to expose more ports.

--add-port is an alias to this option.
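
(Not necessarily how this commit implements it, but one way to stay compatible with the docker CLI's port syntax is to reuse Docker's own parser from github.com/docker/go-connections/nat; the snippet below is only a sketch under that assumption.)

```go
// Sketch: parse docker-style publish specs with Docker's own nat package.
package main

import (
	"fmt"

	"github.com/docker/go-connections/nat"
)

func main() {
	specs := []string{"8080:8080", "9090:30090/udp"}
	// ParsePortSpecs returns the exposed container ports and the
	// host->container bindings, in the same form "docker run -p" uses.
	exposed, bindings, err := nat.ParsePortSpecs(specs)
	if err != nil {
		panic(err)
	}
	for port := range exposed {
		fmt.Printf("container %s bound to %+v\n", port, bindings[port])
	}
}
```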
Use postfix as int instead of string. This makes the following patch easier to read.
All ports exposed by --publish will also be exposed for all worker nodes. The host ports will be auto-indexed based on the worker id.

For example, with the following command:

k3d create --publish  80:80  --publish 90:90/udp --workers 1

The exposed ports will be:

host TCP port 80  -> k3s server TCP 80
host UDP port 90  -> k3s server UDP 90
host TCP port 81 -> k3s worker 0 TCP 80
host UDP port 91 -> k3s worker 0 UDP 90
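
(A short hedged sketch of the auto-indexing described above, with the host port shifted by workerID+1; the helper name is made up for illustration and is not the actual implementation.)

```go
// Sketch only: compute a worker's host port from the server's published port.
package main

import "fmt"

func workerHostPort(serverHostPort, workerID int) int {
	// Worker 0 gets serverHostPort+1, worker 1 gets +2, and so on.
	return serverHostPort + workerID + 1
}

func main() {
	published := map[int]string{80: "TCP 80", 90: "UDP 90"}
	workers := 1
	for w := 0; w < workers; w++ {
		for hostPort, target := range published {
			fmt.Printf("host port %d -> k3s worker %d %s\n", workerHostPort(hostPort, w), w, target)
		}
	}
}
```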
@andyz-dev
Contributor Author

FWIW, I have also verified that configuring ingress with k3s' default traefik ingress controller works as well with --publish.

@iwilltry42 iwilltry42 self-assigned this May 9, 2019
Labels
enhancement (New feature or request)
4 participants