[Feature] Add --publish/--add-port flag for port mappings #32
Conversation
This class holds the parsed results of the --publish options. Its methods help create clones of the class with mutations applied. Currently, there are two methods: Offset() changes the host ports by a fixed amount, and AddPort() adds one additional port to the class.
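The PR's actual type and method signatures aren't quoted in this thread; the following is a minimal Go sketch of the clone-with-mutations idea described above, with illustrative names and a simplified port representation (the real code is built on Docker's port types):

```go
package main

import "fmt"

// PortSpec is an illustrative stand-in for one parsed --publish entry.
type PortSpec struct {
	HostPort      int
	ContainerPort int
	Proto         string
}

// PublishedPorts holds the parsed results of the --publish options.
type PublishedPorts struct {
	Specs []PortSpec
}

// Offset returns a clone whose host ports are shifted by a fixed amount;
// the receiver itself is never mutated.
func (p *PublishedPorts) Offset(offset int) *PublishedPorts {
	clone := &PublishedPorts{Specs: make([]PortSpec, len(p.Specs))}
	for i, s := range p.Specs {
		s.HostPort += offset
		clone.Specs[i] = s
	}
	return clone
}

// AddPort returns a clone with one additional port mapping appended.
func (p *PublishedPorts) AddPort(spec PortSpec) *PublishedPorts {
	specs := append(append([]PortSpec{}, p.Specs...), spec)
	return &PublishedPorts{Specs: specs}
}

func main() {
	server := &PublishedPorts{Specs: []PortSpec{{80, 80, "tcp"}, {90, 90, "udp"}}}
	worker0 := server.Offset(1)                            // host ports 81 and 91
	withAPI := server.AddPort(PortSpec{6443, 6443, "tcp"}) // one extra mapping
	fmt.Println(worker0.Specs, withAPI.Specs)
}
```

Cloning instead of mutating keeps one parsed --publish set reusable across the server and every worker.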
Looks good. Did you decide not to incorporate the concept of 'role' for now? I see you have tested host ports and NodePort, but is it possible to define an Ingress too (I don't tend to use the other two approaches much)? I'm interested in using k3d as a local development environment for my team, and for that to work I would like to simulate our hosted environment as closely as possible, so we can use similar (or the same) objects and automation.
Thanks for looking it over and for the review comments. "Did you decide not to incorporate the concept of 'role' for now?" Yes, I kept it out of this pull request to keep the code simpler and easier to review than it would be otherwise. Beyond this pull request, I like the 'role' concept in general and believe we should be able to add support for it without many changes to the current code, once it is fully specced out.
My immediate use case requires host port support, which is why it is tested first. I think k3s has picked Traefik as the ingress controller, but I am not currently a Traefik user. I'd appreciate feedback from users of Traefik (or other ingress controllers) on what the gaps are; I believe the basics of delivering TCP/UDP traffic into the k3s cluster are here. More testing would be very helpful.
It is good to keep the bar high for k3d / k3s :-). What's still missing for your team besides ingress support? If it is a long list, what are the top three items on the list?
Thanks for the PR @andyz-dev, on first view it looks very clean 👍 Just like @goffinf, I was wondering about the node roles, since I think some people might set up fairly large clusters with k3d, where the port mapping might become an issue and you wouldn't always need to open the port on each node. My draft includes regex checking of the passed portmap string and then splitting it into its different parts (see the sketch below).
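The draft itself isn't shown in this thread; as a rough, hypothetical illustration of "regex checking plus splitting", assuming a simplified [host-ip:][host-port:]container-port[/protocol] grammar:

```go
package main

import (
	"fmt"
	"regexp"
)

// portmapRE is an illustrative pattern (not the actual draft): an optional
// host IP, an optional host port, a container port, and an optional protocol,
// e.g. "0.0.0.0:8080:80/tcp", "8080:80", or just "80".
var portmapRE = regexp.MustCompile(
	`^(?:(\d{1,3}(?:\.\d{1,3}){3}):)?(?:(\d+):)?(\d+)(?:/(tcp|udp))?$`)

func parsePortmap(s string) (hostIP, hostPort, containerPort, proto string, err error) {
	m := portmapRE.FindStringSubmatch(s)
	if m == nil {
		return "", "", "", "", fmt.Errorf("invalid portmap %q", s)
	}
	hostIP, hostPort, containerPort, proto = m[1], m[2], m[3], m[4]
	if proto == "" {
		proto = "tcp" // default protocol, as with docker -p
	}
	return hostIP, hostPort, containerPort, proto, nil
}

func main() {
	fmt.Println(parsePortmap("0.0.0.0:8080:80/tcp"))
	fmt.Println(parsePortmap("9090:30090/udp"))
}
```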
Thank you for the review.
Agreed. I did not include "node roles" support in order to keep the pull request easier to review.
Sounds reasonable to me.
I think it is better to merge this PR -- it is already more complex than most of the pull requests for this project. Then we can work on follow-up PRs against it. Even if we end up making a release before your PR is merged in, that is still OK, since your PR can be viewed as adding capabilities and is backward compatible. WDYT?
Totally fine for me 👍 Let me do a full review and maybe wait for another review, then we can merge it 👍
LGTM!
Should we already add a deprecation warning to the --port, -p flag, saying that we're going to change it to have the functionality of --publish in the next major release? 🤔
I think it is better to add it in the same PR as the --api-port change. In that PR, we should also mention that '-p' will become the shorthand for --publish.
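A sketch of what such a deprecation warning might look like, assuming a urfave/cli style flag setup (which k3d used at the time; the exact wiring and wording are illustrative, not the merged implementation):

```go
package main

import (
	"fmt"
	"os"

	"github.com/urfave/cli"
)

func main() {
	app := cli.NewApp()
	app.Name = "k3d-sketch"
	app.Flags = []cli.Flag{
		cli.StringFlag{
			Name:  "port, p",
			Usage: "deprecated: will become the shorthand for --publish in the next major release",
		},
		cli.StringSliceFlag{
			Name:  "publish, add-port",
			Usage: "publish a port mapping (docker -p syntax); repeatable",
		},
	}
	app.Action = func(c *cli.Context) error {
		// Warn only when the old flag is actually used.
		if c.IsSet("port") {
			fmt.Fprintln(os.Stderr,
				"WARNING: --port/-p is deprecated; in the next major release -p will become the shorthand for --publish")
		}
		return nil
	}
	if err := app.Run(os.Args); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```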
Seems like https://github.com/rancher/k3d/pull/32/files#diff-794246841b704a4567b6a716f52c9134R153 is the culprit
The kubeconfig points to port 7666:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1RENDQWN5Z0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFqTVJBd0RnWURWUVFLRXdkck0zTXQKYjNKbk1ROHdEUVlEVlFRREV3WnJNM010WTJFd0hoY05NVGt3TlRBNE1EY3lPREV3V2hjTk1qa3dOVEExTURjeQpPREV3V2pBak1SQXdEZ1lEVlFRS0V3ZHJNM010YjNKbk1ROHdEUVlEVlFRREV3WnJNM010WTJFd2dnRWlNQTBHCkNTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFETkRMVVZiQ2FjSVQ5ZXdJTVVvZVNGaHREUjFpYi8KRkR6dUlyL2IzY0Z5U2FEQkE3NGFSYnBzZXpYRFVhYUM2aUxpZzg2WmVXN1UwbGphQTlNam5qLzc0UnZMcXdjZgo4RU9hU05oMklxbEUzQkJkUUN1TTQ0aytqOVhITjdPdTNwNGJ4MlRkZXVwSHdSWkY1SGFvSXdBYTV3OUVDV2lJCkVhdFRaVDJVdjNaaW52TFgzaG1HSytVbVhocE9IREJLREJtZHJVakpRRE9ML3V2OWpFOGtwb25rSHA3UEF6Z08KeVcwSG5uREREZXJqbnJDL3Qra0M1WFVQMnpYc3FvNTI3NmYrc3pDcUYrWitYcTBCdUJQL0ppais2bnhXSTBFaQplUHBMVGgzRDdEU0VWZUVEZENjMWVQeXhXZTRnb0dBNjVVQk1QQnFaRjVkdXhwMkpKV0YyWkw5ekFnTUJBQUdqCkl6QWhNQTRHQTFVZER3RUIvd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01BMEdDU3FHU0liM0RRRUIKQ3dVQUE0SUJBUUFobUFlMEI1aUtOdDJscUszbmg5TVZnMEFzeTJBS0ZnRDVGQlFBNUhGK3RhcEFMV1R2bUhXMQpTSVdzWHo2dmFxWG0xcExwbmRISXZVK3BldXpiK1VkUEwzclhsSEpBSCtKYUtuR1FhNUxYVG5QWFhnQ3M4N21tCmszS3hzNitPV1VHV0NQWjNoOHlWYndBSGFFTXk5YmFuNDRTSXR4eDlEbmNJeG5manJBSTd0TUlhQkp1WUtrZFgKVkFCbldtR01YNXF5cE9YWXZOcUlwYm9Ld0ZGT2Z0ZXh1bEhiSDNaUkxIdVNheWRkZ241dGlVWDBHaG94T0kzdQpGeHcxTC91Z212QW9CRGYvQ3Vsd0tnUFJhQlFXTW43aHprWmwvT3YzR1NWSVNUdlVlcjhyemRyRklDNkZPNzF6CmZuREZmdEgyTjhBT212M2QyeEx6Qk5CNVBRUHFHMHowCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://localhost:7666
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: 1f0cafca279d38b8da3810be999f59f0
    username: admin
What's the command you used to create the cluster, @zeerorg?
@zeerorg, I see the problem now. It is my fault that I did not test the --port option with a non-default port. In my mind, I saw the need for host ports to differ so that multiple clusters can run side by side, but it is not clear to me why the target port has to be the same as the host port. Would it be more straightforward to have the target always run on the standard port? Multiple clusters run on different networks anyway, right? Good catch. It is indeed a bug that needs to be fixed. Thanks for finding it.
Inspired by the docker CLI, --publish takes the same input as the docker CLI and provides similar functionality. For the k3s cluster server node, it behaves the same as the docker CLI: it maps the k3s server's ports to host ports. Handling for worker nodes will be added in subsequent patches. This option can be used multiple times to expose more ports. --add-port is an alias for this option.
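For reference, the docker CLI's -p syntax is implemented in github.com/docker/go-connections/nat; whether this PR reuses that parser directly isn't visible in the thread, but a minimal sketch of what "takes the same input as the docker CLI" means could look like this:

```go
package main

import (
	"fmt"

	"github.com/docker/go-connections/nat"
)

func main() {
	// nat.ParsePortSpecs understands the same strings as `docker run -p`,
	// e.g. "80:80", "90:90/udp", "0.0.0.0:8080:80/tcp".
	exposed, bindings, err := nat.ParsePortSpecs([]string{"80:80", "90:90/udp"})
	if err != nil {
		panic(err)
	}
	fmt.Println(exposed)  // set of container ports to expose, e.g. 80/tcp, 90/udp
	fmt.Println(bindings) // host-port bindings per container port
}
```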
Use postfix as an int instead of a string. This makes the following patch easier to read.
All ports exposed by --publish will also be exposed on all worker nodes. The host port is auto-indexed based on the worker id. For example, with the following command:

k3d create --publish 80:80 --publish 90:90/udp --workers 1

the exposed ports will be:

host TCP port 80 -> k3s server TCP 80
host UDP port 90 -> k3s server UDP 90
host TCP port 81 -> k3s worker 0 TCP 80
host UDP port 91 -> k3s worker 0 UDP 90
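A standalone sketch of that indexing rule, reproducing the table above (illustrative only, not the PR's code):

```go
package main

import "fmt"

func main() {
	type mapping struct {
		host, target int
		proto        string
	}
	published := []mapping{{80, 80, "TCP"}, {90, 90, "UDP"}}
	workers := 1

	for _, m := range published {
		fmt.Printf("host %s port %d -> k3s server %s %d\n", m.proto, m.host, m.proto, m.target)
	}
	for w := 0; w < workers; w++ {
		for _, m := range published {
			// Worker w gets each host port shifted by w+1, so the server
			// and all workers never collide on the host.
			fmt.Printf("host %s port %d -> k3s worker %d %s %d\n",
				m.proto, m.host+w+1, w, m.proto, m.target)
		}
	}
}
```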
FWIW, I have also verified that configuring ingress with k3s' default Traefik ingress controller works with --publish as well.
Add support for the --publish option (--add-port is an alias for this option).
With these patches, we can create a cluster with the following command line:
$) bin/k3d create -n test --publish 8080:8080 --publish 9090:30090/udp --workers 1
$) docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8e8fd4582cfa rancher/k3s:v0.5.0 "/bin/k3s agent" 6 seconds ago Up 5 seconds 0.0.0.0:8081->8080/tcp, 0.0.0.0:9091->30090/udp k3d-test-worker-0
c08480a9efef rancher/k3s:v0.5.0 "/bin/k3s server --h…" 7 seconds ago Up 6 seconds 0.0.0.0:6443->6443/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:9090->30090/udp k3d-test-server
I have tested this feature by deploying Kubernetes pods using hostPort and Kubernetes services using NodePort.