[Enhancement] Ingress port mapping #11
Maybe issue #6 is relevant in this regard? Though for port-forwarding, the current recommended way of doing it is using `kubectl port-forward`.
I would also opt for that approach. WDYT?
A working solution to specify the port forwarding during k3d creation would indeed be very helpful!
Unsurprisingly, I can confirm that using kubectl port-forward does work, but ... I would still much prefer to define Ingress resources.
+1. the actual behavior looks rather inconvenient and insufficient to me. if it's possible to forward the API network connectivity ports to the public network, the same should be done, or at least be configurable, for ingress ports as well. without this feature k3d is IMHO hardly usable for serious work.
So I'd go with an additional (string-slice) flag like `--add-port`.
Being able to specify the node 'role' is more flexible if we are just talking about exposing ports in the general sense. I'm not sure I can think of a use case for using these for an Ingress object on the control plane or etcd (and as yet there is no separation of these roles, but that might happen in the future?), but it's still better to have the option. So the prototype would be something like the sketch below, where role can be worker (default), controlplane, or etcd (or just server, if control plane and etcd will always be combined).
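A hypothetical sketch of that prototype (the flag name and the `@role` separator are assumptions here; the original example was lost from this comment):

```bash
# expose host port 8080 on container port 80 of worker nodes (the default role),
# and host port 6443 on the server (control plane) node
k3d create --add-port 8080:80@worker --add-port 6443:6443@server
```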
i'm not sure if it really makes sense to search for a more abstract / future-proof / complicated command line syntax in this particular case. in fact, we just want to utilize the same very simple docker-API "PortBindings" functionality in all of these cases -- isn't it? i would therefore simply extend the usability of the existing -p/--port command line flag, i.e. make it usable multiple times (for API connectivity and an arbitrary list of ingress ports) and allow "host-port:container-port" pairs for usage scenarios with more than one instance in parallel. this would look rather similar to the expose syntax in docker's CLI resp. a natural and commonly expected simple wrapper behavior.
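e.g. something like this (just a sketch of the suggested reuse, not an existing feature at the time of writing):

```bash
# one -p for the API port and additional -p flags for ingress ports,
# using docker-style host-port:container-port pairs
k3d create -p 6443:6443 -p 80:80 -p 443:443
```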
I agree with you @mash-graz that we could re-use the `-p`/`--port` flag. I also think, as supported by @goffinf, that it would be a good thing to narrow it down to node roles, where setting the role would be optional. @mash-graz: any idea how we could transition from our current use of `--port` (for the API) to the new behavior without breaking existing setups?
@iwilltry42 That works for me.
yes -- i also have some doubts concerning the backwards compatibility of such a change. nevertheless i could accept the proposed solution.
You know what? Why not both?
Borrowing from the Docker CLI, we could also consider using --publish for mapping host ports into the k3s node ports. In fact, I am working on a pull request for it. It would be great to assign the -p shorthand to this option as well. (I am also o.k. with --add-port if that is preferred.) I think it is useful to keep the API port spec separate from the --publish option, since the worker nodes need to know where the API port is for joining the cluster. How about we change it to --api-port, -a, which takes a string argument in the form of 'ip:host-port'?
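A sketch of how the two proposed flags could combine (the IP and port numbers are arbitrary examples):

```bash
# bind the API server to one specific host IP and publish an ingress port
k3d create --api-port 192.168.1.10:6443 --publish 8081:80
```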
You're right there, I didn't think of that...
if a working minor release could be realized with any usable solution, i'll be happy. :)

that's an interesting amendment, because in some cases it could indeed make sense to bind the exposed API connectivity to just one specified host IP/network instead of 0.0.0.0, for security reasons!
So to wrap things up, I'd suggest doing the following:

For the next minor release:
- add the new `--publish` flag (with `--add-port` as an alias) for exposing arbitrary ports, while keeping `--port` untouched

For the next major release:
- deprecate `--port` for the API server port and move it to the new `--api-port` flag, freeing `-p`/`--port` for port mappings

Any final comments on that?
BTW: If you didn't already, you might want to consider joining our slack channel #k3d on https://rancher-users.slack.com after signing up here: https://slack.rancher.io/ 😉
@iwilltry42 I already have --publish working, just polishing it before sending out the pull request. I will also rename it to --add-port. I am not working on --api-port, nor on deprecating --port. Please feel free to take them up.
@andyz-dev Alright, I'll base the next steps on the results of your merge, so I don't have to deal with all the merge conflicts 😁
hmmm... i really like the idea to preserve the actual CLI behavior till the next major release in conformance with semantic versioning, but nevertheless we should try to reduce the needed changes to the bare minimum. i would therefore suggest: simply introduce the new flag as an addition for now and leave `-p`/`--port` untouched until the next major release.

in this case users could use the new syntax from now on, and any revoke or redefinition of the old `--port` behavior could wait for the next major release.

btw: i was just looking again at how docker interprets all those possible colon-separated variants: https://docs.docker.com/engine/reference/run/#expose-incoming-ports -- this myriad of variants is perhaps overkill for our actual purpose, nevertheless i would at least try to stay somehow close and compatible to these well-known conventions...
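for reference, the colon-separated variants from the linked docker docs look like this:

```bash
docker run -p 8080:80           nginx  # hostPort:containerPort
docker run -p 127.0.0.1:8080:80 nginx  # ip:hostPort:containerPort
docker run -p 127.0.0.1::80     nginx  # ip::containerPort (random host port)
docker run -p 80                nginx  # containerPort only (random host port)
```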
@mash-graz, yep, I like that procedure 👍 Regarding all the possibilities of port mappings, I'm on your side there that we should stick close to what docker does. Though I'd really like to put the notion of node roles (or at some point in the future also node IDs) in there somehow, so that we can specify which nodes should have those ports mapped to the host.
@iwilltry42 @mash-graz O.K. I will stick with --publish for now, and add --add-port as its alias.
yes -- it definitely makes sense to catch the different ways of exposing services in k8s with more adequate/selective forwarding strategies in the long run...
Thanks for the PR #32 @andyz-dev !
thanks @andyz-dev ! 👍 please correct me if i'm totally wrong, but i don't think this forwarding on all worker nodes is necessary or useful for typical ingress/LoadBalancer scenarios -- e.g. when k3d's traefik default installation is utilized. in this case, all the internal routing is already concentrated/bound to just one single IP-addr/port pair within the docker context. we only have to forward it from this internal docker network to the real public outer world -- i.e. one of the more common networks of the host. but again: maybe i'm totally wrong concerning this point -- please don't hesitate to correct me! your approach could make some sense for some of the other network exposing modes of k8s, though.
I feel the same way. Any suggestion on how to simplify it?

In the product we are working on, we need our LB to run on the worker nodes. For HA, we usually run more than 2 LBs, so I think there is a need for exposing ports on more than one worker node. I agree with you that exposing ports on all workers is overkill. Would the "node role" concept proposed by @iwilltry42 work for you? Maybe we should add it soon.

Notice we are only stepping (auto-offsetting) the host ports.
@mash-graz I agree with you there, it got way more complex than I first expected it to be. For the port clashes I was thinking of something like a `--port-auto-offset` flag.
@goffinf Thanks for the input. Is there a reference to the issue you mentioned about k3s not being able to support more than one ingress node? I'd like to take a closer look and make sure multi-node ingress works on k3d. We should probably also take a fresh look at ingress from the top down (instead of coding it up first) to make sure the end result works for the community.

@mash-graz Thanks for the input, I do value it very much, and the end solution will be better if we think about this collectively. That's why @iwilltry42 made the pre-release available, so that more folks can try it out and provide feedback. FWIW, from the tip of master, on macOS, the show svc of traefik also gave the external IP of docker0 (on macOS, docker0 runs inside a VM, of course). Sounds like you are already thinking of an alternative design. If helpful, I will be happy to continue the discussion (maybe we can make use of the slack channel), or be a sounding board to help you flesh out your design ideas.
Hey there, now that was a long read 😁

As far as I understand it, the main risk/concern with the current approach is that we expose ports on each k3d node (container) with an auto-offset on the host. The main goal of this feature should be to support single host-port to single container-port bindings, to e.g. allow ingress or LB port forwarding for local development. Since the port mapping is the most flexible thing that we can do at creation time without too much extra engineering overhead, I think we should stick with it, but in a different way than now. I would still propose to enhance the docker notation with the node specifier, as sketched below.

Still, I think that @mash-graz's idea of somehow connecting to the docker network is a super cool one, and we might be able to support it at some point when we've had more time to research it.
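A sketch of the enhanced notation (the exact separator and node naming are assumptions here, not the final implementation):

```bash
# publish host port 8081 to container port 80 on the server node,
# and host port 8082 to NodePort 30080 on the first worker node
k3d create --publish 8081:80@server --publish 8082:30080@worker[0] --workers 2
```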
After the first easy implementation, we can think about getting rid of the auto-offset behavior further down the road.
thanks @iwilltry42 for preparing this patch!
yes -- that's more or less my point of view too. :) we should really try to concentrate on this simple one-to-one port mapping (LB/ingress), although the more complex cases (auto offset...) should be supported as well as possible.
does it really make sense to require the node specifier as a mandatory field? if we understand the one-to-one port mapping case as the most common usage scenario, we can simply assume it as the default mode of operation, as long as no other node specifier is explicitly given on the command line. i still see the problem of how the one-to-one port mapping can be specified in an unambiguous manner by the proposed notation.

i guess it's a kind of fundamental misunderstanding or oversimplification to presuppose a congruence between docker container entities and k8s' more complex network abstraction. both accomplish orthogonal goals by different means. utilizing port mappings/forwarding in workarounds to satisfy some practical access requirements should always be seen as a rather questionable and botchy shortcut.

there are already some interesting tools and improvements around kubectl port-forward, e.g.: https://github.com/txn2/kubefwd -- they are most likely a more suitable choice if one wants to handle demanding network access to remote clusters and k3s running within docker or VMs. in comparison with our docker-specific approach, this kind of solution comes with a few pros and cons:

pros: ...

cons: ...
yes, i still think this variant could be a worthwhile and extraordinarily user-friendly option on linux machines. i'll try to test it and prepare a PR for this feature as soon as possible.
I've had limited success with kubefwd, which I also thought might have some legs for this problem (having Kelsey Hightower endorse a product must give it some street cred, I suppose). Anyway, my environment is Windows Subsystem for Linux (WSL). I appreciate that's not the case for everyone, but in corporates it's pretty common. Running kubefwd in a docker container, even providing an absolute path to the kubeconfig, just results in connection refused ...
I know that kubeconfig works ...
Using the binary was better and did work ...
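For reference, the sort of invocation I mean (the kubeconfig path and namespace are examples from my setup; kubefwd needs root so it can bind local IPs and edit /etc/hosts):

```bash
# run the kubefwd binary directly against the k3d kubeconfig
sudo KUBECONFIG="$HOME/.config/k3d/k3s-default/kubeconfig.yaml" kubefwd svc -n default
```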
and a curl FROM WITHIN WSL works ...
HOWEVER ... http://nginx-demo:8081 is NOT available from the host itself (from a browser) unless you update the Windows hosts file to match the entry in /etc/hosts (WSL inherits the Windows hosts file on start-up but doesn't add entries to it as it does with /etc/hosts) ... e.g. you need to add to /c/Windows/System32/drivers/etc/hosts what kubefwd added to /etc/hosts in this example.
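Reconstructed from the IP mentioned below, the entry in question would look like:

```
127.1.27.1 nginx-demo
```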
You can use the 127.1.27.1 IP without altering the Windows hosts file, but that's not particularly satisfactory, e.g. this will work from a browser on the host: http://127.1.27.1:8081

In some ways this is WORSE than kubectl port-forward, since at least there I can access the service on localhost:8081 without needing to mess with the Windows hosts file. So TBH, neither of these is especially attractive to me, even if they do leverage features native to the platform.

@iwilltry42 I'll try out your patch tomorrow (I have a bit of time off work). I do agree with much that @mash-graz has said, but I'm minded to at least move forwards even if what is implemented now becomes redundant later on.
your kubeconfig seems to use an API server entry which points to just localhost. just edit the server entry of your kubeconfig and use the IP of your machine's network card instead.

concerning the mentioned windows hosts file issue: this mechanism does have some important advantages in practice. faked name entries like this are nearly indispensable if you have to develop and test services behind L7 reverse proxies resp. name-based dispatching. just forwarding ports to arbitrary IPs doesn't work in this case.
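a minimal sketch of the change (the file path and IP are placeholders for your setup):

```yaml
# ~/.config/k3d/k3s-default/kubeconfig.yaml (excerpt)
clusters:
- name: default
  cluster:
    # was: server: https://localhost:6443
    server: https://192.168.1.50:6443  # the IP of your machine's network card
```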
Thanks for the feedback! @goffinf , maybe WSL2 will bring you some improvements soon 😉
yes -- i definitely would make it the default behavior! it may look a little bit crazy and ignorant that i'm still insisting on this particular little implementation detail, but i think it indeed makes an important difference for end-users. in most cases they'll only want to forward LB/ingress http/s access on the standard ports 80 and 443 on the host's public network -- that's at least my expectation -- and this trivial usage scenario should be supported as simply as possible. it shouldn't need any unfamiliar and complex command line options and should just work reliably and as expected by common sense, out of the box.
these other alternatives do not render our port forwarding efforts redundant, because they are designed more to realize safe remote access to network services of a cluster instead of just making ports accessible on the public network -- i.e. they accomplish a slightly different purpose. nevertheless it's interesting to study how they overcome some of the obstacles and ambiguities related to this forwarding challenge.
Your reasoning makes total sense @mash-graz , so I updated the default node to `server`.
@mash-graz just following up on your comment ...
Unfortunately that doesn't appear to work. No kubectl commands succeed with that amendment, and WSL also crashes. Obviously the default for k3s is localhost. I thought that I might be able to pass this via the bind-address server arg, as you can with k3s ...
but I couldn't see anything in the k3d docs which suggests how k3s server args are exposed. Do you know?
@iwilltry42 So I have installed v1.2.0-beta.1 and run k3d with this ...
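The exact command wasn't captured here; presumably something along these lines:

```bash
k3d create --publish 8081:80 --workers 2
```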
I can see port 8081 published on the server ..
I have a deployment and service for nginx where the service is listening on 8081
Would you expect to be able to successfully call that service on 8081? If I try curl ...
Added an Ingress (no change) ...
What am I missing? Thanks, Fraser.
You can pass k3s server args to k3d using the `--server-arg` (`-x`) flag.
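For example (the k3s argument here is just an illustration):

```bash
# forward --bind-address to the underlying k3s server
k3d create --server-arg "--bind-address=0.0.0.0"
```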
With the manifest above I wouldn't expect it to work, since the service is not exposed outside the cluster network.
You didn't map the ports for ingress, so that wouldn't work either. I'll create a demo for this 👍
yes -- that's the correct answer to the question, but i don't think it will solve the troubles described by @goffinf. it doesn't matter to which IP the k3s server API is bound inside the container, because from the outside it's always reached via the port forwarding internally specified by k3d. perhaps @goffinf is fighting some windows/WSL-specific issues, but on linux i do not have any troubles reaching the API from outside of k3d's docker instance, neither locally on the host nor by remote access, and it doesn't make a difference whether kubectl is used or kubefwd.
@goffinf this is a simple example of what I tested with:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
```
That works for me.
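Assuming the cluster was created with an ingress port mapping such as `--publish 8081:80`, the nginx service behind this Ingress should then answer on the host:

```bash
curl localhost:8081/
```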
@goffinf or the same using a NodePort service:
```yaml
...
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - name: 80-80
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: NodePort
```
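To actually reach the NodePort from the host, the port still has to be published at cluster creation time, e.g. (a sketch; the host port is an arbitrary example):

```bash
# map host port 8080 to NodePort 30080, then test it
k3d create --publish 8080:30080
curl localhost:8080/
```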
@iwilltry42 I can confirm that, using the latest version (1.2.0-beta.2), the Ingress example works as expected with WSL. I can use curl localhost:8081 directly from WSL and within a browser on the host. Moreover, Ingress works using a domain also. In this case I created the k3d cluster and mapped port 80:80 for the server (default), providing access to the Ingress Controller on that port rather than 8081 ...
Then defined the deployment, service and ingress as follows (noting the ingress now defines the host domain) ...
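The original manifests weren't captured here, but the Ingress portion presumably looked something like this (the domain is a placeholder):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: nginx.example.com  # placeholder for the host domain used
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
```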
Using curl the service was reachable ...
So congrats, the publish capability and Ingress are working fine, and very naturally with regard to k8s. Great work! Changing the URL to something non-existent returns the default backend 404 response, as expected ...
Finally (again as expected, but good to confirm), requests are properly load balanced across the 2 replicas that were defined in the deployment, alternating on each request. Regards, Fraser.
@iwilltry42 In your example which now appears in the GitHub README, was there a reason you chose to use the --api-port arg? It doesn't seem to materially impact whether the example works or not, so I wasn't sure if you were showing it for some other reason.
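A sketch of such a README example, combining the flags discussed in this thread (the exact command wasn't captured here):

```bash
k3d create --api-port 6550 --publish 8081:80 --workers 2
```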
Hey @goffinf , thank you very much for your feedback and for confirming the functionality of the new feature!

UPDATE: I removed the `--api-port` flag from the README example to avoid confusion.
Haha, beat me to it. I was going to suggest that it would not be confusing if you added a note. In general I prefer plenty of examples that show off one, or a small number of, features, rather than a single example that has everything packed into it, especially where there might be a difference in behaviour for particular combinations. You've done that now, so that's perfect.

Talking of documentation and examples, the question I asked a few days ago around passing additional server args is, I think, worth documenting (i.e. using --server-arg or -x), and provides an opportunity to talk briefly about the integration between k3d and k3s. I don't know whether it's possible to mirror every k3s arg or not (if that is the case, you could simply link through to the k3s docs rather than repeat it all, I guess)?

I suspect others might also be interested in how, or indeed if, k3d will track the life-cycle of k3s and respond as/if/when new features are added or changed. IMO that's an important consideration when selecting tools that app devs might adopt, for a variety of reasons. Whilst everyone accepts the ephemeral nature of open source projects, and, as in this case, if the user experience is relatively intuitive such that the skills investment isn't high, it's less of a concern, but ... it's still nice to back tools that have a strong likelihood of a longer shelf-life and an active community. Just a thought.

I note the new FAQ section. Happy to help out here, although I am aware of how important it is to ensure that all docs are accurate and up-to-date.
Well... with `--server-arg` you can pass any argument through to k3s.

Anyways, I'm totally in for adding additional documentation and would be super happy about your contributions to it, since you appear to be a very active user :) Maybe we can come to the point where we'll be able to create a compatibility matrix for k3d and k3s.
Precisely. I spend a good deal of time at my place of employment writing up a variety of docs, from best practice guides and standard prototypes to run books. I can't claim to be brilliant at it, but I do recognise the importance of clear information which illustrates, through descriptions and examples, the key use cases, and, importantly, sets out the scope. The latter plays to your comment about any potential tie-in (or not) with k3s, since many no doubt view k3d as a sister project or one that implies some level of dependency. I think it would be good to set that out and the extent to which it is true, perhaps especially so as docker as a container run-time has somewhat less focus these days (you can take Darren's comment about k3s ... of course I did a DinD implementation .. in a couple of ways I guess).

I have noted from our conversations and other issues, both here and on k3s and k3os (I tend to read them all, since there is much to be learned from other people's concerns, as well as an opportunity to help sometimes), that there is still a level of 'hidden' configuration that is not obvious. That is not to say it's deliberate; it's most often to do with the time available to work on new features vs. documenting existing ones, and of course an assumed level of (pre-)knowledge.

Anyways, I am active because I think this project has merit and potential for use by me and my work colleagues, so anything I can do to help, I will. I note Darren commented recently that WSL2 and k3d would be a very satisfactory combination, and I agree. But since we aren't in the business of vapourware, there's still much to offer without WSL2 imo. I think the next non-rc release might provide a good moment to review docs and examples.
I'm looking forward to your contributions to the docs then :)

Anyways... I think this issue is growing a bit too big. I guess the main pain point of this issue has been solved, right? So can it be closed then, @goffinf?
i finally could figure out an implementation of this alternative manner of exposing the most common network access variants by a simple command line switch.

it has some pros (e.g. you don't have to specify all the ports resp. can reconfigure them via k8s mechanisms), but also cons (e.g. it will most likely only work on the linux platform). in fact it's only exposing the server on the host network, because remapping multiple workers resp. their control ports on one machine isn't a trivial task. connecting the workers to the server on the host network is also a bit tricky, because most of docker's internal name services do not work across different networks or aren't available on linux machines. i therefore had to use the gateway IP of our custom network as a workaround to reach the host...

i'm not sure if it is really a useful improvement after all these wonderful recent port mapping improvements developed by @iwilltry42 and @andyz-dev, nevertheless i would be happy if you could take a look at it.
Thanks for your PR @mash-graz , I just have to dig a bit deeper into the networking part to leave a proper review.
@iwilltry42 My thoughts exactly. This issue has served its purpose and an initial implementation has been delivered. Thank you. I am happy to close this down and raise any additional work as new issues.
Using k3s and docker-compose I can set a port binding for a node and then create an ingress using that to route into a pod. Let's say I bind port 8081:80, where port 80 is used by an nginx pod; I can then use localhost to reach nginx:

http://localhost:8081

How can this be achieved using k3d?
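A minimal sketch of the docker-compose setup being described (the image tag and service layout are assumptions, loosely based on the k3s docker-compose example):

```yaml
version: '3'
services:
  server:
    image: rancher/k3s  # placeholder tag; pin a real k3s version here
    command: server
    privileged: true
    ports:
    - "6443:6443"  # Kubernetes API
    - "8081:80"    # host 8081 -> port 80 (nginx via ingress) on the node
```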