
[Enhancement] Ingress port mapping #11

Closed
goffinf opened this issue Apr 14, 2019 · 67 comments
Labels
enhancement New feature or request

Comments

@goffinf

goffinf commented Apr 14, 2019

Using k3s and docker-compose I can set a port binding for a node and then create an ingress using that to route into a pod ... let's say I bind port 8081:80, where port 80 is used by an nginx pod ... I can use localhost to reach nginx ..

http://localhost:8081

How can this be achieved using k3d ?

@zeerorg
Collaborator

zeerorg commented Apr 17, 2019

Maybe issue #6 is relevant in this regard? Though for port-forwarding, the currently recommended way of doing it is using kubectl port-forward.

@iwilltry42
Member

I would also opt for kubectl port-forward as @zeerorg said.
But, since ingress can in many cases be the only service that needs ports mapped to the host, I could imagine adding an extra flag to k3d create for ingress port mapping. E.g. k3d create --ingress-http 30080 --ingress-https 30443 for mapping http/https ports to the host system.
Or a single flag for mapping any arbitrary port.

WDYT

@iwilltry42 iwilltry42 added the enhancement New feature or request label Apr 29, 2019
@mash-graz

a working solution to specify the port forward during k3d creation would indeed be very helpful!

@goffinf
Author

goffinf commented May 6, 2019

Unsurprisingly I can confirm that using kubectl port-forward does work, but ... I would still much prefer to define Ingress resources

@mash-graz

but ... I would still much prefer to define Ingress resources

+1

the current behavior looks rather inconvenient and insufficient to me.

if it's possible to forward the API connectivity ports to the public network, it should likewise be possible (or at least configurable) to forward ingress ports. without this feature k3d is IMHO hardly usable for serious work.

@iwilltry42
Member

So I'd go with an additional (string-slice) flag like --extra-port <host:k3s> here.
Now, since the most common use for this is ingress, it should be enough to expose the ports specified here on the server node, right?
Or we take a more sophisticated approach and extend it to --extra-port <host:k3s:node>, where node can be either server, workers, all or <node name>.
Any opinions on that?
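Purely as an illustration of such a (not yet existing) flag, an invocation might look like:

k3d create --extra-port 8081:80 --extra-port 8082:30080:workers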

@iwilltry42 iwilltry42 changed the title Ingress port mapping [Enhancement] Ingress port mapping May 7, 2019
@goffinf
Author

goffinf commented May 7, 2019

Being able to specify the node ‘role’ is more flexible if we are just talking about exposing ports in the general sense. I’m not sure I can think of a use case for using these for an Ingress object for Control Plane or Etcd (and as yet there is no separation of these roles - but that might happen in the future ?), but it’s still better to have the option. So here the prototype would be something like ...

--add-port <role>:<port>

Where role can be worker (default), controlplane, or etcd. (or just server if cp and etcd will always be combined)

@mash-graz

i'm not sure if it really makes sense to search for a more abstract / future proof / complicated command line syntax in this particular case?

in fact, we just want to utilize the same very simple docker-API "PortBindings" functionality in all of these cases -- isn't it?

i would therefore simply extend the usability of the existing -p/--port command line option -- i.e. make it usable multiple times [for API connectivity and an arbitrary list of ingress ports] and allow "container-port:host-port" pairs for usage scenarios with more than one instance in parallel. this would look rather similar to the expose syntax of docker's CLI, i.e. a natural and commonly expected simple wrapper behavior.

@iwilltry42
Member

I agree with you @mash-graz that we could re-use the --port flag, but I think that right now it would break UX, since in the original sense --port X simply mapped port X to the K8s API port in the server container. We would break this functionality by introducing the --port c:h syntax, so we would at least need to find a transitional solution.

I also think, as @goffinf suggested, that it would be a good thing to narrow it down to node roles, where setting the role would be optional.
@goffinf: I think --add-port <role>:<port> just needs a notion of host and container port as well.
To stick to the docker syntax I'd go with --add-port <hostport>:<containerport>:role, say "map this port from my host to this port on those nodes".

@mash-graz: any idea how we could transition from our current --port <server-port> syntax to the syntax mentioned above? And would it fulfill your needs?
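For example (hypothetical usage, nothing of this is implemented yet), mapping host port 8081 to container port 80 on all worker nodes would then look like:

k3d create --workers 2 --add-port 8081:80:workers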

@goffinf
Author

goffinf commented May 7, 2019

@iwilltry42 That works for me.

@iwilltry42 iwilltry42 self-assigned this May 7, 2019
@mash-graz

@iwilltry42

any idea how we could transition from our current --port syntax to the syntax mentioned above? And would it fulfill your needs?

yes -- i also have some doubts concerning the backwards compatibility of such a change.
and indeed, an additional --add-port option could avoid this risk in a very reliable manner.
but is it really necessary?

  • if the -p/--port option isn't specified on the command line, it's interpreted just like a single usage of -p 6443, because k3d is hardly usable without exposing the kubernetes API on the default port reachable from the public network.

  • all other useful invocations of this parameter will need the colon notation, because the actual ingress port on the container network has to be specified anyway [only the port on the host side, i.e. the number after the colon, could be seen as optional in case of using the same port]. so we would just have to look for the colon sign, i.e. differentiate between a single int argument and int:[int] sequences

  • users should even be free to use crazy setups like k3d create --server-arg=https-listen-port=8999 -p 8999:6433 (or similar) in the suggested -p/--port syntax without breaking the system logic.

nevertheless i could accept the --add-port alternative just as well.

@iwilltry42
Member

You know what? Why not both?
We could create a minor release of 1.x with the --add-port flag and a major release 2.0.0 (which can totally bring in breaking changes) with the extended --port flag.

@andyz-dev
Contributor

Borrowing from the Docker CLI, we could also consider using --publish for mapping host ports into the k3s node ports. In fact, I am working on a pull request for it. It would be great to assign the -p shorthand to this option as well. (I am also o.k. with --add-port if it is preferred.)

I think it is useful to keep the API port spec separate from the --publish option, since the worker nodes need to know where the API port is for joining the cluster. How about we change it to --api-port, -a, which takes a string argument in the form 'ip:host-port'?

@iwilltry42
Member

You're right there, I didn't think of that...
I like your suggestion of --api-port 👍

@mash-graz

if a working minor release could be realized with any usable solution, i'll be happy. :)
i don't want to convince anybody, just to help find a working solution and rethink it from another point of view...

How about we change it to --api-port ,-a, which takes a string argument in the form of 'ip:host-port'?

that's an interesting amendment, because in some cases it could indeed make sense to bind the exposed API connectivity to just one specified host-IP/network instead of 0.0.0.0 for security reasons!

@iwilltry42
Member

So to wrap things up, I'd suggest doing the following:

For the next minor release:

  • add --add-port option
  • hint on deprecation of --port and breaking change in next major version

For the next major release:

  • re-use the --port, -p flag for generic port mapping in the style of --port <hostport>[:<containerport>][:<node roles>], where the default for <node role> would be all and <containerport> would be the same as <hostport> if left blank.
  • introduce --api-port, -a <hostport>[:<containerport>] with old functionality of --port flag

Any final comments on that?
@andyz-dev , are you already working on a PR for this? If not, I'd have time now 👍
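To make this concrete, hypothetical invocations (nothing of this is implemented yet) might look like:

# next minor release: --port keeps its old API-port meaning, --add-port handles extra mappings
k3d create --port 6443 --add-port 8081:80:server

# next major release: --api-port takes over the old behaviour, --port becomes the generic mapping flag
k3d create --api-port 6550 --port 8081:80:server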

@iwilltry42
Member

iwilltry42 commented May 7, 2019

BTW: If you didn't already, you might want to consider joining our slack channel #k3d on https://rancher-users.slack.com after signing up here: https://slack.rancher.io/ 😉

@andyz-dev
Contributor

@iwilltry42 I already have --publish working, just polishing it before sending out the pull request. I will also rename it to --add-port.

I am not working on --api-port, nor on deprecating --port. Please feel free to take them up.

@iwilltry42
Member

@andyz-dev Alright, I'll base the next steps on the results of your merge, so I don't have to deal with all the merge conflicts 😁

@mash-graz

mash-graz commented May 7, 2019

hmmm... i really like the idea to preserve the current CLI behavior until the next major release, in conformance with semantic versioning, but nevertheless we should try to reduce the needed changes to the bare minimum.

i would therefore suggest:

  1. add --publish without a short option for the ingress forwarding right now (= next minor release)
  2. support --api-port/-a in parallel to the existing/deprecated --port/-p

in this case users could use the new syntax from now on, and any removal or redefinition of --port/-p in one of the next major releases wouldn't affect them anymore.

btw: i was just looking again at how docker interprets all those possible colon-separated variants:

https://docs.docker.com/engine/reference/run/#expose-incoming-ports

this myriad of variants is perhaps overkill for our actual purpose, nevertheless i would at least try to stay somewhat close and compatible to these well-known conventions...

@iwilltry42
Member

@mash-graz, yep, I like that procedure 👍
Awaiting @andyz-dev's PR now for --publish or --add-port and will base the other changes on top of that.

Regarding all the possibilities of port-mappings, I'm on your side there that we should stick close to what docker does. Though I'd really like to put the notion of node roles (or at some point in the future also node IDs) in there somehow so that we can specify which nodes should have those ports mapped to the host.

@andyz-dev
Contributor

@iwilltry42 @mash-graz O.K. I will stick with --publish for now, and add --add-port as its alias.

@mash-graz

Regarding all the possibilities of port-mappings, I'm on your side there that we should stick close to what docker does. Though I'd really like to put the notion of node roles (or at some point in the future also node IDs) in there somehow so that we can specify which nodes should have those ports mapped to the host.

yes -- it definitely makes sense to cover the different ways of exposing services in k8s with more adequate/selective forwarding strategies in the long run...

@iwilltry42
Member

Thanks for the PR #32 @andyz-dev !
@goffinf and @mash-graz , maybe you want to have a look there as well 👍

@mash-graz

mash-graz commented May 7, 2019

thanks @andyz-dev ! 👍
that's a much more complex patch than expected.

please correct me if i'm totally wrong, but i don't think this forwarding of all worker nodes is necessary or useful for typical ingress/LoadBalancer scenarios -- e.g. when k3d's default traefik installation is utilized. in this case, all the internal routing is already concentrated/bound to just one single IP-addr/port pair within the docker context. we only have to forward it from this internal docker network to the real public outside world -- i.e. one of the more common networks of the host.

but again: maybe i'm totally wrong concerning this point? -- please don't hesitate to correct me!

but your approach could make some sense for some of the other network exposing modes of k8s.
although i would at least suggest 1000-step port offsets in this case to minimize conflicts -- e.g. with other daemons listening on privileged standard ports (<=1024).

@andyz-dev
Contributor

thanks @andyz-dev ! 👍
that's a much more complex patch than expected.

I feel the same way. Any suggestion on how to simplify it?

please correct me, if i'm totally wrong, but i don't think, this forwarding of all worker nodes is necessary or useful for typical ingress/LoadBalancer scenarios -- e.g. when k3ds traefik default installation will be utilized. in this case, all the internal routing is already concentrated/bound to just on single IP-addr port pair within the docker context. we only have to forward it from this internal docker network to the real public outer world -- i.e. one of the more common networks of the host.

In the product we are working on, we need our LB to run on the worker nodes. For HA, we usually run more than 2 LBs. I think there is a need to expose ports on more than one worker node.

I agree with you that exposing ports on all workers is overkill. Would the "node role" concept proposed by @iwilltry42 work for you? Maybe we should add it soon.

but again: maybe i'm totally wrong concerning this point? -- please don't hesitate to correct me!

but your approach could make some sense for some of the other network exposing modes of k8s.
although i would at least suggest 1000-steps as port offset in this case to minimize conflicts -- e.g. other daemons listening on privileged standard ports (<=1024).

Notice we are only stepping the host ports.
Maybe we can add a --publish-stepping option. Then again, this may be a moot point with "node role".

@iwilltry42
Member

@mash-graz I agree with you there, it got way more complex than I first expected it to be.
But I cannot think of a simpler solution than what @andyz-dev did (good job!).
I already worked on combining his solution with the node role (and regexp validation of input), but it will make the whole thing even more complex (but also cool).
I'd go with an extended docker syntax like ip:hostPort:containerPort/protocol@node where only containerPort is mandatory and @node can be used multiple times (either node role or node name, while all is default).

For the port clashes I was thinking of something like a --auto-remap flag instead of doing it by default?
We could even go as far as automatically checking if ports are already being used with a simple net.Listen() to avoid crashes.
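Just to illustrate the idea (k3d itself would presumably do this in Go via net.Listen(); this is only a rough manual equivalent from the shell, with a hypothetical port):

# a successful connect means something is already listening on that host port
if nc -z localhost 8081; then echo "port 8081 is already in use"; fi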

@andyz-dev
Contributor

@goffinf Thanks for the input. Is there a reference to the issue you mentioned on k3s not being able to support more than one ingress node? I'd like to take a closer look and make sure multiple node input works on k3d. We should probably also take a fresh look at the ingress from top down (instead of coding up) to make sure the end result works for the community.

@mash-graz Thanks for the input, I value it very much and the end solution will be better if we think about it collectively. That's why @iwilltry42 made the pre-release available, so that more folks can try it out and provide feedback. FWIW, from the tip of master, on macOS, the svc listing for traefik also gave the external IP of docker0 (on macOS docker0 runs inside a VM, of course). Sounds like you are already thinking of an alternative design. If helpful, I will be happy to continue the discussion (maybe we can make use of the slack channel), or be a sounding board to help you flesh out your design ideas.

@iwilltry42
Member

Hey there, now that was a long read 😁
I was a bit busy in the last days, but now I took the time to read through all the comments again to figure out what's the best thing to do.

So as far as I understand, the main risk/concern with the current approach is that we expose ports on each k3d node (container) with an auto-offset on the host.
Since we often don't need this, it could just introduce security issues.

The main goal of this should be to support single hostport to single containerport bindings to e.g. allow ingress or LB port forwarding for local development.

Since the port mapping is the most flexible thing that we can do at creation time without too much extra engineering overhead, I think we should stick with it, but in a different way than now.

I would still propose to enhance the docker notation with the node specifier, i.e. [<ip>:][<host-port>:]container-port[/<protocol>]@<node-specifier>, where container-port and node-specifier are mandatory (first change). The auto-offset of host ports would then be an optional feature (e.g. --port-auto-offset) and 1:1 port-mapping will be emphasized by enforcing the node-specifier in --publish.
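For illustration, under that (not yet implemented) notation the following would all be valid specifiers:

80:80@server                      # host port 80 to container port 80 on the server node
0.0.0.0:8081:80/tcp@workers       # explicit bind IP and protocol, applied to all worker nodes
30080@k3d-k3s-default-worker-0    # container port only, addressed to a single node by name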

Still, I think that @mash-graz's idea of somehow connecting to the docker network is super cool, and we might be able to support it at some point when we've had more time to research it.
The network-mode=host feature we could add with a hint that it will only work for Linux users.
I think there are more people interested in this (#6).

@iwilltry42
Member

After the first easy implementation, we can think of further getting rid of auto-offset, e.g. by allowing port ranges for host-port, so that we will generate a 1:1 mapping out of a single flag.
E.g.

  • we pass --workers 5 --publish 6550-6554:1234@workers to have port 1234 of each worker mapped to the host
  • We get 0.0.0.0:6550:1234/tcp@worker-0, 0.0.0.0:6551:1234/tcp@worker-1, ..., 0.0.0.0:6554:1234/tcp@worker-4
Since this requires quite a bit more code and thought, I will keep it out of my initial PR.

@mash-graz

mash-graz commented May 13, 2019

thanks @iwilltry42 for preparing this patch!

So as far as I understand the main risk/concern in the current approach is that we expose ports with on each k3d node (containers) with an auto-offset on the host.
Since we often don't need this, this could just introduce security issues.
The main goal of this should be to support single hostport to single containerport bindings to e.g. allow ingress or LB port forwarding for local development.

yes -- that's more or less my point of view too. :)

we should really try to concentrate on this simple one-to-one port mapping (LB/ingress), although the more complex case (auto offset...) should be supported as well as possible.

I would still propose to enhance the docker notation with the node specifier, ... , where container-port and node-specifier are mandatory.

does it really make sense to require the node-specifier as a mandatory field?

if we understand the one-to-one port mapping case as the most common usage scenario, we can simply assume it as the default mode of operation, as long as no other node specifier is explicitly given on the command line.
this would make it easier to use and need less lengthy command line options in practice.

i still see the problem of how the one-to-one port mapping can be specified in an unambiguous manner by the proposed notation.
a ...@server will only work for single-master clusters, but in the future, when k3s will finally be able to support multi-master clusters as well, it unfortunately will not always signify a unique node, i.e. a one-to-one mapping, anymore...

i guess it's a kind of fundamental misunderstanding or oversimplification if we presuppose a congruence between docker container entities and k8s' more complex network abstraction. both accomplish orthogonal goals by different means. utilizing port mappings/forwarding as workarounds to satisfy some practical access requirements should always be seen as a rather questionable and botchy shortcut.

there are already some interesting tools and improvements around kubectl port-forward available, which seem to solve similar kinds of network routing and node access in a comfortable fashion that respects k8s idiosyncrasies:

https://github.com/txn2/kubefwd
https://github.com/flowerinthenight/kubepfm

they are most likely a more suitable choice if one wants to handle demanding network access to remote clusters and k3s running within docker or VMs. in comparison with our docker-specific approach this kind of solution comes with a few pros and cons:

pros:

  • much cleaner and unambiguous access to k8s services
  • works even in case of dynamic changes happening in the cluster

cons:

  • they need root privileges to forward privileged ports
  • it's a little bit less efficient and slower in comparison to other variants of direct access
  • it can only be used when the cluster is already running

The network-mode=host feature we could add with a hint that it will only work for Linux users.

yes, i still think this variant could be a worthwhile and extraordinarily user-friendly option on linux machines. i'll try to test it and prepare a PR for this feature as soon as possible.

@goffinf
Author

goffinf commented May 14, 2019

I've had limited success with kubefwd, which I also thought might have some legs for this problem (having Kelsey Hightower endorse a product must give it some street cred I suppose). Anyway, my environment is Windows Subsystem for Linux (WSL). I appreciate that's not the case for everyone, but in corporates it's pretty common.

Running kubefwd in a docker container, even providing an absolute path to the kubeconfig just results in connection refused ...

docker run --name fwd -it --rm -v $PWD/.kube/config:/root/.kube/config txn2/kubefwd services -n default
2019/05/13 23:27:21  _          _           __             _
2019/05/13 23:27:21 | | ___   _| |__   ___ / _|_      ____| |
2019/05/13 23:27:21 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2019/05/13 23:27:21 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2019/05/13 23:27:21 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2019/05/13 23:27:21
2019/05/13 23:27:21 Version 1.8.2
2019/05/13 23:27:21 https://github.com/txn2/kubefwd
2019/05/13 23:27:21
2019/05/13 23:27:21 Press [Ctrl-C] to stop forwarding.
2019/05/13 23:27:21 'cat /etc/hosts' to see all host entries.
2019/05/13 23:27:21 Loaded hosts file /etc/hosts
2019/05/13 23:27:21 Hostfile management: Backing up your original hosts file /etc/hosts to /root/hosts.original
2019/05/13 23:27:21 Error forwarding service: Get https://localhost:6443/api/v1/namespaces/default/services: dial tcp 127.0.0.1:6443: connect: connection refused
2019/05/13 23:27:21 Done...

I know that kubeconfig works ...

kubectl --kubeconfig=$PWD/.kube/config get nodes
NAME                       STATUS   ROLES    AGE     VERSION
k3d-k3s-default-server     Ready    <none>   6h38m   v1.14.1-k3s.4
k3d-k3s-default-worker-0   Ready    <none>   6h38m   v1.14.1-k3s.4
k3d-k3s-default-worker-1   Ready    <none>   6h38m   v1.14.1-k3s.4
k3d-k3s-default-worker-2   Ready    <none>   6h38m   v1.14.1-k3s.4

Using the binary was better and did work ...

sudo kubefwd services -n default

2019/05/14 00:32:44  _          _           __             _
2019/05/14 00:32:44 | | ___   _| |__   ___ / _|_      ____| |
2019/05/14 00:32:44 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2019/05/14 00:32:44 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2019/05/14 00:32:44 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2019/05/14 00:32:44
2019/05/14 00:32:44 Version 1.8.2
2019/05/14 00:32:44 https://github.com/txn2/kubefwd
2019/05/14 00:32:44
2019/05/14 00:32:44 Press [Ctrl-C] to stop forwarding.
2019/05/14 00:32:44 'cat /etc/hosts' to see all host entries.
2019/05/14 00:32:44 Loaded hosts file /etc/hosts
2019/05/14 00:32:44 Hostfile management: Original hosts backup already exists at /home/goffinf/hosts.original
2019/05/14 00:32:44 WARNING: No backing pods for service kubernetes in default on cluster .
2019/05/14 00:32:44 Forwarding: nginx-demo:8081 to pod nginx-demo-76d6b7f896-855r2:80

and a curl FROM WITHIN WSL works ...

curl http://nginx-demo:8081
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
...
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

HOWEVER ... http://nginx-demo:8081 is NOT available from the host (from a browser) itself unless you update the Windows hosts file to match the entry in /etc/hosts (WSL will inherit the Windows hosts file on start-up but doesn't add entries to it as it does with /etc/hosts) .... e.g.

You need to add this to /c/Windows/System32/drivers/etc/hosts (which is what kubefwd added to /etc/hosts in this example)

127.1.27.1 nginx-demo nginx-demo.default nginx-demo.default.svc.cluster.local

You can use the 127.1.27.1 IP without altering the Windows hosts file but that's not particularly satisfactory ..

e.g. this will work from a browser on the host ... http://127.1.27.1:8081

In some ways this is WORSE than kubectl port-forward since at least there I can access the service on localhost:8081 without needing to mess with the Windows hosts file.

So TBH, neither of these is especially attractive to me even if they do leverage features native to the platform.

@iwilltry42 I'll try out your patch tomorrow (I have a bit of time off work).

I do agree with much that @mash-graz has said, but I'm minded to at least move forwards even if what is implemented now becomes redundant later on.

@mash-graz

mash-graz commented May 14, 2019

Running kubefwd in a docker container, even providing an absolute path to the kubeconfig just results in connection refused ...

 docker run --name fwd -it --rm -v $PWD/.kube/config:/root/.kube/config txn2/kubefwd services -n default
 ...
 2019/05/13 23:27:21 Error forwarding service: Get https://localhost:6443/api/v1/namespaces/default/services: dial tcp 127.0.0.1:6443: connect: connection refused

I know that kubeconfig works ...

your kubeconfig seems to use an API server entry which points to localhost:6443.
this will only work on your local machine and not for remote access to your cluster. using virtual machine environments or docker sandboxes has to be seen as a kind of remote access in this respect. localhost doesn't connect to the same machine in this case...

just edit the server entry of your kubeconfig and use the IP of your machine's network card instead.
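for example (only a sketch -- <host-ip> is a placeholder for the address of your machine's network card):

# point the kubeconfig generated by k3d at the host's LAN IP instead of localhost
sed -i 's|https://localhost:6443|https://<host-ip>:6443|' "$KUBECONFIG"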

concerning the mentioned windows hosts-file synchronization issues, i would answer: yes, it's just another dirty workaround.

but this mechanism does have some important advantages in practice. faked name entries like this are nearly indispensable if you have to develop and test services behind L7 reverse proxies or name-based dispatching. just forwarding ports to arbitrary IPs doesn't work in this case.
but again: it's just another tricky workaround, and by no means a clean and exemplary solution. ;)

@iwilltry42
Member

Thanks for the feedback!
@mash-graz So you would make @server the default for the time that we don't have HA (= multiple masters) in k3s? As soon as we get HA mode, we could think of adding a loadbalancer in front so that we don't have multiple host ports open for the same ports on the master nodes.
Also, I'll have a look into the two projects you mentioned 👍

@goffinf , maybe WSL2 will bring you some improvements soon 😉

@mash-graz

mash-graz commented May 14, 2019

@mash-graz So you would make @server the default for the time that we don't have HA (= multiple masters) in k3s? As soon as we get HA mode, we could think of adding a loadbalancer in front so that we don't have multiple host ports open for the same ports on the master nodes.

yes -- i definitely would make it the default behavior!

it may look a little bit crazy and ignorant that i'm still insisting on this particular little implementation detail, but i think it indeed makes an important difference for end users. in most cases they'll only want to forward LB/ingress http/s access on standard ports 80 and 443 on the host's public network -- that's at least my expectation -- and this trivial usage scenario should be supported as simply as possible. it shouldn't need any unfamiliar and complex command line options and should just work reliably and as expected out of the box.

Also, I'll have a look into the two projects you mentioned

these other alternatives do not render our port forwarding efforts redundant, because they are designed more to realize safe remote access to network services of a cluster, instead of just making ports accessible on the public network -- i.e. they accomplish a slightly different purpose. nevertheless it's interesting to study how they overcome some of the obstacles and ambiguities related to this forwarding challenge.

@iwilltry42
Member

iwilltry42 commented May 14, 2019

Your reasoning makes total sense @mash-graz , so I updated the default node to server in my PR #43
UPDATE: need to change a different thing later actually...

@goffinf
Author

goffinf commented May 14, 2019

@mash-graz just following up with your comment ....

... your kubeconfig seems to use an API server entry, which points to localhost:6443. ... just edit the server entry of your kubeconfig and use the IP of your machines network card instead

Unfortunately that doesn't appear to work. No kubectl commands succeed with that amendment and WSL also crashes. Obviously the default for k3s is localhost.

I thought that I might be able to pass this via the bind-address server arg as you can with k3s,..

sudo k3s server --bind-address 192.168.0.29 ...

but I couldn't see anything in the k3d docs which suggests how k3s server args are exposed. Do you know ?

@goffinf
Author

goffinf commented May 14, 2019

@iwilltry42 So I have installed v1.2.0-beta.1 and run k3d with this ...

k3d create --publish 8081:8081@server --workers 2

I can see port 8081 published on the server ..

docker container ls -a
CONTAINER ID        IMAGE                COMMAND                  CREATED              STATUS              PORTS                                            NAMES
c367af69df28        rancher/k3s:v0.5.0   "/bin/k3s agent"         59 seconds ago       Up 56 seconds                                                        k3d-k3s-default-worker-1
0211bedcfb27        rancher/k3s:v0.5.0   "/bin/k3s agent"         About a minute ago   Up 58 seconds                                                        k3d-k3s-default-worker-0
e30c8789d6da        rancher/k3s:v0.5.0   "/bin/k3s server --h…"   About a minute ago   Up About a minute   0.0.0.0:6443->6443/tcp, 0.0.0.0:8081->8081/tcp   k3d-k3s-default-server

I have a deployment and service for nginx where the service is listening on 8081

apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
#  type: NodePort
  ports:
    - port: 8081
      targetPort: 80
      name: http
  selector:
    app: nginx-demo

Would you expect to be able to successfully call that service on 8081? If I try curl ...

curl http://localhost:8081 -v
* Rebuilt URL to: http://localhost:8081/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: localhost:8081
> User-Agent: curl/7.58.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact

Added an Ingress (no change) ...

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-demo
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx-demo
          servicePort: 8081

What am I missing ?

Thanks

Fraser.

@iwilltry42
Member

@mash-graz just following up with your comment ....

... your kubeconfig seems to use an API server entry, which points to localhost:6443. ... just edit the server entry of your kubeconfig and use the IP of your machines network card instead

Unfortunately that doesn't appear to work. No kubectl commands succeed with that amendment and WSL also crashes. Obviously the default for k3s is localhost.

I thought that I might be able to pass this via the bind-address server arg as you can with k3s,..

sudo k3s server --bind-address 192.168.0.29 ...

but I couldn't see anything in the k3d docs which suggests how k3s server args are exposed. Do you know ?

You can pass k3s server args to k3d using the --server-arg/-x flag.
E.g. k3d create -x "--bind-address 192.168.0.29" or k3d create -x --bind-address=192.168.0.29

@iwilltry42
Member

@iwilltry42 So I have installed v1.2.0-beta.1 and run k3d with this ...

k3d create --publish 8081:8081@server --workers 2

I can see port 8081 published on the server ..

docker container ls -a
CONTAINER ID        IMAGE                COMMAND                  CREATED              STATUS              PORTS                                            NAMES
c367af69df28        rancher/k3s:v0.5.0   "/bin/k3s agent"         59 seconds ago       Up 56 seconds                                                        k3d-k3s-default-worker-1
0211bedcfb27        rancher/k3s:v0.5.0   "/bin/k3s agent"         About a minute ago   Up 58 seconds                                                        k3d-k3s-default-worker-0
e30c8789d6da        rancher/k3s:v0.5.0   "/bin/k3s server --h…"   About a minute ago   Up About a minute   0.0.0.0:6443->6443/tcp, 0.0.0.0:8081->8081/tcp   k3d-k3s-default-server

I have a deployment and service for nginx where the service is listening on 8081

apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
#  type: NodePort
  ports:
    - port: 8081
      targetPort: 80
      name: http
  selector:
    app: nginx-demo

Would you expect to be able to successful call that service on 8081. If I try curl ...

With the Manifest above I wouldn't expect it to work, since NodePort is commented out, so no port is exposed on the node. But even then, NodePort range is 30000-32767, so one of those ports has to be set and exposed for it to work.

curl http://localhost:8081 -v
* Rebuilt URL to: http://localhost:8081/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: localhost:8081
> User-Agent: curl/7.58.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact

Added an Ingress (no change) ...

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-demo
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx-demo
          servicePort: 8081

What am I missing ?

You didn't map the ports for ingress, so that wouldn't work either. I'll create a demo for this 👍

@mash-graz

You can pass k3s server args to k3d using the --server-arg/-x flag.
E.g. k3d create -x "--bind-address 192.168.0.29" or k3d create -x --bind-address=192.168.0.29

yes -- that's the correct answer to the question, but i don't think it will solve the troubles described by @goffinf.

it doesn't matter to which IP the k3s server API is bound inside the container, because from the outside it's always reached via the port forwarding specified internally by k3d (0.0.0.0:6443->6443/tcp), which maps it to all interfaces on the host side via the 0.0.0.0 notation. it should therefore be reachable on the host as https://localhost:6443, just as via the public server name or one of the external IPs of the machine.

perhaps @goffinf is fighting some windows/WSL-specific issues, but on linux i do not have any trouble reaching the API from outside of k3d's docker instance, neither locally on the host nor by remote access, and it doesn't make a difference whether kubectl or kubefwd is used.
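as a quick sanity check (192.168.0.29 is just the example address from the comments above -- substitute your own), both of these should get an HTTPS response on a working setup, even if it is only an authentication error:

curl -vk https://localhost:6443/
curl -vk https://192.168.0.29:6443/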

@iwilltry42
Member

@goffinf this is a simple example of what I tested with k3d (on Linux):

  1. Create a cluster, mapping the ingress port 80 to localhost:8081
    k3d create --api-port 6550 --publish 8081:80 --workers 2

  2. Get the kubeconfig file
    export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"

  3. Create a nginx deployment
    kubectl create deployment nginx --image=nginx

  4. Create a ClusterIP service for it
    kubectl create service clusterip nginx --tcp=80:80

  5. Create an ingress object for it with kubectl apply -f

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
  6. Curl it via localhost
    curl localhost:8081/

That works for me.

@iwilltry42
Member

iwilltry42 commented May 15, 2019

@goffinf or the same using a NodePort service:

  1. Create a cluster, mapping the port 30080 from worker-0 to localhost:8082
    k3d create --publish 8082:30080@k3d-k3s-default-worker-0 --workers 2 -a 6550

...

  1. Create a NodePort service for it with kubectl apply -f
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - name: 80-80
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: NodePort
  1. Curl it via localhost
    curl localhost:8082/

@goffinf
Author

goffinf commented May 15, 2019

@iwilltry42 I can confirm that using the latest version (1.2.0-beta.2) the Ingress example works as expected with WSL. I can use curl localhost:8081 directly from WSL and within a browser on the host.

Moreover, Ingress works using a domain also. In this case I created the k3d cluster and mapped port 80:80 for the server (default), providing access to the Ingress Controller on that port rather than 8081 ...

k3d create --publish 80:80 --workers 2
...
docker container ls
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                                        NAMES
eedb8c962387        rancher/k3s:v0.5.0   "/bin/k3s agent"         30 seconds ago      Up 27 seconds                                                    k3d-k3s-default-worker-1
96ca910c7949        rancher/k3s:v0.5.0   "/bin/k3s agent"         32 seconds ago      Up 29 seconds                                                    k3d-k3s-default-worker-0
e10a95dc10b4        rancher/k3s:v0.5.0   "/bin/k3s server --h…"   34 seconds ago      Up 32 seconds       0.0.0.0:80->80/tcp, 0.0.0.0:6443->6443/tcp   k3d-k3s-default-server

Then defined the deployment, service and ingress as follows (noting the ingress now defines the host domain) ...

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo-dom
  labels:
    app: nginx-demo-dom
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo-dom
  template:
    metadata:
      labels:
        app: nginx-demo-dom
    spec:
      containers:
      - name: nginx-demo-dom
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo-dom
  labels:
    app: nginx-demo-dom
spec:
  ports:
    - port: 8081
      targetPort: 80
      name: http
  selector:
    app: nginx-demo-dom
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-demo-dom
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: k3d-ingress-demo.com
    http:
      paths:
      - backend:
          serviceName: nginx-demo-dom
          servicePort: 8081

Using curl, the service was reachable ..

curl -H "Host: k3d-ingress-demo.com" http://localhost

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
...
</html>

So congrats, the publish capability and Ingress are working fine and very naturally with respect to k8s. Great work.

Changing the URL to something non-existent returns the default backend 404 response as expected ...

curl -H "Host: k3d-ingress-demox.com" http://localhost
404 page not found

curl localhost
404 page not found

curl localhost/foo
404 page not found

Finally (again as expected but good to confirm) requests are properly load balanced across the 2 replicas that were defined in the deployment, alternating on each request.

Regards

Fraser.

@goffinf
Author

goffinf commented May 15, 2019

@iwilltry42 In your example which now appears in the GitHub README, was there a reason you chose to use the --api-port arg? (It doesn't seem to materially impact whether the example works or not, so I wasn't sure if you were showing it for some other reason.)

k3d create --api-port 6550 ...

@iwilltry42
Member

iwilltry42 commented May 16, 2019

Hey @goffinf , thank you very much for your feedback and for confirming the functionality of the new feature!
No, it was just that 6443 is constantly in use on my machine and I just left it in there so that people see the --api-port flag instead of the --port flag which we want to "deprecate" (i.e. change functionality).
Do you think it's too confusing? Then I'd rather remove it 👍

UPDATE: I removed the -a 6550 from the NodePort example and added a note regarding the --api-port flag to the ingress example 👍

@goffinf
Author

goffinf commented May 16, 2019

Haha, beat me to it. I was going to suggest that it would not be confusing if you added a note.

In general I prefer plenty of examples that show off one, or a small number of features, rather than a single example that has everything packed into it, especially where there might be a difference in behaviour for particular combinations. You’ve done that now, so that’s perfect.

Talking of documentation and examples, the question I asked a few days ago around passing additional server args is I think worth documenting (i.e. using --server-arg or -x) and provides an opportunity to talk briefly about the integration between k3d and k3s. I don't know whether it's possible to mirror every k3s arg or not (if that is the case you could simply link through to the k3s docs rather than repeat it all, I guess)?

I suspect others might also be interested in how, or indeed if, k3d will track the life-cycle of k3s and respond as/if/when new features are added or changed. IMO that’s an important consideration when selecting tools that app devs might adopt for a variety of reasons. Whilst everyone accepts the ephemeral nature of open source projects and, as in this case, if the user experience is relatively intuitive such that the skills investment isn’t high, it’s less of a concern, but ... it’s still nice to back tools that have a strong likelihood of a longer shelf-life and an active community. Just a thought.

I note the new FAQ section. Happy to help out here although I am aware of how important it is to ensure that all docs are accurate and up-to-date.

@iwilltry42
Member

Well... with --server-arg you can pass any argument to the k3s server... but whether it will work in the end, we cannot verify.
It'd be a huge amount of additional work to ensure/verify that all the k3s settings are working in a dockerized environment. E.g. to support the docker flag --docker for k3s, you'd have to put it in a dind image and/or pull through the docker socket from the host system.

Anyways, I'm totally in for adding additional documentation and would be super happy about your contributions to them, since you appear to be a very active user :)

Maybe we can come to the point where we'll be able to create a compatibility matrix for k3d and k3s 👍

@goffinf
Author

goffinf commented May 16, 2019

Precisely. I spend a good deal of time at my place of employment writing up a variety of docs, from best practice guides and standard prototypes to run books. I can’t claim to be brilliant at it, but I do recognise the importance of clear information which illustrate through descriptions and examples the key use cases, and, importantly set out the scope. The latter plays to your comment about any potential tie-in (or not) with k3s, since many no doubt view k3d as a sister project or one that implies some level of dependency. I think it would be good to set that out and the extent to which that is true, perhaps especially so as docker as a container run-time has somewhat less focus these days (you can take Darren’s comment about k3s ... of course I did a DinD implementation .. in a couple of ways I guess).

I have noted from our conversations and other issues, both here and on k3s and k3os (I tend to read them all since there is much to be learned from other people’s concerns, as well as an opportunity to help sometimes), that there is still a level of ‘hidden’ configuration that is not obvious. That is not to say it’s deliberate; it’s most often to do with the time available to work on new features vs. documenting existing ones, and of course an assumed level of (pre) knowledge.

Anyways, I am active because I think this project has merit and potential for use by me and my work colleagues. So anything I can do to help I will.

I note Darren commented recently that WSL2 and k3d would be a very satisfactory combination, and I agree. But, since we aren’t in the business of vapourware, there’s still much to offer without WSL2 imo.

I think the next non-rc release might provide a good moment to review docs and examples.

@iwilltry42
Member

iwilltry42 commented May 17, 2019

I'm looking forward to your contributions to k3d's docs :)
Maybe we can open a new issue/project for docs, where we can add parts, which users might like to see there 👍

Anyways... I think this issue is growing a bit too big. I guess the main pain point of this issue has been solved, right? So can it be closed then, @goffinf?

@mash-graz

The network-mode=host feature we could add with a hint that it will only work for Linux users.
yes, i still think this variant could be a worthwhile and extraordinary user friendly option on linux machines. i'll try to test it and prepare a PR for this feature as soon as possible.

i finally managed to figure out an implementation of this alternative way to expose the most common network access variants via a simple --host/--hostnetwork option and opened PR #53.

it has some pros (e.g. you don't have to specify all the ports and can reconfigure them via k8s mechanisms), but also cons (e.g. it will most likely only work on the linux platform).

in fact it's only exposing the server on the host network, because remapping multiple workers and their control ports on one machine isn't a trivial task. connecting the workers to the server on the host network is also a bit tricky, because most of docker's internal name services do not work across different networks or aren't available on linux machines. i therefore had to use the gateway IP of our custom network as a workaround to reach the host...

i'm not sure if it is really a useful improvement after all the wonderful recent port mapping improvements developed by @iwilltry42 and @andyz-dev, nevertheless i would be happy if you could take a look at it.

@iwilltry42
Member

Thanks for your PR @mash-graz , I just have to dig a bit deeper into the networking part to leave a proper review.

@goffinf
Author

goffinf commented May 17, 2019

@iwilltry42 My thoughts exactly. This issue has served its purpose and an initial implementation has been delivered. Thank you. I am happy to close this down and raise any additional work as new issues.

@goffinf goffinf closed this as completed May 17, 2019