some automatic Nomad and/or to Consul basic "Intentions" idea #18286

Open
blinkinglight opened this issue Aug 22, 2023 · 18 comments

@blinkinglight

blinkinglight commented Aug 22, 2023

My primary idea looks like this:

service {
  name         = "loadbalancer"
  port         = "80"
  provider     = "nomad"
  address_mode = "driver"
  intentions {
    scope   = "myappscope" // apps in this scope can talk to each other
    ingress = undefined / true / ["myappscope", "or service"]
    egress  = undefined / true / ["myappscope", "or service"]
    iamloadbalancer = true/false // if true, this service is a load balancer, so we know the load balancer IP
    loadbalanceme   = true/false/["myappscope", "or service"] // if true, this service can accept traffic from
                                                              // the load balancer, whose IP we know
  }
}

service {
  name         = "appnameservice"
  port         = "80"
  provider     = "nomad"
  address_mode = "driver"
  intentions {
    scope         = "mysecondapp"
    loadbalanceme = true
  }
  tags = [
    "traefik.enable=true" //....
  ]
}

scope - apps in the same scope can talk to each other
ingress/egress - can be controlled within a scope or without one
loadbalanceme - the service marked iamloadbalancer can reach this one (lets 1 load balancer talk to 3 apps that are isolated from each other but not from the load balancer)

this block could be translated to Consul intentions
problem - iptables / ipset rules (open services)
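
For illustration, a rough sketch of the kind of rules the client could generate from the block above (the chain name, addresses and ports are made up for the example, not anything Nomad actually creates today):

# hypothetical per-scope chain managed by the Nomad client
iptables -N NOMAD-SCOPE-myappscope
iptables -A FORWARD -j NOMAD-SCOPE-myappscope
# the loadbalancer alloc (10.0.0.10) may reach the appnameservice alloc (10.0.0.20) on port 80
iptables -A NOMAD-SCOPE-myappscope -s 10.0.0.10 -d 10.0.0.20 -p tcp --dport 80 -j ACCEPT
# any other traffic to that service port is dropped
iptables -A NOMAD-SCOPE-myappscope -d 10.0.0.20 -p tcp --dport 80 -j DROP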

@blinkinglight
Author

#9993

@lgfa29
Contributor

lgfa29 commented Aug 22, 2023

Hi @blinkinglight 👋

Thanks for the report! I think #9993 covers the same idea you're suggesting, right? So I'm going to close this one as a duplicate, but feel free to add any additional comments and suggestions there 🙂

lgfa29 closed this as completed Aug 22, 2023
lgfa29 added the stage/duplicate and theme/consul/connect (Consul Connect integration) labels Aug 22, 2023
@blinkinglight
Author

Partly, yes. But not everyone uses / wants / needs to use Consul and Consul Connect ;)

@lgfa29
Contributor

lgfa29 commented Aug 22, 2023

Oh sorry, I misunderstood your initial comment 😄

Currently Nomad only provides service discovery functionality, which does not control any of the networking configuration necessary to allow/deny traffic. This would require some kind of Nomad service mesh functionality, which we have not planned.

But I will keep this open just in case.

Thanks for the suggestion and apologies for the confusion 😅

@blinkinglight
Author

"This would require some kind of Nomad service mesh functionality, which we do not planned."
i think its already does that, right? it makes iptables rules but without "scope"

@lgfa29
Contributor

lgfa29 commented Aug 22, 2023

Not quite, it only creates some basic iptables rules to allow external traffic to reach allocations running in bridge mode.
https://developer.hashicorp.com/nomad/docs/networking#bridge-networking

It doesn't really know where traffic is going to/from.

@blinkinglight
Author

That's why I'm suggesting the new block ;) so it could know.

@lgfa29
Contributor

lgfa29 commented Aug 22, 2023

Hum... so with that block you indicate to the client that it should create iptables rules to allow/deny traffic from the IP:port of one allocation to another?

It sounds doable, but I'm worried about how this would scale. Managing iptables rules is quite tricky and slow 😅

@blinkinglight
Author

blinkinglight commented Aug 22, 2023

Hum... so with that block you indicate to the client that it should create iptables rules to allow/deny traffic from the IP:port of one allocation to another?

Erm, at least IP to/from IP, and if it's not defined, Nomad should do the same thing it does now.

It sounds doable, but I'm worried about how this would scale. Managing iptables rules is quite tricky and slow 😅

Yeah, I know that iptables is slow ;) but "how to scale"? Simple: upgrade to Consul Connect, another CNI network, etc.

@blinkinglight
Author

BTW, weave-npc (sadly k8s-only) uses ipset and iptables, so maybe it could be useful.
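
Roughly what the ipset variant could look like for one "scope" (the set name and addresses are placeholders for illustration, not how weave-npc actually names things):

# one set per scope, its members are the allocation IPs in that scope
ipset create nomad-scope-myappscope hash:ip
ipset add nomad-scope-myappscope 10.0.0.10
ipset add nomad-scope-myappscope 10.0.0.20
# a single iptables rule per scope instead of one rule per IP pair
iptables -A FORWARD -m set --match-set nomad-scope-myappscope src -m set --match-set nomad-scope-myappscope dst -j ACCEPT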

@lgfa29
Contributor

lgfa29 commented Aug 23, 2023

at least IP to/from IP, and if it's not defined

I think that would be too coarse since multiple services can be running on the same machine, so blocking an entire IP because one service is not allowed to talk to another specific service may prevent other traffic that is allowed.

BTW, weave-npc (sadly k8s-only) uses ipset and iptables, so maybe it could be useful.

I'm not sure how weave-npc works, but yeah, it seems very specific to Kubernetes. One important distinction between Nomad and Kubernetes is that Nomad allocations live on the host IP, whereas Kubernetes pods have their own unique address.

@blinkinglight
Author

blinkinglight commented Aug 23, 2023

at least IP to/from IP, and if it's not defined

I think that would be too coarse since multiple services can be running on the same machine, so blocking an entire IP because one service is not allowed to talk to another specific service may prevent other traffic that is allowed.

How about Weave or another CNI network? Each service could have its own IP reachable between servers (or maybe a Docker network on a VLAN).

BTW, weave-npc (sadly k8s-only) uses ipset and iptables, so maybe it could be useful.

I'm not sure how weave-npc works, but yeah, it seems very specific to Kubernetes. One important distinction between Nomad and Kubernetes is that Nomad allocations live on the host IP, whereas Kubernetes pods have their own unique address.

@lgfa29
Contributor

lgfa29 commented Aug 23, 2023

Once you get to specific CNI plugins, traffic policies are outside the scope of Nomad. For example, Cilium + Netreap has Cilium Policies and, as you mentioned, Consul Service Mesh has intentions.

It would not be feasible for Nomad to adjust and configure network policies for specific CNI plugins.
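
For comparison, the allow rule on the Consul side is a one-liner, something like the following (reusing the service names from the example job above):

# allow traffic from the loadbalancer service to the appnameservice service
consul intention create -allow loadbalancer appnameservice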

@blinkinglight
Author

Does Nomad have an event for when it gets the IP of a service from the driver (with metadata, namespace, etc.)? That would be more than enough to write a plugin, I think.

@blinkinglight
Author

Somehow I missed the "serviceregistered" events. Found them ;)
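
For anyone else looking for them: the event stream endpoint can be filtered by topic, so something like the sketch below should show service registrations as they happen (assuming a local agent and the Service topic carrying these events):

curl -s "http://localhost:4646/v1/event/stream?topic=Service"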

@blinkinglight
Author

blinkinglight commented Aug 26, 2023

Hey,
so after some digging into the Nomad and Weave code... I finished with this idea.

config {
  image        = "hashicorpdev/counter-api:v3"
  dns_servers  = ["10.123.0.100"]
  network_mode = "weave"
  network_opts {
    me = "10.123.255.0/24"
    link = {
      otherapp1 = "10.123.254.0/24"
      otherapp2 = "10.123.253.0/24"
      otherapp3 = "10.123.252.0/24"
    }
  }
}

and I found 2 problems with working that way:
1 - Nomad needs some kind of IPAM to track which IPs are issued
2 - it still needs an overlay network / vlan / vxlan and IP routing table modifications, with maybe iptables on the host

but the concept kind of works.

root@sun:~# docker exec -it 04bccc7db9ed ip ro
default via 172.31.0.1 dev eth0
10.123.0.0/16 dev ethwe0 scope link  src 10.123.255.3
10.123.252.0/24 dev ethwe scope link  src 10.123.252
10.123.253.0/24 dev ethwe scope link  src 10.123.253.1
10.123.254.0/24 dev ethwe scope link  src 10.123.254.1
172.31.0.0/24 dev eth0 scope link  src 172.31.0.3
224.0.0.0/4 dev ethwe0 scope link
root@sun:~# docker exec -it 04bccc7db9ed ifconfig ethwe0
ethwe0    Link encap:Ethernet  HWaddr CE:5C:6B:8C:19:DA
          inet addr:10.123.255.3  Bcast:10.123.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:65535  Metric:1
          RX packets:9 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:614 (614.0 B)  TX bytes:42 (42.0 B)

root@sun:~# ./mynomad service info counter-api
Job ID  Address            Tags  Node ID   Alloc ID
cd      10.123.255.3:9001  []    926c66df  2228973c

So I'm curious, could this happen in Nomad as a core feature? ;)

P.S. It plays very nicely with DNS + Nomad service discovery (service.namespace.service.nomad)

@lgfa29
Contributor

lgfa29 commented Aug 28, 2023

1 - Nomad needs some kind of IPAM to track which IPs are issued
2 - it still needs an overlay network / vlan / vxlan and IP routing table modifications, with maybe iptables on the host

Wouldn't Weave be responsible for these?

I'm not too familiar with CNI/custom network world, so apologies for any silly questions 😅

@blinkinglight
Author

blinkinglight commented Aug 29, 2023

1 - Nomad needs some kind of IPAM to track which IPs are issued
2 - it still needs an overlay network / vlan / vxlan and IP routing table modifications, with maybe iptables on the host

Wouldn't Weave be responsible for these?

https://github.com/weaveworks/weave/blob/8c8476381d48820891356497bfcee6337e99a401/proxy/create_container_interceptor.go#L51 - so, if you specify network_mode = "weave", it gets an IP from the predefined IP range on the network interface / IPAM and ignores WEAVE_CIDR

If you specify network_mode = "bridge" and WEAVE_CIDR, then Nomad gets an IP that is not the Weave IP in its registry, so...

I'm not too familiar with CNI/custom network world, so apologies for any silly questions 😅

The problem I found is that if you create a CNI Weave network with, let's say, 10.0.0.0/8 and use network_mode = "weave", you can't specify a smaller piece of it, so it will allocate an IP from the /8. So my idea (and I'm not sure how it looks from a system-design point of view) is that the Docker driver calls a custom IPAM server with the /8 and asks for a smaller piece for the scope, let's say a /24, and then calls network_mode = "dockernetwork" + --ip [allocatedip]. If needed, it allocates additional IPs from the scopes in "link to". That way it would work with Weave, a custom user-defined network on -o parent=vlan123, etc.
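
Done by hand, the equivalent of what the driver would do could look roughly like this (macvlan on a VLAN parent as one example; the network name, subnet and fixed IP are placeholders standing in for what the hypothetical IPAM service would hand out):

# carve a /24 for the scope out of the larger range, backed by a VLAN parent interface
docker network create -d macvlan -o parent=vlan123 --subnet=10.123.254.0/24 myappscope
# run the task with the fixed IP that was allocated for it
docker run --network myappscope --ip 10.123.254.10 hashicorpdev/counter-api:v3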

That's a really challenging task, because we need to manage ip netns.
