
kube-proxy currently incompatible with `iptables >= 1.8` #71305

Open
drags opened this issue Nov 21, 2018 · 75 comments · May be fixed by #82966


@drags commented Nov 21, 2018

What happened:

When creating nodes on machines with iptables >= 1.8, kube-proxy is unable to initialize and route service traffic. The following is logged:

kube-proxy-22hmk kube-proxy E1120 07:08:50.135017       1 proxier.go:647] Failed to ensure that nat chain KUBE-SERVICES exists: error creating chain "KUBE-SERVICES": exit status 3: iptables v1.6.0: can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
kube-proxy-22hmk kube-proxy Perhaps iptables or your kernel needs to be upgraded.

This is a compatibility issue in iptables, which I believe is called directly by kube-proxy. It is likely due to the module reorganization in iptables' move to nf_tables: https://marc.info/?l=netfilter&m=154028964211233&w=2

iptables 1.8 in a container is backwards compatible with a host running iptables 1.6, but an iptables 1.6 container fails against a host running iptables 1.8 in nf_tables mode:

root@vm77:~# iptables --version
iptables v1.6.1
root@vm77:~# docker run --cap-add=NET_ADMIN drags/iptables:1.6 iptables -t nat -Ln
iptables: No chain/target/match by that name.
root@vm77:~# docker run --cap-add=NET_ADMIN drags/iptables:1.8 iptables -t nat -Ln
iptables: No chain/target/match by that name.



root@vm83:~# iptables --version
iptables v1.8.1 (nf_tables)
root@vm83:~# docker run --cap-add=NET_ADMIN drags/iptables:1.6 iptables -t nat -Ln
iptables v1.6.0: can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
root@vm83:~# docker run --cap-add=NET_ADMIN drags/iptables:1.8 iptables -t nat -Ln
iptables: No chain/target/match by that name.

However, the kube-proxy image is based on debian:stretch, where iptables 1.8 may only become available via stretch-backports.
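
For reference, a quick way to compare the host iptables with the one bundled in the kube-proxy image (the image name and tag here are only examples; use whatever your cluster actually runs):

# iptables on the host
iptables --version
# iptables bundled in the kube-proxy image (image/tag are examples)
docker run --rm --entrypoint iptables k8s.gcr.io/kube-proxy:v1.12.2 --version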

How to reproduce it (as minimally and precisely as possible):

Install a node onto a host with iptables-1.8 installed (ex: Debian Testing/Buster)

Anything else we need to know?:

I can keep these nodes in this config for a while, feel free to ask for any helpful output.

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.4", GitCommit:"bf9a868e8ea3d3a8fa53cbb22f566771b3f8068b", GitTreeState:"clean", BuildDate:"2018-10-25T19:06:30Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:

libvirt

  • OS (e.g. from /etc/os-release):
PRETTY_NAME="Debian GNU/Linux buster/sid"
NAME="Debian GNU/Linux"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
  • Kernel (e.g. uname -a):
Linux vm28 4.16.0-1-amd64 #1 SMP Debian 4.16.5-1 (2018-04-29) x86_64 GNU/Linux
  • Install tools:

kubeadm

  • Others:

/kind bug

@drags (Author) commented Nov 21, 2018

/sig network

@k8s-ci-robot added sig/network and removed needs-sig labels Nov 21, 2018
@drags (Author) commented Nov 28, 2018

@kubernetes/sig-network-bugs

@k8s-ci-robot (Contributor) commented Nov 28, 2018

@drags: Reiterating the mentions to trigger a notification:
@kubernetes/sig-network-bugs

In response to this:

@kubernetes/sig-network-bugs

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@danderson commented Dec 1, 2018

For the record, this probably breaks at least Calico and Weave as well, based on my abject failures to get pod<>pod networking to function on Debian Buster (which has upgraded to iptables 1.8). I'm filing bugs for that now, but this breaking change to iptables may be worth a wider broadcast to the k8s community.

@uablrek (Contributor) commented Dec 17, 2018

kube-proxy itself seems compatible with iptables >= 1.8, so the title of this issue is somewhat misleading. I have run basic tests and see no problems when using the correct version of the user-space iptables (and ip6tables for IPv6) and the supporting libs. I don't think this problem can be fixed by altering code in kube-proxy.

Tested versions: iptables v1.8.2, Linux 4.19.3

The problem seems to be that the iptables user-space program (and libs) is (and has always been) dependent on the kernel version on the host. When the iptables user-space program lives in a container with an old version, this problem is bound to happen sooner or later, and it will happen again.

The kernel/user-space dependency is one of the problems that nft is supposed to fix. A long-term solution may be to replace iptables with nft or bpf.

@uablrek (Contributor) commented Dec 17, 2018

iptables v1.8.2 has two modes (depending on soft-links):

# iptables -V
iptables v1.8.2 (nf_tables)

and:

# iptables -V
iptables v1.8.2 (legacy)

kube-proxy seems to work fine with both.

BTW, I have not tested any network policies; that is not kube-proxy of course, but it is iptables.

@drags (Author) commented Dec 17, 2018

While the title is somewhat murky, the fact is that kube-proxy is distributed using images based on debian-stretch and pulls in the iptables userspace from that distribution. When those images are run on hosts with a newer iptables, this fails.

To be clear: this isn't a defect in the code, it's a defect in packaging/release.

@thockin (Member) commented Dec 18, 2018

kube-proxy is distributed using images based on debian-stretch and pulls in the iptables userspace from that distribution. When those images are run on hosts with a newer iptables this fails

Do you mean it breaks on a newer kernel? The iptables binary is part of kube-proxy so what would the on-host iptables have to do with anything?

I don't understand.

@danderson commented Dec 18, 2018

There are 2 sets of modules for packet filtering in the kernel: ip_tables, and nf_tables. Until recently, you controlled the ip_tables ruleset with the iptables family of tools, and nf_tables with the nft tools.

In iptables 1.8, the maintainers have "deprecated" the classic ip_tables: the iptables tool now does userspace translation from the legacy UI/UX, and uses nf_tables under the hood. So, the commands look and feel the same, but they're now programming a different kernel subsystem.

The problem arises when you mix and match invocations of iptables 1.6 (the previous stable) and 1.8 on the same machine: although they look identical, they're programming different kernel subsystems. In practice, at least Docker does some stuff with iptables on the host (uncontained), and so you end up with some rules in nf_tables and some rules (including those programmed by kube-proxy and most CNI addons) in legacy ip_tables.

Empirically, this causes weird and wonderful things to happen - things like if you trace a packet coming from a pod, you see it flowing through both ip_tables and nf_tables, but even if both accept the packet, it then vanishes entirely and never gets forwarded (this is the failure mode I reported to Calico and Weave - bug links upthread - after trying to run k8s on debian testing, which now has iptables 1.8 on the host).

Bottom line, the networking containers on a machine have to be using the same minor version of the iptables binary as exists on the host.
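
One way to see this split in practice (a sketch, assuming an iptables >= 1.8 host that ships both backends' save tools, as Debian Buster does):

# rules programmed through the legacy ip_tables backend
iptables-legacy-save | grep -c '^-A'
# rules programmed through the nf_tables backend
iptables-nft-save | grep -c '^-A'
# if both counts are non-zero, rules are split across the two subsystems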

@uablrek (Contributor) commented Dec 18, 2018

@danderson Do you think it would be sufficient to enforce (if possible) that the host's iptables runs in "legacy" mode:

# iptables -V
iptables v1.8.2 (legacy)

and keep the >=1.8 version?

I build and install iptables myself and the "mode" is determined by a soft-link;

# ls -l /usr/sbin/iptables
lrwxrwxrwx    1 root     root            20 Dec 18 08:47 /usr/sbin/iptables -> xtables-legacy-multi*

I assume the same applies for "Debian Testing/Buster" and others, but I don't know for sure.

@thockin (Member) commented Dec 18, 2018

@danderson thanks. That was very succinct.

What a crappy situation. How are we to know what is on the host? Can we include BOTH binaries in our images and probe the machine to see if either has been used previously (e.g. lsmod or something in /sys)?

@danderson commented Dec 18, 2018

As a preface, one thing to note: iptables 1.8 ships two binaries, iptables and iptables-legacy. The latter always programs ip_tables. So, there's fortunately no need to bundle two versions of iptables into a container, you can bundle just iptables 1.8 and be judicious about which binary you invoke... At least until the -legacy binary gets deleted, presumably in a future release.
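
For reference, the two front-ends report their backend via -V (versions shown are only illustrative):

iptables -V           # e.g. iptables v1.8.2 (nf_tables)
iptables-legacy -V    # e.g. iptables v1.8.2 (legacy)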

Here are some requirements I think an ideal solution would have:

  • k8s networking must continue to function, obviously.
  • should be robust to the host iptables getting upgraded while the system is running (e.g. apt-get upgrade in the background).
  • should be robust to other k8s pods (e.g. CNI addons) using the "wrong" version of iptables.
  • should be invisible to cluster operators - k8s should just keep working throughout.
  • should not require a "flag day" on which everything must cut over simultaneously. There's too many things in k8s that touch iptables (docker, kube-proxy, CNI addons) to enforce that sanely, and k8s's eventual consistency model doesn't make a hard cutover without downtime possible anyway.
  • at the very least, the problem should be detected and surfaced as a fatal node misconfiguration, so that any automatic cluster healing can attempt to help.

So far I've only thought up crappy options for dealing with this. I'll throw them out in the hopes that it leads to better ideas.

  • Mount chunks of the host filesystem (/usr/sbin, /lib, ...) into kube-proxy's VFS, and make it chroot() to that quasi-host-fs when executing iptables commands. That way it's always using exactly the binary present on the host. Introduces obvious complexity, as well as a bunch of security risks if an attacker gets code execution in the kube-proxy container.
  • Using iptables 1.8 in the container, probe both iptables and iptables-legacy for the presence of rules installed by the host. Hopefully, there will be rules in only one of the two, and that can tell kube-proxy which one to use. This is subject to race conditions, and is fragile to host mutations that happen after kube-proxy startup (e.g. an apt-get upgrade that upgrades iptables and restarts the docker daemon, shifting its rules over to nf_tables). Can solve it with periodic reconciling (i.e. "oops, host seems to have switched to nf_tables, wipe all ip_tables rules and reinstall them in nf_tables!"). A sketch of this probing approach appears after this list.
  • Punt the problem up to kubeadm and an entry in the KubeProxyConfiguration cluster object. IOW, just document that "it's your responsibility to correctly tell kube-proxy which version of iptables you're using, or things will break." Relies on humans to get things right, which I predict will cause a rash of broken clusters. If we do this, we should absolutely also wire something into node-problem-detector that fires when both ip_tables and nf_tables have rules programmed.
  • Have a cutover release in which kube-proxy starts using nf_tables exclusively, through the nft tools, and mandate that host OSes for k8s must do everything in nf_tables, no ip_tables allowed. Likely intractable given the variety of addons and non-k8s software that does stuff to the firewall (same reason iptables has endured all these years even though nftables is measurably better in every way).
  • Find some kernel hackers and ask them if there's any way to make ip_tables and nf_tables play nicer together, so that userspace can just continue tolerating mismatches indefinitely. I'm assuming this is ~impossible, otherwise they'd have done it already to facilitate the transition to nf_tables.
  • Create a new DaemonSet whose sole purpose is to be an RPC-to-iptables translator, and get all iptables-using pods in k8s to use it instead of talking direct to the kernel. Clunky, expensive, and doesn't solve the problem of host software touching stuff.
  • Just document (via a Sonobuoy conformance test) that this is a big bag of knives, and kick the can over to cluster operators to figure out how to safely upgrade k8s in place given these constraints. I can at least speak on behalf of GKE and say that I sure hope it doesn't come to that, because all our options are strictly worse. I can also speak as the author of MetalLB and say that the support load from people with broken on-prem installs will be completely unsustainable for me :)

Of all of these, I think "probe with both binaries and try to conform to whatever is already there" is the most tractable if kube-proxy were the only problem pod... But given the ecosystem of CNI addons and other third-party things, I foresee never ending duels of controllers flapping between ip_tables and nf_tables endlessly, all trying to vaguely converge on a single stack, but never succeeding.
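
A minimal sketch of the probe-and-conform option above (purely illustrative: the binary names are the ones iptables 1.8 ships on Debian Buster, and the rule-count heuristic is an assumption, not anything kube-proxy actually does today):

#!/bin/sh
# Guess which backend the host is already using by counting existing rules
# in each subsystem, then dispatch to that backend's binary.
legacy_rules=$(iptables-legacy-save 2>/dev/null | grep -c '^-A')
nft_rules=$(iptables-nft-save 2>/dev/null | grep -c '^-A')
if [ "$nft_rules" -gt "$legacy_rules" ]; then
    mode=nft
else
    mode=legacy
fi
exec "iptables-$mode" "$@"

A wrapper like this would have to re-run the probe periodically (or be restarted) to follow a host that switches backends later, which is exactly the reconciliation caveat noted in the list.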

@uablrek (Contributor) commented Dec 19, 2018

When using iptables 1.8.2 in nf_tables mode, ipset (my version: v6.38) is still used by kube-proxy. But in nft, ipset functionality is "built-in".

It seems to work anyway, but I can't understand how; maybe my testing is insufficient.

I will test more thoroughly and make sure the ipsets are actually matched, not just defined and unused while my tests happen to work for other reasons.

If anyone can explain the relation between iptables in nf_tables mode and ipset, please give a reference to some documentation.

@uablrek (Contributor) commented Dec 19, 2018

ipset is only used in proxy-mode=ipvs. I get hits on ipset rules, so they work in some way:

Chain KUBE-SERVICES (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *      !11.0.0.0/16          0.0.0.0/0            match-set KUBE-CLUSTER-IP dst,dst /* Kubernetes service cluster ip + port for masquerade purpose */
   23  1380 KUBE-MARK-MASQ  all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP dst,dst /* Kubernetes service external ip + port for masquerade and filter purpose */
   23  1380 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP dst,dst PHYSDEV match ! --physdev-is-in ADDRTYPE match src-type !LOCAL /* Kubernetes service external ip + port for masquerade and filter purpose */
@uablrek (Contributor) commented Dec 19, 2018

When using nf_tables mode, duplicate rules are added indefinitely to the KUBE-FIREWALL chain:

Chain KUBE-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            mark match 0x8000/0x8000 /* kubernetes firewall for dropping marked packets */
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            mark match 0x8000/0x8000 /* kubernetes firewall for dropping marked packets */
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            mark match 0x8000/0x8000 /* kubernetes firewall for dropping marked packets */
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            mark match 0x8000/0x8000 /* kubernetes firewall for dropping marked packets */
....

in both proxy-mode ipvs and iptables.

@Vonor commented Dec 30, 2018

I experienced the same issue in #72370. As a workaround I found this in the Oracle docs, which made the pods able to communicate with each other as well as with the outside world again.

@danwinship (Contributor) commented Jan 25, 2019

I discussed iptables/nft incompatibility in #62720 too, although that was before the iptables binary got rewritten...

It seems like for right now, the backward-compatible answer is "you have to make sure the host is using iptables in legacy mode".

@praseodym (Contributor) commented Mar 16, 2019

FWIW, I hit this issue as well when deploying Kubernetes on Debian Buster. I've included some logging in #75418.

@mcoreix commented Apr 3, 2019

This works for me: update-alternatives --set iptables /usr/sbin/iptables-legacy
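
On Debian-based hosts the related tools usually need to be switched as well (a sketch; check update-alternatives --get-selections to see which alternatives your release actually provides, and restart Docker/kube-proxy or reboot so the rules are recreated in the legacy backend):

update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
update-alternatives --set arptables /usr/sbin/arptables-legacy
update-alternatives --set ebtables /usr/sbin/ebtables-legacy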

@thockin (Member) commented Sep 9, 2019

Quoting @bradfitz's proposal:

When invoked, iptables-proxy-client forwards its arguments and environment (or necessary subset)
off to a bind-mounted Unix socket or link local service running on the host, communicating with a
new host-side iptablesd daemon/service that executes the iptables/etc command on the host (using
the host's preferred kernel mechanism) and replies with the stdout/stderr/exit code back to the
client.

We discussed a bit offline, but for the record, some thoughts:

We would have to do this on every hostNetwork: true && privileged: true pod (also maybe consider net capabilities). That doesn't seem SO bad - it's a small set. Do we have to handle RuntimeClass that isn't cgroups-based? Probably not going to intersect with hostNetwork && privileged.

I don't think we need to handle non-privileged pods (or without the caps we care about) - they should not be able to use iptables anyway, but we should triple check that.

What about things that run iptables inside their own netns (e.g. istio's capture)? @danwinship do they need to use the same backend, or can it be mixed mode at that scope? @louiscryan FYI

We would have to run this new iptablesd daemonset on every node. Need telemetry, provisioning, etc. That should be a wash wrt actual memory consumption, but only if we can reclaim some from kube-proxy and/or calico and/or ...

This plays very badly with PodSecurityPolicy unless we trap in WAY below it (at CRI) which is a much harder fix. Doing it as admission is more transparent, but possibly subject to ordering bugs.

@lachie83 (Member) commented Sep 9, 2019

👋 Friendly ping from 1.16 release lead. I wanted to let you know that we are planning to cut 1.16.0-rc.1 tomorrow and go into code-thaw. Please let me know if this fix needs to be considered as 1.16 release blocking.

@danwinship (Contributor) commented Sep 9, 2019

NB: We'd need every single container that uses iptables to participate in this...

Every single container that uses iptables in the root network namespace. It's fine for, eg, istio, to use whatever iptables mode it wants in the pod namespace. (Though if you have multiple sidecars in a pod they all need to use the same mode...)

We'd also need some EOL plan - when can we stop doing this?

Probably as long as we care about people running Kubernetes on RHEL/CentOS 7. (People will probably be running RHEL 7 longer than people are running CentOS 7, but we might care about those users less. Either way, by the time we stop caring about that, everyone else should be using nft mode.)

I don't think we need to handle non-privileged pods (or without the caps we care about) - they should not be able to use iptables anyway, but we should triple check that.

That is correct. Pods need to be hostNetwork and either privileged or CAP_NET_ADMIN for them to matter.

@thockin (Member) commented Sep 9, 2019

The only thing we can do in the near term is tell people to use legacy mode.

Even 1.8.2 (as present in debian-buster) is broken. #82361

How best to document this?

@danwinship (Contributor) commented Sep 9, 2019

I'm proposing we write a new static binary (let's call it iptables-proxy-client) in, say, Go, and then whenever we start a container we bind mount iptables-proxy-client at all possible paths that iptables and friends are commonly seen at in various distros/containers.

Do we do anything even remotely similar to this currently? (The "overwriting binaries in other people's containers without telling them" part, not the proxying part.)

It's nice in that it solves the problem for everyone all at once but... not nice in every other way 🙂

Random idea halfway between the two current approaches: add a new volume type "hostBinaries" or "kubernetesHelpers" or something, and if you mount a volume of that type into your pod, you'll find that it contains iptables binaries that do the right thing via unspecified means. (And in the future, maybe also contains other binaries to solve similar host/pod interaction issues? Kind of a more powerful downward API sort of thing.)

@thockin (Member) commented Sep 9, 2019

I agree it's kind of awful, but so is the problem...

We'd STILL need to run that daemonset, which is a significant change.

@danwinship (Contributor) commented Sep 9, 2019

We should be able to give containers a working set of iptables-legacy or iptables-nft binaries directly rather than needing a proxy. Just give them an entire chroot rather than just the binaries. (ie, build a Debian container image containing only the iptables package and the packages it depends on (eg, glibc), and then mount that somewhere in the pod). Then instead of overwriting their /usr/sbin/iptables with a proxy binary, you overwrite it with a shell script that does chroot /iptables-binary-volume-sadkjf -- iptables "$@", etc. Or that works with the hostBinaries volume idea too; the volume would just contain the chroot within it in addition to the wrapper scripts.
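
A rough sketch of such a wrapper (the mount path /iptables-binary-volume is illustrative, not an agreed-upon name):

#!/bin/sh
# Hypothetical wrapper installed over /usr/sbin/iptables inside the pod.
# It executes a known-good iptables from a mounted chroot that also carries
# the libraries that binary needs.
exec chroot /iptables-binary-volume /usr/sbin/iptables "$@"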

@bradfitz commented Sep 9, 2019

It's nice in that it solves the problem for everyone all at once but... not nice in every other way.

Agreed. I proposed it because it'd fix everything all at once, rather than waiting for all network add-ons to update to work either the old way (just exec iptables) or some new way (find some new downward API directory to chroot into). How long would that take?

We'd STILL need to run that daemonset, which is a significant change.

Put it in the kubelet? Too gross? It's already in the business of calling iptables, no?

praseodym added commits to praseodym/kubernetes-website that referenced this issue Sep 9–10, 2019
@squeed (Contributor) commented Sep 10, 2019

Reminder (as this discussion shakes out): if you're calling iptables, there is a nonzero chance you will need to load a module. So it has become part of the lore that best-practice iptables callers always need to bind-mount /lib/modules from the host.

If we're going down the route of half-magic bind-mounts, then I think I'd rather see the kubelet assemble it out of bind-mounts from the host, rather than needing a specific container.

@danwinship (Contributor) commented Sep 10, 2019

if you're calling iptables, there is a nonzero chance you will need to load a module.

No, there isn't. Kubelet will always have created iptables rules before starting any pods.

I'd rather see the kubelet assemble it out of bind-mounts from the host, rather than needing a specific container.

I thought about that, but the reason we didn't try to do that before is that there are no safe assumptions you can make about what the distro-installed version of iptables does and doesn't need from the host filesystem. (eg, there's no a priori reason to think that /etc/alternatives would be needed, since the iptables source code itself does not refer to any such thing.) If you want to use the system iptables you have to mount the entire filesystem.

@fasaxc (Contributor) commented Sep 10, 2019

if you're calling iptables, there is a nonzero chance you will need to load a module.

No, there isn't. Kubelet will always have created iptables rules before starting any pods.

Each match and action type has its own module, loaded on demand. So if you do -m set then you'll trigger loading of the xt_set module. If you do -j DNAT, then you'll load xt_DNAT and so on.

@danwinship (Contributor) commented Sep 10, 2019

There are two different kinds of kernel module loading here: the kind that Casey was worrying about is that if you run /usr/sbin/iptables and it finds that the ip_tables (or ip6_tables) module is not loaded, then it will explicitly invoke modprobe to load it, and that only works if modprobe and ip_tables.ko are available.

For the thing you're talking about, the iptables binary isn't what loads the module. When you send a rule to the kernel using -m set, the kernel netfilter code will decide that it needs to have the xt_set module, and so it will send a request to userspace, and that request gets received and handled by udevd in the root net/pid/mount/etc namespace, regardless of where the original iptables call came from. So the container doesn't need access to the modules in that case.

praseodym added a commit to praseodym/kubernetes-website that referenced this issue Sep 10, 2019
@praseodym (Contributor) commented Sep 10, 2019

@danwinship @thockin Looks like it would be good to document any workarounds for 1.16 release notes since folks are hitting this already? (see #82361 for example)

I've created kubernetes/website#16271 to add documentation to the Installing kubeadm page. There is some discussion in that PR (comment) on whether we should add this information in other places too.

Adding to release notes is already on the agenda: #81930 (comment)

praseodym added a commit to praseodym/kubernetes-website that referenced this issue Sep 12, 2019
raynix added a commit to raynix/ansible-kubeadm that referenced this issue Sep 20, 2019
@danwinship referenced a pull request that will close this issue Sep 21, 2019
@vbouchaud commented Sep 22, 2019

For future readers not able to make kube-proxy work for some reason, you might want to look at a replacement: https://github.com/cloudnativelabs/kube-router (it does the job plus some other stuff; please take a look at the documentation before doing anything)

@kfox1111 commented Sep 25, 2019

Kind of a creepy idea, but you could use nsenter to run the iptables command on the host, in the host's environment.
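
For example (a sketch; this needs a privileged pod with hostPID so that PID 1's namespaces are reachable):

# run the host's own iptables, inside the host's mount and network namespaces,
# from within a privileged hostPID container
nsenter --target 1 --mount --net -- iptables -t nat -L KUBE-SERVICES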

@champtar commented Oct 1, 2019

I've started to play with CentOS 8, it comes with iptables v1.8.2 (nf_tables) but without iptables (legacy)
https://access.redhat.com/solutions/4377321
Haven't found how OpenShift 4 works yet

@aojea (Contributor) commented Oct 1, 2019

I've started to play with CentOS 8, it comes with iptables v1.8.2 (nf_tables) but without iptables (legacy)
https://access.redhat.com/solutions/4377321
Haven't found how OpenShift 4 works yet

The iptables user-space tools are provided in a container: #71305 (comment)

@danwinship (Contributor) commented Oct 1, 2019

The iptables 1.8.2 in RHEL/CentOS 8 has the necessary bugfixes from 1.8.3 backported.
