iptables: adjust run scripts for more configuration flexibility #31145

Closed

Conversation

@heliocat (Contributor) commented May 27, 2021

The single configuration file approach that the iptables services
provide precludes using it in more complicated buildouts such as ones
defined with config management tools. This change takes a hybrid
approach of the old method (to preserve backwards compatibility, etc)
and the method taken with void-ansible-roles/network.

Changes:
- No longer flush tables prior to loading new data - rely on finish in all
  cases
- Load data from /etc/iptables/iptables.rules and all found
  /etc/iptables.d/*.rules
- Ditto ip6 equivalents (ip6rules.rules, ip6tables.d/*.{,6}rules)
- Flush nat table in both v4 and v6 mode (nat table supported on v6 since
  kernel 3.7)

Caveats: the ip6tables.d match is overly explicit since dash does not
provide brace expansion and there is no particularly clean way to match
a single character or empty when expanding globs.
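
To make the layout concrete, here is a minimal sketch of the loading scheme described above (illustrative only, not the exact run script from this PR; paths are the ones named in the commit message):

#!/bin/sh
# Sketch only -- not the exact run script from this PR.
# Load the legacy single file first, then any drop-in fragments.
for f in /etc/iptables/iptables.rules /etc/iptables.d/*.rules; do
    [ -f "$f" ] || continue            # skip an unexpanded glob or a missing file
    iptables-restore --noflush "$f"    # --noflush so earlier fragments survive
done
# The ip6tables run script does the same, but has to spell the glob out twice
# (/etc/ip6tables.d/*.rules and /etc/ip6tables.d/*.6rules) because dash
# cannot brace-expand ip6tables.d/*.{,6}rules.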

@ailiop-git (Contributor) commented

General

Have the results of the proposed changes been tested?

  • I use the packages affected by the proposed changes on a regular basis and confirm this PR works for me
  • I generally don't use the affected packages but briefly tested this PR

Does it build and run successfully?

(Please choose at least one native build and, if supported, at least one cross build. More are better.)

  • I built this PR locally for my native architecture (x86_64)
  • I built this PR locally for these architectures (if supported; mark crossbuilds):
    • aarch64-musl
    • armv7l
    • armv6l-musl

@heliocat (Contributor, Author) commented

@the-maldridge Relevant to your interests.

@the-maldridge (Member) left a comment

This also introduces a behavior change in that the service will now always enter OK state if no rules are present. This is a pretty serious change to make to the firewall, so it either needs to be justified or reverted.

(Three inline review comments, two on srcpkgs/iptables/files/ip6tables/run and one on srcpkgs/iptables/files/iptables/run, are marked outdated and resolved.)
@heliocat force-pushed the iptables branch 3 times, most recently from 8959846 to d3f94da on May 27, 2021 05:02
@heliocat (Contributor, Author) commented

Run scripts updated to exit when no rules are loaded. I don't consider the previous behavior of exiting 0 on incorrect use to be good form, so I've updated the exit code. Either way, the finish scripts shipped here don't care about $1, so this is a behavior change with no impact.

@ailiop-git (Contributor) commented

I generally dislike splitting config files over those blah.d dirs (especially when they're scattered around the hier like sysctl.d..), but that's just a personal preference. If this change indeed serves a real and actual need right now, then so be it.

My only two other comments would be:

The naming of the rule var may be misleading; those are ruleset fragments (technically tables since this is the minimum granule that iptables can operate on but oh well), rather than individual filtering rules being loaded at every iteration.

Also, why not just cat /etc/iptables/iptables.rules /etc/iptables.d/*.rules | iptables-restore so that the entire aggregated ruleset will be loaded in one go rather than invoking iptables-restore repeatedly? This would simplify and avoid the need to add the --noflush flag and the counter/exit code issue altogether.

@ailiop-git (Contributor) commented

> Also, why not just cat /etc/iptables/iptables.rules /etc/iptables.d/*.rules | iptables-restore so that the entire aggregated ruleset will be loaded in one go rather than invoking iptables-restore repeatedly? This would simplify and avoid the need to add the --noflush flag and the counter/exit code issue altogether.

Actually, --noflush will still be required (otherwise iptables-restore will flush any tables that are provided multiple times within the same aggregated ruleset, even within a single invocation).
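
For reference, the suggested single-invocation variant would look roughly like this (a sketch, not code from the PR):

# Aggregate every rules file and feed a single iptables-restore. -n/--noflush
# is still needed: without it, each *table block in the concatenated input
# flushes that table again, wiping rules loaded from an earlier fragment.
cat /etc/iptables/iptables.rules /etc/iptables.d/*.rules | iptables-restore -n
# Note: this still exits 0 when no rules files exist at all, which is the
# detection problem discussed below.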

@heliocat (Contributor, Author) commented May 27, 2021

> I generally dislike splitting config files over those blah.d dirs (especially when they're scattered around the hier like sysctl.d..), but that's just a personal preference. If this change indeed serves a real and actual need right now, then so be it.

The void-infrastructure project and the void-ansible-roles/network subproject use blah.d directories and a dhcpcd hook script to assemble a composite firewall definition out of rules dropped into .d directories. My goal here is to move that logic out of a custom ansible-managed set of scripts and into the iptables package itself, in order to have a cleaner migration path away from dhcpcd as the network manager for statically addressed hosts.

> My only two other comments would be:
>
> The naming of the rule var may be misleading; those are ruleset fragments (technically tables since this is the minimum granule that iptables can operate on but oh well), rather than individual filtering rules being loaded at every iteration.

I don't have an opinion either way; if this is something you feel strongly about, it's a fairly trivial change.

> Also, why not just cat /etc/iptables/iptables.rules /etc/iptables.d/*.rules | iptables-restore so that the entire aggregated ruleset will be loaded in one go rather than invoking iptables-restore repeatedly? This would simplify and avoid the need to add the --noflush flag and the counter/exit code issue altogether.

You already noted the --noflush point. For the counter, some kind of detection is still needed to avoid @the-maldridge's earlier issue, where wrong usage (no rules at all) leaves the service "up":

# cat /etc/iptables/iptables.rules /etc/iptables.d/*.rules | iptables-restore -n ; echo $?
cat: /etc/iptables/iptables.rules: No such file or directory
cat: '/etc/iptables.d/*.rules': No such file or directory
0

In the previous iteration this was handled by the single file check, but you can't rely on that now since I'm trying to support rules in either or both locations. Similarly, switching to bash and pipefail doesn't work because cat will exit 1 if any input file doesn't exist, which will be the case for installs that only use iptables.rules (all classic installs). The only way to get the behavior I want is to set errexit and nullglob in bash, and then call the pipeline with timeout in order to catch a null read in the case where no rules files exist. I consider that a heavier and harder-to-read change than an accumulator, though I will admit that count is a semantically poor name, so I've staged a change updating it to seen.
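
A compact sketch of that accumulator approach in dash (illustrative; the staged change may differ in names and structure):

# Sketch only. Count how many rules files were actually loaded.
seen=0
for f in /etc/iptables/iptables.rules /etc/iptables.d/*.rules; do
    [ -f "$f" ] || continue          # a literal, unexpanded glob fails this test
    iptables-restore --noflush "$f"
    seen=$((seen + 1))
done
[ "$seen" -gt 0 ] || exit 2          # nothing loaded: wrong use, not a failed load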

Let me know about how you feel on rule vs. ruleset, and if you have a better approach to glob-based file detection in dash. The best I can come up with is a loop and (if necessary) accumulator but there might be something I'm missing.

The single configuration file approach that the iptables services
provide precludes using it in more complicated buildouts such as ones
defined with config management tools. This change takes a hybrid
approach of the old method (to preserve backwards compatibility, etc)
and the method taken with void-ansible-roles/network.

Changes:
- No longer flush tables prior to loading new data - rely on finish in all
  cases
- Load data from /etc/iptables/iptables.rules and all found
  /etc/iptables.d/*.rules
- Ditto ip6 equivalents (ip6rules.rules, ip6tables.d/*.{,6}rules)
- Flush nat table in both v4 and v6 mode (nat table supported on v6 since
  kernel 3.7)
- No-rule bailouts are handled with a post-load accumulator instead of
  exiting entirely when a rules file doesn't exist. The run script uses
  exit code 2 in that case to differentiate between a failed load and
  wrong use

Caveats: the ip6tables.d match is overly explicit since dash does not
provide brace expansion and there is no particularly clean way to match
a single character or empty when expanding globs.
@the-maldridge (Member) commented

The variable should probably be named 'fragment'. More generally, I think there is also a case that isn't being checked: if iptables-restore fails for any fragment, that should abort the load.

@heliocat (Contributor, Author) commented May 28, 2021

> The variable should probably be named 'fragment'. More generally, I think there is also a case that isn't being checked: if iptables-restore fails for any fragment, that should abort the load.

iptables-restore file || exit 1 should do just that (unless I misunderstood something).
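
In the loop, that check would look like this (sketch only):

    iptables-restore --noflush "$f" || exit 1   # any failed fragment aborts the whole load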

@ailiop-git (Contributor) commented

> The variable should probably be named 'fragment'. More generally, I think there is also a case that isn't being checked: if iptables-restore fails for any fragment, that should abort the load.
>
> iptables-restore file || exit 1 should do just that (unless I misunderstood something).

We probably also need to flush everything in case a fragment fails, since it may leave the ruleset in an undesirable state, and also subsequent sv starts/restores will end up duplicating ruleset entries.

@heliocat (Contributor, Author) commented

That will happen; runsv always runs the finish script after run exits, regardless of the reason.
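
For illustration, a flush-on-exit finish script could be as simple as the following (a sketch based on the behavior described in the commit message, not the packaged finish script):

#!/bin/sh
# Sketch only -- not the packaged finish script. runsv runs finish after
# every run exit, so a failed or partial load does not leave stale rules.
iptables -F            # flush all chains in the default (filter) table
iptables -X            # delete user-defined chains
iptables -t nat -F     # flush the nat table (done for v4 and v6 per the PR)
iptables -t nat -X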

@heliocat (Contributor, Author) commented Jun 7, 2021

@ailiop-git do you have any other unaddressed concerns?

@the-maldridge can you drop the Change Requested flag and merge if there are no other issues from myself or ailiop? I think I've addressed everything.

@heliocat (Contributor, Author) commented

Putting this on hold while I think about the best path to the final state. Should have done this like two weeks ago but...

@github-actions (bot) commented

Pull Requests become stale 90 days after last activity and are closed 14 days after that. If this pull request is still relevant bump it or assign it.

github-actions bot added the Stale label on May 20, 2022
github-actions bot closed this on Jun 4, 2022