
Add random_fully and rpfilter support #892

Merged
merged 2 commits into puppetlabs:master from the random-fully branch on Mar 3, 2020

Conversation

treydock
Contributor

This is the rule from Kubernetes that was breaking the iptables-save parsing:

-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE --random-fully

This was the full error:

Debug: Puppet::Type::Firewall::ProviderIptables: [instances]
Debug: Executing: '/usr/sbin/iptables-save'
Error: /Firewallchain[OUTPUT:filter:IPv4]: Failed to generate additional resources using 'generate': Parser error: random was meant to be a boolean but received value: true-fully.
/opt/puppetlabs/puppet/cache/lib/puppet/provider/firewall/iptables.rb:589:in `block in rule_to_hash'
/opt/puppetlabs/puppet/cache/lib/puppet/provider/firewall/iptables.rb:587:in `each'
/opt/puppetlabs/puppet/cache/lib/puppet/provider/firewall/iptables.rb:587:in `rule_to_hash'
/opt/puppetlabs/puppet/cache/lib/puppet/provider/firewall/iptables.rb:385:in `block in instances'
/opt/puppetlabs/puppet/cache/lib/puppet/provider/firewall/iptables.rb:380:in `each'
/opt/puppetlabs/puppet/cache/lib/puppet/provider/firewall/iptables.rb:380:in `instances'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/type.rb:1164:in `block in instances'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/type.rb:1163:in `collect'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/type.rb:1163:in `instances'
/opt/puppetlabs/puppet/cache/lib/puppet/type/firewallchain.rb:235:in `generate'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction/additional_resource_generator.rb:22:in `generate_additional_resources'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction.rb:96:in `block in evaluate'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction.rb:96:in `each'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction.rb:96:in `evaluate'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/resource/catalog.rb:239:in `block (2 levels) in apply'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:519:in `block in thinmark'
/opt/puppetlabs/puppet/lib/ruby/2.4.0/benchmark.rb:308:in `realtime'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:518:in `thinmark'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/resource/catalog.rb:238:in `block in apply'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/log.rb:161:in `with_destination'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction/report.rb:146:in `as_logging_destination'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/resource/catalog.rb:237:in `apply'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:186:in `block (2 levels) in apply_catalog'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:519:in `block in thinmark'
/opt/puppetlabs/puppet/lib/ruby/2.4.0/benchmark.rb:308:in `realtime'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:518:in `thinmark'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:185:in `block in apply_catalog'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:232:in `block in benchmark'
/opt/puppetlabs/puppet/lib/ruby/2.4.0/benchmark.rb:308:in `realtime'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:231:in `benchmark'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:184:in `apply_catalog'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:366:in `run_internal'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:234:in `block in run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/context.rb:65:in `override'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet.rb:260:in `override'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:211:in `run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:59:in `block (5 levels) in run'
/opt/puppetlabs/puppet/lib/ruby/2.4.0/timeout.rb:76:in `timeout'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:58:in `block (4 levels) in run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/agent/locker.rb:21:in `lock'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:52:in `block (3 levels) in run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:130:in `with_client'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:49:in `block (2 levels) in run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:87:in `run_in_fork'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:48:in `block in run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application.rb:179:in `controlled_run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:46:in `run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application/agent.rb:372:in `onetime'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application/agent.rb:350:in `run_command'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application.rb:375:in `block in run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:667:in `exit_on_fail'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application.rb:375:in `run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/command_line.rb:139:in `run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/command_line.rb:77:in `execute'
/opt/puppetlabs/puppet/bin/puppet:5:in `<main>'
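The failure mode is easy to reproduce outside Puppet: a substitution keyed on the bare --random flag also fires on the prefix of --random-fully, so the captured value picks up the stray -fully suffix. A minimal sketch (not the module's actual parser code):

```ruby
# Sketch of the prefix-match bug: rewriting a bare '--random' boolean
# to '--random true' without anchoring the end of the flag.
rule = '-j MASQUERADE --random-fully'

# Unanchored: matches the '--random' prefix inside '--random-fully'.
buggy = rule.sub(%r{--random}, '--random true')
# buggy is now '-j MASQUERADE --random true-fully',
# which is where the "true-fully" in the error above comes from.

# Anchored: only matches when the flag ends at whitespace or end of line.
fixed = rule.sub(%r{--random(?=\s|$)}, '--random true')
# fixed is unchanged, since '--random' here is followed by '-fully'.
```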

@treydock treydock requested a review from a team as a code owner January 31, 2020 14:17
@treydock treydock changed the title from Add random_fully support to [WIP] Add random_fully support on Jan 31, 2020
@@ -492,6 +494,8 @@ def self.rule_to_hash(line, table, counter)
values.sub(%r{\s-f(?!l)(?=.*--comment)}, ' -f true')
elsif resource_map[bool].eql?(%r{'--physdev-is-\S+'})
values.sub(%r{'#{resource_map[bool]} "! "'}, "#{resource_map[bool]} true")
elsif bool == :random
values.sub(%r{'#{resource_map[bool]}(\s|$)'}, "#{resource_map[bool]} true")

@rcythr rcythr Feb 14, 2020


Based on the failing tests it looks like this needs to incorporate the same logic for negation that the else case handles.

Suggested change
values.sub(%r{'#{resource_map[bool]}(\s|$)'}, "#{resource_map[bool]} true")
values.sub(%r{'#{resource_map[bool]}(\s|$)(?!"!")'}, "#{resource_map[bool]} true")
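To illustrate the negation-aware substitution being suggested (with '--random' hardcoded here as a stand-in for the interpolated resource_map[bool]), the lookahead lets a bare boolean flag become '<flag> true' while leaving a negated occurrence for the else branch to handle:

```ruby
# Illustrative sketch, not the module's actual code: a bare boolean
# flag is rewritten to '<flag> true', but an occurrence followed by
# the '"!"' negation marker in the saved rule is skipped.
plain   = '-A POSTROUTING -j MASQUERADE --random'
negated = '-A POSTROUTING -j MASQUERADE --random "!"'

bool_to_true = lambda do |values|
  values.sub(%r{--random(\s|$)(?!"!")}, '--random true\1')
end

bool_to_true.call(plain)   # rewritten to '... --random true'
bool_to_true.call(negated) # left unchanged for the negation handling
```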

@treydock
Contributor Author

The current set of changes is working on my RHEL 7 systems running Kubernetes with Calico without generating errors or warnings. Should I add a new type property for rpfilter, or is it sufficient to parse the value but not make it manageable?

@treydock
Contributor Author

Rules from Calico that were breaking this module:

-A cali-PREROUTING -m comment --comment "cali:V6ooGP15glg7wm91" -m mark --mark 0x40000/0x40000 -m rpfilter --invert -j DROP

@rcythr

rcythr commented Feb 21, 2020

I didn't realize this was still marked [WIP].

@treydock what is left before this is ready for review?

@treydock treydock changed the title [WIP] Add random_fully support Add random_fully support Feb 21, 2020
@treydock
Contributor Author

@rcythr I removed [WIP]. I don't think there is anything left, except maybe deciding whether I should also add a property to handle -m rpfilter --invert; for now I just added it to the parser to suppress warnings.

@david22swan
Member

@treydock Hey, taking a look at your PR I'm mostly happy, with just two small comments. Firstly, I'm not sure that I like including -m rpfilter --invert purely to suppress warnings as a default behaviour, without the choice to enable or disable it. If you could add a property to handle it I would feel much better.
Secondly, for a feature such as this we require test coverage to ensure that it works properly across our supported OSes. Some small unit tests and a simple check in the happy_path.rb acceptance test would likely be fine.
Overall though this looks to be a great piece of work and I look forward to getting it merged.

@treydock treydock force-pushed the random-fully branch 2 times, most recently from 536f75b to 2cb9200 on February 24, 2020 19:21
@treydock treydock changed the title from Add random_fully support to Add random_fully and rpfilter support on Feb 24, 2020
@treydock treydock force-pushed the random-fully branch 2 times, most recently from 5dfcc82 to 23bfd65 on February 24, 2020 21:05
@treydock
Contributor Author

@david22swan Looking closer, the --random-fully flag from the initial change is only supported on kernel >= 3.13. I have since downgraded my Kubernetes systems to RHEL 7, so I won't be able to test that particular change. I am not sure if my approach of gating the feature behind the kernelversion fact is correct. I did add a property for rpfilter and added an acceptance test.

@david22swan
Member

@treydock Thanks for adding the rpfilter property. For the acceptance test, could you also add some coverage for random-fully, as it's the main feature of this PR?

@treydock
Contributor Author

@david22swan random-fully requires kernel >= 3.13. Since a Docker container uses the host's kernel rather than the OS image's, I don't know whether the Travis environment would have a new enough kernel.

@treydock
Contributor Author

Saw this in the Travis output: Runtime kernel version: 4.15.0-1028-gcp. Will try to add acceptance tests.

@treydock
Contributor Author

I don't think any of the containers used in the current tests ship an iptables new enough to support --random-fully. I only encountered it on RHEL 8.

This is using the Litmus EL7 container; I got the same error with the Ubuntu 18.04 container:

[root@fa0d361bb247 /]# iptables -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE --random-fully
iptables v1.4.21: unknown option "--random-fully"

This is the rule from Kubernetes on RHEL 8 that prompted this PR:

-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE --random-fully

@david22swan
Member

In that case, could you add the test to the exceptions tests and put an exclusion around it so that it only targets RHEL/CentOS 8? That way it will at least get run on our release tests for now. I've cloned your changes locally and they look to work fine on both, though I didn't dig too deep.
Also, could you update the random_fully description to mention that it is only available on kernel 3.13 and later?
Sorry to keep throwing you hoops to jump through.

@treydock
Contributor Author

I am running into an issue where this is not resulting in --random-fully getting passed to iptables:

firewall { '901 - set random-fully':
  table        => 'nat',
  chain        => 'POSTROUTING',
  jump         => 'MASQUERADE',
  random_fully => true,
}

When I added random_fully I essentially copied the random logic, but if I change random_fully to random in the example above I get iptables invoked with --random, so I'm not sure why --random-fully is not getting added to the arguments list.

@david22swan
Member

david22swan commented Feb 25, 2020

After a quick look, it may have something to do with this piece of code:

# Trick the system for booleans

It applies a fix to allow for boolean values, including random.

@treydock
Contributor Author

I thought that block of code was only used by the self.instances method to prefetch the rules, not by insert, which is used to add new rules.

@david22swan
Member

True, but it may be related, and it looks like something that should be included regardless. Just a thought.

@treydock treydock force-pushed the random-fully branch 2 times, most recently from 5cff043 to ea12d1d on February 26, 2020 18:27
@treydock
Contributor Author

I found the problem: I had not properly set up the random_fully feature. I think the only OSes that will support --random-fully are EL8 and Debian 10, since the feature requires kernel >= 3.13 and iptables >= 1.6.2. I added a mention of the required versions to the property docs.
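A rough sketch of the version gate being described (plain Ruby with illustrative names, not the module's actual API; the module itself wires this kind of check through Puppet's provider feature mechanism):

```ruby
# Hedged sketch of the support check: --random-fully needs
# kernel >= 3.13 and iptables >= 1.6.2. Method and constant names
# here are illustrative, not taken from the module.
MIN_KERNEL   = Gem::Version.new('3.13')
MIN_IPTABLES = Gem::Version.new('1.6.2')

def random_fully_supported?(kernelversion, iptables_version)
  # Keep only the leading numeric part, e.g. '4.18.0-147.el8' -> '4.18.0'
  kernel = Gem::Version.new(kernelversion[/\A\d+(\.\d+)*/])
  kernel >= MIN_KERNEL && Gem::Version.new(iptables_version) >= MIN_IPTABLES
end

random_fully_supported?('4.18.0-147.el8', '1.8.4')   # EL8: supported
random_fully_supported?('3.10.0-1062.el7', '1.4.21') # EL7: not supported
```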

@treydock
Contributor Author

treydock commented Mar 2, 2020

@david22swan Any more changes needed?

@david22swan
Member

Apologies, but we are currently getting some failures on the module itself, and they will need to be fixed before we can merge your work.
Once we have them under control, however, I'll be happy to merge.

@treydock
Contributor Author

treydock commented Mar 3, 2020

@david22swan Would it be possible to re-run the failed tests? They timed out with no errors or failures, so I'm not sure what actually failed or whether it was some transient problem.

@treydock
Contributor Author

treydock commented Mar 3, 2020

I think the issue in Travis was transient. I just ran bundle exec rake litmus:acceptance:parallel against the Ubuntu images and all three passed. Please re-run the tests if possible.

@david22swan
Member

@treydock The failed tests were caused by a Ruby version issue in our Travis file that has since been updated.
If you could rebase your changes, the tests should now run green and I will be free to merge.

@treydock
Contributor Author

treydock commented Mar 3, 2020

@david22swan I've rebased and tests now pass.

Member

@david22swan david22swan left a comment


LGTM
Thank you for your work :)

@david22swan david22swan merged commit ab1e8f9 into puppetlabs:master Mar 3, 2020
@rcythr

rcythr commented Mar 25, 2020

@david22swan How does the release timeline work for this project? Would it be possible to tag a new release version of this project soon?

Sorry for bringing this thread back from the dead, but this is the change I'm desperate to pull down sooner rather than later.

@david22swan
Member

@rcythr For modules we usually release once the amount of changes has hit a tipping point, enough time has passed, or on direct request. Since you pinged about this I'll pass it along, and a release should be made soon.
