NAT reflection is broken #6650
What happens if you change ...?

@AdSchellevis oh, good question - I don't know! The rule has a selectbox containing ... (short commercial break while I try that...) That made no difference. Just attaching a couple of screenshots to ensure that I tried what you had in mind...? ...obviously I tried with my real public IP and not the placeholder.

It was just a hunch, but if the result is the same, I would use a packet capture to test where the traffic is going. Rules like "reply-to" and "route-to" (policy-based routing) might interfere with the traffic. To debug the generated ruleset, you can inspect the contents of `/tmp/rules.debug`.
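For reference, a minimal sketch of how one might inspect both the generated file and what pf actually loaded (standard FreeBSD/OPNsense shell tools; the exact rule names will differ per setup):

```
# Full generated ruleset as written by OPNsense
less /tmp/rules.debug

# NAT rules (port forwards, outbound NAT, reflection) currently loaded into pf
pfctl -s nat

# Filter rules currently loaded into pf
pfctl -s rules

# Active state entries involving the forwarded host
pfctl -s state | grep 10.0.0.5
```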
I will take a look at some packet capturing, and at the rules debug file, but there are no policies in place. My entire setup is exactly as described above, so there is no other configuration to interfere with the routing here.

Most interesting: when you do a packet capture on your .5 host on port 443 and connect from internal via your WAN address, which source do you see?

Okay, so I've been looking through all logs and whatnot to try to get more info on this. @AdSchellevis, see below.

Firewall logging

Firstly, I enabled logging for the HTTPS rule, and then I checked Firewall -> Log Files -> Live View and applied the relevant filters. Interestingly, I saw entries in the log for my computer's IP. I don't know if that looks correct or not.

Rules debug list

Next, looking at `/tmp/rules.debug`:

Contents of `/tmp/rules.debug`

There's nothing sensitive in there, so that's a complete and unmodified dump.

Packet capturing

Moving on, I ran the following command on the host machine being forwarded to: `sudo tcpdump --interface br0 host 10.0.0.5 and tcp port 443`. Note, my actual command had some greps in there too, to filter out external traffic (as the system is in active use). Once I had eliminated those packets, this is what I saw:

Packets appearing 20 seconds after the browser request was made
Packets appearing 26 seconds after the browser request was made
Packets appearing a little while after
Those last few lines appeared while I was writing this up, so I ran it again:

Packets appearing 8 seconds after the browser request was made
Packets appearing 28 seconds after the browser request was made
Packets appearing 47 seconds after the browser request was made
Running it again several times, it seems that the most common pattern is for packets to appear at around 20, 28, and 48 seconds, but it does vary. So to answer the specific question: it seems that the OPNsense system is the source seen on the target machine. Going back to the firewall live logs, it seems that both the OPNsense system and my own computer show up there. Does any of that help? Is there anything else I can provide?

Just a quick addendum - after I realised the image available for download is not the latest version, I ran an update and repeated my setup and checks (i.e. I rolled back my ZFS image to before the port forwarding setup, updated the system, and then added the rules afresh). The behaviour did not change - i.e. the problem is still extant.

I'm having the same/similar issue; I installed a fresh OPNsense with the latest ISO yesterday. Setting up port forwarding with an associated rule has no effect. The rule exists on the rules tab but it has no effect?! Setting the filter rule association to "Pass" does work. Also, replicating the rule that OPNsense automatically creates using floating rules works too. I never had to do this before, and on my main OPNsense it's still working with the automatically created rules.

@kub3let that's very interesting. I edited my rules and changed the "Filter rule association" to "Pass" as you suggested, but although that removed the floating rules, it did not result in any change of behaviour for me. Is there anything else that is different about your setup? You mention that replicating the automatic OPNsense-generated floating rule also works, so I tried adding one. Does this look right? You can see one of them (HTTPS) in the list below, with the automatically-generated rules. Unfortunately, this still didn't make any difference. I tried fiddling around a bit, but with no joy. I'm not sure whether the floating rules are at fault, or the NAT ones.

Without a packet capture it's just a guessing game :(

@mimugmail I gave you two separate sets of packet captures. Did I not provide enough information? Please let me know how to get what you need, if so.

@danwilliams I'm not using NAT reflection but I think the issue could still be the same; I prefer split DNS over reflection. All I did was a generic OPNsense install, configure DHCP etc., then set up port forwarding. This results in the following, which previously worked - but the rule has no effect... If I create a floating rule like this, it starts working. --> I cannot select any interface on the floating rule or it stops working; even selecting all available interfaces does not work! <-- Really weird bug; it seems like the interface selection is broken! @danwilliams please remove the interface selection on your floating rule. But I think using "Pass" on the port forwarding should still have bypassed it for you.
In order to provide feedback more easily, I would like to suggest a couple of things, also to limit the noise in the thread:
The questions you are trying to answer are basically:
Oh, and by the way: to use reflection, don't select the LAN interface if you're expecting an automatic (reflection) rule there - I think I saw LAN+WAN in one of your screenshots. (You can replace the actual IP in everything you post, as long as it's consistent.)
@kub3let I'm not sure you're talking about the same thing as me, after all?
As noted in my original bug report, port forwarding is working fine, including with split DNS - i.e. external clients can connect without issue. My problem is that internal NAT reflection is not working. Therefore I'm afraid your problem must be different to mine.
@AdSchellevis Thanks for your reply, I've given that a go...
Happy to do that. Note, I had already tried the actual WAN IP in place of "WAN address", which made no difference, but I understand if that makes testing easier for you. So I have changed it again, in the same way as I tried on Tuesday. I've actually tried a few things... Note: I tried all these tests purely with HTTP over port 80, to not obscure anything with encryption.

1. Interface: WAN+LAN, Destination: WAN address
First test: telnet knock
Second test: telnet message
Third test: wget
2. Interface: WAN only, Destination: WAN address
First test: telnet knock
Second test: telnet message
Third test: wget
3. Interface: WAN only, Destination: Single host or network 1.2.3.4/32
First test: telnet knock
Second test: telnet message
Third test: wget
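(For clarity, the three test types above were presumably along these lines - a sketch only, since the exact invocations and their output are not reproduced here, and `1.2.3.4` stands in for the real WAN IP:)

```
# "telnet knock": open a raw TCP connection to the forwarded port
telnet 1.2.3.4 80

# "telnet message": once connected, type a minimal HTTP request by hand, e.g.
#   GET / HTTP/1.1
#   Host: 1.2.3.4

# "wget": a scripted request to the same address
wget -O - http://1.2.3.4/
```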
Explanation of tests
Thoughts on results
So to consider the questions you raised:
Yes - as shown previously, packets are making their way from the requesting device to the responding device.
I don't really know how to answer this - can you tell from what I have provided? It appears that the source is the firewall's, but I don't know if that's what we are expecting here. If it is, then yes, that is happening. The problem therefore remains that the packets are not making their way back out, which is in keeping with the observation regarding the rules created (i.e. that the rules created are no different when specifying reflection than when not enabling it - which always struck me as incorrect, but I could be wrong there).

As explained in my original bug report, I originally tried just WAN as per the documentation and guides, and when this didn't work I added LAN (so LAN+WAN) based on comments that suggested it would not work otherwise. Both setups and related attempts are described above, with the corresponding details supplied, including configuration. However, using WAN only has not changed the behaviour, and neither has specifying the precise WAN IP instead of just "WAN address". I remain of the opinion that there are some rules needed for reflection which are somehow not getting created... what do you expect to be put in place for this? I.e. can I check something specific that should be happening for the responses to get delivered back to my internal device?

My community support time is limited (don't have time to read everything), but if ...

@AdSchellevis the ... As the interface is a bridge, there is a possibility this is interfering with the packet capture, so I will set up a port forward to a machine that is using a straightforward non-bridge interface and repeat the packet capture.

If you can prevent other machines from sending traffic to port 80 on 10.0.0.5, it might help to capture everything on port 80.

There are no other machines sending traffic to port 80 on 10.0.0.5, as HTTPS is the commonly-used protocol and HTTP simply issues a redirect; hence the test is the only thing using it. However, I cannot open up the filter more on that machine, as there are many IP addresses. The command should already capture bi-directional traffic, so I'm setting up a test on a different machine for comparison.
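For reference, capturing everything on port 80 on the bridge (as suggested above) would look roughly like this - a sketch, with `br0` as the interface in question:

```
# -n: no name resolution, so the real source/destination IPs are shown for every peer
sudo tcpdump -ni br0 tcp port 80
```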
@AdSchellevis Okay, so this is what I have done:
That was rather unexpected.
So, at this point I am thinking, what is the difference? There are two differences that I can see:
Some more information: the physical server that OPNsense is on runs both KVM VMs and Docker containers. It has several network interfaces. OPNsense runs in a KVM VM, and is set up to "own" one passed-through PCI NIC for the WAN, and to sit on the bridge for the internal network traffic. (As already described, but good to recap.) One of the Docker containers runs HAProxy. This presents an open port on the same bridged interface that OPNsense has access to. HAProxy then channels everything out to the wider network - including a KVM VM sat on a host-only bridge.

So the full traffic route is: Client -> WAN IP -> ...

Is it possible that this is the cause of the problem? If so, do you have any idea why? I don't immediately see a reason why traffic would not be able to flow easily between the two. There are no other similar issues - everything else has been talking to each other as expected. Perhaps there's something about the OPNsense rules that would cause it? Or is it more likely to be a network matter outside of OPNsense?

To clarify the devices:

PHYS NIC 1: bridge
PHYS NIC 2: PCI pass-through
My original thinking was that sharing the bridge was a good thing, as virtio speeds can be taken advantage of. But if it causes a conflict, I can change that. Are there any known constraints that would prevent OPNsense from sharing a bridge with other services on the host, on different IPs? Based on this, the next thing I will try is to change over to a different physical NIC - I might assign a third NIC as passthrough for sole use by OPNsense, for the internal LAN network. Perhaps it's better for OPNsense to totally own its cards. (Is this a known issue or recommendation?) Thanks for the help on this!
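(For anyone trying to reproduce this topology, a rough sketch of how the bridge membership and VM attachment can be checked on the host - the VM name "opnsense" is a placeholder:)

```
# Which host interfaces and vnet/tap devices are enslaved to the bridge?
ip link show master br0
# or equivalently
bridge link show

# How has libvirt attached the VM's NICs (bridge vs network vs passthrough)?
virsh domiflist opnsense
```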
I had the same problem with NAT reflection but I gave up and used split DNS instead. Could a PPPoE WAN connection be the common denominator in this problem? I also couldn't get it working with a PPPoE connection.

@levelad what's interesting is that if I forward to a port on another machine on the LAN, it works. But if I try to forward to a port on the same host machine, it doesn't. This is a recent discovery in trying to diagnose this. Is/was your setup also on the same host machine?

@danwilliams I didn't test another machine or another port. But the same setup was working before with an old firewall (Sophos UTM, aka Astaro Security Gateway).

@danwilliams to be honest, I don't know what your issue is; I just don't think it is related to the firewall directly. My recent local test using hardware worked without issues as well.
Maybe this helps. I do all hairpin NAT configurations manually on OPNsense (manual outbound NAT rule generation activated). There are two common scenarios:
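(The two scenarios themselves are not reproduced above; as a rough pf-style sketch of what such manual hairpin rules typically look like for the same-broadcast-domain case - the interface name, networks and ports below are placeholders, not the exact rules from this thread:)

```
# 1) DNAT: redirect LAN clients aiming at the public IP to the internal server
rdr on vtnet0 proto tcp from 10.0.0.0/24 to 1.2.3.4 port 443 -> 10.0.0.5 port 443

# 2) SNAT: rewrite the source to the firewall's LAN address so the server's
#    replies go back through the firewall instead of straight to the client
nat on vtnet0 proto tcp from 10.0.0.0/24 to 10.0.0.5 port 443 -> (vtnet0)
```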
@AdSchellevis |
@AdSchellevis okay, so this is quite interesting. I spent some time over the weekend trying various different configurations. Long story short:
Important things to consider in context:
So in summary, it seems that there is some issue with NAT reflection under the specific circumstances listed, which perhaps should lead to an advisory in the documentation if it's by design; otherwise perhaps it helps narrow the possibilities to look at if attempting a fix. I don't know exactly which bit is the problem - the shared interface, the interface being a bridge, or the use of KVM (although I doubt it's KVM-related).

@Monviech I read your message with great interest, as it certainly sounds like it could be relevant. Unfortunately, although I spent some time trying what you detailed, I was not able to get anywhere. I think I just don't know enough about NAT/DNAT/SNAT, and therefore was likely doing the wrong thing. As assigning a dedicated passthrough interface worked, and solved my immediate problem, I had limited time for further testing (as I've already spent days on this!). So you could very well be correct, but maybe it's one that @AdSchellevis and co. can pick up, understand, and put into OPNsense as a fix (e.g. if additional rules need to be set up automatically) or as a documentation update (say, if those rules are only needed under certain circumstances and should not be automatic). I might have another try at some point if I get time.

@levelad I don't think the issue is specifically related to PPPoE, as (from what I have observed) this issue occurs on the LAN interface and does not appear to be affected by the WAN interface. The port forwarding works from external to internal and back, and appears to work for internal-to-internal in the WAN interface context, but then the packets don't manage to come back the other way. So internal requests work, but not responses. I.e. WAN -> LAN works, WAN <- LAN works; LAN -> (WAN) LAN works, but LAN <- (WAN) LAN doesn't work. All this is only a problem when the LAN interface is shared with KVM, i.e. a bridge on the host.

TL;DR

NAT reflection does not work out of the box when sharing a network interface that is a bridge on the host, when using KVM virtio. It is unclear whether the issue is the sharing of the interface, the fact that it is a bridge, or the use of KVM. The advice is therefore to assign a dedicated passthrough PCI device to avoid this issue entirely.
...interestingly, this other ticket might be describing the same problem, as it sounds very similar and mentions SNAT and DNAT. I had found and read it before posting my original bug report (and referred to it discreetly) but with @Monviech's additional info I am now wondering if it is actually more closely related than I had thought. But I don't understand the outcome or conclusion of that ticket, or why exactly it was closed (which appears to be contentious).

What I don't understand is why an interface type would change how NAT works. NAT - SNAT (Source Network Address Translation), DNAT (Destination Network Address Translation), PAT (Port Address Translation) - rewrites source and destination IP addresses and/or ports in OSI Layer 3, based on NAT rules. You can change the packets however you want with these rules. A virtual interface connected to a virtual hypervisor switch/bridge operates in OSI Layers 1 and 2; it shouldn't change anything about how IP works in OSI Layer 3.

@fichtner Agreed, I think OPNsense can do better. If it's a situation which is outside of the remit of OPNsense to resolve for some reason (e.g. as speculated in one of my previous messages, if it's a situation that can be identified but for which you do not want to add rules) then there should at least be some identification made, and notes added to the documentation, so that people know what to do in such circumstances. This would fall under my option (b) (which covers your proposed option (c) as well). I don't think it's a case of commercial vs community support - or at least, it shouldn't be. If the focus of the community edition is providing the best tool possible, then clearly there is a chance here to improve the docs at the very least. Commercial support is not generally about situations like this, which could (and it seems do) occur for a number of people under common circumstances - in my experience it tends to be more about very specific enterprise situations, or where help is needed for something that community members would do themselves. KVM, bridged interfaces, and for that matter PPPoE are all very common and likely to be used by community members, hence I think this is a community-context problem to address. Just my opinion, of course. In summary, you could spend essentially zero time on this by adding a warning to the docs that NAT reflection may not work if sharing a bridged interface under KVM, and to use a dedicated passthrough card instead. I would be happy with that, and would equally be happy to spend some time helping to narrow the focus if desired.
To be honest, I don't think that would help people very much as it's highly likely there are also people using KVM that do not have this issue. We can try to describe all possible surroundings, but without being precise, it often doesn't help much.
From a product perspective it's not a community vs commercial question (both of our products are equal in that regard); from a support perspective, in my humble opinion, it is. We just cannot spend an endless amount of time on issues that highly likely lie outside of the scope of OPNsense. Realistically we have already spent way more time on this ticket than is reasonable, but given the time you put into it, it felt good to try to help you analyze your issue to see if we could isolate something that warranted either a fix in the code or in the documentation. When we do run into issues like these while doing commercial support, we often try to update the documentation or record it in a ticket if it helps others; unfortunately for you, we haven't seen this one. I remember an issue (with KVM) a long time ago where it discarded traffic for some reason (opnsense/src#85); sometimes these things tend to be version specific as well in our experience (either on the hypervisor or the drivers they need in the guest).

I've checked KVM libvirt. It has 3 different operating modes. Please tell me which mode your libvirt interface is configured as. I want to try and analyze the issue in my free time, to see if it is indeed a KVM bridge problem or something else.
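(A sketch of how the mode can be checked from the hypervisor side, assuming libvirt - "opnsense" and "default" are placeholder names for the VM and the libvirt network:)

```
# How are the VM's NICs attached (type: bridge, network, direct, hostdev...)?
virsh domiflist opnsense
virsh dumpxml opnsense | grep -A 5 "<interface"

# If a libvirt-defined network is used, show its forward mode (nat/route/bridge)
virsh net-dumpxml default | grep forward
```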
I'm not a network engineer, but isn't a virtual machine always behind a virtual bridge, as shown in this figure? In my case it was a virtual Windows 10 on a Windows Server 2022 Standard host machine. Hyper-V also has 3 virtual switch types:

Probably similar to the KVM ones. Edit: only with the External type in Hyper-V is the switch bound to the NIC. That's also the type in use.

Yes, you are right, but QEMU seems to have more options. In Ubuntu 22.04, in which I installed KVM just now, I have 3 different options to define a network. EDIT: ...

I think I'll try Open vSwitch for the OPNsense to test it.
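(As an aside, a minimal sketch of what setting up such an Open vSwitch bridge usually involves on the host - interface names are placeholders and this is not the exact configuration used in the test below:)

```
# Create the OVS bridge and attach the physical uplink to it
sudo ovs-vsctl add-br br0
sudo ovs-vsctl add-port br0 eth0

# Verify the result; libvirt guests can then be attached via an
# <interface type='bridge'> with <virtualport type='openvswitch'/>
sudo ovs-vsctl show
```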
I made a test KVM hypervisor with Open vSwitch.

Network Configuration:

OPNsense Configuration

As the test client I use the hypervisor itself, with the IP address 192.168.1.2 on br0. The test is pinging 1.2.3.4 and checking with tcpdump whether the rules are matched accordingly for NAT reflection.

host ping
host tcpdump
ANALYSIS: The NAT reflection works; I can't find the bug.

@Monviech thanks for the effort, but that is again a static IPv4 WAN. What @danwilliams and I are proposing is that it doesn't work on a PPPoE WAN with the target being a virtual machine (virtual bridge/virtual switch).

@Monviech I'm confused why your floating firewall rule has the "WAN address" as target and not the redirect target IP (192.168.1.10) from the port forward.

@vpx23 EDIT: ... EDIT EDIT: Thanks for pointing out the mistake. I will improve the PPPoE post. EDIT EDIT EDIT: ...

@Monviech could you please change the source of the outbound NAT to "LAN net" and ping the public IP from a third device, e.g. a client in the same subnet? Then we'd have a more real-world three-party setup instead of only two parties, thanks.

I have done enough. Anybody can replicate the setup I did above and mess around with the rules themselves. From all the different setups I tested, I'm pretty sure there is no bug. If I continue now, there will always be a "but what about this scenario" and things will never end.
The purpose of what I am currently trying to do is to resolve the cause precisely, but I cannot do that alone.
I appreciate that - and your time on this.
That's quite interesting - although in this case there are no special drivers in the guest, so it's whatever is in the kernel. But yes, the scope can be tricky.
Indeed it does. I apologise if my original report was not clear in this regard - the bridge created on the host (`br0`) ...
No - there are always a few options, naming varies slightly between hypervisors but all present the core three of bridge, NAT, and host-only, plus sometimes additional variations. In this situation I had specified bridge so that the virtual NIC could reach the Internet (bridge and NAT both do this, but not host-only) but also accept incoming traffic (bridge and host-only both do this, but not NAT, and host-only accepts it through an internal subnet IP whereas bridge gets one assigned on the LAN subnet). I am not 100% sure what "route" means in KVM land, as I've never used nor had cause to use it, so my "host-only" reference is a VMware term, but they are all comparable. Your Hyper-V options correspond to the VMware terminology, I believe (with "External" being comparable to "bridge" in KVM and VMware).
Not really - QEMU has the same basic options as all the others (my knowledge here spans KVM, VMware, VirtualBox, and XEN) but the naming can differ slightly.
That's very interesting - I have looked it over quickly, and the settings appear to align with what I had, but I will look in more detail when I get the opportunity. My question here is: coming out of that test, what else would be useful for me to try on this end? One difference I can see, though... It is interesting that you specified ...
I don't necessarily disagree with you, but my suspicion is that it's the internal side of the routing - but I can't rule out the PPPoE being the cause, as I don't know how the reflection is handling that. So I have no idea, but I'm guessing it's not PPPoE.
That's cool that you've set up a PPPoE test case. One question about it - I see you have added a SNAT rule, which appears under "Outbound". I am lost at that point - I did not have that, either manually or automatically. Are you saying this is a necessary step? Could that be the root of the problem? (I've ignored the edit conversation as that happened before I read the updates, so I'm looking at the latest version here.)
This is not a bad idea, but unnecessary for my particular case, as I was able to observe it with the parties @Monviech put in place.

Thank you - I appreciate your time on this. Even though this is no longer a problem for me, as I've solved it by moving to a dedicated passthrough PCI card, I believe we both share the goal of improving the situation for the community - plus I hate to have an unresolved issue! I think you may have identified the cause, with your addition of a SNAT rule - as mentioned above. Could you confirm if this could in fact be the culprit?

NAT Reflection/NAT Loopback/Hairpin NAT is basically SNAT. Here is a very good description: https://help.mikrotik.com/docs/display/ROS/NAT#NAT-HairpinNAT

@levelad how do you interpret the situation of those rules being added manually - do you also think that could be the cause? I.e. should OPNsense be adding them? I can't understand why @Monviech added them - if they are necessary, either OPNsense should be adding them, or there should be docs telling us to. What do you think?

@danwilliams I think there is a bug in the automatic creation of the outbound SNAT rules for NAT reflection. I tried everything (global and local settings) but they were never created.
I've checked the option:
And I was sure to have this enabled:
I created the following DNAT rule. I checked the resulting NAT and RDR rules with ...

Analysis: there aren't any SNAT rules created by "Automatic outbound NAT for Reflection". I explained in a prior post (#6650 (comment)) that there are two scenarios: NAT reflection with hosts in the same broadcast domain, and NAT reflection with hosts in different broadcast domains. In the same broadcast domain, because the clients can resolve the ARP and communicate directly with each other, the reflected traffic becomes asymmetric. That will prevent protocols like HTTPS or SSH from working, because TCP depends on the replies coming back along the same path. The firewall has to answer reflected NAT requests from the same broadcast domain with its own interface IP address. If this isn't the way it's intended to work (like with policies not getting generated automatically), then it is a bug.

Result: OPNsense doesn't automatically generate the SNAT rules needed for NAT reflection in the same broadcast domain. Protocols that need symmetric two-way traffic (like TCP) won't work properly.

Workaround: the SNAT rules have to be created manually.
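A sketch of the manual workaround in pf terms (in the GUI this corresponds to a manual rule under Firewall -> NAT -> Outbound; the interface name and addresses below mirror the test setup described earlier and are placeholders):

```
# Hairpin SNAT for clients in the same broadcast domain as the server:
# traffic already redirected to the internal server gets its source rewritten
# to the firewall's LAN address, so the replies return via the firewall.
nat on vtnet0 proto tcp from 192.168.1.0/24 to 192.168.1.10 -> (vtnet0)
```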
@Monviech I would expect nat rules as well if both reflection options are enabled; the nat rules are almost the same in terms of logic: core/src/opnsense/mvc/app/library/OPNsense/Firewall/DNatRule.php, lines 131 to 150, at edcc29a.

You could try to grep the description ("Nat Refl..") from `/tmp/rules.debug`.
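A sketch of that check:

```
# Look for any reflection rules OPNsense generated in the ruleset dump
grep -n "Nat Refl" /tmp/rules.debug

# ...and compare with what pf actually has loaded
pfctl -s nat | grep -i refl
```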
@AdSchellevis The rules look fine. I commented the SNAT rule and it's exactly what's needed.

After all the tests I did, even testing the special use cases explained above, there still doesn't seem to be a bug. But for me, I'm kinda done; I can't continue because there's nothing left to prove. I think it's proven that NAT reflection works, except in some very specific, weird edge cases that are out of scope for OPNsense.

I am having the same issues since upgrading to this version of OPNsense that @AdSchellevis is reporting, and I am unable to get it to work with the workaround described - probably I am doing something wrong. Could @Monviech put together some documentation, or even better a practical example with some pictures, so I can understand what is happening and how to fix it? I have been tracking a packet: it enters my firewall, gets forwarded to the VM, the VM generates a response and returns it, the packet enters the firewall via the interface facing that VM, but it never reaches the outbound-facing interface (where it first entered the firewall). Thanks so much.

I have written a tutorial in the OPNsense forum as a result of this GitHub thread:
Hi @Monviech. Thanks for the reply. I did exactly as stated on both my "prod" firewall and one I built just for testing this, and the issue is still occurring. I can see the packets entering via the front interface with the port forward, arriving on the VM, returning to OPNsense, and then getting discarded and not leaving via the interface they came from (ovpnc10). I have both DNAT and SNAT set on the front interface depending on the direction of the flow, but it appears that OPNsense is dropping the TCP traffic. These are my rules for this particular configuration:

The VM that is required to communicate has its default gateway replaced, in the rules, with the gateway from ovpnc10, so all of its traffic is forced to leave via that particular interface. Also, I have "Reflection for port forwards" enabled. Do you think this warrants a new bug report? This started happening when I went from 23.1.8 to 21.1.11.

Does the ovpnc interface mean it's OpenVPN? Maybe it's related to this issue?

Hi @Monviech. HTTP request: NATed IP --> internal IP (sinkhole) --> port forward to transparent Squid --> WireGuard --> destination. I am getting many RSTs on the port-forward part, and the request never reaches the entrance of the WireGuard tunnel. The corruption appears to be happening in the NAT part of this configuration.

@Monviech just solved my issue. It was enough to set the OpenVPN configuration like this, and that got my reflection working again.
You might want to review this: #7022 (comment)
This issue has been automatically timed out (after 180 days of inactivity). For more information about the policies for this repository, ... If someone wants to step up and work on this issue, ...
Note: It is perhaps not "new", but there are no open tickets relating to it. Some very similar, potentially the same, have been closed with unclear reasons. It may be that this ticket provides additional material which will help achieve a resolution.
Describe the bug
Using a clean, brand-new installation of the latest OPNsense, NAT reflection does not work.
Background
It has been a few years since I last set up pfSense, and in the intervening time it appears OPNsense has grown in popularity (I had not previously heard of it). The debacle with the pfSense codebase (dodgy Wireguard patches from Netgate to the BSD kernel, that kinda thing) made interesting reading, and I therefore deleted my freshly-downloaded copy of pfSense and instead turned to OPNsense. Setup was painless, and within a short space of time I was up and running, and had replaced my Ubiquiti EdgeRouter with equivalent settings for Internet connectivity and DHCP leases. Just some simple port forwarding to set up to have a complete replacement, and...
The problem
Having configured a pretty vanilla setup, with a PPPoE WAN and a single-subnet LAN, I was surprised when the port forwards I set up did not work. Reading a little around the subject I realised that they were actually working, and external access was fine, but I had to enable NAT reflection in the main/global settings in order to get internal resolution. I dutifully did this, and descended into a rabbit-hole for several hours, trying lots of things with no success. Along the way I found several reports by other people who have apparently encountered the same, or a very similar, issue.
In a nutshell (TLDR)
If you set up NAT port forwarding, even if you have NAT reflection enabled in the main settings and on the forwarding rule, there is no internal resolution of traffic that is directed towards the WAN interface.
To reproduce
Bear with me, as this is a detailed account of how to start from scratch and verify the issue, along with notes and data.
Initial setup
Define VM
Create VM configuration
This setup is running on a VM using KVM on Linux, specifically, Ubuntu Server 23.04.
Create a new virtual machine using virt-manager. Mine is:

- `br0` (this is a host bridge that is present on the LAN)
- `enp60s0` Intel NIC (this is the interface for the WAN)

Ensure there are no references to `enp60s0` in the host's netplan config, as this card is a passthrough device to be owned by the OPNsense VM (a quick check for this is sketched below).

Autostart

This is so it will always come to life with the host machine.
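(A quick way to confirm the passthrough NIC is not referenced by the host's netplan config - a sketch, assuming the default Ubuntu netplan location:)

```
grep -rn "enp60s0" /etc/netplan/
# no output means the card is free to be handed to the OPNsense VM
```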
Install OPNsense

Boot VM

Install by following the steps laid out, but notably: select the NIC attached to the `br0` host bridge for the LAN (i.e. the br0 bridge).

Configure via CLI

Using the CLI, log in and, using the menu, assign `10.0.0.1`
to the LAN interface.

Configure via web GUI
Log into the web UI and run through the setup wizard.
- Hostname: `router`
- Domain: `lan`
- DNS server: `10.0.0.2` (the network currently runs a separate DNSmasq server that will later be moved into OPNsense)
yyyyy
It should connect and assign the correct public IP address from the ISP, which will be referred to from now on as
1.2.3.4
.Configure OPNsense
Static IPs for DHCP clients
Port forwarding
10.0.0.5
I added three rules, for:
Examination of configuration
The steps above represent my latest, most complete attempt, which is I believe the most accurate and therefore the best to represent as a test case for reproduction.
It is worth noting that, along the way, I tried multiple variations:
Note: Although it seems reasonable that the LAN interface might need specifying as well as the WAN interface, I stumbled upon this by accident when reading around the subject. Some comments somewhere. It does make sense, and as the target is the WAN address, should not conflict with the LAN address services. Unfortunately this still didn't work for me, but if it is indeed a requirement then maybe it should be documented more clearly - for instance in the UI, which itself suggests the WAN interface is what is usually needed.
First config changes
These are the changes resulting from my first attempt, i.e.:
I am including these lines for completeness, because I believe that path should work - if not done properly first time, a correction through editing should surely be sufficient.
Configuration diff from 7/4/23 08:08:45 to 7/4/23 10:25:06
Latest config changes
These are the changes resulting from my latest attempt, which is the most complete and correct, i.e. as described in my steps to reproduce:
I believe that these steps should have resulted in a correct outcome.
Configuration diff from 7/4/23 08:08:45 to 7/4/23 13:25:07
Difference between config changes
What's strange is that there's very little changed between the two.
When specifying WAN alone, the rules end up in Firewall -> Rules -> WAN. When specifying WAN plus LAN, the same rules get shown in Firewall -> Rules -> Floating. I don't know if that's expected, or what "floating" means exactly - I was perhaps expecting to see some rules under both WAN and LAN, but other than appearing in a slightly different place, the rules seem the same.
The later config has `<natreflection>purenat</natreflection>` added to each of the rules, and the rules also say they are "floating" and "quick". They also obviously specify `lan,wan` and not just `wan`. But these differences do not appear to have actually changed anything, and I am not sure if they are needed. What is clear is that either a) the rules in place are not working correctly, or b) there are some missing.

Expected behavior
My expectation was as follows:

- With the WAN IP being `1.2.3.4`, and a DNS entry defined against a domain pointing at it, external traffic should see it and route to it. This does indeed happen, correctly, and external clients can use the Gitea system without issue.
- Traffic from internal clients to `1.2.3.4` should be translated and redirected to `10.0.0.5`. This is not happening.

Additionally, we could say that I expect more rules - but I am not 100% sure of that, as it may be that the ones in place are not working correctly.
Describe alternatives you considered
I have considered using Unbound DNS in order to get ahead of the requests and make the clients send those requests to the internal `10.0.0.5` IP instead of the external IP. However, this is not particularly practicable, as a) there are various TLS certificates and similar in place that complicate that approach, and b) there are some setups (moving away from the simple Gitea setup here) that need to correctly use proper DNS as part of their automated testing. So this is not really a solution for me.

Another alternative might be to go back to my EdgeRouter, which worked fine... or, maybe, swap over to pfSense, which apparently does not have this specific issue. But I am reluctant to do either of those things.
Screenshots
There are some screenshots that might be useful; not sure how much they add to the steps and config above, but here goes:
List of rules: Firewall -> NAT -> Port Forward
Rule configuration: Firewall -> NAT -> Port Forward -> (Rule)
List of rules: Firewall -> Rules -> Floating
Main/global settings: Firewall -> Settings -> Advanced
Relevant log files
I'm not sure what log files would be of use here - I have trawled through what is available but not seen anything notable to submit. Please let me know if you would like something specific.
Additional context
It's worth noting that this is a brand-new setup on a brand-new system. It's not an upgrade, it's not a port or import of settings from elsewhere, and it doesn't have anything unusual going on - i.e. it should not be an edge case. It's just bog-standard port forwarding. Therefore it should hopefully be easier to validate than situations where there are additional factors taking effect.
Environment
Host system
VM configured for OPNsense