
Tunnel Clients replicating #1109

Closed
KD4WLE opened this issue Mar 4, 2024 · 12 comments
Labels
bug: Something isn't working
waiting for feedback: Waiting for feedback from issuer

Comments


KD4WLE commented Mar 4, 2024

Describe the bug

After switching some clients to WireGuard, I am unable to delete the old tunnel on the CLIENT. The old entry reappears after removal or renaming. This seems to occur on multiple nodes.

The tunnel client(s) are on a Proxmox VM.

  • [Yes] Does this issue still occur on the latest version? (Required)

Expected behavior

Old tunnel entry should be removed

Screenshots


Screenshot 2024-03-04 at 6 04 49 PM
KD4WLE added the bug (Something isn't working) label on Mar 4, 2024
aanon4 commented Mar 4, 2024

Can you clarify a few things:

  1. You say the old entries reappear when deleted, but that seems different from what I see here, which is that they get duplicated. Is it that when you delete an entry, you end up with two instead? If so, when do you see the two: immediately after the deletion, after you press save, after you refresh the page, or something else?
  2. Have you seen this on other nodes?
  3. Does this only happen for legacy (non-WireGuard) tunnels, or for all kinds?
  4. Is it a specific entry which causes problems? Like maybe just the top one?
  5. Can you provide detailed instructions to reproduce this? Step by step, so I can do exactly what you're doing.
  6. Please provide support data for the node immediately after a failed attempt to delete a tunnel.

Thanks

KD4WLE commented Mar 4, 2024 via email

aanon4 commented Mar 5, 2024

So is this only happening when the server name is the same? I'm wondering ... you create a new tunnel with the same server before deleting the old one. Not saying that shouldn't work, but I'm wondering if that's the key?

aanon4 commented Mar 5, 2024

PS: I didn't see any support files.

KD4WLE commented Mar 5, 2024 via email

KD4WLE commented Mar 5, 2024 via email

KD4WLE commented Mar 5, 2024

aanon4 commented Mar 6, 2024

I've made a change which will provide me with tunnel information in the support data dump (it's not there today). The change will be available tomorrow (March 6th). Could you update, re-run your steps to generate the error, and resend the support data?
Also, because there's a process continually and unsuccessfully trying to reconnect to the server, the log file fills up very quickly and pushes out the more relevant information, so please gather the support data as quickly as you can after the delete fails.

KD4WLE commented Mar 6, 2024

20240306-2cda75e applied. Support Data attached.

supportdata-N4TDX-Mims-Swarm-202403061655.tar.gz

aanon4 commented Mar 6, 2024

Thanks.
So at the top of the file /etc/config.mesh/vtun there is a 'config server' entry which doesn't have a name. I'm not sure how it got there, because the code doesn't create these without names, and the property order is not how the code creates them (which doesn't matter functionally, except it means it wasn't created by the current code).
Anyway, can you try deleting this entry by hand, running /usr/local/bin/node-setup, and rebooting, and see if your node behaves correctly again?
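For anyone reproducing the fix, here is a minimal sketch of the manual cleanup described above. The only details taken from this thread are the file path /etc/config.mesh/vtun, the nameless 'config server' section, the node-setup run, and the reboot; everything else in the comments is an illustrative assumption, not copied from a real node.

```sh
# Sketch of the suggested manual cleanup (assumptions noted above).

# 1. Inspect the tunnel config; the stray entry is described as a 'config server'
#    section at the top of the file that carries no name.
cat /etc/config.mesh/vtun

# 2. Delete the nameless 'config server' block (and the option lines under it) by hand.
vi /etc/config.mesh/vtun

# 3. Regenerate the node configuration and reboot, as suggested above.
/usr/local/bin/node-setup
reboot
```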

KD4WLE commented Mar 6, 2024 via email

aanon4 commented Mar 6, 2024

I have a half dozen Proxmox nodes, including a tunnel server and supernodes, and I've not seen this problem on any of them (I just quickly checked all their config files and all the entries have names). Also, the tunnel code doesn't care in any way about the hardware, so it's not clear why this would be confined to Proxmox nodes. I'm going to assume your Proxmox nodes are being made from scratch and not cloned?

So I'm closing this. Please reopen if you find a way to reproduce it from a fresh install. I'll poke around in the code some more in case there's some other bit creating tunnels (I can't imagine there is).
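As an aside, for anyone who wants to run the same spot check described above (scanning the tunnel config for entries without names), something along these lines could work. It assumes the name appears as a quoted UCI section name on the 'config server' line itself; that layout is an assumption, not confirmed in this thread.

```sh
# Print any 'config server' lines that carry no quoted name
# (assumes UCI named-section syntax, e.g.  config server 'server_1').
grep -n "^config server" /etc/config.mesh/vtun | grep -v "'"
```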

aanon4 closed this as completed on Mar 6, 2024