
TC/Traffic Control: Error: Invalid handle. #154

Closed
valcryst opened this issue Oct 8, 2020 · 1 comment · Fixed by #181

Comments


valcryst commented Oct 8, 2020

OS: CentOS 8
TC: iproute-tc 5.3.0-1.el8
Tunneldigger: Master (installed 1 week ago)

Problem:
If a node with configured traffic limits connects to the tunnel broker, I see this in the logs:

Okt 08 09:14:45 broker2.ff-en.de python[1699]: [INFO/tunneldigger.limits] Setting downstream bandwidth limit to 5000 kbps on tunnel 61000.
Okt 08 09:14:45 broker2.ff-en.de python[1699]: Error: Invalid handle.

The tc output also looks different.

CentOS 8 / New Tunneldigger

qdisc fq_codel 0: dev l2tp58025-58025 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp58026-58026 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp58027-58027 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp81062-81062 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp58030-58030 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp58031-58031 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp58033-58033 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp22068-22068 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp58042-58042 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp58043-58043 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp75004-75004 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp75011-75011 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp75017-75017 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp75024-75024 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp75033-75033 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp75035-75035 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp75041-75041 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp75050-75050 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp75052-75052 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp75056-75056 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp75063-75063 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp75064-75064 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp75065-75065 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp75068-75068 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp75081-75081 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev l2tp75083-75083 root refcnt 2 limit 10240p flows 1024 quantum 1434 target 5.0ms interval 100.0ms memory_limit 32Mb ecn

Ubuntu 16.04 / Old Tunneldigger (2017/18)

qdisc noqueue 0: dev lo root refcnt 2
qdisc mq 0: dev ens160 root
qdisc pfifo_fast 0: dev ens160 parent :1 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev ens160 parent :2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc mq 0: dev ens192 root
qdisc pfifo_fast 0: dev ens192 parent :1 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev ens192 parent :2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc noqueue 0: dev br-ha root refcnt 2
qdisc pfifo_fast 0: dev tun-map root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc noqueue 0: dev bat-ha root refcnt 2
qdisc pfifo_fast 0: dev l2tp10781 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev l2tp10611 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev l2tp10861 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev l2tp11071 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev l2tp10421 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev l2tp10711 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev l2tp10921 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev l2tp10601 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev l2tp10871 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev l2tp10821 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev l2tp10961 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev l2tp10441 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev l2tp10031 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev l2tp10011 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev l2tp10301 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev l2tp10841 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1

Is there a specific version of tc we have to install to get this working again?

Regards

EDIT: Old rules stay around forever and won't get removed.

RalfJung commented Oct 8, 2020

I have seen these errors for quite a while already, and I think they are harmless -- traffic control still works for me even with these errors.

I think these errors come from here:

self.tc('qdisc del dev %s root handle 1: htb default 0' % self.interface, ignore_fails=True)

Because the command is run via os.system, the error still gets printed to stderr, but thanks to ignore_fails no exception is raised.
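
For reference, a minimal sketch of how such a wrapper behaves, assuming the tc helper is essentially an os.system wrapper (the names below are illustrative, not the actual broker code):

import os

class TrafficControlError(Exception):
    pass

def tc(command, ignore_fails=False):
    # os.system lets tc write directly to our stderr, so "Error: Invalid
    # handle." still ends up in the journal even when the failure is ignored.
    status = os.system('tc %s' % command)
    if status != 0 and not ignore_fails:
        raise TrafficControlError('tc command failed: %s' % command)

# Deleting a qdisc that was never installed fails, but with ignore_fails=True
# only the stderr line printed by tc remains visible:
tc('qdisc del dev l2tp58025-58025 root handle 1: htb default 0', ignore_fails=True)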

The tc output also looks different.

What is the exact command you used to generate that output? It is quite possible that current CentOS simply ships a different version or implementation of that tool than old Ubuntu, so the same state is displayed differently. Unfortunately I don't have any idea how to interpret this output.^^

The important bit is: does traffic shaping actually work? If the only problem is those error messages, that's a bit annoying, but not a real problem.
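
One way to check is to look at one of the tunnel interfaces from the output above (illustrative commands; the exact qdisc/class layout the broker installs may differ, but judging from the command quoted above the root qdisc should be htb rather than plain fq_codel when a limit is applied):

tc qdisc show dev l2tp58025-58025
tc -s class show dev l2tp58025-58025

If an htb class with the configured rate (e.g. the downstream limit from the log) shows up there, shaping is being applied despite the error message.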
