IMQ - FAQ
1. What can I do with IMQ?
The imq device has two common use cases:
Ingress shaping:
With Linux, only egress shaping is possible (except for the ingress qdisc, which can only do rate limiting/policing). IMQ enables you to use egress qdiscs for real ingress shaping.
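As a minimal sketch of ingress shaping with IMQ (interface names and rates here are illustrative, not prescribed by the FAQ):

```shell
# Redirect traffic arriving on eth0 to imq0
iptables -t mangle -A PREROUTING -i eth0 -j IMQ --todev 0

# The imq device must be up before it will accept packets
ip link set imq0 up

# Attach an egress qdisc to imq0 -- it now shapes the *incoming* traffic
tc qdisc add dev imq0 root handle 1: htb default 10
tc class add dev imq0 parent 1: classid 1:10 htb rate 2mbit
```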
Shaping over multiple interfaces:
Qdiscs are attached to devices; as a consequence, one qdisc can only handle traffic going to the interface it is attached to. Sometimes it is desirable to have a global limit across multiple interfaces. With IMQ you can use iptables to specify which packets the qdisc sees, so global limits can be enforced.
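A sketch of a global limit across two interfaces (interface names and the rate are illustrative assumptions):

```shell
# Send traffic leaving either interface through the same imq device
iptables -t mangle -A POSTROUTING -o eth0 -j IMQ --todev 0
iptables -t mangle -A POSTROUTING -o ppp0 -j IMQ --todev 0

ip link set imq0 up

# One qdisc now enforces a single limit covering both interfaces
tc qdisc add dev imq0 root tbf rate 1mbit burst 10k latency 50ms
```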
2. Is it stable?
It seems to be pretty stable; a lot of people are using it without problems. There is one case which is not entirely clear at this time: enqueueing packets going to a GRE tunnel and also enqueueing the encapsulated packets to the same imq device results in the kernel assuming the gre device is dead-looped.
Another thing to note is that touching locally generated traffic may cause problems.
3. When do packets reach the device (qdisc)?
The imq device registers NF_IP_PRE_ROUTING (for ingress) and NF_IP_POST_ROUTING (egress) netfilter hooks. These hooks are also registered by iptables. Hooks can be registered with different priorities which determine the order in which the registered functions will be called. Packet delivery to the imq device in NF_IP_PRE_ROUTING happens directly after the mangle table has been passed (not in the table itself!). In NF_IP_POST_ROUTING packets reach the device after ALL tables have been passed. This means you will be able to use netfilter marks for classifying incoming and outgoing packets.
Packets seen in NF_IP_PRE_ROUTING include those that will later be dropped by packet filtering (since they have already consumed bandwidth); in NF_IP_POST_ROUTING, only packets which have already passed packet filtering are seen.
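Since the imq device sees packets only after the mangle table has been passed, marks set there can drive classification on the imq device. A sketch (port, mark value, and rates are illustrative):

```shell
# Mark interactive traffic in the mangle table
iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 22 -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -i eth0 -j IMQ --todev 0

# Classify by netfilter mark on imq0 using an fw filter
tc qdisc add dev imq0 root handle 1: htb default 20
tc class add dev imq0 parent 1: classid 1:10 htb rate 512kbit
tc filter add dev imq0 parent 1: protocol ip handle 1 fw flowid 1:10
```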
4. Commonly seen messages/errors
kernel: ip_queue: initialisation failed: unable to create queue
kernel: ip6_queue: initialisation failed: unable to create queue
The imq device feeds itself packets through the netfilter queueing mechanism. At the moment there can only be one netfilter queue per protocol family, so this means imq came first and ip(6)_queue cannot register as the PF_INET(6) netfilter queue.
kernel: nf_hook: Verdict = QUEUE.
You have compiled your kernel with CONFIG_NETFILTER_DEBUG=y. Turn it off to get rid of these messages.
iptables v1.2.6a: Couldn't load target IMQ:/usr/local/lib/iptables/libipt_IMQ.so: cannot open shared object file: No such file or directory
You haven't patched/rebuilt/installed iptables correctly. The iptables IMQ target shared libraries are only built if your kernel tree has previously been patched to include the IMQ target using patch-o-matic. If you took the precompiled shared libraries, you haven't copied them to the right place.
5. Should I not compile IMQ as a kernel module?
You can use IMQ as module safely:
Under 2.4 compiling (and using) IMQ as a dynamically loadable kernel module is perfectly fine and heavily tested.
Under 2.6 if you use patch linux-2.6.7-imq1 or newer ones you are OK, modules work fine.
If you are still using older patches there were problems:
• compiling IMQ device driver (imq.o) as a kernel module caused the kernel compilation to stop with an error; this issue was solved ages ago
• unloading (rmmod'ing) the IMQ device driver module (imq.o) caused a kernel panic; this issue has also been solved
• when used as a kernel module, the IMQ driver did nothing; this issue came from the previous (kernel panic when unloading), and has been solved
We recommend not using any IMQ patch earlier than linux-2.6.7-imq1 (for 2.6 kernels) or linux-2.4.26-imq.diff (for 2.4 kernels). (These patches also apply to earlier kernel versions than their names suggest.)
6. I need more than two IMQ devices. How can I create more?
Note that if you ever need more than 16 devices, there is no other option than to dive into the source code, because there is a hard-coded limit:
#define IMQ_MAX_DEVS 16
Remember, you can set the number of imq devices (imq0, imq1, ...) before the IMQ device driver is initialized.
That means that if you use IMQ as
• compiled into the kernel: you can pass your kernel (via your bootloader) the parameter imq.numdevs=n (no spaces around the equals sign), where n is the number of devices you want
• a loadable kernel module: you can pass modprobe the parameter numdevs=n (again, no spaces around the equals sign), where n is the number of devices you want. Remember that you first have to unload the module to load it with a different numdevs parameter.
You can safely initialize IMQ with more devices than are actually used. The only disadvantage is a few dozen bytes of kernel memory allocated per device.
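The two options above can be sketched as follows (the device count of 4 is an arbitrary example):

```shell
# Built into the kernel: append to the kernel command line in your bootloader:
#   imq.numdevs=4

# As a module: unload first, then reload with the new device count
rmmod imq
modprobe imq numdevs=4

# Bring up the additional devices before use
ip link set imq2 up
ip link set imq3 up
```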
7. When does IMQ (and filters attached to the IMQ device) see the packets - relative to NAT?
The default behaviour is that in PREROUTING, IMQ sees the packets before NAT, and in POSTROUTING, IMQ sees packets after NAT.
With the default behaviour, on a NATing (masquerading) machine, you should set up your IMQ devices like this to enable QoS u32 filters to see packets before NAT (that is, to classify according to private IP addresses).
If eth1 is the interface facing the network with private addresses, then you should say:
iptables -t mangle -A PREROUTING -i eth1 -j IMQ --todev 0
iptables -t mangle -A POSTROUTING -o eth1 -j IMQ --todev 1
Now QoS u32 filters attached to imq0 and imq1 both see the private IP addresses of packets originating from, or going to, your private network, respectively.
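Building on the two rules above, a u32 filter on imq0 could then classify by a private source address (the address and rates here are illustrative):

```shell
# Classify traffic from one internal host into its own HTB class on imq0
tc qdisc add dev imq0 root handle 1: htb default 20
tc class add dev imq0 parent 1: classid 1:10 htb rate 1mbit
tc filter add dev imq0 parent 1: protocol ip u32 \
    match ip src 192.168.0.10/32 flowid 1:10
```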
8. Can I change the way IMQ hooks in netfilter? (Can I make IMQ see packets in PREROUTING/POSTROUTING before/after (de)NAT?)
Yes, you can.
With 2.6 series kernels, you can easily change it at configuration time (look at Device drivers / Networking support / IMQ behavior (PRE/POSTROUTING)).
With 2.4 series kernels, currently the only way is to edit the source, and recompile your kernel.
To hook after NAT in PREROUTING: find the priority member (the last member) of the imq_ingress_ipv4 structure in linux/drivers/net/imq.c, and change it from NF_IP_PRI_MANGLE + 1 to NF_IP_PRI_NAT_DST + 1. Similarly, change the priority member (also the last member) of imq_ingress_ipv6 from NF_IP6_PRI_MANGLE + 1 to NF_IP6_PRI_NAT_DST + 1.
To hook before NAT in POSTROUTING: find the priority member (the last member) of the imq_egress_ipv4 structure in linux/drivers/net/imq.c, and change it to NF_IP_PRI_NAT_SRC - 1. Similarly, change the priority member (also the last member) of imq_egress_ipv6 to NF_IP6_PRI_NAT_SRC - 1.
9. I got kernel panics when using IMQ. I'm using a 2.6-series kernel (but not 2.6.8 or 2.6.8.1).
Make sure you do not send locally generated traffic (traffic generated by userspace programs or the kernel itself, e.g. GRE or IPsec tunnels) into the IMQ device; there is a known bug affecting 2.6-series kernels.
Make sure you use the latest stable patches and a sensibly recent kernel, and that the patch matches the kernel.
Compile error (structure has no member named `imq_flags')
Make ends in:
net/ipv4/netfilter/ipt_IMQ.c: In function `imq_target':
net/ipv4/netfilter/ipt_IMQ.c:19: error: structure has no member named `imq_flags'
make: *** [net/ipv4/netfilter/ipt_IMQ.o] Error 1
make: *** [net/ipv4/netfilter] Error 2
make: *** [net/ipv4] Error 2
make: *** [net] Error 2
Check that CONFIG_IMQ is enabled in your kernel .config.
10. What is the multi-queue patch?
The patch does not actively dispatch work to separate CPUs; it depends on either RPS or a multi-IRQ NIC to have packets running through multiple CPUs. Multiple queues on IMQ then try to avoid serialization on a single qdisc lock.
A script for multi-queue setup is available at https://github.com/imq/linuximq/blob/master/kernel/v2.6-multiqueue/load-imq-multiqueue.sh