net4people / bbs
DNS tunnel that can do DoH and DoT #30
Comments
As of tag v0.20200426.0 in the source code, dnstt-server lets you control the maximum UDP payload size with a command-line option.
Download speed tests

I did some experiments on the download performance of the DNS tunnel. tl;dr: a DNS tunnel can go faster than you may think, but the choice of resolver matters a lot.

I tried downloading a 10 MB file through the tunnel, using a selection of resolvers and DNS transports. I cut off the download after 10 minutes. "none" is the special case of no intermediate recursive resolver (the tunnel client sends queries directly to the tunnel server). The server was located in Fremont, US and the client in Tokyo, JP; there was about 100 ms of latency between the two hosts. Download rates are the median of 5 trials. The dnstt tag was v0.20200430.0. See below for source code, data, pcaps, etc.

Cloudflare's DoH and DoT resolvers are both fast. Google's DoH resolver is much faster than its DoT resolver (I noticed the DoT server terminating TCP connections every 200 KB or so). Comcast's DoH and DoT resolvers have about the same middling performance. Quad9's DoT resolver is notably slow; there's clearly something wrong there, whether it's the resolver or how the tunnel uses it. For comparison, the download rate of an untunneled, direct TCP transfer was 4666.3 KB/s.
I repeated the experiment with iodine, an existing DNS tunnel. iodine works over plaintext UDP only. dnstt is faster than iodine in every case, except for the Quad9 DoT resolver. It is possible to run iodine over a DoH proxy; I didn't try that myself but Sebastian Neef reports 4–12 KB/s when tunneling iodine through dnscrypt-proxy.
This graph shows the 5 trials under each experimental condition and gives an idea of the variance. Steeper lines are better.

The source code for these experiments is available in the following repo. I used git-annex to store the data files (there are over 3 GB of pcap files); you will need git-annex to retrieve them.

Also posted at https://www.bamsoftware.com/software/dnstt/performance.html.

Update 2020-05-05: I updated the tables and figure to exclude a preliminary test run that I did not intend to include in the first place. The change did not affect any of the qualitative observations. The Cloudflare/DoH case increased by about 7 KB/s, from 126.7 KB/s to 133.5 KB/s; none of the other cases changed by more than 3 KB/s.
Web page

I've set up a web page for dnstt (https://www.bamsoftware.com/software/dnstt/), and I wrote notes on the protocol.
This utility works during the internet shutdown in Turkmenistan. It successfully establishes a direct UDP connection to the destination server (without using any public resolver) and transfers up to 2 Mbit/s of download.
Shadowsocks plugin (proof of concept)

It would not be hard to adapt the dnstt code for a Shadowsocks plugin. Here I show Bash scripts that wrap dnstt-client/dnstt-server in a Shadowsocks plugin interface. A more portable/permanent solution would be to fork the dnstt code and swap the command-line interface for a Shadowsocks plugin environment-variable interface.
IMO a fork isn't even necessary; we can add SIP003 support without breaking the command-line interface. Just check at startup whether the SIP003 environment variables exist: if they do, dnstt is running as a plugin and the command line can be ignored; otherwise it is running standalone and reads the command line normally. The same applies to other tunnel programs.
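A minimal sketch of the idea (my suggestion, not dnstt's actual code): detect SIP003 plugin mode by checking the environment variables defined by the Shadowsocks SIP003 spec (SS_REMOTE_HOST, SS_REMOTE_PORT, SS_LOCAL_HOST, SS_LOCAL_PORT).

```shell
#!/bin/sh
# Sketch: decide whether we were launched as a SIP003 plugin by
# checking for the standard Shadowsocks plugin environment variables.
sip003_mode() {
    if [ -n "$SS_REMOTE_HOST" ] && [ -n "$SS_REMOTE_PORT" ]; then
        echo "plugin"      # run as a SIP003 plugin; ignore the command line
    else
        echo "standalone"  # no SIP003 variables; parse the command line as usual
    fi
}

# Example dispatch on the detected mode.
case "$(sip003_mode)" in
    plugin)     echo "would build dnstt arguments from SS_* variables" ;;
    standalone) echo "would parse argv normally" ;;
esac
```

The same check works in Go (`os.Getenv`) if added to dnstt itself, which is what makes the change backward compatible.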
Performance tuning, v1.20210803.0

I just released v1.20210803.0 of dnstt.
The main feature of this release is some parameter tuning for a small improvement in performance in some configurations. See the full post.

I'm working on Champa, a circumvention tunnel based on AMP cache. Like dnstt, Champa uses a Turbo Tunnel model, with KCP and smux as an inner session layer. While working on Champa, I discovered that adjusting some buffer and window sizes could greatly improve download performance. I suggested that the same idea might improve performance in Snowflake, and I spent some time experimenting to see if it could help dnstt as well.

In summary, I was able to improve download speeds, but only in some configurations, and only a little bit. I was encouraged by initial tests with plaintext UDP and without a recursive resolver, which I was able to make go quite fast, even over 1 MB/s. But this is a configuration we don't care about, because it's not covert. In a recommended configuration with a recursive resolver and an encrypted transport, I was really only able to speed up Cloudflare/DoT, by about 25%.

I started by re-running the experiment with v0.20200430.0, the version used in the previous round of tests, in order to have a fresh basis of comparison. Since then, the Comcast/DoT server has ceased operation, and Cloudflare/UDP has gone from one of the fastest configurations to the slowest. I repeated the experiment with v1.20210803.0, which has the performance tweaks.
The Google/DoT, Quad9/DoH, and Quad9/UDP rows need some comment. Looking at the second-by-second download rates, we see that in 2 out of 3 trials, Google/DoT initially went somewhat faster in the new version than in the old version, but then stalled and made no further progress. This was caused by a TCP disconnection (which itself is not unusual when using the Google DoT resolver) followed by a failure to reestablish the connection due to a name lookup error. This could be made more robust, but it does not really bear on the bandwidth measurements.

In the old Quad9/DoH and the new Quad9/UDP graphs, 2 of the 3 trials show a pattern of the download making progress, then stalling, then making progress, then stalling, and so on. I don't know what is causing this phenomenon, except to guess that it may be rate limiting on a subset of backend servers. In both cases, the 1 trial without the stop-and-start pattern has similar performance to the corresponding graph.

As before, I've made the test code and raw data available, so you should be able to reproduce the table and graph, or run your own experiments. You will need git-annex to download a subset of the data files.


dnstt is a new DNS tunnel that works with DNS over HTTPS and DNS over TLS resolvers, designed according to the Turbo Tunnel idea.
https://www.bamsoftware.com/software/dnstt/
How is it different from other DNS tunnels?
A DNS tunnel like this can be useful for censorship circumvention. Think of a censor that can observe the client⇔resolver link, but not the resolver⇔server link (the vertical line in the diagram). Traditional UDP-based DNS tunnels are generally considered easy to detect because of the unusual format of the DNS messages they generate—that, and the fact that every DNS message must be tagged with the domain name of the tunnel server, because that's how the recursive resolver in the middle knows where to forward them. But with DoH or DoT, the DNS messages on the client⇔resolver link are encrypted, so the censor cannot trivially see that a tunnel is being used. (Of course, it may still be possible to heuristically detect a tunnel based on the volume and timing of the encrypted traffic—encryption alone doesn't solve that.)
I intend this software release to be a demonstration of the potential of this kind of design for a tunnel. Currently the software doesn't provide a TUN/TAP network interface, or even a SOCKS or HTTP proxy interface. It only connects a local TCP socket to a remote TCP socket. Still, you can fairly easily set it up to work like an ordinary SOCKS or HTTP proxy; see below.
DNS zone setup
A DNS tunnel works by having the tunnel server act as an authoritative resolver for a specific DNS zone. The resolver in the middle acts as a proxy by forwarding queries for subdomains of that zone to the tunnel server. To set up a DNS tunnel, you need a domain name and a host where you can run the server.
Let's say your domain name is example.com and your host's IP addresses are 203.0.113.2 and 2001:db8::2. Go to your name registrar's configuration panel and add three new records:
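Concretely, using the tns and t labels described below (the label names are just this example's choices), the three records would look something like this in zone-file notation—A and AAAA point the server's name at its addresses, and the NS record delegates the tunnel zone to it:

```
tns.example.com.   A      203.0.113.2
tns.example.com.   AAAA   2001:db8::2
t.example.com.     NS     tns.example.com.
```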
The tns and t labels can be anything you want, but the tns label should not be a subdomain of the t label (everything under that subdomain is reserved for tunnel payloads). The t label should be short, because there is limited space in a DNS message, and the domain name takes up part of it.

Tunnel server setup
Run these commands on the server host; i.e. the one at tns.example.com / 203.0.113.2 / 2001:db8::2 in the example above.
First you need to generate crypto keys for the end-to-end tunnel encryption.
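A sketch of the key generation step, assuming dnstt-server's -gen-key, -privkey-file, and -pubkey-file flags (verify the flag names with ./dnstt-server -help on your version):

```
$ ./dnstt-server -gen-key -privkey-file server.key -pubkey-file server.pub
```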
Now run the server.
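A sketch of the server invocation, assuming the example names above and flag names as I recall them from the dnstt documentation (verify with -help):

```
$ ./dnstt-server -udp :5300 -privkey-file server.key t.example.com 127.0.0.1:8000
```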
127.0.0.1:8000 is the TCP address ("remote app" in the diagram above) to which incoming tunnelled streams will be forwarded.

The tunnel server needs to be reachable on port 53. You could have it bind to port 53 directly (-udp :53), but that would require you to run the server as root. It's better to run the server on a non-privileged port as shown above, and use port forwarding to forward port 53 to it. On Linux, these commands will forward port 53 to port 5300:

You also need something for the tunnel server to connect to. It could be a proxy server or anything else. For testing, you can use an Ncat listener:
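Hedged sketches of both command sets, the port-53 forwarding and the test listener (the iptables rules are illustrative; consult the dnstt documentation for the exact rules, and add matching ip6tables rules for IPv6):

```
# Accept packets on the non-privileged port and redirect port 53 to it.
$ sudo iptables -I INPUT -p udp --dport 5300 -j ACCEPT
$ sudo iptables -t nat -I PREROUTING -p udp --dport 53 -j REDIRECT --to-ports 5300

# A simple thing for the tunnel server to connect to: an Ncat listener.
$ ncat -l -k -v 127.0.0.1 8000
```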
Tunnel client setup
Copy server.pub (the public key file) from the server to the client. You don't need server.key (the private key file) on the client.
Choose a DoH or DoT resolver. There is a list of DoH resolvers here:
And a list of DoT resolvers here:
To use a DoH resolver, use the -doh option. For DoT, use the -dot option.
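Sketches of the client invocations (the resolver URL and address are placeholders, and the flag names are my best recollection of the dnstt CLI; verify with -help):

```
# DoH:
$ ./dnstt-client -doh https://doh.example/dns-query -pubkey-file server.pub t.example.com 127.0.0.1:7000
# DoT:
$ ./dnstt-client -dot dot.example:853 -pubkey-file server.pub t.example.com 127.0.0.1:7000

# Test: anything typed here should appear on the server's Ncat listener.
$ ncat -v 127.0.0.1 7000
```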
127.0.0.1:7000 specifies the client end of the tunnel. Anything that connects to that port ("local app" in the diagram above) will be tunnelled through the resolver and connected to 127.0.0.1:8000 on the tunnel server. You can test it using an Ncat client; run this command, and anything you type into the client terminal will appear on the server, and vice versa.

How to make a standard proxy

You can make the tunnel work like an ordinary proxy server by having the tunnel server forward to a standard proxy server. I find it convenient to use Ncat's HTTP proxy server mode.
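For example, on the server, run an Ncat HTTP proxy at the address the tunnel server forwards to (127.0.0.1:8000 in this example); --proxy-type http is a real Ncat option, but check your Ncat version:

```
$ ncat -l -k --proxy-type http 127.0.0.1 8000
```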
On the client, configure your applications to use the local end of the tunnel (127.0.0.1:7000) as an HTTP/HTTPS proxy.

I tried it with Firefox connecting to an Ncat HTTP proxy through the DNS tunnel, and it works.
Local testing
If you just want to see how it works, without going to the trouble of setting up a DNS zone or a network server, you can run both ends of the tunnel on localhost. This way uses plaintext UDP DNS, so needless to say it's not covert to use a configuration like this across the Internet. Because there's no intermediate resolver in this case, you can use any domain name you want; it just has to be the same on client and server.
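A sketch of such a localhost test, assuming the flag names used above (any domain name works here, since there is no intermediate resolver):

```
# terminal 1: something for the tunnel server to connect to
$ ncat -l -k -v 127.0.0.1 8000
# terminal 2: tunnel server, plaintext UDP on localhost
$ ./dnstt-server -udp 127.0.0.1:5300 -privkey-file server.key t.example.com 127.0.0.1:8000
# terminal 3: tunnel client pointed directly at the server
$ ./dnstt-client -udp 127.0.0.1:5300 -pubkey-file server.pub t.example.com 127.0.0.1:7000
# terminal 4: test; typed text should appear in terminal 1
$ ncat -v 127.0.0.1 7000
```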
When it's working, you will see log messages like this on the server:
And this on the client:
Caveats
A DoH or DoT tunnel is covert to an outside observer, but not to the resolver in the middle. If the resolver wants to stop you from using a tunnel, they can do it easily, by not recursively resolving requests for the DNS zone of the tunnel server. The tunnel is still secure against eavesdropping or tampering by a malicious resolver, though; the resolver can deny service but cannot alter or read the contents of the tunnel.
For technical reasons, the tunnel requires the resolver to support a UDP payload size of at least 1232 bytes, which is bigger than the minimum of 512 bytes guaranteed by DNS. I suspect that most public DoH or DoT servers meet this requirement, but I haven't done a survey or anything.
I haven't done any systematic performance tests, but I've done some cursory testing with the Google, Cloudflare, and Quad9 resolvers. With Google and Cloudflare I can get more than 100 KB/s download when piping files through Ncat. The Cloudflare DoH resolver occasionally sends a "400 Bad Request" response (the tunnel client automatically throttles itself when it sees an unexpected status code like that). The Quad9 resolvers seem to have notably worse performance than the others, but I don't know why.