Turbo Tunnel candidate protocol evaluation #14

wkrp opened this issue Oct 2, 2019 · 4 comments
wkrp commented Oct 2, 2019

This report evaluates selected reliable-transport protocol libraries for their suitability as an intermediate layer in a censorship circumvention protocol (the Turbo Tunnel idea). The three libraries tested are:

  • quic-go
  • kcp-go (with smux)
  • pion/sctp

The evaluation is mainly about functionality and usability. It does not specifically consider security, efficiency, and wire-format stability, which are also important considerations. It is not based on a lot of real-world experience, only the sample tunnel implementations discussed below. For the most part, I used default settings and did not explore the various configuration parameters that exist.

The core requirement for a library is that it must provide the option to abstract its network operations—to do all its sends and receives through a programmer-supplied interface, rather than by directly accessing the network. All three libraries meet this requirement: quic-go and kcp-go using the Go net.PacketConn interface, and pion/sctp using net.Conn. Another requirement is that the protocols have active Go implementations, because Go is currently the closest thing to a common language among circumvention implementers. A non-requirement but nice-to-have feature is multiplexing: multiple independent, reliable streams within one notional connection. All three evaluated libraries also provide some form of multiplexing.

Summary: All three libraries are suitable for the purpose. quic-go and kcp-go/smux offer roughly equivalent and easy-to-use APIs; pion/sctp's API is a little less convenient because it requires manual connection and stream management. quic-go likely has a future because QUIC in general has a lot of momentum behind it; its downsides are that QUIC is a large and complex protocol with lots of interdependencies, and is not yet standardized. kcp-go and smux do not conform to any external standard, but are simple and use-tested. pion/sctp is part of the pion/webrtc library but easily separable; it doesn't seem to offer any compelling advantages over the others, but may be useful for reducing dependencies in projects that already use pion/webrtc, like Snowflake.

Sample tunnel implementations

As part of the evaluation, I wrote three implementations of a custom client–server tunnel protocol, one for each candidate library. The tunnel protocol works over HTTP—kind of like meek, except each HTTP body contains a reliable-transport datagram rather than a raw chunk of a bytestream. I chose this kind of protocol because it has some non-trivial complications that I think will be characteristic of the situations in which the Turbo Tunnel design will be useful. In particular, the server cannot just send out packets whenever it wishes, but must wait for a client to make a request that the server may respond to. Tunnelling through an HTTP server also prevents the implementation from "cheating" by peeking at IP addresses or other metadata outside the tunnel itself.

turbo-tunnel-protocol-evaluation.zip

All three implementations provide the same external interface, a forwarding TCP proxy. The client receives local TCP connections and forwards their contents, as packets, through the HTTP tunnel. The server receives packets, reassembles them into a stream, and forwards the stream to some other TCP address. The client may accept multiple incoming TCP connections, which results in multiple outgoing TCP connections from the server. Simultaneous clients are multiplexed as independent streams within the same reliable-transport connection ("session" in QUIC and KCP; "association" in SCTP).

An easy way to test the sample tunnel implementations is with an Ncat chat server, which implements a simple chat room among multiple TCP connections. Configure the server to talk to a single instance of ncat --chat, and then connect multiple ncats to the client. The end result will be as if each ncat had connected directly to the ncat --chat: the tunnel acts like a TCP proxy.

run server:
        ncat -l -v --chat 127.0.0.1 31337
        server 127.0.0.1:8000 127.0.0.1:31337

run client:
        client 127.0.0.1:2000 http://127.0.0.1:8000
        ncat -v 127.0.0.1 2000 # as many times as desired

.-------.
:0 ncat |-TCP-.
'-------'     |
              | .------------.        .------------.       .------------------.
.-------.     '-:2000        |        |            |--TCP--:31337             |
:0 ncat |-TCP---:2000 client |--HTTP--:8000 server |--TCP--:31337 ncat --chat |
'-------'     .-:2000        |        |            |--TCP--:31337             |
              | '------------'        '------------'       '------------------'
.-------.     |
:0 ncat |-TCP-'
'-------'

As a more circumvention-oriented example, you could put the tunnel server on a remote host and have it forward to a SOCKS proxy—then configure applications to use the tunnel client's local TCP port as a local SOCKS proxy. The HTTP-based tunnelling protocol is just for demonstration and is not covert, but it would not take much effort to add support for HTTPS and domain fronting, for example. Or you could replace the HTTP tunnel with anything else, just by replacing the net.PacketConn or net.Conn abstractions in the programs.

quic-go

quic-go is an implementation of QUIC, meant to interoperate with other implementations, such as those in web browsers.

The network abstraction that quic-go relies on is net.PacketConn. In my opinion, this is the right abstraction. PacketConn is the same interface you would get with an unconnected UDP socket: you can WriteTo to send a packet to a particular address, and ReadFrom to receive a packet along with its source address. "Address" in this case is an abstract net.Addr, not necessarily something like an IP address. In the sample tunnel implementations, the server "address" is hardcoded to a web server URL, and a client "address" is just a random string, unique to a single tunnel client connection.
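
To make the address abstraction concrete, here is a hypothetical sketch (illustrative names, not taken from the sample code) of how a plain string can serve as a net.Addr, along with the net.PacketConn method set a custom transport has to satisfy:

// A string-backed net.Addr: the "address" can be a server URL or a random
// per-client ID, as in the sample implementations.
type stringAddr struct {
	network string
	address string
}

func (a stringAddr) Network() string { return a.network }
func (a stringAddr) String() string  { return a.address }

// The custom transport then only needs to implement net.PacketConn:
//
//	ReadFrom(p []byte) (n int, addr net.Addr, err error)
//	WriteTo(p []byte, addr net.Addr) (n int, err error)
//	Close() error
//	LocalAddr() net.Addr
//	SetDeadline(t time.Time) error
//	SetReadDeadline(t time.Time) error
//	SetWriteDeadline(t time.Time) error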

On the client side, you create a quic.Session by calling quic.Dial or quic.DialContext. (quic.DialContext just allows you to cancel the operation if wanted.) The dial functions accept your custom implementation of net.PacketConn (pconn in the listing below). raddr is what will be passed to your custom WriteTo implementation. QUIC has obligatory use of TLS for connection establishment, so you must also provide a tls.Config, an ALPN string, and a hostname for SNI. In the sample implementation, we disable certificate verification, but you could hard-code a trust root specific to your application. Note that this is the TLS configuration, for QUIC only, inside the tunnel—it's completely independent from any TLS (e.g. HTTPS) you may use on the outside.

tlsConfig := &tls.Config{
	InsecureSkipVerify: true,
	NextProtos:         []string{"quichttp"},
}
sess, err := quic.Dial(pconn, raddr, "", tlsConfig, &quic.Config{})

Once the quic.Session exists, you open streams using OpenStream. quic.Stream implements net.Conn and works basically like a TCP connection: you can Read, Write, Close it, etc. In the sample tunnel implementation, we open a stream for each incoming TCP connection.

stream, err := sess.OpenStream()
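
For illustration, here is a rough sketch of how the client side might tie each incoming TCP connection to its own stream (a simplification under assumed names, not the sample code itself; sess is the quic.Session from above):

ln, err := net.Listen("tcp", "127.0.0.1:2000")
if err != nil {
	log.Fatal(err)
}
for {
	local, err := ln.Accept()
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		defer local.Close()
		stream, err := sess.OpenStream()
		if err != nil {
			log.Print(err)
			return
		}
		defer stream.Close()
		go io.Copy(stream, local) // local TCP -> tunnel stream
		io.Copy(local, stream)    // tunnel stream -> local TCP
	}()
}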

On the server side, you get a quic.Session by calling quic.Listen and then Accept. Here you must provide your custom net.PacketConn implementation, along with a TLS certificate and an ALPN string. The Accept call takes a context.Context that allows you to cancel the operation.

tlsConfig := &tls.Config{
	Certificates: []tls.Certificate{*cert},
	NextProtos:   []string{"quichttp"},
}
ln, err := quic.Listen(pconn, tlsConfig, &quic.Config{})
sess, err := ln.Accept(context.TODO())

Once you have a quic.Session, you get streams by calling AcceptStream in a loop. Notice a difference from writing a TCP server: in TCP you call Listen and then Accept, which gives you a net.Conn. That's because there's only one stream per TCP connection. With QUIC, we are multiplexing several streams, so you call Listen, then Accept (to get a quic.Session), then AcceptStream (to get a net.Conn).

for {
	stream, err := sess.AcceptStream(context.TODO())
	go func() {
		defer stream.Close()
		// stream.Read, stream.Write, etc.
	}()
}

Notes on quic-go:

  • The library is coupled to specific (recent) versions of the Go language and its crypto/tls library. It uses a fork of crypto/tls called qtls, because crypto/tls does not support the custom TLS encoding used in QUIC. If you compile a program with the wrong version of Go, it will crash at runtime with an error like panic: qtls.ClientSessionState not compatible with tls.ClientSessionState.
    • The need for a forked crypto/tls is a bit concerning, but in some cases you're tunnelling traffic (like Tor) that implements its own security features, so you're still secure even if there's a failure of qtls.
  • The quic-go API is marked unstable.
  • The on-wire format of QUIC is still unstable. quic-go provides a VersionNumber configuration parameter that may allow locking in a specific wire format.
  • The client can open a stream, but there's no way for the server to become aware of it until the client sends some data. So it's not suitable for tunnelling server-sends-first protocols, unless you layer on an additional meta-protocol that ignores the client's first sent byte, or something.
  • QUIC automatically terminates idle connections. The default idle timeout of 30 seconds is aggressive, but you can adjust it using an IdleTimeout parameter (see the sketch after these notes).
  • The ability to interrupt blocking operations using context.Context is a nice feature.
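
As a sketch of the idle-timeout adjustment mentioned in the notes above (the IdleTimeout field name matches the quic-go API at the time of writing, but may differ in other versions):

quicConfig := &quic.Config{
	IdleTimeout: 2 * time.Minute, // default is an aggressive 30 seconds
}
sess, err := quic.Dial(pconn, raddr, "", tlsConfig, quicConfig)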

kcp-go and smux

This pair of libraries separates reliability and multiplexing. kcp-go implements a reliable, in-order channel over an unreliable datagram transport. smux multiplexes streams inside a reliable, in-order channel.

Like quic-go, the network abstraction used by kcp-go is net.PacketConn. I've said already that I think this is the right design and it's easy to work with.

The API is functionally almost identical to quic-go's. On the client side, first you call kcp.NewConn2 with your custom net.PacketConn to get a so-called kcp.UDPSession (it actually uses your net.PacketConn, not UDP). kcp.UDPSession is a single-stream, reliable, in-order net.Conn. Then you call smux.Client on the kcp.UDPSession to get a multiplexed smux.Session on which you can call OpenStream, just like in quic-go.

kcpConn, err := kcp.NewConn2(raddr, nil, 0, 0, pconn)
sess, err := smux.Client(kcpConn, smux.DefaultConfig())
stream, err := sess.OpenStream()

On the server side, you call kcp.ServeConn (with your custom net.PacketConn) to get a kcp.Listener, then Accept to get a kcp.UDPSession. Then you turn the kcp.UDPSession into a smux.Session by calling smux.Server. Then you can AcceptStream for each incoming stream.

ln, err := kcp.ServeConn(nil, 0, 0, pconn)
conn, err := ln.Accept()
sess, err := smux.Server(conn, smux.DefaultConfig())
for {
	stream, err := sess.AcceptStream()
	go func() {
		defer stream.Close()
		// stream.Read, stream.Write, etc.
	}()
}

Notes on kcp-go and smux:

  • kcp-go has optional crypto and error-correction features. The crypto layer is questionable and I wouldn't trust it as much as quic-go's TLS. For example, it seems to provide only confidentiality, not integrity or authentication; it uses only a single shared key; and it offers a wide variety of ciphers rather than one well-vetted construction.
  • kcp-go is up to v5 and I don't know if that means the wire format has changed in the past. There are two versions of smux, v1 and v2, which are presumably incompatible.
    • I don't think there are formal specifications of the KCP and smux protocols, and the upstream documentation on its own does not appear sufficient to reimplement them.
  • There is no need to send data when opening a stream, unlike quic-go and pion/sctp.
  • The separation of kcp-go and smux into two layers could be useful for efficiency in some cases. For example, Tor does its own multiplexing and in most cases only makes a single, long-lived connection through the pluggable transport. In that case, you could omit smux and only use kcp-go, as sketched below.
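
A sketch of that smux-less option, assuming a single local connection localConn and the pconn/raddr from the client example (illustrative, not the sample code):

kcpConn, err := kcp.NewConn2(raddr, nil, 0, 0, pconn)
if err != nil {
	log.Fatal(err)
}
defer kcpConn.Close()
// kcp.UDPSession is itself a net.Conn, so it can carry a single stream directly.
go io.Copy(kcpConn, localConn) // local -> tunnel
io.Copy(localConn, kcpConn)    // tunnel -> local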

pion/sctp

pion/sctp is a partial implementation of SCTP (Stream Control Transmission Protocol). Its raison d'être is to implement DataChannels in the pion/webrtc WebRTC stack (WebRTC DataChannels are SCTP inside DTLS).

Unlike quic-go and kcp-go, the network abstraction used by pion/sctp is net.Conn, not net.PacketConn. To me, this seems like a type mismatch of sorts. SCTP is logically composed of discrete packets, like IP datagrams, which is the interface net.PacketConn offers. The code does seem to preserve packet boundaries when sending; i.e., multiple sends at the SCTP stream layer do not coalesce at the net.Conn layer. The code seems to rely on this property for reading as well, assuming that one read equals one packet. So it appears to use net.Conn in a specific way that works much like net.PacketConn, with the main difference being that the source and destination net.Addrs are fixed for the lifetime of the net.Conn. This is based only on a cursory reading of the code, and I could be mistaken.

On the client side, usage is not too different from the other two libraries. You provide a custom net.Conn implementation to sctp.Client, which returns an sctp.Association. Then you can call OpenStream to get an sctp.Stream, which doesn't implement net.Conn exactly, but io.ReadWriteCloser. One catch is that the library does not automatically keep track of stream identifiers, so you have to manually assign each new stream a unique identifier.

config := sctp.Config{
	NetConn:       conn,
	LoggerFactory: logging.NewDefaultLoggerFactory(),
}
assoc, err := sctp.Client(config)
var streamID uint16
stream, err := assoc.OpenStream(streamID, 0)
streamID++

Usage on the server side is substantially different. There's no equivalent to the Accept calls of the other libraries. Instead, you call sctp.Server on an already existing net.Conn. What this means is that your application must do the work of tracking client addresses on incoming packets and mapping them to net.Conns (instantiating new ones if needed). The sample implementation has a connMap type that acts as an adapter between the net.PacketConn-like interface provided by the HTTP server, and the net.Conn interface expected by sctp.Association. It shunts incoming packets, which are tagged by client address, into appropriate net.Conns, which are implemented simply as in-memory send and receive queues. connMap also provides an Accept function that provides notification of each new net.Conn it creates.
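
To give an idea of the shape of such an adapter, here is a hypothetical, stripped-down version of the in-memory net.Conn that a connMap-like demultiplexer could hand to sctp.Association (names and details are illustrative, not the actual sample code):

// memConn is fed whole packets by the demultiplexer and queues outgoing
// packets for the HTTP layer to pick up and deliver.
type memConn struct {
	recv          chan []byte // packets from this client, shunted in by address
	send          chan []byte // packets to be returned to this client
	local, remote net.Addr
}

// One packet per Read, preserving packet boundaries as pion/sctp expects.
func (c *memConn) Read(p []byte) (int, error) {
	pkt, ok := <-c.recv
	if !ok {
		return 0, io.EOF
	}
	return copy(p, pkt), nil
}

// One packet per Write; copy because the caller may reuse p.
func (c *memConn) Write(p []byte) (int, error) {
	c.send <- append([]byte(nil), p...)
	return len(p), nil
}

func (c *memConn) Close() error         { close(c.send); return nil }
func (c *memConn) LocalAddr() net.Addr  { return c.local }
func (c *memConn) RemoteAddr() net.Addr { return c.remote }

// Deadlines are ignored in this sketch.
func (c *memConn) SetDeadline(time.Time) error      { return nil }
func (c *memConn) SetReadDeadline(time.Time) error  { return nil }
func (c *memConn) SetWriteDeadline(time.Time) error { return nil }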

So it's a bit awkward, but with some manual state tracking you get an sctp.Association made with a net.Conn. After that, usage is similar, with an AcceptStream function to accept new streams.

for {
	stream, err := assoc.AcceptStream()
	go func() {
		defer stream.Close()
		// stream.Read, stream.Write, etc.
	}()
}

Notes on pion/sctp:

  • In SCTP, you must declare the maximum number of streams you will use (up to 65,535) during the handshake. This is a restriction of the protocol, not the library. It looks like pion/sctp hardcodes the number to the maximum. I'm not sure what happens if you allow the stream identifiers to wrap.
    • If SCTP's native streams are too limiting, one could layer smux on top of it instead (put multiple smux streams onto a single SCTP stream).
  • There's no crypto in the library, nor any provision for it in SCTP.
  • As with quic-go, the client cannot open a stream without sending at least 1 byte on it.

This document is also posted at https://www.bamsoftware.com/sec/turbotunnel-protoeval.html.

wkrp commented Oct 16, 2019

Here is a demonstration of using an encapsulated session/reliability protocol to persist a session across multiple TCP connections.

turbo-tunnel-reconnection-demo.zip

There are two implementations, reconnecting-kcp and reconnecting-quic. The client reads from the keyboard and writes to the server, then outputs whatever it receives from the server. The server is an echo server, except it swaps uppercase to lowercase and vice versa, and it sends a "[heartbeat]" line every 10 seconds (just so that there's some server-initiated traffic).

$ server 127.0.0.1:4000
$ client 127.0.0.1:4000
2019/10/16 19:40:05 begin KCP session a01140b7
2019/10/16 19:40:05 begin TCP connection 127.0.0.1:37738 -> 127.0.0.1:4000
Hello World.
hELLO wORLD.
test
TEST
[heartbeat]
abababa
ABABABA
[heartbeat]

It gets interesting when you interpose something that terminates TCP connections. The included lilbastard program is a TCP proxy that terminates connections after a fixed timeout, a technique that has reportedly been used to disrupt long-lived tunnels. (You may remember that, in the original post, I identified this as one of the problems the Turbo Tunnel idea can help solve.) Here you see a client–server session persisting despite the carrier TCP connections being terminated every 10 seconds.

$ server 127.0.0.1:4000
$ lilbastard -w 10 127.0.0.1:3000 127.0.0.1:4000
$ client 127.0.0.1:3000
2019/10/16 19:56:11 begin KCP session f814d839
2019/10/16 19:56:11 begin TCP connection 127.0.0.1:52762 -> 127.0.0.1:3000
test
TEST
[heartbeat]
hello again
2019/10/16 19:56:29 end TCP connection 127.0.0.1:52762 -> 127.0.0.1:3000
2019/10/16 19:56:29 begin TCP connection 127.0.0.1:52766 -> 127.0.0.1:3000
HELLO AGAIN
[heartbeat]
2019/10/16 19:56:41 end TCP connection 127.0.0.1:52766 -> 127.0.0.1:3000
2019/10/16 19:56:41 begin TCP connection 127.0.0.1:52770 -> 127.0.0.1:3000
[heartbeat]

This overall paradigm is called "connection migration" in QUIC. However, neither kcp-go nor quic-go supports connection migration natively. (kcp-go uses the client source address, along with the KCP conversation ID, as part of the key that distinguishes conversations; quic-go explicitly does not support the rather complicated QUIC connection migration algorithm.) Therefore we must layer our own connection migration on top. We do it in a way similar to Mosh (Section 2.2) and WireGuard (Section 2.1): the server accepts multiple simultaneous TCP connections, and when it needs to send a packet to a particular client, it sends the packet on whichever TCP connection most recently received a packet from that client. Connection migration is the purpose of the connMap data type in the server.
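
A sketch of that send rule (hypothetical names, not the demo's actual connMap):

// Track, per client ID, whichever carrier TCP connection most recently
// delivered a packet from that client, and send on that one.
type routeTable struct {
	mu     sync.Mutex
	routes map[string]net.Conn // client ID -> most recent TCP connection
}

// Update is called on every received packet.
func (t *routeTable) Update(clientID string, conn net.Conn) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.routes[clientID] = conn
}

// Route is called when the KCP/QUIC engine wants to send a packet to clientID.
func (t *routeTable) Route(clientID string) (net.Conn, bool) {
	t.mu.Lock()
	defer t.mu.Unlock()
	conn, ok := t.routes[clientID]
	return conn, ok
}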

In order to make connection migration work, we need a persistent "client ID" that outlives any particular transient TCP connection, lasting as long as the client's session does. With kcp-go, this is easy, as the kcp.UDPSession type has a GetConv method that exposes the 32-bit KCP conversation ID, and the conversation ID is easy to parse out of raw packets (it's just the first 4 bytes). With quic-go it's a little harder, because although QUIC connections natively have a connection ID, quic-go does not expose it; and it's not trivial to parse the connection ID from raw packets. So in the quic-go implementation, the client prefixes its QUIC packets with its own randomly generated client ID. This effectively adds a field to each QUIC packet without breaking any quic-go abstractions, at the cost of some network overhead. When the serverPacketConn does a ReadFrom or WriteTo, the addresses it deals with are these "client IDs," not actual network addresses that would be bound to a particular TCP connection.
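
As a sketch of the client-ID framing (the 8-byte ID length is an assumption for illustration; the demo may use a different size):

const clientIDLen = 8 // illustrative size

// Client side: prepend the random client ID to every outgoing packet.
func prefixPacket(clientID [clientIDLen]byte, p []byte) []byte {
	out := make([]byte, 0, clientIDLen+len(p))
	out = append(out, clientID[:]...)
	return append(out, p...)
}

// Server side: strip the prefix and treat it as the packet's source address.
func splitPacket(p []byte) (clientID [clientIDLen]byte, payload []byte, err error) {
	if len(p) < clientIDLen {
		return clientID, nil, errors.New("packet too short")
	}
	copy(clientID[:], p[:clientIDLen])
	return clientID, p[clientIDLen:], nil
}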

A note about combining kcp-go and smux: earlier I said "The separation of kcp-go and smux into two layers could be useful for efficiency... [If an application makes just one long-lived connection] you could omit smux and only use kcp-go." I tried doing that here, because in the demonstration programs, each client requires only one stream. I eventually decided that you really need smux anyway. This is because KCP alone does not define any kind of connection termination, so after a client disappears, the server would have a kcp.UDPSession in memory that would never go away. smux has an idle timeout that ensures that dead sessions get removed.
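
A sketch of tightening smux's keep-alive settings for that purpose (field names are from smux v1 and are an assumption; check the version you use):

smuxConfig := smux.DefaultConfig()
smuxConfig.KeepAliveInterval = 10 * time.Second // how often to probe the peer
smuxConfig.KeepAliveTimeout = 30 * time.Second  // tear the session down after this much silence
sess, err := smux.Server(conn, smuxConfig)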

wkrp commented Oct 21, 2019

Turbo Tunnel in obfs4proxy (survives TCP connection termination)

Recall from my first post one of the problems with existing circumvention designs, that the turbo tunnel idea can help solve: "Censors can disrupt obfs4 by terminating long-lived TCP connections, as Iran did in 2013, killing connections after 60 seconds."

Here are proof-of-concept branches implementing the turbo tunnel idea in obfs4proxy, one using kcp-go/smux and one using quic-go:

As diffs:

Using either of these branches, your circumvention session is decoupled from any single TCP connection. If a TCP connection is terminated, the obfs4proxy client will establish a new connection and pick up where it left off. An error condition is signaled to the higher-level application only when there's a problem establishing a new connection. Otherwise, transient connection termination is invisible (except as a brief increase in RTT) to Tor and whatever other application layers are being tunnelled.

I did a small experiment showing how a Tor session can persist, despite the obfs4 layer being interrupted every 20 seconds. I configured the "little bastard" connection terminator to forward from a local port to a remote bridge, and terminate connections after 20 seconds.

lilbastard$ cargo run -- -w 20 127.0.0.1:3000 192.81.135.242:4000

On the bridge, I ran tor using either plain obfs4proxy, or one of the two turbo tunnel branches. (I did the experiment once for each of the three configurations.)

DataDirectory datadir.server
SOCKSPort 0
ORPort auto
BridgeRelay 1
AssumeReachable 1
PublishServerDescriptor 0
ExtORPort auto
ServerTransportListenAddr obfs4 0.0.0.0:4000
ServerTransportPlugin obfs4 exec ./obfs4proxy -enableLogging -unsafeLogging -logLevel DEBUG
# ServerTransportPlugin obfs4 exec ./obfs4proxy.kcp -enableLogging -unsafeLogging -logLevel DEBUG
# ServerTransportPlugin obfs4 exec ./obfs4proxy.quic -enableLogging -unsafeLogging -logLevel DEBUG

On the client, I configured tor to use the corresponding obfs4proxy executable, and connect to the bridge through the "little bastard" proxy. (If you do this, your bridge fingerprint and cert will be different.)

DataDirectory datadir.client
SOCKSPort 9250
UseBridges 1
Bridge obfs4 127.0.0.1:3000 94E4D617537C3E3CEA0D1D6D0BC852B5A7613B77 cert=6rB8kVd981U0G2b9nXioB5o0Zu7tDpDkoZyPe2aCmqFzGmfaSiNIfQvkJABakH+DfYwWRw iat-mode=0
ClientTransportPlugin obfs4 exec ./obfs4proxy -enableLogging -unsafeLogging -logLevel DEBUG
# ClientTransportPlugin obfs4 exec ./obfs4proxy.kcp -enableLogging -unsafeLogging -logLevel DEBUG
# ClientTransportPlugin obfs4 exec ./obfs4proxy.quic -enableLogging -unsafeLogging -logLevel DEBUG

Then, I captured traffic for 90 seconds while downloading a video file through the tor proxy.

$ curl -L -x socks5://127.0.0.1:9250/ -o /dev/null https://archive.org/download/ucberkeley_webcast_itunesu_390697355/1.%202007-12-07%20-%20Keynote%20Address%3A%20The%20China%20Sustainable%20Energy%20Renewable%20Energy%20Program.mp4

The graph below depicts the amount of network traffic in each direction over time. In the "plain" chart, see how the download stops after the first connection termination at 20 s. Every 20 s after that, there is a small amount of activity, which is tor reconnecting to the bridge (and the resulting obfs4 handshake). But it doesn't matter, because tor has already signaled the first connection termination to the application layer, which gave up:

curl: (18) transfer closed with 111535615 bytes remaining to read

In comparison, the "kcp" and "quic" charts keep on downloading, being only momentarily delayed by a connection termination. The "kcp" chart is sparser than the "quic" chart, showing a lower overall speed. The "plain" configuration downloaded 3711 KB before giving up at 20 s; "kcp" downloaded only 1359 KB over the full 90 s; and "quic" downloaded 22835 KB over the full 90 s. It should be noted that this wasn't a particularly controlled experiment, and I didn't try experimenting with any performance parameters. I wouldn't conclude from this that KCP is necessarily slower than QUIC.

[Chart: obfs4proxy-turbotunnel — upload/download traffic over time for the "plain", "kcp", and "quic" configurations]

Source code for chart

Notes:

  • How this works, architecturally: on the client side, we replace the original TCP Dial call with either kcp.NewConn2 or quic.Dial, operating over an abstract packet-sending interface (clientPacketConn). clientPacketConn runs a loop that repeatedly connects to the same destination and exchanges packets (represented as length-prefixed blobs in a TCP stream; see the framing sketch after these notes) as long as the connection is good, reporting an error only when a connection attempt fails. On the server side, we replace the TCP Listen call with either kcp.ServeConn or quic.Listen, over an abstract serverPacketConn. serverPacketConn opens a single TCP listener, takes length-prefixed packets from all the TCP streams that arrive at the listener, and feeds them into a single KCP or QUIC engine. Whenever we need to send a packet for a particular connection ID, we send it on the TCP stream that most recently sent us a packet for that connection ID.
  • There's no need for this functionality to be built into obfs4proxy itself. It could be done as a separate program:
    ------------ client ------------                ------------ bridge ------------    
    tor -> turbotunnel -> obfs4proxy -> internet -> obfs4proxy -> turbotunnel -> tor
    
    But this kind of process layering is cumbersome with pluggable transports.
  • I'm passing a blank client IP address to the pt.DialOr call—this information is used for geolocation in Metrics graphs. That's because an OR connection no longer corresponds to a single incoming TCP connection with its single IP address—instead it corresponds to an abstract "connection ID" that remains constant across potentially many TCP connections. In order to make this work, you would have to define some heuristic such as "the client IP address associated with the OR connection is that of the first TCP connection that carried that connection ID."
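
Here is the length-prefix framing sketch referred to in the first note above (the 16-bit big-endian length is an assumption for illustration; the actual branches may frame packets differently):

// Send one packet as a 2-byte big-endian length followed by the payload.
func writePacket(w io.Writer, p []byte) error {
	var length [2]byte
	binary.BigEndian.PutUint16(length[:], uint16(len(p)))
	if _, err := w.Write(length[:]); err != nil {
		return err
	}
	_, err := w.Write(p)
	return err
}

// Read one length-prefixed packet from the stream.
func readPacket(r io.Reader) ([]byte, error) {
	var length [2]byte
	if _, err := io.ReadFull(r, length[:]); err != nil {
		return nil, err
	}
	p := make([]byte, binary.BigEndian.Uint16(length[:]))
	_, err := io.ReadFull(r, p)
	return p, err
}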

cohosh commented Oct 24, 2019

Thanks for the really great work on this!

Here are some thoughts I have after taking a stab at a simpler version of this for Snowflake.

There's no need for this functionality to be built into obfs4proxy itself.... But this kind of process layering is cumbersome with pluggable transports.

I could see the benefit of making some of these functions more generic and extensible so that Turbo Tunnel can be a separate library. In order to integrate it, PT developers would still have to make source code changes, but according to some well-defined API.

An example of how some of the existing functions on the client side could be made into API calls would be to modify dialAndExchange to take in a Dialer interface:

func (c *clientPacketConn) DialAndExchange(d net.Dialer, network, address string) error {
	addrStr := log.ElideAddr(c.addr)

	conn, err := d.Dial(network, address)

It's pretty much just the dial functionality that's specific to obfs4 in this case. This would require some refactoring in obfs4 (and Snowflake or any other PT) to implement a Dialer interface in place of what's already there, of course.

Perhaps the Dialer interface required by net.Conn isn't expressive enough; in that case, it could be a wrapper interface with a Dialer member in addition to the other information or functions we'd need.

I'm passing a blank client IP address to the pt.DialOr call—this information is used for geolocation in Metrics graphs. That's because an OR connection no longer corresponds to a single incoming TCP connection with its single IP address—instead it corresponds to an abstract "connection ID" that remains constant across potentially many TCP connections. In order to make this work, you would have to define some heuristic such as "the client IP address associated with the OR connection is that of the first TCP connection that carried that connection ID."

Another way to handle this is to make a new net.Conn interface on top of the underlying stream net.Conn with its own implementation of RemoteAddr that returns a client address that makes sense to pt.DialOr. In your current implementation, calls to RemoteAddr seem to be used just for logging at the moment. The new interface could also expose the session address with an additional function SessionAddr if needed. This is the route we took with the work-in-progress Snowflake sequencing layer, making a SnowflakeConn interface that wraps an underlying net.Conn: proto.go#L150
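
A minimal sketch of the wrapper described here (hypothetical names; assumes the session-level address is known when the wrapper is created):

// sessionConn embeds the carrier net.Conn but reports a stable session-level
// address from RemoteAddr, so that callers like pt.DialOr see something
// meaningful across reconnections.
type sessionConn struct {
	net.Conn
	sessionAddr net.Addr
}

func (c *sessionConn) RemoteAddr() net.Addr { return c.sessionAddr }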

wkrp commented Oct 25, 2019

There's no need for this functionality to be built into obfs4proxy itself.... But this kind of process layering is cumbersome with pluggable transports.

I could see the benefit of making some of these functions more generic and extensible so that Turbo Tunnel can be a separate library. In order to integrate it, PT developers would still have to make source code changes, but according to some well-defined API.

My feeling is that it's premature to be thinking about a reusable API or library. I want to discourage thinking of "Turbo Tunnel" as a specific implementation or protocol. It's more of an idea or design pattern. Producing a libturbotunnel that builds in design decisions like QUIC vs. KCP is not really on my roadmap. In any case, I feel that a prerequisite for doing something like that is the experience gained from implementing the idea a few times, not as a reusable library and not only by me.

I'm passing a blank client IP address to the pt.DialOr call—this information is used for geolocation in Metrics graphs. That's because an OR connection no longer corresponds to a single incoming TCP connection with its single IP address—instead it corresponds to an abstract "connection ID" that remains constant across potentially many TCP connections. In order to make this work, you would have to define some heuristic such as "the client IP address associated with the OR connection is that of the first TCP connection that carried that connection ID."

Another way to handle this is to make a new net.Conn interface on top of the underlying stream net.Conn with its own implementation of RemoteAddr that returns a client address that makes sense to pt.DialOr. In your current implementation, calls to RemoteAddr seem to be used just for logging at the moment. The new interface could also expose the session address with an additional function SessionAddr if needed. This is the route we took with the work-in-progress Snowflake sequencing layer, making a SnowflakeConn interface that wraps an underlying net.Conn: proto.go#L150

There's a type mismatch here though. Protocols like QUIC and KCP are fundamentally not based on an underlying stream. It's all discrete packets; i.e., it's a PacketConn, not a Conn. There's no consistent well-defined remote address for a PacketConn. You can call ReadFrom and it will tell you where that single packet came from, but that remote address may change for every call. And what's more, those packets don't even all necessarily belong to the same QUIC or KCP connection. It happens that in the special case of the obfs4 implementation, there is secretly a Conn underneath the PacketConn, so we can break the abstraction a little bit and adopt a "first remote address wins" heuristic. I actually don't think that's a big deal and I'm not worried about solving it.

The RemoteAddr of the QUIC or KCP connection, which is a Conn built on top of a PacketConn, is actually used internally by the QUIC or KCP library: it's the address that gets passed to WriteTo on the PacketConn. So we can't change the definition of RemoteAddr without really harming its semantics. I would rather define this as a separate data field, explicitly described as ancillary information peeking through the abstraction, rather than going through the standard Conn interfaces.
