pool4: Jool 3.3 vs Jool 3.4

pool4's design changed a lot during the transition from Jool 3.3 to 3.4, and this had a number of consequences I feel I shouldn't leave undocumented.

Jool 3.3: How it used to be

pool4 used to be a literal implementation of RFC 6146's IPv4 transport address pool. This is how the RFC frames it:

To make up for hacks that shouldn't exist in the application layer, RFC 6146 wants the NAT64's pool to follow these rules. AFAIK, they seek to bring about some useful qualities of end-to-end transparency.

Assume n6 is a random IPv6 client needing a NAT64. n4a and n4b are two separate IPv4 clients or servers (which might or might not belong to the same node), and x, y, w and z are random port numbers.

  1. The NAT64 should always try to mask an IPv6 node using the same IPv4 address.
    In other words, if n6's 2001:db8::1#x connection is masked as 192.0.2.1#y when he tries to speak with n4a, then his 2001:db8::1#w connection should be masked as 192.0.2.1#z when he tries to speak with n4b.
    If n4a and n4b know each other, and therefore n4b knows it should be talking to 192.0.2.1, it might get confused or distrustful if the NAT64 uses another address.
    As much as possible, the NAT64 should fall back to using another address only if the already mapped IPv4 address no longer has available ports.
  2. If the NAT64 masks UDP connection 2001:db8::1#x using mask 192.0.2.1#y, then x and y should have the same parity (i.e., both ports should be even or both should be odd).
    I think this has to do with some awkward video streaming hacks.
    As much as possible, the NAT64 should fall back to using another address/port only if the ideal address has run out of ports of the same parity.
  3. If the NAT64 masks connection (UDP or TCP) 2001:db8::1#x using mask 192.0.2.1#y, then x and y should belong to the same port range.
    As much as possible, the NAT64 should fall back to using another address/port only if the ideal address has run out of ports of the same range.
    There are two defined port ranges:
    1. 0-1023
    2. 1024-65535

While following these rules, Jool's pool4 also attempted to randomize the ports as much as possible.
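
For illustration's sake, here is a minimal userspace sketch of the preference order the rules above describe. Everything in it is hypothetical; Jool 3.3's actual code was structured very differently.

```c
#include <stdbool.h>
#include <stdint.h>

struct candidate {
	uint32_t addr; /* Candidate IPv4 address, host byte order. */
	uint16_t port; /* Candidate port. */
};

/* Rule 2: both ports even, or both ports odd. */
static bool same_parity(uint16_t a, uint16_t b)
{
	return (a & 1) == (b & 1);
}

/* Rule 3: the two ranges are 0-1023 and 1024-65535. */
static bool same_range(uint16_t a, uint16_t b)
{
	return (a < 1024) == (b < 1024);
}

/*
 * Encodes the fallback priority as a score. Pairing with the already
 * mapped IPv4 address (rule 1) outweighs parity preservation (rule 2),
 * which outweighs range preservation (rule 3); the highest-scoring
 * candidate that still has a free port wins.
 */
static int score(const struct candidate *c, uint32_t paired_addr,
		uint16_t src_port)
{
	int result = 0;

	if (c->addr == paired_addr)
		result += 4;
	if (same_parity(c->port, src_port))
		result += 2;
	if (same_range(c->port, src_port))
		result += 1;

	return result;
}
```

The weights simply make each rule dominate the ones below it, which mirrors the "fall back only if" phrasing above.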

pool4's design also made rather questionable sacrifices to make port selection as swift as possible. See, the reason why --adding a pool4 entry used to take so long was that it preallocated the entire port ranges as massive arrays and shuffled the values, so no work whatsoever had to be done during packet translations. Borrowing a port from pool4 used to be a minimal, constant-time operation, at the cost of --adds being uncomfortably slow. Also, because it required so much contiguous preallocated memory, the size of the pool4 table had a very limited maximum.
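
To give an idea, here is a simplified userspace rendition of that strategy (the names are made up; the real code lived in the kernel). The point is that all the work happens at --add time, so borrowing a port is a constant-time array pop.

```c
#include <stdint.h>
#include <stdlib.h>

struct port_pool {
	uint16_t *ports; /* Every port in the range, pre-shuffled. */
	size_t count;    /* Ports still available. */
};

/* Expensive: runs once, when the pool4 entry is --added. */
static struct port_pool *pool_create(uint16_t min, uint16_t max)
{
	struct port_pool *pool;
	size_t i, j, total = (size_t)max - min + 1;
	uint16_t tmp;

	pool = malloc(sizeof(*pool));
	if (!pool)
		return NULL;
	pool->ports = malloc(total * sizeof(uint16_t));
	if (!pool->ports) {
		free(pool);
		return NULL;
	}

	for (i = 0; i < total; i++)
		pool->ports[i] = min + i;

	/* Fisher-Yates shuffle, so borrowed ports come out randomized. */
	for (i = total - 1; i > 0; i--) {
		j = rand() % (i + 1);
		tmp = pool->ports[i];
		pool->ports[i] = pool->ports[j];
		pool->ports[j] = tmp;
	}

	pool->count = total;
	return pool;
}

/* Cheap: O(1) during packet translation. */
static int pool_borrow(struct port_pool *pool, uint16_t *result)
{
	if (pool->count == 0)
		return -1; /* Exhausted. */
	*result = pool->ports[--pool->count];
	return 0;
}
```

Note that a full port domain is 65536 two-byte entries per address, which is the kind of contiguous allocation the paragraph above complains about.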

Finally, pool4 used to lack support for port ranges. Once an address was inserted into pool4, its entire port domain was assumed to be reserved for Jool. This forced the NAT64 to require two separate IPv4 addresses (one for its own traffic, another one for packet translations).

Jool 3.4: The new pool4

pool4 is by far the most troublesome NAT64 construct. No matter how well I design it, I feel somebody will find something to be upset about. Jool 3.4's pool4 is not intended to be final, but it does provide a better foundation on which several refinements can be built independently.

After RFC 6146 was established, RFC 7422 and draft-ietf-sunset4-nat64-port-allocation came along and brought new challenges and requirements to the table, some of which contradict each other and the previous ones. Jool 3.4's pool4 was designed as a development framework in which different port allocation algorithms can coexist and be switched according to the user's needs. However, until we get the global framework switch done and over with, only a simple algorithm is provided.
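
The framework's final shape is not settled, but the idea is roughly this: every algorithm plugs into a common interface, and the global switch just selects which one computes masks. The following is a hypothetical sketch; none of these identifiers come from Jool's actual source.

```c
#include <stdint.h>

/* Hypothetical, heavily simplified types; Jool's real structures differ. */
struct flow {
	uint8_t src6[16];   /* IPv6 source address. */
	uint32_t dst4_addr; /* IPv4 destination address. */
	uint16_t dst4_port; /* IPv4 destination port. */
};

struct mask {
	uint32_t addr;
	uint16_t port;
};

/* Every port allocation algorithm would plug in through this interface. */
struct mask_algorithm {
	const char *name;
	int (*pick_mask)(const struct flow *flow, struct mask *result);
};

/* Placeholder for the one algorithm 3.4 currently ships; its actual
 * logic is the BIB-compared scan described below. */
static int simple_pick_mask(const struct flow *flow, struct mask *result)
{
	(void)flow;
	result->addr = 0;
	result->port = 0;
	return 0;
}

static const struct mask_algorithm algorithms[] = {
	{ "simple", simple_pick_mask },
	/* Future work: deterministic range allocation, and so on. */
};

/* The "global framework switch" would simply index into `algorithms`. */
static const struct mask_algorithm *active = &algorithms[0];
```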

First, the new implementation attempts to fix the memory issues. This comes at the price of performance and a paradigm shift: the previous pool4 not only kept track of the registered addresses available for translation, but also of which ports were already taken by BIB entries. Keeping this information in pool4 meant that coming up with an available mask was completely instantaneous. Now that pool4 does not keep track of port usage, its domain has to be compared against the BIB database whenever a new mask is required. Looking up a BIB entry is an O(log N) operation (where N is the number of BIB entries in the table), and iterating through the pool4 candidates is O(M) (where M is the number of transport addresses in pool4). Rather depressingly, this would make mask appointment an O(M log N) operation. However, port assignments are scattered across the pool4 domain by means of a "per-destination endpoint" offset (courtesy of RFC 6056). This means ports do not tend to be allocated contiguously, so the weight of the M factor is severely alleviated (as long as pool4 is not exhausted).
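
Under those assumptions, the new lookup reduces to something like the following userspace sketch. The hash is a toy stand-in for RFC 6056's F() function and the BIB is stubbed; neither is Jool's actual code.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct mask {
	uint32_t addr;
	uint16_t port;
};

/* Stand-in for the BIB lookup; in Jool this is an O(log N) tree search. */
static bool bib_contains(const struct mask *mask)
{
	(void)mask;
	return false; /* Stubbed: pretend every candidate is free. */
}

/* Toy stand-in for RFC 6056's F(): any keyed hash of the destination. */
static size_t offset_for(uint32_t dst_addr, uint16_t dst_port,
		uint32_t secret)
{
	return (dst_addr * 31u + dst_port) ^ secret;
}

/*
 * O(M log N) in the worst case, but the per-destination offset scatters
 * allocations across the domain, so the scan usually succeeds after a
 * few candidates while pool4 is not close to exhaustion.
 */
static int pick_mask(const struct mask *pool, size_t pool_len,
		uint32_t dst_addr, uint16_t dst_port, uint32_t secret,
		struct mask *result)
{
	size_t start, i;

	if (pool_len == 0)
		return -1;
	start = offset_for(dst_addr, dst_port, secret) % pool_len;

	for (i = 0; i < pool_len; i++) {
		const struct mask *c = &pool[(start + i) % pool_len];
		if (!bib_contains(c)) { /* One O(log N) probe per candidate. */
			*result = *c;
			return 0;
		}
	}

	return -1; /* pool4 is exhausted. */
}
```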

Other important factors considered were

Address preservation and parity/range port preservation were not considered in this iteration (and there's uncertainty as to whether they will return in the future), due to their potential performance impact and the newer RFCs' complete disregard for them.

Future work

We'll need to add other algorithms/variables.

  • Deterministic Port Range allocation (the sequential flavor is sketched after this list)
    • Sequential
    • Staggered
    • Round robin
    • Interlaced horizontally
    • Cryptographically random port assignment
    • Others?
  • Connectivity State Optimization
  • Others?
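
To give a taste of the first family: this is a hedged sketch of the "sequential" flavor, where customer number i always owns the i-th port block, so individual assignments never need to be logged. The constants are invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define BASE_PORT 1024u  /* Skip the well-known range. */
#define RANGE_SIZE 2048u /* Ports reserved per customer; made up. */

struct port_range {
	uint16_t min;
	uint16_t max;
};

/*
 * "Sequential" deterministic allocation: the mapping is pure arithmetic,
 * so it needs no state and no logging. Only valid while the result fits
 * in 16 bits; a real implementation would validate that.
 */
static struct port_range sequential_range(uint32_t customer_index)
{
	struct port_range range;

	range.min = BASE_PORT + customer_index * RANGE_SIZE;
	range.max = range.min + RANGE_SIZE - 1;

	return range;
}

int main(void)
{
	uint32_t i;

	/* Customer 0 gets 1024-3071, customer 1 gets 3072-5119, etc. */
	for (i = 0; i < 3; i++) {
		struct port_range r = sequential_range(i);
		printf("customer %u: %u-%u\n", (unsigned)i,
				(unsigned)r.min, (unsigned)r.max);
	}

	return 0;
}
```

As far as I can tell, the other flavors mostly change the arithmetic that decides which block goes to which customer; the deterministic property is the point.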

I might also optimize the pool4 DB module. It's implemented almost as a list, where a hash table could be more appropriate. But the thing is, I don't want to spinlock over an O(M log N) algorithm. So I don't know about this.