Unum Summary (in progress)
About this document
This will be a document summarizing ideas from The End of Error: Unum Computing by John L. Gustafson. He proposes a new number format that he calls Universal Numbers, or unums for short, and an accompanying computer arithmetic that supposedly avoids rounding errors, overflow and underflow, and on average needs fewer bits for more accurate and precise answers than equivalent floating point calculations.
Other Online Sources
There is not that much easily available information about unums online yet, as they are a fairly new concept. Here are the few resources we have found so far:
- The IEEE website has what appears to be the first public presentation of the concept (video requires Silverlight, direct link to PDF slides here), which happened in 2013
- Slideshare has a more recent presentation from March 2015, which accompanied a podcast interview on the Rich Review. That "slidecast" can be seen here on Youtube. The transcript can be found here.
The content of the videos lies somewhere between a brief introduction and a very long pitch trying to convince you that this is a good idea, so they might not quite satisfy everyone. The slides provide fairly clear introductions, but skim over some crucial technical details.
Things that are in the book that I won't go into here
The book is over 400 pages (although a large part of that is appendices with sample Mathematica code), so things will need to be left out of this summary for brevity. For starters, the book provides a lot of (very interesting) context for the topic. Most of this context is not strictly necessary to understand the unum proposal (except perhaps for a better understanding of Gustafson's claims about how unums improve upon existing floating point and interval math), so it will not be summarized here. This includes:
- Related computer history in general
- Design decisions/mistakes (according to Gustafson) made in previous formats
- Discussions of potential savings in energy and storage
- Example scenarios comparing unums to floating point and traditional interval math. Interesting to show potential benefits, not needed to implement unum math.
- Most of the detailed ideas for (hardware) implementations - parts may be essential to get a unum implementation correct though, so some elements will need to be summarized anyway.
- The scratchpad layer. Hardware territory again, but parts may have to be implemented in software for now to use unums, so I might change my mind on this.
Other knowledge that the reader is expected to possess
Another way to save space is to assume the reader has a basic grasp of how a computer encodes the various IEEE floating point numbers as bit strings. Note that unums work in much the same way, and are in fact a superset of IEEE floating point numbers, so understanding these basics is important. This comes down to the following: how a given float size uses a sign bit and a fixed number of exponent and fraction bits, what the hidden bit is, how subnormals work, and how the infinities and NaNs are encoded. (NB: the book gives a very clear explanation of how this all works, so a lack of knowledge on this matter should not be an obstacle to reading it.)
Some jargon terms that will be used
Before we go any further, it is probably good to sum up some of the technical terms related to floating point math that will be used:
exponent size: the number of bits used to represent the exponent of a floating point value.
fraction size: the number of bits used to represent the fraction of a floating point value.
Unit in the Last Place, or ULP: the value of the last bit in a floating point's fraction. It signifies the difference between the current exact value and the next (or previous) exact value. The size of an ULP therefore depends on both the fraction size and the value of the exponent. It is also sometimes known as the Unit of Least Precision, but this can be slightly misleading in the context of unums, because the fraction size can differ between unums, meaning the precision varies more.
maxreal: the largest finite value that can be exactly represented for a given precision. It is one ULP away from infinity.
smallsubnormal: the smallest finite value greater than zero that can be represented for a given precision. It is one ULP away from zero.
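As a hedged illustration, these terms can be demonstrated for the familiar IEEE double with Python's math module (3.9+); mapping maxreal and smallsubnormal onto the double format here is my own choice of example, not from the book:

```python
import math
import sys

# For IEEE 754 double precision:
# maxreal: the largest finite double, one ULP below the values encoded as infinity
maxreal = sys.float_info.max
# smallsubnormal: the smallest positive double (a subnormal), one ULP above zero
smallsubnormal = math.ulp(0.0)
print(smallsubnormal)        # 5e-324

# The size of an ULP depends on the exponent: it grows with the magnitude
print(math.ulp(1.0))         # 2.220446049250313e-16  (2^-52)
print(math.ulp(2.0 ** 53))   # 2.0
```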
1. Representing all real numbers
The flexible bitsize of unums, and the promise that this results in more efficient and faster computing, draws a lot of attention. However, that part is fairly straightforward to explain. How unums manage to accurately represent all real numbers in a finite number of bits is more important to explain clearly first.
The short version is that we use an uncertainty bit or ubit to indicate whether a floating point value represents itself (an exact value), or the open interval between itself and the next representable exact number. Because floating point can already represent positive and negative infinity, the union of this new set with the existing floating point numbers covers the entire real number line. If that went too fast, read on for a slower explanation.
Accurate, Precise, Exact
One might sloppily call a number accurate, precise or exact and mean the same thing by each. However, they mean different things, which are easily illustrated with an example. Say we ask what 1 divided by 3 is, and get the following answers:
- 0.142857142857
- 0.3
- x, where 0 < x < 0.5
- one third
The first answer is a very precise number, but highly inaccurate. The second answer is less precise, but more accurate. The third is mathematically correct, thus perfectly accurate, but hardly precise at all. The fourth is the only exact answer and hence as accurate and precise as possible. However, it cannot be represented as a finite string of decimals.
This distinction is crucial to make, because confusing these terms may lead to the false conclusion that not all real numbers can be represented in finite bits. Because computers have a finite number of bits, they have finite precision. Because of that, we might conclude that the result of a calculation such as 1/3 must be rounded. However, as answers three and four above show, it is perfectly possible to represent it! It is this mistaking of limited precision for limited accuracy that leads to the belief that rounding errors are inevitable.
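To make the distinction concrete, here is a small Python sketch: rounding 1/3 to a double is precise but not exact, while the open interval between two consecutive doubles is less precise yet contains 1/3 with perfect accuracy, no rounding involved.

```python
import math
from fractions import Fraction

third = Fraction(1, 3)

# Rounded to a double, 1/3 is very precise but no longer exact:
approx = 1.0 / 3.0
print(Fraction(approx) == third)            # False: a rounding error occurred

# The open interval between two consecutive doubles is less precise,
# but it contains 1/3 accurately, with no rounding at all:
lo = approx if Fraction(approx) < third else math.nextafter(approx, 0.0)
hi = math.nextafter(lo, math.inf)
print(Fraction(lo) < third < Fraction(hi))  # True
```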
The ubit, or how to represent the complete real number line using finite bits
In a nutshell:
Take the set of all numbers that can be represented in the familiar floating point representation. This represents all numbers that can be expressed exactly as a floating point number (which includes both positive and negative infinity).
Now define another set, that of all open intervals between consecutive numbers that can be expressed exactly. This obviously accurately represents all the remaining numbers. Any given interval is as precise as the distance between the two exact numbers at its open endpoints.
Note that the second set is finite in size: every interval is bounded by two exact numbers, one on either side, so there are two fewer intervals than there are exact floating point numbers.
The union of these two sets represents the entire real number line.
Surprisingly straightforward, right? So how do we represent this second set? Very simply: we take the existing floating point representation and add one bit at the end (or change the purpose of the existing last bit). This will be the uncertainty bit or ubit.
If the ubit is not set, a floating point number represents itself.
If the ubit is set, it represents the open interval between itself and the next exact number away from zero that can be expressed.
Recall that the floating point representation has a signed zero, which lets us represent the open interval on both sides of it. Furthermore, infinity with the ubit set would indicate the interval between infinity and whatever comes next. This is nonsense, and can therefore be used to distinguish two forms of NaN (signalling and quiet NaN). We will return to this when defining unums proper.
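As a toy sketch (not the real unum encoding, which comes next), the ubit rule can be written as a small function; the function name and the explicit ulp parameter are illustrative assumptions of mine:

```python
def ubit_value(exact, ubit, ulp):
    """exact: an exactly representable number; ulp: the gap to the next exact
    number away from zero. Returns the exact value itself (ubit clear), or a
    tuple understood as the open interval it denotes (ubit set)."""
    if not ubit:
        return exact
    # Open interval between `exact` and the next exact number away from zero
    nxt = exact + ulp if exact >= 0 else exact - ulp
    lo, hi = sorted((exact, nxt))
    return (lo, hi)

print(ubit_value(0.25, 0, 0.125))   # 0.25 exactly
print(ubit_value(0.25, 1, 0.125))   # (0.25, 0.375): every real strictly between
print(ubit_value(-0.25, 1, 0.125))  # (-0.375, -0.25)
```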
Gustafson argues that it is the combination of the ubit with the flexible bitsize that makes unums work better than the existing floats with a fixed power-of-two number of bits. Regardless of whether this is true, they are separate ideas.
With the ubit out of the way, explaining the unum format becomes rather simple. First, recall the floating point format. Let's take Avogadro's number (~6.022 * 10^23) as an example (taken from Gustafson's slides). This is the binary representation stored in an IEEE standard 64-bit float: 0100010011011111111000010101010011110100010101111110101000010011
Let's break that down:
0 10001001101 1111111000010101010011110100010101111110101000010011

- sign bit: 0
- exponent: 10001001101
- fraction (hidden bit not shown): 1111111000010101010011110100010101111110101000010011
unums follow the same basic encoding scheme, but can vary in the number of exponent and fraction bits. The sizes of the exponent and fraction are stored as metadata at the end of the unum, together with the ubit. Together, the uncertainty bit, exponent size and fraction size form the utag. Here is Avogadro's number again, now as a 29-bit unum: 01100110111111110000111111011. Or broken down:
0 11001101 111111100001 1 111 1011

- sign bit: 0
- exponent: 11001101
- fraction (hidden bit not shown): 111111100001
- uncertainty bit: 1
- exponent size: 111
- fraction size: 1011
Even with the extra bits needed to store the ubit and the exponent and fraction sizes, this unum is smaller. This is because it does not approximate the exact value with a minimal rounding error, but instead sets the uncertainty bit, indicating that the value lies in the open interval between the current exactly represented number and the next, thereby making the limits of its own precision explicit.
In one important way, unums deviate from the IEEE standard, which says that when all exponent bits are 1, the float represents an exception value, with the particular exception depending on the value in the fraction. If all fraction bits are 0, the value is +∞ or -∞ (depending on the sign bit), and every other value represents NaN.
In the case of unums, they represent:
- ∞ if all bits in the exponent and fraction are set and the ubit is not set, with the sign depending on the sign bit
- NaN if all bits in the exponent and fraction are set and the ubit is set. If the sign bit is not set it is a quiet NaN; if it is set, it is a signalling NaN
Note that this breaks software that relies on NaN boxing. On the plus side, all those NaN bit patterns now represent exact numbers and their open intervals.
Some more jargon
ubit: the uncertainty bit. When not set, a unum represents an exact number. When set, it represents the open range between the number currently represented by the exponent/fraction and the number one ULP further away from zero. It is also used to indicate signalling/quiet NaN (explained later).
esizesize: the size of the exponent size, in bits. An esizesize of 2 means the esize has 2 bits available to it. An esizesize is set in the environment, not tracked in the utag.
fsizesize: the size of the fraction size, in bits. An fsizesize of 5 means the fsize has 5 bits available to it. An fsizesize is set in the environment, not tracked in the utag.
utag: the metadata at the end of a unum. It contains the ubit, a number of bits equal to esizesize to track the exponent size, and a number of bits equal to fsizesize to track the fraction size.
utagsize: the size of the utag, in bits. It is equal to 1 bit for the ubit + esizesize + fsizesize.
maxubits: the maximum size a unum may need for a given esizesize and fsizesize. It is equal to esizesize + fsizesize + 2^(esizesize) + 2^(fsizesize) + 1 bit for the sign bit + 1 bit for the ubit.
ubound: a mathematical interval on the real number line, represented by one or two unums. The ubit of a unum at an endpoint indicates whether that endpoint is open (set) or closed (not set).
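The utagsize and maxubits formulas above translate directly into code; a quick sketch to sanity-check them (the function names simply reuse the jargon terms):

```python
def utagsize(esizesize, fsizesize):
    # 1 bit for the ubit, plus the fields holding esize - 1 and fsize - 1
    return 1 + esizesize + fsizesize

def maxubits(esizesize, fsizesize):
    # sign bit + largest possible exponent + largest possible fraction + utag
    return 1 + 2 ** esizesize + 2 ** fsizesize + utagsize(esizesize, fsizesize)

print(utagsize(3, 4))   # 8
print(maxubits(3, 4))   # 33
```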
unum and ubound arithmetic
//TODO (and here is where things get interesting)