
The integer debate #626

Closed
protolambda opened this issue Feb 14, 2019 · 32 comments

@protolambda
Collaborator

protolambda commented Feb 14, 2019

Follow up issue after earlier gitter chat and implementers call today (ethereum/eth2.0-pm#29)

To pick the best solution, we need a more structured approach, listing the arguments for each "problem class".

Note that I highlight Java and JavaScript as special considerations, because their integer support differs from what most other languages provide.

Go, Rust, Swift, Nim all support signed/unsigned 32/64 bit numbers.

Problem classes

  1. Slot numbers
  2. Validator indices
  3. Balances

Slot numbers

Range here: 1 slot per 6 seconds, for a few thousand years (could upgrade earlier...) = approx. 1000*365*24*60*60/6=5,256,000,000.

Signed numbers: No. However, there is a case for the first few epochs, where logic is looking back at old history, beyond the genesis. This could potentially result in an underflow.

Ways to catch the underflow:

  • x < 0: signed number check
  • x < genesis_slot, with genesis_slot >> 0 (i.e. a sufficiently large offset from 0). One potential offset is 1 << 63, effectively simulating a signed number (but worse: the sign bit is inverted, if you think of it as two's complement), and not human readable in decimal base. If this becomes the "blocknumber" of the future, we want it to be readable, right?
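As a sketch in Java (class and method names hypothetical), the two detection strategies look like this, with Long.compareUnsigned standing in for unsigned comparison in the offset variant:

```java
public class SlotUnderflow {
    // Hypothetical offset: genesis placed at 1 << 63 in an unsigned scheme.
    static final long GENESIS_SLOT = 1L << 63;

    // Option 1: signed numbers, genesis at 0 — pre-genesis lookback is simply negative.
    static boolean beforeGenesisSigned(long slot) {
        return slot < 0;
    }

    // Option 2: unsigned numbers with a large genesis offset — pre-genesis
    // lookback lands (unsigned-)below GENESIS_SLOT.
    static boolean beforeGenesisOffset(long slot) {
        return Long.compareUnsigned(slot, GENESIS_SLOT) < 0;
    }

    public static void main(String[] args) {
        long current = 5;        // 5 slots after genesis
        long lookback = 10;      // logic looks 10 slots back, past genesis
        System.out.println(beforeGenesisSigned(current - lookback));                // true
        System.out.println(beforeGenesisOffset(GENESIS_SLOT + current - lookback)); // true
    }
}
```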

If we don't care about very long-term consistency, we can go for 32 bits. It seems unnecessary however, as there's more range available to every language/platform. E.g. we could even opt for an imaginary 48 bit (un)signed integer.
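To make the headroom concrete, a quick sanity check of the quoted range (a throwaway sketch, not spec code):

```java
public class SlotRangeCheck {
    static final long SLOTS_PER_YEAR = 365L * 24 * 60 * 60 / 6;  // 5,256,000

    // Bits needed to represent n (for n > 0).
    static int bitsNeeded(long n) {
        return 64 - Long.numberOfLeadingZeros(n);
    }

    public static void main(String[] args) {
        long thousandYears = 1000L * SLOTS_PER_YEAR;     // 5,256,000,000 — the range quoted above
        System.out.println(bitsNeeded(thousandYears));   // 33 — already past 32 bits
        long maxSlots48 = (1L << 48) - 1;                // the "imaginary" 48-bit option
        System.out.println(maxSlots48 / SLOTS_PER_YEAR); // tens of millions of years of headroom
    }
}
```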

Javascript

Highlight from earlier gitter chat:

Well, if we're not using the higher bits (i.e. [53, 63]) of the signed 64 bit number for the next ~ 800K years, javascript can just keep using 52 bit mantissa based numbers (the 11 unused bits being the exponent part of the float). And then we can choose for a signed 64 bit number. With genesis at a clean 0.

I.e. by exploiting the range of slot numbers, we could opt for a 64 bit number (signed or unsigned) and have it be compatible with the 52 bit (excl. sign bit) range of JavaScript float-based (i.e. mantissa) integers.

An alternative would be to use big numbers, like the alternative Java also has; see below.

Java

Java only supports signed 64 bit numbers ("long"). Of course, you could transport an unsigned 64 bit number over a signed number, as done previously in Guava and supported in Java 8. This does introduce other things to consider; please refer to comments: A, B.

An alternative would be big integers (something that looks like z.Set(y).Add(x) instead of z = y + x). Special limits have to be imposed to keep them at 64-bit big-numbers as well.
Please refer to comments below about the Pros and Cons of Java options A, B.
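A minimal sketch of the big-integer route with the 64-bit limit imposed explicitly (class and method names are hypothetical, in the z.Set(y).Add(x) style discussed above):

```java
import java.math.BigInteger;

public class U64Big {
    static final BigInteger U64_MAX = BigInteger.ONE.shiftLeft(64).subtract(BigInteger.ONE);

    // z = x + y, rejecting any result outside the uint64 range.
    static BigInteger add(BigInteger x, BigInteger y) {
        BigInteger z = x.add(y);
        if (z.signum() < 0 || z.compareTo(U64_MAX) > 0)
            throw new ArithmeticException("out of uint64 range");
        return z;
    }

    public static void main(String[] args) {
        System.out.println(add(BigInteger.ONE, BigInteger.TEN)); // 11
    }
}
```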

Validator indices

The approximate range here: 0...4,000,000 (worst case validator count).
Note that due to inactive validators, the list may actually be even bigger. But it's somewhere in this order, worst case.

Signed numbers: No. However, there could be a case for an indexOf function that returns -1 when some validator is not included in a list, a common practice in quite a few languages.

Validator indices are relatively low, and would fit easily in 32 bits:

2**31 = 2,147,483,648
2**32 = 4,294,967,296

Now the questions here are:

  1. Do we want unnecessary but easy "consistency" by going for 64 bit numbers?
  2. Do we want signed numbers?

Javascript

Fits in a 52 bit mantissa. ES6 supports bitwise operations only for 32 bits. If we want to do bit-magic on indices, we may want to just go for 32 bits or less.

Java

Java only supports signed integers. "int": 32 bits, signed. docs

Balances

For balances there is a valid concern where we may not even want to use a 64 bit number, if we want precise/small balances.

Range: two options:

  1. Same as ETH 1.0, requires big-numbers in every language (some with more readable usage than others)
  2. Less resolution. Making it fit in a 64 bit number

Signed: No. However, for ease of math there is a case that clients may want to convert to signed numbers internally. Signs are not necessary anywhere after encoding.

Javascript / Java

We know the limitations of these by now. Balances are likely to require the most resolution in the near-term. No shortcuts with ranges (like with slots). Highly important to get right and prevent bugs.

Personally I'm a fan of using big-ints here, together with safe-math.
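As a sketch of what safe-math could mean if balances end up as uint64 values carried in Java longs (class and method names hypothetical), unsigned overflow is detected instead of silently wrapping:

```java
public class SafeGwei {
    // a + b as uint64; throws instead of wrapping on unsigned overflow.
    static long add(long a, long b) {
        long sum = a + b;
        // Wraparound happened iff the unsigned sum is smaller than an operand.
        if (Long.compareUnsigned(sum, a) < 0)
            throw new ArithmeticException("balance overflow");
        return sum;
    }

    // a - b as uint64; throws instead of wrapping on unsigned underflow.
    static long sub(long a, long b) {
        if (Long.compareUnsigned(a, b) < 0)
            throw new ArithmeticException("balance underflow");
        return a - b;
    }

    public static void main(String[] args) {
        System.out.println(add(1_000_000_000L, 5L)); // 1000000005
    }
}
```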


I tried to outline the problem cases + options + considerations. Please comment if you think anything is missing, or want to make a strong case for any of the options.

@protolambda
Collaborator Author

protolambda commented Feb 14, 2019

Personal opinion:

  1. Slot numbers with (un)signed int64
  • clean genesis at 0
  • human readable
  • if signed: negative numbers are sufficient to account for pre-genesis logic.
  • support by every platform
  • Alternatively, if encoding size is a big issue, and we want to get rid of the sign bit, I would prefer unsigned 64 bits, with genesis still at 0 for readability. We can handle the pre-genesis underflow differently with some logic or something.
  2. Validator indices with (un)signed int32
  • if signed: intuitive (for most) indexOf behaviour
  • sufficiently big (afaik)
  • if signed: 4 bytes is not that bad at all
  • if unsigned: minimal encoding, no sign bit, readable.
  • support by every platform (32 bits is easy regardless of sign)
  3. Balances: big-ints
  • Alternatively open for unsigned 64 bits, if it fits, and Java/JS teams are ok with some performance differences

Please share if you have a different view, and why.

Edit: changed opinion slightly, still same integer sizes, but open for either signed or unsigned ints. If I really had to make a choice at gunpoint, I think I would rather have unsigned ints (mostly because of simplicity of having no signs, although they could be just fine).
Edit 2: Engrish

@mratsim
Contributor

mratsim commented Feb 14, 2019

Spec considerations

Beyond the implementers story there is also the spec side to consider.

Potentially the early slots become an edge case with special complex (?) treatment in the spec.

Ergonomics

As discussed during the implementer calls, using signed integers requires conversion at very specific boundaries:

  • EVM
  • serialization

However using unsigned integers requires:

  • conversion in almost all logic and debugging code, as we need to subtract or add the epoch/slot constants before indexing an array or displaying a value.
  • lots of attention towards underflow when comparing values.

My position

I do not support unsigned int because:

  • it's harder to use the natural programming language syntax because everyone will probably use something like state.slots.at(idx) with at hiding idx - GENESIS_START_SLOT.

  • it's harder to write natural math because unsigned integers do not behave like intuitive math when subtracting or comparing. We're actually doing math modulo 2^32 or 2^64.

  • If a value needs to be compared with another or subtracted from another for processing (length, count, quantity), unsigned int tends to create more implementation bugs.

  • We had a lot of bugs due to unsigned memory addressing and length in EVM1 (see Sanitize memory addresses and lengths status-im/nimbus-eth1#97) and this is exemplified by a number of test cases that ensure clients can handle under and overflow in the EVM:

    • calldatacopy_DataIndexTooHigh.json
    • calldatacopy_DataIndexTooHigh2.json
    • calldatacopy_DataIndexTooHigh2_return.json
    • calldataloadSizeTooHigh.json
    • codecopy_DataIndexTooHigh.json
    • extcodecopy_DataIndexTooHigh.json

    While a VM is a special case, I'd like to avoid that in normal higher level code like the beacon chain.

In short, signed integers when we need to do math, logic and indexing and unsigned when we need control over memory representation (EVM, serialization).
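The modular-arithmetic foot-gun described above is easy to reproduce in Java by reading a long as uint64:

```java
public class WrapDemo {
    public static void main(String[] args) {
        long slot = 2;        // early slot, shortly after genesis
        long lookback = 5;    // logic looks 5 slots back
        long diff = slot - lookback;   // -3 as a signed long...
        // ...but read as uint64 it has wrapped to an enormous value:
        System.out.println(Long.toUnsignedString(diff)); // 18446744073709551613
        // So an unsigned comparison against any sane bound goes the wrong way:
        System.out.println(Long.compareUnsigned(diff, 1_000_000L) > 0); // true
    }
}
```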

@arnetheduck
Contributor

arnetheduck commented Feb 14, 2019

An important distinction I'd add is to differentiate serialization and logic.

For slots etc, it's entirely reasonable that the serialization is uint64 while the logic is expressed in signed integer arithmetic that's natural to read (which also happened to be the status quo until the offset solution was introduced). Implementers can deal with that as they wish (and contribute back to the spec!), including proving parts of the spec as being unsigned-safe, turning subtractions into additions etc - as long as we establish "safe ranges" that are reasonable.

I'd consider the offset solution impractical mainly because it encourages cutting corners on correctness to gain some performance in the near term - going for that smaller data type when you should be doing a bigint thus effectively penalizing languages that don't naturally support the given "reasonable" range and thus making the real-world deployment either more bug-prone or less rich in terms of implementation diversity.

As an aside, I also find signed integers to be more difficult from a serialization perspective - their byte encoding (now little-endian) is onerous to work with in general (parsing, debugging etc)

@cleishm
Contributor

cleishm commented Feb 14, 2019

It's my opinion that the spec should use unsigned integers in all situations where the described value should never be negative. Additionally, the spec should use unsigned integers of the minimal bit length required for the given purpose, and explicitly define under/overflow behavior.

Implementors can then use signed or unsigned integers as they see fit, as long as the requirements of the specification are maintained.

@atoulme
Contributor

atoulme commented Feb 17, 2019

All,

I believe it is important to properly support teams that work in Java/Kotlin and build Ethereum 2.0 clients. We (hat tip to @cleishm) have implemented an unsigned 64 bit number in Cava:
https://github.com/ConsenSys/cava/blob/master/units/src/main/java/net/consensys/cava/units/bigints/UInt64.java

I have prepared a commit for Artemis to change its logic to use it (I replaced all uses of UnsignedLong):
atoulme/artemis@eb4ef00

I believe this alleviates some of the pains, especially as UInt64 supports bit shifting, exact additions and subtractions, and more.

If I can supply an opinion - Kotlin offers the flexibility you seek if you're looking for a flexible DSL that allows overloading operators.

Cheers.

@lookfirst

lookfirst commented Feb 17, 2019

@protolambda

Java only supports signed integers. "int": 32 bits, signed.

"int: By default, the int data type is a 32-bit signed two's complement integer, which has a minimum value of -2^31 and a maximum value of 2^31-1. In Java SE 8 and later, you can use the int data type to represent an unsigned 32-bit integer, which has a minimum value of 0 and a maximum value of 2^32-1. Use the Integer class to use int data type as an unsigned integer. See the section The Number Classes for more information. Static methods like compareUnsigned, divideUnsigned etc have been added to the Integer class to support the arithmetic operations for unsigned integers."

Java only supports signed 64 bit numbers ("long").

"long: The long data type is a 64-bit two's complement integer. The signed long has a minimum value of -2^63 and a maximum value of 2^63-1. In Java SE 8 and later, you can use the long data type to represent an unsigned 64-bit long, which has a minimum value of 0 and a maximum value of 2^64-1. Use this data type when you need a range of values wider than those provided by int. The Long class also contains methods like compareUnsigned, divideUnsigned etc to support arithmetic operations for unsigned long."
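The quoted helpers in action, reinterpreting the same 64-bit pattern:

```java
public class Java8Unsigned {
    public static void main(String[] args) {
        long x = -1L;  // all 64 bits set: 2^64 - 1 when read as unsigned
        System.out.println(x);                               // -1
        System.out.println(Long.toUnsignedString(x));        // 18446744073709551615
        System.out.println(Long.compareUnsigned(x, 1L) > 0); // true: unsigned 2^64-1 > 1
        System.out.println(Long.divideUnsigned(x, 2L));      // 9223372036854775807 (2^63 - 1)
    }
}
```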

Of course, you could transport a unsigned 64 bit number over a signed number, but it's not nice to work with.

Totally subjective opinion and not enough empirical evidence to argue for a debate.

Alternative would be slow and unreadable Big integers. (due to z.Set(y).Add(x) instead of z = y + x). Special limits have to be imposed to keep it at 64-bit big-numbers as well.

Have you tested the speed of this? This should get easily optimized in the JIT. Many large financial institutions use Java for HFT (high frequency trading), which requires insane performance with large numbers.

As for the 'unreadable' part... it is common to build an API to simplify it for your use case.

@protolambda
Collaborator Author

protolambda commented Feb 17, 2019

In Java SE 8 and later, you can use the int data type to represent an unsigned 32-bit integer, which has a minimum value of 0 and a maximum value of 2^32-1.

True, this is what I meant by "transporting over a signed int", because this "support" is completely artificial.

Why?

  • It has no type. Ridiculous in a language such as Java imho.
  • It is prone to bugs; you have to know which operations differ from the default signed integers, to use the unsigned method variant.
  • You can work-around above limitations, but this would require you to either:
    • Box every int. Slow and a painful amount of boilerplate for simple 64 bit math
    • Use annotations for unsigned variables. And enforce them for all developers. And annotations don't compile, so you won't know after compile-time if you distribute without sources.
  • Java 8 is not special. The Guava library already supported it. And both do so through an SDK addition of a set of methods for missing unsigned behavior (a hack). The overhead of calling a method is already too much for me personally not to start looking for better alternatives (I want the very best here, not a hack, if possible).
  • Readability: math through method calls is not good.
  • On a JVM bytecode level, there is nothing new, it's just emulated, afaik. I would say performance is affected for those operations that are not the same between signed and unsigned, e.g. division, modulo.
  • It gets even worse with different JDKs: OpenJDK literally says "could be better, check other options" in its documentation of unsigned behavior, here. Guess what they use to implement it? They convert it to big-integers. Nobody wants to burn themselves with hacky but more optimal alternatives.
  • The JIT can inline whatever, but if it doesn't magically reduce multiple alternating additions and if-statements, it's not going to equal the same performance, even for simple unsigned integer comparisons.

Java only supports signed 64 bit numbers ("long"). ...

In Java SE 8 and later ...

See above, it's really just the same data-type, with a hacky workaround to provide support for unsigned behavior. It's not completely native to the JVM bytecode afaik (please correct me if I'm wrong).

Of course, you could transport a unsigned 64 bit number over a signed number, but it's not nice to work with.

Totally subjective opinion and not enough empirical evidence to argue for a debate.

I literally wrote up an entire post to start a debate on considerations with integers, with special attention to Java. And I'm familiar enough with the JVM and Java bytecode to work with JNI and reflection, and know of awful hacks such as the lower integer cache. I tried my best documenting everything, and noted the transport-over-signed integer support, but yes, I have my opinions.
I will edit it to not include "not nice to work with", and point out both of our comments. "not nice to work with" really was the nicest thing I could say about it.

Alternative would be slow and unreadable Big integers. (due to z.Set(y).Add(x) instead of z = y + x). Special limits have to be imposed to keep it at 64-bit big-numbers as well.

Have you tested the speed of this? This should get easily optimized in the JIT. Many large financial institutions use Java for HFT (high frequency trading), which requires insane performance with large numbers.

Have not tested it myself, but there's plenty of other research/benchmarks into big-integers. And generally, the standard BigInteger is much slower than the one in Go. Now compare it to a native 64 bit integer with native operations, no boxing/allocations, and it's far behind.

Personally, I fundamentally dislike it because of the boxing (it's not necessary if you're on a 64 bit platform) and awful syntax. There's reasons where I would use it however, like safe-math, or > 64 bit integers.

Also, I don't care about "usage in HFT" when it's not the bottleneck of the actual example application. Streaming and distribution across compute are much more important in such a case afaik. As a side note; I wonder if we can make the beacon-chain processing itself more parallelized...

I will just edit this, if it's too subjective.

My proposed alternative, if you want to pursue JVM with less of the concerns that Java raises: Kotlin.

Generally, Kotlin does a much better job at implementing the same thing (although still "experimental" phase in 1.3), but with types, readable constants, and readable operators. Still the same JVM limitations tho, as far as I know (but Kotlin also has a native target as well). For reasons like these, I think it deserves a look to transition to as a JVM based client now that it's still a relatively early phase.

TLDR: avoid hacky pseudo unsigned 64 bit support if possible, i.e. see if just signed numbers would work first. And if you do, use annotations, enforce them, and document the dangers.

Edit: fix quoting markup
Edit 2: fix some typos

@lookfirst

lookfirst commented Feb 17, 2019

This got me digging into the performance and I found this:

https://github.com/bwakell/Huldra
https://codeforces.com/blog/entry/17235

@protolambda
Collaborator Author

protolambda commented Feb 17, 2019

If we're comparing big-ints, take a look at http://www.wilfred.me.uk/blog/2014/10/20/the-fastest-bigint-in-the-west/
Some languages are just better suited for heavy optimization.

That said, I think we need to stay on topic, and discuss the benefits and drawbacks of all integer signs and sizes. Let's not get stuck talking about just big-ints, as clearly, we can do better.

The question is: what is the best choice for a solid and fast implementation, while still being reasonable to implement in all the involved languages? (x3, the different problems may require different types of solutions)

@lookfirst

lookfirst commented Feb 18, 2019

@protolambda What is the relevance of quoting a blog post from 2014 with tests against Java 1.7?

In the case of ETH 2.0, which is still being heavily developed and worked on, shouldn't the focus be less on premature optimization and more on a solid implementation? Solid meaning that it passes all unit and integration tests.

Pretty and fast are optimizations that can be added later. Java makes profiling and refactoring quite straightforward.

Kotlin is a bike shed issue. Switching to a whole new language just over this issue seems like another premature optimization. It isn't trivial: documentation, build systems, technical skills of the developers, etc. all need to be updated and changed. In other words, a whole number of risks for a benefit of unknown size.

I'd just go with @atoulme's suggestion and use Cava's implementation of UInt64. Problem solved.

@cleishm
Contributor

cleishm commented Feb 18, 2019

Kotlin is a bike shed issue. Switching to a whole new language just over this issue seems like another premature optimization. It isn't trivial.

I agree that Java/Kotlin is a bit of a bike shed issue, much like big vs little endian, signed vs unsigned, etc. ;-)

That said, I disagree about the triviality. Kotlin is highly compatible with Java, you can mix it in the same source tree, and it all works in the same IDE. I'd argue it's more like a syntax variation than a different language. Indeed, much of Cava is implemented in Kotlin but consumed using each module's Java-visible APIs. We do this because it's much easier to use Kotlin in many situations. One such situation would be if there really is an extensive need to use unsigned integers: one can switch a bunch of classes into the Kotlin syntax and that problem is solved. Fairly trivially.

@cleishm
Contributor

cleishm commented Feb 18, 2019

(BTW, Cava already provides an SSZ implementation using Kotlin unsigned integers: https://github.com/ConsenSys/cava/blob/master/ssz/src/main/kotlin/net/consensys/cava/ssz/experimental/SSZWriter.kt#L183. I expect other eth-2.0 related libraries in Cava will, where relevant, also expose Kotlin API using unsigned ints.)

@lookfirst

lookfirst commented Feb 18, 2019

Ah, so more like TypeScript/JavaScript. Sweet. Maybe that is the answer here then.

@cleishm
Contributor

cleishm commented Feb 18, 2019

Ah, so more like TypeScript/JavaScript.

That's a much more succinct way of explaining it! 😂

@benjaminion
Contributor

benjaminion commented Feb 18, 2019

Even though I'm in one of the teams working in Java, I don't think this discussion should be driven by what suits Java - one way or another we'll cope with whatever the final decision is.

For me, this ought to be a more purist conversation around formulating the spec in the safest way for implementers generally. Yesterday, I wrote a long justification for using signed integers, or, alternatively, providing clear bounds on valid ranges so implementers could safely do as they please. But then I realised I was just repeating the wisdom of @protolambda and @mratsim above, and there's no point going round in circles.


Having said that... 😂

I'd just go with @atoulme's suggestion and use Cava's implementation of UInt64. Problem solved.

Using a native type if we can (e.g. signed long for epochs/slots) would be immensely simpler. The main challenge with the wrapped types is the extremely clunky syntax which is horrible to write, awful to read and horrendous to maintain. But, as I said, we'd cope. [Yes, we know about Kotlin!]

@lookfirst

lookfirst commented Feb 18, 2019

What I'd like to see is this issue updated with some actual code to be used in ETH2.0, done both ways (native types vs. 'extremely clunky syntax which is horrible to write'), in Java and maybe even Kotlin.

Then we can do a bit of performance testing as well as syntax evaluation and discussion, and work towards a good middle ground. Right now it is all too subjective IMHO; we should be letting the code speak for itself.

@protolambda
Collaborator Author

protolambda commented Feb 18, 2019

@protolambda What is the relevance of quoting a blog post from 2014 with tests against Java 1.7?

@lookfirst OpenJDK hasn't even changed the big-number implementation since 2014. JDK 8 is from Mar 2014: https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/tip/src/share/classes/java/math/BigInteger.java
But, I don't care so much about Java in the comparison, or the details, it's more about the approaches other languages have taken to make big-integers efficient. (you don't have to stick to the approach of the Java SDK to get something working in Java, e.g. see Cava)


Even though I'm in one of the teams working in Java, I don't think this discussion should be driven by what suits Java - one way or another we'll cope with whatever the final decision is.

@benjaminion Agree, to a certain extent. If there's something to choose, let's go for compatibility.

Using a native type if we can (e.g. signed long for epochs/slots) would be immensely simpler

Yep, we should definitely be able to handle slots through native types for most languages. All we have to do is define a working range (enforced in spec) to support a distant but not too big future (e.g. 1000 years), and then we can support every client, each working with their native types. And unsigned/signed needs to be decided.

So let me try to simplify this slot debate to get it done with:

Slot/Epoch format debate nr. 1:

A) unsigned, genesis = 0: needs good design and testing to avoid/detect underflows for early epoch/slot logic, but will make encoding life easy, and has readable slot numbers. Also more pure, as there won't be negative slots.
B) unsigned, genesis = (1 << x): no underflows (if implemented well), but unreadable slot numbers. If x = 63 here, it's effectively unreadable signed integers with worse encoding (the leading 1 bit rules out a var-length int option without a sign bit, if you wanted one)
C) signed, genesis = 0: readable, easy to deal with from a logic perspective (i.e. first few slots looking back in old history, catching < 0 to return nothing). Supported by every language (JS number has a sign bit in addition to mantissa, it's native to Java). The negative slot can be important to future API response designs, e.g. return -1 for slot of non-existing block-hash. However, it may be difficult to deal with the potential of having a negative slot number.

Slot/Epoch format debate nr. 2:

A) Enforce a range of max. x bits (e.g. 52) to be used (good enough for ... years/centuries), so that every client can safely use their native types.
B) Do not enforce anything. Clients may still use native types, but may also end up with some sort of failure if done wrong.

Fair to say epochs get the same encoding: an epoch is only 64 slots, so epoch numbers need just ~6 bits fewer. It's also just nice for consistency and ease of conversion.


Then there is the validator index encoding: given the "4 million" worst case number, you would think that 32 bits is good and simple enough. So there's two questions to answer:

  • Will we need more than 32 bits? (E.g. we use index instead of pubkey as an ID for some reason, and it's not really limited by the 4M number, but the amount of entries)
  • Do we go for signed or unsigned? (See original post for pros/cons of options)

This should also be easy enough to decide on.


Then lastly, we have balances, which don't have a low range to start thinking about wide native support, and are hard to get right.

For this particular problem class, I kinda agree with @lookfirst here (although it is a lot of work):

What I'd like to see is this issue updated with some actual code to be used in ETH2.0, done both ways (native types vs. 'extremely clunky syntax which is horrible to write'), in Java and maybe even Kotlin.

Then we can do a bit of performance testing as well as syntax evaluation and discussion, and work towards a good middle ground. Right now it is all too subjective IMHO; we should be letting the code speak for itself.


Given that slots and indices are important, yet relatively easy to solve, I would prioritize this, and continue the debate on balances later.

@protolambda
Collaborator Author

protolambda commented Feb 18, 2019

Proposal to get the debate to 2/3 problems solved:

Slots & Epochs

int64, with enforced range check during block processing.

Why?

  • Readable in human format
  • Easy to deal with from a logic perspective (i.e. first few slots looking back in old history, catching < 0 to return nothing)
  • Supported by every language (JS number has a sign bit in addition to mantissa, it's native to Java).
  • The negative slot can be important to future API response designs, e.g. return -1 for slot of non-existing block-hash.

How?

We have to deal with the possibility that someone tries to propagate a block, created in a:

  1. negative slot (already enforced, since block.slot > parent_block.slot, genesis = 0)
  2. too distant future (thousands of years from now, we'll likely ignore them anyway), outside of the range we agree upon (52 bits (excl. sign bit) being the max. if we want wide support)
Case 1 is already enforced; case 2 likely needs to be done anyway. Just do the range check on ingestion and be done with it. (Add sanity checks to spot occurrences of these two cases wherever you see fit, though.)
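Sketched as a single ingestion-time check (constant and class names hypothetical; the 52-bit limit taken from the range discussed above):

```java
public class BlockSlotCheck {
    // Hypothetical agreed-upon range: 52 bits, excluding the sign bit.
    static final long MAX_SLOT = (1L << 52) - 1;

    // Reject non-increasing slots (which also excludes pre-genesis slots,
    // since genesis = 0) and far-future slots outside the agreed range.
    static boolean acceptable(long slot, long parentSlot) {
        return slot > parentSlot && slot >= 0 && slot <= MAX_SLOT;
    }

    public static void main(String[] args) {
        System.out.println(acceptable(11, 10));           // true
        System.out.println(acceptable(MAX_SLOT + 1, 10)); // false: too far in the future
    }
}
```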

Validator Indices

int32

Why?

Given the 4M worst case often cited, or 8M if you go by the EJECTION_BALANCE constant of 16 ETH and 128M ETH in circulation in the distant future, there's a lot to say for just not using 64 bits.

As mentioned in the original write-up:

Validator indices are relatively low, and would fit easily in 32 bits:

2**31 = 2,147,483,648
2**32 = 4,294,967,296

And since indices are just indices, not out-of-context keys (unlike the public key that everyone knows of for verification etc.), 32 bits should be enough for a long time. (IMO: Once we get to a point where we need to design to billions of validators, it's mainstream and big enough to have gone through much, much more updates/forks)

I think signed integers work slightly better for API-like reasons, but I'm okay with unsigned here.

If anyone can make a strong case for unsigned / against signed, raise your voice.

"indices are not negative" is not good enough imho, as a negative value is useful in special cases (like those mentioned above), and negative indices are no different from indices out of range at the other end, i.e. index >= length.

Besides, life for Java is slightly easier with it being signed, since they never have to wrap it in an int64 (long).

How?

Take Let proposer = state.validator_registry[proposer_slashing.proposer_index] as an example: you handle index < 0 exactly the same as index >= len(validator_registry).

Same for everywhere else, just update the current implicit range_check_index(i) function of your validator registry to check for negative indices too.

If some negative validator index is ever encoded and used in a slashing/block/whatever, it's the same as an index being out of range on the other side of the registry, and we mark it as invalid.
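As a sketch, the updated lookup treats both failure modes identically (class and helper names hypothetical):

```java
import java.util.List;

public class ValidatorRegistry {
    // A negative index is rejected exactly like index >= length: both are
    // simply "out of range", and the containing object is marked invalid.
    static <T> T lookup(List<T> registry, int index) {
        if (index < 0 || index >= registry.size())
            throw new IndexOutOfBoundsException("invalid validator index: " + index);
        return registry.get(index);
    }

    public static void main(String[] args) {
        List<String> registry = List.of("v0", "v1", "v2");
        System.out.println(lookup(registry, 1)); // v1
    }
}
```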

Balances

TBD later, part 3/3 of the debate.

Since big-integers are involved for some languages that don't support uint64 (if we would want to keep using that), it warrants some extra discussion on style and testing to get things right for everyone.

@mratsim
Contributor

mratsim commented Feb 18, 2019

For indices, signed works better because low-level index accesses are pointer arithmetic, and pointer offsets are signed (cf. ptrdiff_t), so this maps very well to C on 32- and 64-bit platforms. But if we enforce the range, that is an implementation detail.

Edit - Side-note: Quoting Google C++ style guide

On Unsigned Integers
Unsigned integers are good for representing bitfields and modular arithmetic. Because of historical accident, the C++ standard also uses unsigned integers to represent the size of containers - many members of the standards body believe this to be a mistake, but it is effectively impossible to fix at this point. The fact that unsigned arithmetic doesn't model the behavior of a simple integer, but is instead defined by the standard to model modular arithmetic (wrapping around on overflow/underflow), means that a significant class of bugs cannot be diagnosed by the compiler. In other cases, the defined behavior impedes optimization.

That said, mixing signedness of integer types is responsible for an equally large class of problems. The best advice we can provide: try to use iterators and containers rather than pointers and sizes, try not to mix signedness, and try to avoid unsigned types (except for representing bitfields or modular arithmetic). Do not use an unsigned type merely to assert that a variable is non-negative.

@cleishm
Contributor

cleishm commented Feb 19, 2019

Google C++ style guide may be fine for Google, but I don't think it's widely accepted as best practice, and it's focussed on implementation rather than on specifications.

For a specification, I still contend that it should describe the minimal requirement. If slots & epoch only need a range of 0 .. 2**52, then define it that way in the spec. If a bit-flag is needed to indicate an error, add that in the specification also. Implementations can then use whatever types their language of choice provides, as long as they can represent the required range and the implementation handles under/overflow as specified.

JustinDrake added a commit that referenced this issue Feb 19, 2019
* Address the slot/epoch underflow problem, even for Java implementers! 🎉
* Squash a bug with `get_previous_epoch`
* Fix #642
* Address #626 (Vitalik, Danny, myself agree that avoiding signed integers is probably best)
@mkalinin
Collaborator

mkalinin commented Feb 20, 2019

Even though I'm in one of the teams working in Java, I don't think this discussion should be driven by what suits Java - one way or another we'll cope with whatever the final decision is.

My teammates and I totally agree with @benjaminion. We are another Java team working on beacon chain implementation.

@protolambda
Collaborator Author

protolambda commented Feb 20, 2019

Alternative proposal, the unsigned int one

Ok, so alternatively, to make the spec more "pure", we can say the types for slots/epochs and validator indices are unsigned. But with very clear wording on the allowed range, and encoding. This way, clients can safely use the native types that they like ("long" (java signed int64), "Number" (JS sign + mantissa)).

The representation below is big-endian, to make it easy to read. Little-endian does not change much, other than the actual memory order of the bytes.


| Bits | Field |
| --- | --- |
| 63 | S: sign bit |
| 62–52 | A: unused till a later soft fork (11 bits) |
| 51–0 | B: allowed range, but subject to spec rules (time) |

S = Sign bit. Disallowed to be 1 in input/output, unless explicitly stated.

Exceptions can be made for API design (e.g. -1 for slot of unknown block-root).
Note that it can be 1 in the internal processes, e.g. an implementation can choose to encode it as a signed-number to not have to work-around unsigned math.

A = Explicitly disallowed for now. Higher end of the encoding range for integers. Can soft-fork in a looong time from now to enable this range.
B = Explicitly allowed, but subject to specification rules. I.e. slot must not be larger than X slots of latest finalized slot number to be accepted.

52 bits should be plenty for slot/epoch numbers.
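A minimal sketch of the decoding rule in the layout above (the helper name and error messages are illustrative, not spec code): reject any encoded value with the sign bit S or the reserved bits A set.

```python
# Hypothetical decoder for the proposed layout: 64-bit big-endian,
# sign bit (63) and reserved bits (62..52) must be zero, leaving a
# 52-bit usable range. Names are illustrative, not spec code.

SIGN_BIT = 1 << 63
RESERVED_MASK = ((1 << 11) - 1) << 52  # field A: bits 62..52
MAX_SLOT = (1 << 52) - 1               # field B: bits 51..0

def decode_slot(data: bytes) -> int:
    if len(data) != 8:
        raise ValueError("expected exactly 8 bytes")
    value = int.from_bytes(data, "big")
    if value & SIGN_BIT:
        raise ValueError("sign bit must be 0")
    if value & RESERVED_MASK:
        raise ValueError("reserved bits 62..52 must be 0")
    return value  # guaranteed 0 <= value <= 2**52 - 1
```

Encoding would be the mirror image: check `0 <= value <= MAX_SLOT` before writing the 8 bytes.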

For validator numbers, we could use the same policy above for maximum capacity.

If you like this better than the earlier proposal, please add an emoji. If not, please show support for the earlier proposal, or write your own.

@cleishm
Contributor

cleishm commented Feb 20, 2019

@protolambda So to paraphrase, are you saying that slot/epoch should be an unsigned 52-bit integer, but any part of the specifications that encodes these values should do so as 64-bits with the top 12 always set to zero?

Do we know all the places these values will be encoded?

@protolambda
Collaborator Author

protolambda commented Feb 20, 2019

@cleishm Not exactly.

Simplified:

  • encoding and decoding should enforce 12 top bits to be 0.
  • 64 bits is used on encoding, because it:
    • aligns well
    • is very standard, easy to encode, unlike 52 bits, or possibly 48 bits.
    • encoding does not affect the platforms, if they can just decode into their own smaller types (by making use of the explicit range). All platforms will get to use their native types.
    • leaves room for an update, when the need arises (very long from now)
    • many platforms already support it. If someone wants to make their own clone of the beacon-chain, they can choose these clients, and go for a higher range (e.g. they want a lower slot time, which reaches the limit sooner)

What I'm saying is that we can loosen the design space by just stating that every client has to support 52 bits. Every platform can do this. This requirement is important: if you transfer data through a peer, you want them to be able to handle it (without anything like a reduction in resolution), so that the data can propagate to other peers without problems.
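The 52-bit bound is exactly what makes this safe for a JavaScript `Number`: an IEEE-754 double represents integers exactly up to 2**53 - 1, so any value in the proposed range survives a float round-trip. A quick illustrative check:

```python
# An IEEE-754 double has a 52-bit mantissa, giving exact integers up
# to 2**53 - 1 (JavaScript's Number.MAX_SAFE_INTEGER). The proposed
# 52-bit slot range therefore round-trips through a double losslessly.

MAX_SAFE_INTEGER = 2**53 - 1
largest_slot = 2**52 - 1  # top of the proposed range B

assert largest_slot <= MAX_SAFE_INTEGER
assert int(float(largest_slot)) == largest_slot  # exact round-trip
```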

And it's two's complement, so the sign bit will be left-most, next to the unused bits, and all will be 0 for our use case (positive numbers from 0 ... n, with n bounded by time or validator count).

And we encode it as 64 bits, because it's much more standard, and we can loosen up the range towards 64 bits as soon as the need arises (in millions of years with current slot time, or when ethereum validator count explodes beyond imagination...).

Also, being unsigned would just mean the spec has to be explicit on simple underflows (e.g. slot lookback early in the chain, looking for data before genesis). And clients can choose to handle it as signed numbers instead of extra if statements, if they want to.
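One way to sketch that underflow handling with unsigned slots (hypothetical helper and constant, not spec code): compare before subtracting, instead of relying on a sign bit.

```python
# With unsigned slots, `current_slot - lookback` would wrap around
# before genesis, so the spec (or client) must clamp explicitly.
# GENESIS_SLOT and the helper name are illustrative.

GENESIS_SLOT = 0

def lookback_slot(current_slot: int, lookback: int) -> int:
    # The signed equivalent is max(current_slot - lookback, GENESIS_SLOT);
    # unsigned code must check before subtracting to avoid wrap-around.
    if current_slot < GENESIS_SLOT + lookback:
        return GENESIS_SLOT
    return current_slot - lookback
```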

The alternative-alternative proposal would be to just use uint64, but that's the status quo. Similar, but not good enough for clients to use their native types safely, while they could if they wanted to. If we want to stick to unsigned, then we had better make a good and minimal (see the proposal, not this 😂) statement on what to expect from clients, so that everyone can use their best integer choice for slots/epochs/indices.

TLDR:

  • This proposal is for unsigned ints, but makes sure every platform can use their native types
  • Only problem is the enforcing of the range. However, this is only necessary when encoding/decoding something, which is not too bad.

@protolambda
Collaborator Author

protolambda commented Feb 20, 2019

Also, if neither the Java nor the Javascript teams want to use their native types, we could just as well forget about all of this, and keep it at uint64. But reverting that decision later may not be so easy (if not near-impossible), hence all of this (boring) effort to get it sorted.

@cleishm
Contributor

cleishm commented Feb 20, 2019

@protolambda Thanks for the clarification - I think that agrees with what I had in mind. Specify slot/epoch as an integer value in the range 0 .. 2^52, and specify that any encoding of it uses 64 bits (thus the most significant 12 bits must be zero). Additionally, define under/overflow semantics as wrap-around or error.

Implementations are then free, in their code, to treat slot/epoch as unsigned 52-bit integers, or 64-bit signed or unsigned values. They just have to make sure it encodes & decodes correctly, and handles under/overflow as specified.

@JustinDrake
Collaborator

JustinDrake commented Feb 21, 2019

Implementations are then free, in their code, to treat slot/epoch as unsigned 52-bit integers, or 64-bit signed or unsigned values.

#655 provides this flexibility for implementers :)

@cleishm
Contributor

cleishm commented Feb 21, 2019

@JustinDrake That's cool. Do you think overflow needs to be addressed? I suspect it should be an error.

@JustinDrake
Collaborator

JustinDrake commented Feb 22, 2019

Do you think overflow needs to be addressed? I suspect it should be an error.

I wouldn't be opposed to adding something like "Any uint64 overflow should throw an error." assuming we make justification_bitfield a byte array. (At the moment justification_bitfield is awkwardly a uint64 designed to overflow.)

In practice, uint64s should not overflow because of time and economic constraints. And if we miss an overflow edge case that can be triggered (e.g. by an attacker) then whatever the spec defines (wrap around or throw an error) it may not make a meaningful difference as in either case it would likely require manual intervention.
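For languages without native overflow traps, a "throw on uint64 overflow" rule might look like the checked helper below (illustrative only, not spec code):

```python
# Python ints are unbounded, so a client modelling uint64 has to
# enforce the 64-bit bound itself; other languages can use checked
# arithmetic (e.g. Rust's u64::checked_add). Illustrative only.

UINT64_MAX = 2**64 - 1

def uint64_add(a: int, b: int) -> int:
    result = a + b
    if result > UINT64_MAX:
        raise OverflowError("uint64 overflow")
    return result
```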

@protolambda
Collaborator Author

protolambda commented Mar 1, 2019

After experimenting with SSZ encoding implementation and spec implementation more (to be published), I feel like the choice for unsigned integers works much better: it's not hard to deal with the unsigned math at all in practice, and exclusiveness of unsigned numbers reduces the complexity of encoding/decoding.

@lookfirst

lookfirst commented Mar 2, 2019

@protolambda Got examples? Would love to see.

@protolambda
Collaborator Author

protolambda commented Mar 6, 2019

The complexity reduction I talked about is just not having to deal with two types of integers in your encoding/decoding functions. And the possibility of negative numbers, in a communication space where the idea of a negative number doesn't exist, makes no sense imho. By being clear about the allowed range of unsigned 64-bit integers for each of the three classes (1) slot/epoch, 2) validator indices, 3) balances), all clients should be able to support uint64.

Just to be clear:

  • slots/epochs are naturally limited by time
  • indices are naturally limited by validator count (i.e. stake)
  • balances are something up for more discussion later if necessary, but there's no "range workarounds" here (afaik)

Closing this issue now, it stays uint64, and we'll all try to be more clear about the usage of integers in future changes.
