BigDecimal vs Decimal128 #8

Open

littledan opened this issue Nov 13, 2019 · 10 comments

@littledan commented Nov 13, 2019

This proposal suggests adding arbitrary-precision decimals. An alternative would be to add decimals with fixed precision. My current thoughts on why to go with an arbitrary-precision decimal (also in the readme):


JavaScript is a high-level language, so it would be optimal to give JS programmers a high-level data type that makes sense logically for their needs, as TC39 did for BigInt, rather than focusing on machine needs. At the same time, many high-level programming languages with decimal data types include only a fixed precision. Because many languages added decimal data types before IEEE standardized one, there is a wide variety of choices across systems.

We haven't seen examples of programmers running into practical problems due to rounding from fixed-precision decimals (across various programming languages that use different details for their decimal representation). This makes IEEE 128-bit decimal seem attractive. Decimal128 would solve certain problems, such as giving a well-defined point to round division to (simply limited by the size of the type).

However, we're proposing unlimited-precision decimal instead, for the following reasons:

  • Ideally, JavaScript programmers shouldn't have to think too much about arbitrary limits, or worry about whether these limits will implicitly cause rounding/loss of precision (see the sketch after this list).
  • Thinking about Decimal for interchange/processing of values that come from elsewhere: the fact that many other systems support bigger decimal quantities means that, if we limited ourselves here, we wouldn't be able to use the JS Decimal type to model them.
  • Certain use cases benefit from being able to do calculations on very large decimals/floats. If Decimal did not provide these, they could drive demand for a separate data type, adding more global complexity.
  • In JavaScript, it would not be viable to use global flags (as Python does) or to generate many different types (as SQL does) to allow configuration of different precisions, as this contrasts with the way primitive types tend to work.
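
For concreteness, here is a minimal runnable sketch of the rounding concern in the first bullet, using only standard BigInt to count digits (Decimal128 keeps 34 significant digits):

```js
// A 35-digit integer cannot be represented exactly in Decimal128's
// 34 significant digits, while an unlimited-precision decimal keeps every digit.
const exact = 10n ** 34n + 1n;          // 1000...0001 (35 digits)
console.log(exact.toString().length);   // 35, one digit more than Decimal128 keeps
// Rounded to 34 significant digits, this value becomes indistinguishable
// from 10n ** 34n: the trailing 1 is silently lost.
```
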
@littledan commented Jan 8, 2020

Fabrice Bellard said,

Regarding the precision, I agree that going to 128 bits is tempting because it avoids using too much memory and may be enough for practical use. On the other hand, the memory problem is already present with BigInt. I think it is a question to ask to the potential users. Personally, even if 128 bit default precision (i.e. 34 digits) is chosen, I think it would be interesting to keep the ability to change the default precision.

Optional bounded precision: it could be possible to add the ability to do computations with a finite default precision. If the feature is necessary, I suggest to do it only on nested blocks so that it is not possible to change the default precision outside well defined code. For example, in QuickJS BigFloat, the only way to change the default precision is to call BigFloat.setPrec(func, precision) to execute func() with the new default precision "precision".

I would suggest BigDecimal.setPrec(func, prec) as for the QuickJS bigfloat. The precision is changed only during the execution of "func()". The previous one is restored after setPrec or in case of exception.

Maybe it was not clear but I suppose that no precision is attached to the bigdecimal values. The precision only applies to the operations.

I could see the setPrec function, using dynamic scoping, as somewhat less bad than a simple global variable. But it still seems really undesirable to me, as it's anti-modular: you may call into code that you don't realize uses BigDecimal, unintentionally changing its precision. To make a reliable library, you'd have to guard your own exported code with setPrec, which doesn't seem so great. I'd prefer if we can either agree on a global, fixed precision (as many languages have, e.g., C# and Swift), or use unlimited precision.
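
To make the anti-modularity concern concrete, here is a runnable toy sketch (not the proposal's API): a dynamically scoped precision setting, loosely modeled on the setPrec described above, silently changes the rounding of library code that never opted in. The names setPrec, currentPrecision, and averageFee are illustrative only, and ordinary Number/toFixed stands in for decimal arithmetic.

```js
// Toy stand-in for a dynamically scoped precision setting.
let currentPrecision = 20; // "default precision"

function setPrec(func, precision) {
  const saved = currentPrecision;
  currentPrecision = precision;
  try {
    return func();
  } finally {
    currentPrecision = saved; // restored even if func throws
  }
}

// A library function whose author never thought about precision:
function averageFee(fees) {
  const sum = fees.reduce((a, b) => a + b, 0);
  return (sum / fees.length).toFixed(currentPrecision);
}

console.log(averageFee([0.1, 0.2, 0.4]));                    // 20 fraction digits
console.log(setPrec(() => averageFee([0.1, 0.2, 0.4]), 2));  // "0.23": the caller
// changed the library's result without the library's knowledge.
```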

@littledan

After some more thought and discussion with @waldemarhorwat, I've decided to leave the question of BigDecimal vs Decimal128 undecided for a bit longer, and investigate both paths in more detail.

@novemberborn

I work on Ethereum-based financial software. The largest integer in Ethereum is a uint256. In practical terms this means the largest decimal we need to be able to represent is 115792089237316195423570985008687907853269984665640564039457584007913129639935. The smallest decimal is 0.000000000000000000000000000000000000000000000000000000000000000000000000000001. Decimal128, with 34 significant digits, cannot represent these numbers.
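
As a quick check (a sketch using plain BigInt), the uint256 range indeed needs 78 decimal digits, more than twice Decimal128's 34 significant digits:

```js
const maxUint256 = 2n ** 256n - 1n;
console.log(maxUint256.toString().length); // 78
// Decimal128 keeps only 34 significant digits, so values of this size
// (or 1e-78 amounts) would be rounded.
```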

@littledan commented Feb 10, 2020

@novemberborn If you're currently using a uint256, would BigInt work for your needs? How are those decimals currently represented?

@novemberborn

@littledan we've ended up with a few representations unfortunately.

While we can represent the raw value as a BigInt, this isn't actually useful. The smallest unit of ETH is a Wei, but thinking of ETH as 1000000000000000000n Wei just hurts everybody's head. And that's before we want to calculate the USD equivalent of a given ETH balance.
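
A small runnable sketch of the problem described above: with BigInt alone, even printing a balance in ETH (1 ETH = 10**18 Wei) requires manual scaling, and a USD conversion would need similar hand-rolled decimal handling. The balance value here is made up for illustration.

```js
const WEI_PER_ETH = 10n ** 18n;
const balanceWei = 1234500000000000000n; // 1.2345 ETH, held as a BigInt of Wei

// Splitting into whole and fractional ETH by hand:
const whole = balanceWei / WEI_PER_ETH;
const frac = (balanceWei % WEI_PER_ETH).toString().padStart(18, "0");
console.log(`${whole}.${frac} ETH`); // "1.234500000000000000 ETH"
```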

@littledan

Can you say more about the representations you're using now? You only mentioned uint256 (of Wei?)--I'd be interested in hearing about the others.

@novemberborn

I haven't worked much with the representation we use in our databases. We're looking at cleaning this up, so I'll know more in the next few weeks, hopefully.

@novemberborn

On the wire, we either use decimal strings, or a '1' integer string with an exponent value of 78.

@jessealama

Coming in cold to this discussion, but it seems that there aren't any arguments here against the arbitrary-precision approach. The arbitrary-precision approach would support options on various operations that would allow one to specify precision, thereby (potentially) gaining some speed & memory benefits in certain use cases, such as when one knows, e.g., that at most 10 digits are needed for any calculation. There was a reference to a discussion with @waldemarhorwat. Are the concerns still valid?
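
A hedged sketch of what a per-operation precision option could look like; the function and option names are illustrative stand-ins implemented with BigInt, not the proposal's API:

```js
// Toy decimal division over non-negative BigInt operands, truncated to a
// caller-chosen number of fraction digits. Illustrative only.
function divideDecimal(numerator, denominator, { fractionDigits }) {
  const scale = 10n ** BigInt(fractionDigits);
  const scaled = (numerator * scale) / denominator; // truncating BigInt division
  const whole = scaled / scale;
  const frac = (scaled % scale).toString().padStart(fractionDigits, "0");
  return `${whole}.${frac}`;
}

console.log(divideDecimal(1n, 3n, { fractionDigits: 10 })); // "0.3333333333"
```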

@waldemarhorwat

The brittleness and complexity concerns from past discussions on this topic haven't changed. See the past discussions on this topic to understand the problems and dangers that appear with unlimited precision. If precision is an option, what happens when one doesn't specify it? How does one specify it for an arithmetic operator such as +?
