
Fixed point types #409

Open
4 of 7 tasks
chriseth opened this issue Mar 3, 2016 · 71 comments
Comments

@chriseth chriseth commented Mar 3, 2016

https://www.pivotaltracker.com/story/show/81779716

TODO:

  • assignment (lvalue and rvalue)
  • conversion (between different fixed types)
  • conversion (between other types)
  • comparison operators (< > =)
  • unary operators (-, --, ++)
  • binary operators (+ - / * )
  • do-we-need-these? binary operators (% **)
@chriseth chriseth added this to the bieti milestone Mar 3, 2016
@chriseth chriseth added this to the colocolo milestone Mar 29, 2016
@chriseth chriseth removed this from the bieti milestone Mar 29, 2016
@chriseth chriseth added this to the domesticus milestone Apr 11, 2016
@chriseth chriseth removed this from the colocolo milestone Apr 11, 2016
@chriseth chriseth changed the title Fixed point types Fixed point types - ConstantNumber part Apr 11, 2016
@chriseth chriseth changed the title Fixed point types - ConstantNumber part Fixed point types Apr 11, 2016
@chriseth chriseth added active and removed planned labels Apr 15, 2016
@chriseth chriseth added planned and removed active labels Apr 25, 2016
@chriseth chriseth commented May 12, 2016

Some notes:

  • IntegerType::binaryOperatorResult is too restrictive; this should work but apparently does not: uint128(1) + ufixed(2). The same applies to FixedPointType::binaryOperatorResult.
  • This should also work: .5 * uint128(7)
  • What happens currently with uint(7) / 2? Is it identical to .5 * uint(7)?
  • Add a test for signed mod with rational constants (it should behave identically to the SMOD opcode); see the sketch below.
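
As a reference for that last test idea, a minimal sketch (not code from this thread; it assumes the check is about signed % matching the EVM semantics): signed modulo takes the sign of the dividend, which is how the SMOD opcode behaves.

pragma solidity ^0.8.0;

// Sketch of a signed-mod sanity check: the remainder takes the sign of the
// dividend, matching the SMOD opcode.
function signedModCheck() pure {
    assert(int256(-7) % int256(2) == -1); // sign follows the dividend
    assert(int256(7) % int256(-2) == 1);
}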

@VoR0220 VoR0220 commented May 12, 2016

signed mod as in a modulus operation with signed rational constants?

@chriseth chriseth commented May 12, 2016

yes - and fixed point

@VoR0220 VoR0220 commented May 12, 2016

alright. I'll get on that. Crafting tests. Working on some fixes. And then it's onto the actual compilation.

@VoR0220 VoR0220 self-assigned this May 13, 2016
@VoR0220 VoR0220 commented May 13, 2016

So, a couple of things: I got your first one working.

ufixed a = uint128(1) + ufixed(2);

The second two examples you laid out are currently impossible, based on how we defined implicit conversion.

0.5 currently converts to ufixed0x8, and that type does not convert together with a uint128.

uint(7) / 2 stays a uint256 after the division by 2 (it truncates), and therefore cannot convert to ufixed128x128. And no, 0.5 * uint(7) is not the same as uint(7) / 2: one is a ufixed0x8 multiplication and the other is integer division, which is rather confusing. I think we may need to fix this up. Open to all suggestions; the only thing I can think of right now is to create a class on top of integer type and fixed point and make it a superclass of some kind.
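
To make the truncation point concrete, a minimal sketch (not from this thread) of what the integer division does today:

pragma solidity ^0.8.0;

// uint(7) / 2 is integer division and truncates, so it cannot equal the exact
// rational 0.5 * 7 == 3.5, which has no uint representation.
function truncationExample() pure returns (uint256) {
    return uint256(7) / 2; // == 3, the fractional part is dropped
}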

@chriseth chriseth added this to the 2-functionality milestone Aug 5, 2016
@chriseth chriseth removed this from the domesticus milestone Aug 5, 2016
@VoR0220 VoR0220 mentioned this issue Sep 7, 2016
@chriseth chriseth commented Dec 2, 2016

We decided to denote decimal places rather than the "number of bits after the comma", to reduce confusion among users. This means that fixed64x7 is a 64-bit type, and a value x of this type is interpreted as the number x / 10**7.

The type fixed is an alias for fixed128x19. The reason is that this type simplifies multiplication and also allows conversion from int64 without loss of precision or range.
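
To illustrate that interpretation, a minimal sketch that emulates the proposed fixed64x7 semantics with a raw int64 (an illustration, not compiler code):

pragma solidity ^0.8.0;

// A fixed64x7 value would be stored as a 64-bit integer equal to the real
// value times 10**7.
function asFixed64x7(int64 integerValue) pure returns (int64 raw) {
    int64 scale = 10**7;        // one decimal step of fixed64x7
    raw = integerValue * scale; // 3 is stored as 30000000, read back as 3.0000000
                                // (reverts in 0.8 if the scaled value does not fit int64)
}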

@VoR0220 VoR0220 commented Dec 2, 2016

@chriseth do we still want to allow users to fully extend the decimal range, so that it can be fixed0x32? (I believe that's the full amount it could take, but I may be wrong.)

@chriseth chriseth commented Dec 2, 2016

@VoR0220 note that the 0 there is the total number of bits in the type. The drawback of using decimals is that you can no longer force the value to be between 0 and 1 (because that does not fit the decimal range).

@VoR0220 VoR0220 commented Dec 2, 2016

@chriseth So, in other words, it HAS to have an integer portion now. That's fine by me; the range and precision seriously diminish beyond 128 bits in fixed point either way.

@alcueca alcueca commented May 2, 2020

Ah, hold on. @chriseth, did you mean that to do ufixed(1) == 0.00...001, we could do it like ufixed f = fixed(bytes32(uint256 i))?

And then, of course, for ufixed(1) == 1.0 we would do ufixed f = ufixed(uint256 i).

I would like that. You would keep types and casting general and consistent, but there would be a workaround to cast money amounts into fixed units without range issues, loss of precision, or unnecessary operations.

@nventuro nventuro commented May 4, 2020

As long as there is a way to cast (not convert!) to and from integers of the same word size, I think we'll be fine. Simple functions can then make those operations more readable, such as:

function getRate(uint128 a, uint128 b) internal returns (fixed128x18) {
    return fixed128x18(a) / fixed128x18(b);
}

I'd focus on operations within the fixed type (add, sub, mul, div).

@chriseth chriseth commented May 6, 2020

The cast (type conversion / "re-labeling" without changing the underlying data) would go through the respective bytesXX type. Explicit and implicit conversions would not change the numeric value, but the EVM word data:

uint8 x = 1;
fixed128x10 y = x; // implicit conversion, results in a multiplication by 10**10 at EVM-level
bytes16 z = bytes16(y); // explicit conversion to underlying bytes type, results in a left-shift (because bytes are left-aligned)
uint128 t = uint128(z); // explicit conversion, results in right-shift
assert(t == 10**10);

Still, I would be much more confident to see some example code that really benefits from the fixed-point types that are currently planned.

@nventuro nventuro commented May 6, 2020

@asselstine you've used fixed point libraries to compute rates and fractions, perhaps you could shed some light on this?

@alcueca alcueca commented May 6, 2020

It's already late here; tomorrow I'll write up an analysis of fixed-point math use in MakerDAO. That'll be juicy.

@asselstine asselstine commented May 7, 2020

@nventuro I could shed some light on how we've used fixed point math in PoolTogether. I have used @albertocuestacanada's Fixidity.sol and have also rolled my own. Currently:

  • Monetary values are stored as uint256.
  • Monetary values are multiplied or divided by "mantissas" and returned as uint256. Mantissas are fixed-point numbers with 18 decimals, like Ether. Here is our simple multiply-uint-by-mantissa function:
// Multiplies a uint256 by an 18-decimal "mantissa" and scales the result back
// down; SCALE is the 1e18 scaling factor, and .mul/.div are SafeMath-style helpers.
function multiplyUintByMantissa(uint256 b, uint256 mantissa) internal pure returns (uint256) {
    uint256 result = mantissa.mul(b);
    result = result.div(SCALE);
    return result;
}

It's pretty much like every other library out there.

The discussion between decimal and binary fixed point is a little beyond me.

However, in general I think that having a built-in fixed point type would improve readability for all contracts. Having a common fixed point "language" across all Ethereum projects would be good for the ecosystem, and would mean that people wouldn't have to roll their own or copy-and-paste Compound's Exponential lib.

Another benefit for code quality would be compiler warnings about mixing fixed with non-fixed or perhaps even bad casts. It would extend the helpful strong typing right down to the math.

@chriseth chriseth commented May 7, 2020

@asselstine thanks for the insights! Would you be able to share some code examples about how you would use built-in fixed point types?

@3sGgpQ8H 3sGgpQ8H commented May 7, 2020

function multiplyUintByMantissa(uint256 b, uint256 mantissa) internal pure returns (uint256) {
    uint256 result = mantissa.mul(b);
    result = result.div(SCALE);
    return result;
}

This implementation suffers from what I call “phantom overflow”: it may overflow even when the final result would fit into uint256. For example, multiplication by one or by a number less than one may overflow, which seems absurd to me.
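
To make "phantom overflow" concrete, a worked sketch (not from this thread), assuming the 1e18 scale used above:

pragma solidity ^0.8.0;

// Multiplying by a mantissa of 1.0 (i.e. 1e18) reverts for large b, even
// though the mathematically exact result, b itself, fits into uint256.
function phantomOverflowDemo() pure returns (uint256) {
    uint256 SCALE = 1e18;
    uint256 b = 2**200;       // fits comfortably in uint256
    uint256 one = 1e18;       // mantissa representing 1.0
    return (b * one) / SCALE; // b * one is roughly 2**260 > 2**256 - 1, so this reverts
}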

@asselstine asselstine commented May 7, 2020

@3sGgpQ8H Yes, the overflow is a concern. The scaling factor removes a lot of the top-end of the uint256. This is another one of my concerns that I'd love to see some support for, or compiler warnings at the very least.

@chriseth Happy to share some example code!

A good example is the calculateExitFee function.

The function first calculates the "mantissa" from the user's tickets and the total token supply:

FixedPoint.calculateMantissa(tickets, totalSupply)

This mantissa represents the fraction of tokens they own out of the supply, and is then used to calculate a "fair" fee to be paid toward the prize. It's a pretty common use case.

If I were to use fixed point math, it might look something like:

fixed256x18 mantissa = fixed256x18.fraction(balance, totalSupply);
uint256 share = mantissa * total;

As @3sGgpQ8H mentioned, it would be great to have more clarity around overflow and numerical limits. I think this is where it might be useful to lean more heavily on the compiler for numerical safety. I'm not sure off the top of my head what that would look like, however.

@randomnetcat randomnetcat commented May 7, 2020

I can share some implementation experience. In #3389, we decided to restrict the fixed point types that could be multiplied and divided in order to prevent overflow. I believe the result was that two fixed points could only be multiplied/divided if both of their types were 128 bits wide or less.
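
A sketch (not from the linked PR) of why 128 bits is a natural cutoff: the raw product of two 128-bit values always fits in 256 bits, so the intermediate multiplication itself cannot overflow before rescaling.

pragma solidity ^0.8.0;

// Widening to uint256 first means x * y is at most (2**128 - 1)**2 < 2**256,
// so only the final result needs an explicit range check. Assumes scale > 0.
function mulFixed128(uint128 x, uint128 y, uint128 scale) pure returns (uint128) {
    uint256 product = uint256(x) * uint256(y); // always fits in 256 bits
    uint256 scaled = product / scale;
    require(scaled <= type(uint128).max, "result does not fit 128 bits");
    return uint128(scaled);
}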

@chriseth chriseth commented May 7, 2020

@asselstine what are reasonable bounds on totalSupply? Do you use fractional tokens? What would your math look like given unrestricted precision on both integers and fractions, i.e. what kind of formulas do you use?

@asselstine asselstine commented May 7, 2020

Just want to add that this code has not been audited and is alpha :)

totalSupply is from the standard ERC20 token, so it is a uint256. It would be interesting to limit the minting of tokens so that the supply is less than uint128, as suggested by @random-internet-cat. That would ensure the numbers fit into the fixed point size and mitigate overflow. It does remove a lot of the top-end of the number, though.

Does that answer your question?

@chriseth chriseth commented May 7, 2020

@asselstine thanks for that comment! I would still really like to see more uses of fixed point. I have a hard time reading your code, unfortunately; could you maybe just provide the raw formulas?

@asselstine asselstine commented May 7, 2020

Sure.

First we let:

totalSupply = total # of tickets
balance = user's # of tickets
previousPrize = the size of the last prize that was awarded
remainingTime = remaining number of seconds until prize
prizePeriodSeconds = total number of seconds between prizes

Then we calculate the exit fee:

exitFee = (remainingTime / prizePeriodSeconds) * (balance / totalSupply) * previousPrize
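
Expressed with the 1e18-scaled mantissa style used elsewhere in this thread, a sketch of that formula (an illustration, not PoolTogether's actual code):

pragma solidity ^0.8.0;

// exitFee = (remainingTime / prizePeriodSeconds) * (balance / totalSupply) * previousPrize,
// with both fractions held as 1e18-scaled values. Subject to the phantom-overflow
// caveat discussed above for very large inputs.
function exitFee(
    uint256 remainingTime,
    uint256 prizePeriodSeconds,
    uint256 balance,
    uint256 totalSupply,
    uint256 previousPrize
) pure returns (uint256) {
    uint256 SCALE = 1e18;
    uint256 timeFraction = (remainingTime * SCALE) / prizePeriodSeconds; // <= 1e18 while remainingTime <= prizePeriodSeconds
    uint256 shareFraction = (balance * SCALE) / totalSupply;             // <= 1e18 while balance <= totalSupply
    return (((previousPrize * timeFraction) / SCALE) * shareFraction) / SCALE;
}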

@3sGgpQ8H 3sGgpQ8H commented May 8, 2020

I believe the result was that two fixed points could only be multiplied/divided if both of their types were 128 bits wide or less.

For me, forbidding multiplication of wide types makes the whole idea quite useless. I would prefer the compiler to automatically apply different strategies for different types and even for different values.

For example, when the combined bit-widths of both arguments do not exceed 256, the naive approach could be used:

return x * y / scale;

Otherwise, if scale is below 2^128, a slightly more complicated approach is needed:

if (y == 0) return 0;
else {
    require (x / scale < uint(-1) / y); // Overflow protection, should be improved

    // Split both operands relative to the scale: x = xh * scale + xl, y = yh * scale + yl
    uint xh = x / scale;
    uint xl = x % scale;
    uint yh = y / scale;
    uint yl = y % scale;

    // x * y / scale == xh * yh * scale + xh * yl + xl * yh + xl * yl / scale
    return xh * yh * scale + xh * yl + xl * yh + xl * yl / scale;
}

Probably, a scale above 2^128 - 1 should simply be forbidden; but if not, then for higher scales the compiler should use 512-bit intermediate arithmetic, like here: https://medium.com/coinmonks/math-in-solidity-part-3-percents-and-proportions-4db014e080b1#fe5c

@asselstine asselstine commented May 8, 2020

compiler should use 512-bit intermediate arithmetic

I love this idea. It would give us a lot more headroom over scaling the values.

@fulldecent fulldecent commented May 11, 2020

512-bit is not something we need to have in the first iteration of this.

@3sGgpQ8H 3sGgpQ8H commented May 12, 2020

Yes, but the overflow semantics should not change after the initial release, otherwise it will be a mess.
So, one option would be to limit the maximum number of decimals to 38, so the scale factor always fits into 128 bits. That way, multiplication could avoid 512-bit internal logic. However, division will still be a problem. In my opinion, x / 2.0 should never overflow, but a naive implementation like x * 1e18 / 2e18 would overflow for large x values. So for division, we probably need some kind of wide arithmetic even in the initial release.
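
For division specifically, one way to avoid that phantom overflow without full 512-bit arithmetic is to split the dividend. A sketch (not from this thread), assuming an 18-decimal scale and a divisor whose raw value times the scale still fits in 256 bits:

pragma solidity ^0.8.0;

// Computes x * SCALE / d without forming x * SCALE, using x = (x / d) * d + (x % d);
// the result is the exact floored quotient whenever it fits in 256 bits.
// Assumes d > 0 and d * SCALE < 2**256.
function divFixed(uint256 x, uint256 d) pure returns (uint256) {
    uint256 SCALE = 1e18;
    return (x / d) * SCALE + ((x % d) * SCALE) / d;
}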

Another option (the one I would prefer) is to include only literals, assignments, comparisons, ABI encode/decode, and reinterpret casts in the initial compiler release, and to leave arithmetic and conversion operations to libraries, as there is no single “right” implementation for them.

@axic axic commented Feb 10, 2021

Revisited the nice summary from the OZ forums this Monday, and discussed some of this on Gitter.

My current impression is that we should split this up into two phases (this was mentioned in the OZ forums too):

  1. Add bare minimum support to the language and compiler:
  • a single fixed128 type,
  • ABI coder,
  • conversion to same-width bytes,
  • perhaps, but not necessarily, conversion to a same-width integer type
  2. Experiment with various implementations for different operators in the "stdlib" (#10282)

Even if we decide to move some operators into the compiler later on, I would argue that features like sqrt/pow/etc should stay in the stdlib.

@chriseth chriseth moved this from Icebox to Design Backlog in Solidity Feb 25, 2021
@chriseth chriseth commented Feb 25, 2021

Sounds like a good first milestone, so I would say let's do this! One important point: Conversion from non-integer literals should also be part of the first milestone.

@chriseth chriseth commented Jul 28, 2021

Test ideas: check that e.g. (1/3) == (1/3 + 1/3000000000000000) does not evaluate to true.
