Separation of Decimal and Context #70
You say:

> So let's do that example. Say we are attempting to compute … You could argue that apd can just choose a high-enough precision so the end result will be within some error bound, but that would require apd to understand entire computations and to determine how precise the end result would be, and then compute the desired precision from that. That sounds incredibly complicated to me (especially considering we support sqrt, ln, exp, etc.). Until then we just require users to specify a precision. Contrast this with other projects where the precision is a global variable (and thus unsafe to change during concurrent operations). All of the decimal packages must have this precision specified; some just hide it from the user more than apd does. You can use … Also see Python, which has a very similar API to apd and requires precision to be specified: https://docs.python.org/2/library/decimal.html
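To make that concrete, here is a minimal sketch using Python's stdlib `decimal` module, which the quote above cites as having a very similar API to apd. The point is the same in both libraries: the result of an operation like 1/3 has no exact finite representation, so the context's precision must be chosen explicitly by the caller; the library cannot infer how precise the final answer needs to be.

```python
from decimal import Decimal, Context

# Each operation runs under an explicit context precision; the library
# cannot guess how many digits the end result should carry.
ctx = Context(prec=10)
print(ctx.divide(Decimal(1), Decimal(3)))   # 0.3333333333

# A different context precision gives a different result for the
# same operation, which is why precision is a per-context setting.
hi = Context(prec=30)
print(hi.divide(Decimal(1), Decimal(3)))
```

apd's Go API follows the same shape: operations are methods on a `Context` that holds the precision, rather than methods on the `Decimal` values themselves.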
What's the reasoning for the separation of `Decimal` and `Context`? Whenever I use a decimal type, I tend to need arithmetic operations on it. Having to define a context, my relevant decimal objects, and a precision for the operation each time I need to calculate a value is a bit unwieldy.

On the note of precision, I usually want the most precise value during a series of operations, and then I round at the end of the series. Having to determine a precision for each operation is not ideal in that case, as it may result in a loss of precision at one of the steps in the series.
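The "round only at the end" workflow described above is achievable under the explicit-context model: use one generous working precision for every intermediate step, then round once when the series is done. A sketch with Python's stdlib `decimal` (cited in the quote as having a very similar API to apd; the precision and rounding choices here are illustrative, not prescribed by either library):

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN

# One working context with generous precision, reused for every
# intermediate step so no step loses digits prematurely.
work = Context(prec=28)

x = work.divide(Decimal(1), Decimal(7))   # inexact intermediate
x = work.multiply(x, Decimal(7))          # still carried at full precision

# Round exactly once, at the end of the series, to the output scale.
final = x.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(final)   # 1.00
```

In apd the equivalent is constructing a single `Context` with a high precision, passing it to each operation in the series, and applying a rounding step only on the final value.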