precision-semantics policy #75
I think you're stuck here - in fact Multiprecision is deeply schizophrenic over this, and in general you absolutely cannot reason about how many temporaries will be created, or whether they are default constructed (current default precision) or copy constructed (copies source precision). So even in the simplest possible case:
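- say (a stand-in snippet, with `a` and `b` assumed to be at 50 and 100 decimal digits):

```cpp
mpfr_float a(1, 50), b(2, 100);
mpfr_float c = a + b; // Is the temporary (if any) holding a + b default
                      // constructed at the current default precision, or
                      // copy constructed at the precision of a or b?
                      // And which precision does c end up with?
```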
We have two options: default-construct the temporaries at the current default precision, or copy-construct them at the precision of the source(s).
For more complex expressions things get even worse because the expression template may radically reorder operations, change them even from an operator+ to a += etc. The only conceivable way I can think of to ensure complete sanity here is to always promote to the highest precision of each of the arguments (rather like arithmetic type promotion), and return to assignment-copies-precision semantics. But then you would lose the ability to automatically up/down-sample precision on assignment, and instead would have to use
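something like the following (the 50 standing in for whatever precision you actually wanted):

```cpp
x = y;            // assignment copies the (promoted) precision of the source
x.precision(50);  // so the intended precision must be restored by hand
```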
etc., even though x may be at the precision you want prior to the assignment. On the other hand, perhaps this is worth it for better sanity elsewhere?
assignment preserves precision of target (APPoT): I like consistency. I like how predictable things are with it.

assignment preserves precision of source (APPoS): OTOH, 💡 in a related vein, what I think might really help is to provide a switch that one can compile in that prohibits mixed-precision arithmetic -- it throws, say, maybe.

Another question I have: are there any performance concerns with APPoT/APPoS? I don't know, is what I am trying to say. I want to help bring predictable behaviour that I can communicate to my users, and will never change. I think we (you) get to choose, but that choice should be careful, deliberate, and well-documented, esp. because of the centrality of Boost to many C++ developers (me!). If poked harder to make a decision this instant, I would take APPoT, for consistency with dependencies and the pleasure of my experience writing code against them.

Thanks for the conversation, I really appreciate your time.
I don't know what the right thing to do is either. I'm not sure that expression templates are as consistent as you think - for complex expressions there will still be temporaries created at uncertain (i.e. default) precision. If we use APPoS then there is a performance hit - a small one for checking whether we need to change the precision of the target, and then a bigger one if the target (or any temporaries) have to be "re-precisioned". I'm wondering if the answer is "don't do that", and to provide tools for the user to check at runtime that they're not accidentally doing mixed-precision arithmetic? That would include having the current default precision the same as the arguments, so that any temporaries have the right precision. It would be nice if temporaries could be magically constructed at the correct precision, but the number of places that need checking/fixing is crazy and I simply don't know how to validate that :(
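A hypothetical helper along these lines - not an actual library facility, just the flavour of tool I mean:

```cpp
#include <boost/multiprecision/mpfr.hpp>

// True when every argument matches the current default precision, so any
// temporaries created mid-expression will be at the right precision too.
template <class... Args>
bool uniform_default_precision(const Args&... args)
{
   unsigned d = boost::multiprecision::mpfr_float::default_precision();
   return ((args.precision() == d) && ...);
}
```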
Ugh, debug checks don't enforce consistent behaviour either - not unless you enforce uniform precision in assignment and move assignment as well - and that would prohibit a whole bunch of use cases :(
I think you're right; there's little that can be done about generating temporaries at default precision. We must therefore write code that changes precision deliberately and cautiously. For example, I keep banks of complexes around and change them all at certain points in my code to resolve this problem, and then just assume that precision is uniform, for better or worse. Or, I take a collection of vectors, say, get the highest precision of any of them, upsample them all to that, change the default, and then proceed knowing that I am consistent (see the sketch below).

A library-level runtime consistency check would be awesome, but I understand that might be unwieldy to implement. Even a few opportunities or tools to that end would be better than none, though! Maybe that's a longer-term goal?

I currently vote for APPoT, based on performance and consistency with mpfr and mpc. I think that I should try to compile my Python bindings and ensure that APPoT works for them. Maybe we should take a break to think about APPoT / APPoS before really committing? Or ask another person? Does Christopher Kormanyos or another stakeholder have an opinion?
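In code, my pattern is roughly this (a sketch of my own approach, not a library facility):

```cpp
#include <boost/multiprecision/mpfr.hpp>
#include <algorithm>
#include <vector>

using boost::multiprecision::mpfr_float;

// Up-sample a bank of values to the highest precision present among them,
// then make that the default so temporaries match as well.
void make_uniform(std::vector<mpfr_float>& bank)
{
   unsigned max_digits = 0;
   for (const auto& x : bank)
      max_digits = (std::max)(max_digits, x.precision());
   for (auto& x : bank)
      x.precision(max_digits); // changes precision in place, preserving value
   mpfr_float::default_precision(max_digits);
}
```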
I've just posted to the develop mailing list on this...
I checked my Python bindings. Things work sanely 👍. For posterity, here's the code: […]
i feel like one strike against APPoT is that it masks precision problems 🎭, in that you never see from the outside that non-target precision may have been used; the target is guaranteed to be in its starting precision. i still like APPoT, though, over APPoS.
any word back from the dev list?
Yes, but I'm not sure whether it helps: "How about something like this: […] I believe these are the correct semantics, without […]"
interesting, the suggestion is for APPoS. i do like APPoS, because it is easier to tell if something is out of precision. i just don't have a strong preference.

easy, right? 🤷‍♀️ is there much change needed with […]?

this requires modification of the internals of the library, right?

this is already done, just […]

i think this is just […]
I think this person is thinking of the precision cost due to higher precision, not that of constantly checking precisions of things, as came up earlier in this discussion. Is this worth it? I personally strive to use uniform precision, because there is numerical analysis underlying the precision changes in my code: the minimum working precision at any point in the code is based on the (log10 of the) condition number of the jacobian. so, i am not sure i care about performance in the mixed-precision case, which would come up in between primary loops. but i do care about performance in general, because complex variable-precision arithmetic is the critical pathway for me.

one other concern I have: […]
Very little change IMO. There might be some messing about with assignment of expression templates to stop them from using a lower precision target variable as working space. And it is potentially a saner alternative - or at least more consistent?
Getting mixed precision arithmetic to work correctly with variable precision types requires a very significant investment of time regardless of the model used.
Yep.
Yep.
Which is probably true BTW.
If you always keep precisions consistent, then this should all be a non-issue anyway. But to operate at a lower precision, you would have to copy all the input variables, do the calculation, and then move back to higher precision. My principal concern is for the person who has a huge matrix of high precision data, and wants to do a low precision computation on it - under this model they can't - because as soon as they perform an arithmetic operation involving the data, it's performed at that higher precision and then assigned to the result. In other words, there's no support for an "add_at_precision(x, y, N)" that performs N-digit addition of x and y regardless of how many digits x and y have. However, this model does closely mimic what happens with mixed arithmetic of fixed precision types.
The only sane choice under this model is 60, and yes, commutative. Crazy question - do you primarily operate at just 2 precisions? And if so, have you considered using fixed precision types? You could ditch memory allocation altogether on new variable construction as well - at least for mpfr (sadly not mpc).
no; well, kinda. i use hardware double when possible, and multiprecision when dictated. one feature of the software is that we let people choose adaptive, or a fixed precision. adaptive is where it's at, though, for actually solving non-trivial systems. the thought has gone through my mind to use a fixed set of precisions known at compile time, and move between them as needed. one package (not c++), homotopycontinuation.jl, is thinking of taking something like this approach, but they will limit themselves on what problems they can solve without arbitrary precision. they also had only implemented double-precision stuff last i talked to them, so this will be a long bridge to cross for them, i bet. i personally still like mpfr-style variable precision.
i thought ^^^ this was par for the course?

can we brainstorm this problem? let's suppose we have […] oooh, i think you mean that […]

am i on the right track?
Sort of, but consider mpfr C code first, if we do a:
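- say something of this shape, with `dest` initialised at its own (here ~50-digit) precision:

```c
mpfr_t dest;
mpfr_init2(dest, 166);           /* destination at its own precision, ~50 digits */
mpfr_mul(dest, a, b, MPFR_RNDN); /* result correctly rounded to dest's precision */
```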
The `mpfr_mul` above rounds straight into `dest`: I assume that mpfr will do just enough work to obtain a correctly rounded result at the precision of `dest`. Now, if expression templates are on and we do a `dest = a * b;`, we end up with exactly the same code path as the C code above - no temporaries, just a straight mpfr_mul into the result. So, quite by accident, it's fast as long as APPoT is what you want. Trouble is, it's not currently predictable once more complex expressions involving temporaries are involved, and/or you start getting move semantics kicking in. So as things stand, the only thing we can say about an expression like the above is that the precision of the result is unspecified unless all variables (both source and destination) have the same precision as the current default. I suspect it's going to be very hard to say much else if we stick with APPoT. But with APPoS it's a bit easier to implement something sane, because each and every operation operates at the highest precision of the source variables involved.
Matrix arithmetic is a tricky one because it would depend on the internals of the matrix implementation. If everything is expression templates, then […] might well operate at the precision of […].
I was thinking that we could provide non-member functions which generate the target at a precision other than the largest of the sources. So a naive form would be:
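something with this shape (illustrative only, not a settled signature):

```cpp
// Produce x + y rounded to an explicitly requested decimal precision,
// regardless of the precisions of x and y.
inline mpfr_float add_at_precision(const mpfr_float& x, const mpfr_float& y, unsigned digits)
{
   mpfr_float result(x + y); // naive: computed at the sources' precision first...
   result.precision(digits); // ...then re-rounded to the precision asked for
   return result;
}
```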
IFF I'm correct about MPFR, then a better implementation wouldn't need to create temporaries and would just do an mpfr_add direct to the result.
I think my main reservation about APPoS is that just a few high precision values can pollute the whole computation - consider a matrix multiplication […]. Potentially, the optimal solution would be a policy-based approach whereby we have a thread-local policy in effect that determines the behaviour, so this might include: […]
Anything else? The only thing I can think of that would be incompatible with this are parallel implementations of the arithmetic operations. But we don't have those ;) That would allow you, I think, to temporarily change the default behaviour, and say "multiply these two matrices and have the result and all intermediates at current default precision please", and all up/down sampling would "just happen".
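Mechanically, perhaps something like this (names purely illustrative, not a committed design):

```cpp
enum class precision_policy
{
   preserve_source, // APPoS: results take the highest precision of the sources
   preserve_target, // APPoT: results are rounded into the target's precision
   use_default      // everything, temporaries included, at the current default
};

inline thread_local precision_policy current_precision_policy = precision_policy::preserve_source;

// RAII guard so a caller can scope a policy change to one block of work.
struct scoped_precision_policy
{
   precision_policy saved;
   explicit scoped_precision_policy(precision_policy p)
      : saved(current_precision_policy) { current_precision_policy = p; }
   ~scoped_precision_policy() { current_precision_policy = saved; }
};
```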
i figured i'd bring in a comparison with another library... here's what a "competitor" to Boost.Multiprecision, mpreal by Pavel Holoborodko, says about their code (which doesn't look maintained, judging by the website): […]

what wisdom can we glean from this description?
this seems like the correct thing to me... this whole conversation is filled with back and forth in my mind. wow. i just can't stay focused on APPoT or APPoS. they're both so reasonable in their own ways.

i like the policy ideas. this was in my head earlier in the discussion, to add a policy choice. "prefer explicit over implicit", i teach my students. perhaps it's the best solution to this problem -- pass the choice on to the user.
i note that i think it is hard to write client code that doesn't care about the policy choice. APPoS seems to induce a "change precision of operands early" and "change precision of targets late" kind of code, while APPoT feels like "just maintain precision of target and it'll be fine". i know about only my own subject area, so... i don't know how other people feel about this. how does this jibe with you?
from the official mpfr documentation:

(from 4.4, rounding modes) […]

(from 5, mpfr interface) […]

so, it really looks like MPFR is APPoT in flavor, as we noted earlier in this issue. of course, there are a bunch of options for arithmetic in MPFR.
I confess I don't use the variable precision types, just the fixed precision ones...

that's fine! all are welcome!
I've been reflecting on this a bit, and I think that APPoS is probably the only option I can implement consistently between expression templates on and off. Well, that and "don't care", but that's rather less useful!
Ok, APPoS seems like a good conclusion to come to. 🌸 i have a bunch of tests and code that are written against APPoT, so I'll change them. I wish I were stronger at the core stuff you do with this wonderful library, so I could contribute more than filing issues and conversing! yet, i am glad that I get to participate in this process with you. thanks for including me.
OK let me see what I can do...
APPoS implemented in #82, including mixed precision arithmetic support. Big intrusive change, so it's a PR for now while CI cycles. Currently lacking decent tests (suggestions or code welcome!). I'm away on holiday shortly, so I'll deal with this when I get back.
Merged to develop.

i want to do more testing, but it'll be a little bit. thanks!!!
In the past for other projects, I've done the following...

* Allow the user to select precision before each big operation (such as multiplication or division).
* Internal operations such as inversion, square root, and elementary transcendental functions set the needed precision at every step of the calculation, such as in a Newton-Raphson iteration or series calculation. This makes the precision issue mostly efficient and hidden from the user.

As John indicated, our focus has been primarily on the fixed precision case and we try to support this with great correctness.

The issue of carrying only the needed precision is, however, one that we might consider more closely in the future.

In the larger sense, the topic of exactly how to handle precision for calculations with big-number types is ultimately something that would need to be specified with great care. So sure, we should definitely be discussing this issue. But we must be cautious and avoid breaking any of Multiprecision in its present form.

With kind regards, Chris
The conclusion I came to was that this behaviour was the only thing we could implement reliably that gave predictable results in the presence of both rvalue moves and expression templates, either one of which could unpredictably "bump up" your precision to match this policy at any moment anyway. Newton iteration is a good test case, I'll try and whip up an example.
FYI there are still issues with mixed precision and expression templates... much harder to get right than I thought :(
Add example of variable precision FP evaluation and update docs. See #75.
The issues should be fixed now, and I've also added a variable precision Newton iteration example. The performance gain from using "just enough" precision at each step seems to be about 4-5x.
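The technique in miniature - a sketch of the idea, not the shipped example:

```cpp
#include <boost/multiprecision/mpfr.hpp>
#include <algorithm>

using boost::multiprecision::mpfr_float;

// sqrt(2) via Newton iteration, raising the working precision as the iterate
// converges rather than paying for full precision from the first step.
mpfr_float newton_sqrt2(unsigned target_digits)
{
   unsigned digits = 20;
   mpfr_float x(1.5, digits);  // cheap low-precision initial guess
   for (int i = 0; i < 6; ++i) // converge fully at the starting precision
      x = x / 2 + 1 / x;       // Newton step for f(x) = x^2 - 2
   while (digits < target_digits)
   {
      // Each step roughly doubles the number of correct digits, so double
      // the working precision just before the step that can use it:
      digits = (std::min)(2 * digits, target_digits);
      x.precision(digits);     // up-sample the iterate in place
      x = x / 2 + 1 / x;
   }
   return x;
}
```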
Closing down for now, please file any issues found as new bugs - thanks!
edit: i changed the title of this issue from "writing a complex class using `mpfr_float` that is precision-semantically equivalent to the new `mpc` type".

within the Boost.Multiprecision library, there are some precision rules: […]
for legacy purposes, i have to keep supporting users of my library with Boost versions that do not contain the new `mpc` implementation of complex numbers. i want my type to be precision-semantically identical with `mpc`. i am struggling with one thing, and, while i know this isn't a Boost.Multiprecision issue per se, since it is a use of bmp, i am hoping for some help.

if this is too off-topic, please politely decline -- i'm feeling anxious about asking this already, admitting that i am ignorant about this; i'm a mathematician first and a programmer second.
my class is the "obvious" implementation of complex numbers with a `bmp::mpfr_float` as the real and imaginary fields, and all the typical operator overloading, etc. it's not elegant but it works.

my one and only family of failing tests relates to mixed-precision arithmetic. when i take complexes `a` and `b`, at different precisions, and say add them, stuffing them into a previously existing `z` at some other precision, the `mpc` policy is that `z` should be at its same precision after the arithmetic. however, with RVO and move assignment in the mix, my `z` ends up at whatever precision the result is at. i can fix this by making the move assignment operator preserve precision, but this breaks moving. no good. i feel like i need to distinguish between moving and these cases coming from arithmetic, but that it's an impossible problem without re-writing the whole thing using ETs and basically replicating Boost.Multiprecision, a non-starter.

here's the test that fails for my naive implementation, whereas this test passes for the glorious new `mpc` implementation: […]
my obvious overload for `+` looks roughly like the sketch below.

should i give up and just accept that, because of RVO and move assignment, without expression-templating my type and basically recreating Boost.Multiprecision itself at some level, this is an impossible problem to solve?
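the sketch (trimmed down to the shape that matters; my real class has more to it):

```cpp
namespace bmp = boost::multiprecision;

struct my_complex
{
   bmp::mpfr_float re, im; // the "obvious" representation
};

// operator+ returns a brand new value, so RVO / move assignment - not my
// copy assignment operator - decide what precision a pre-existing target
// z ends up with after z = a + b.
inline my_complex operator+(const my_complex& a, const my_complex& b)
{
   return my_complex{a.re + b.re, a.im + b.im};
}
```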
my genuine thanks for any consideration. 🙇♀️