Create a better interface for switching precision and rounding #683
Comments
The thread-safe part can be addressed in a minimal way by ensuring that either the contexts or the attributes of the context use thread-local variables. That would entail some slowdown on attribute access, but it is not clear how significant that is in the context of everything else. The slowdown of thread-local data access in Python is not as large as it would be in something like C, which has less general overhead. A thread-local is really just a global variable that is per-thread rather than global to all threads. It is thread-safe but otherwise has all of the pitfalls of global variables, so it would be good to make clear in the docs how to avoid using it altogether, at least for downstream library code that really should not depend on global state like this.

It is broadly possible to just use local contexts:

```python
In [30]: from mpmath import MPContext

In [31]: mp1 = MPContext()

In [32]: mp2 = MPContext()

In [33]: mp1.dps = 4

In [34]: mp2.dps = 8

In [35]: mp1.cos(1)
Out[35]: mpf('0.5403061')

In [36]: mp1.cos(1) + mp1.cos(1)
Out[36]: mpf('1.080612')

In [37]: mp2.cos(1) + mp2.cos(1)
Out[37]: mpf('1.080604611')
```

This works because the mpf instances store a live reference to the precision of the context that created them:

```python
In [43]: f1 = mp1.cos(1)

In [44]: f2 = mp2.cos(1)

In [45]: f1._ctxdata
Out[45]:
[mpmath.ctx_mp_python.mpf,
 <function object.__new__(*args, **kwargs)>,
 [17, 'n']]

In [46]: f2._ctxdata
Out[46]:
[mpmath.ctx_mp_python.mpf,
 <function object.__new__(*args, **kwargs)>,
 [30, 'n']]

In [47]: f1.context
Out[47]: <mpmath.ctx_mp.MPContext at 0x7fed42848ed0>

In [48]: f1.context is mp1
Out[48]: True

In [49]: f2.context is mp2
Out[49]: True
```

That context reference can then be used by arithmetic methods. One way this is a bit flaky, though, is that sometimes there will be mpfs from different contexts:

```python
In [87]: f1*f2
Out[87]: mpf('0.29192658151797381250293427379467042520362856982479944')

In [88]: f2*f1
Out[88]: mpf('0.29192658141')
```

Here the two results differ: apparently the precision depends on which operand's context handles the operation, so multiplication is not even commutative across contexts.

The docs should generally recommend using a local context like this rather than manipulating the global context. Instead of:

```python
from mpmath import mp
mp.dps = 100
```

it can be:

```python
from mpmath import MPContext
mp = MPContext()
mp.dps = 100
```

Better would be if the constructor accepted the precision directly:

```python
from mpmath import MPContext
mp = MPContext(dps=100)
```

If this were the recommended way to create the context then the confusion in gh-657 would possibly not have happened. I think this is a reasonable way to use it.

In library usage it is then necessary to be able to use things like `mp.workdps`:

```python
In [81]: mp = MPContext()

In [82]: with mp.workdps(10):
    ...:     print(mp.cos(1))
    ...:
0.5403023059

In [83]: with mp.workdps(20):
    ...:     print(mp.cos(1))
    ...:
0.5403023058681397174
```

A larger codebase using this would need to pass around this context object.

Note that an existing mpf picks up later changes to its context's precision:

```python
In [90]: mp = MPContext()

In [91]: f = mp.cos(1)

In [92]: f
Out[92]: mpf('0.54030230586813977')

In [93]: f._ctxdata[2]
Out[93]: [53, 'n']

In [94]: mp.dps = 30

In [95]: f
Out[95]: mpf('0.540302305868139765010482733487152')

In [96]: f._ctxdata[2]
Out[96]: [103, 'n']

In [97]: mp.cos(1)
Out[97]: mpf('0.540302305868139717400936607442955')
```

This is necessary for context managers like `workdps` to affect existing instances:

```python
In [101]: mp.dps = 10

In [102]: f1 = mp.cos(1)

In [103]: f1
Out[103]: mpf('0.540302305868')

In [104]: with mp.workdps(20):
     ...:     print(f1 + f1)
     ...:
1.0806046117359073833

In [105]: f1 + f1
Out[105]: mpf('1.080604611736')
```

Here the same expression `f1 + f1` gives a different result depending on the live precision of the context. I think that probably in library usage it would be better to have something like:

```python
mp_extra = mp.extraprec_new(4)
f_add_extra = mp_extra.add(f1, f1)
f_add = mp.normalize(f_add_extra)
```

This way every operation comes from a particular context and no global state affects anything. This style of usage is automatically thread-safe without any need for special thread-local storage. Passing immutable context objects around as arguments to functions also makes it straightforward to cache the results of different operations.
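To illustrate the thread-safety point, here is a minimal sketch (assuming, as in the sessions above, that `MPContext` is importable from the top-level mpmath namespace): giving each thread its own context means no shared precision state is ever mutated.

```python
# One private MPContext per thread: no thread touches any shared
# precision state, so no thread-local machinery is needed.
import threading
from mpmath import MPContext

results = {}

def worker(name, dps):
    ctx = MPContext()          # context private to this thread
    ctx.dps = dps
    results[name] = ctx.nstr(ctx.cos(1), dps)

threads = [
    threading.Thread(target=worker, args=("low", 5)),
    threading.Thread(target=worker, args=("high", 25)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results["low"])
print(results["high"])
```

Each worker gets a correctly rounded result at its own precision regardless of scheduling, which is exactly what the global `mp` context cannot guarantee under concurrency.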
I'm not sure. Because we have functions in the global mpmath namespace that implicitly have a notion of the active global context (which is actually `mp`), I don't think that banning such usage is a good idea. And also we can't "guess" the local context from the arguments of such functions because, for example, we want to feed them with Python's builtin types.
This is a good example of why the notion of an "active" context (like gmpy2's) is useful.
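For comparison, Python's stdlib `decimal` module is built around exactly this "active context" pattern: a per-thread current context plus a context manager for temporary overrides. A minimal illustration:

```python
# decimal (stdlib) keeps a thread-local "current context" and lets you
# override it temporarily -- the active-context pattern discussed here.
from decimal import Decimal, getcontext, localcontext

getcontext().prec = 6                 # mutate the thread-local current context
a = Decimal(1) / Decimal(3)           # computed at 6 significant digits

with localcontext() as ctx:           # temporary copy of the current context
    ctx.prec = 20
    b = Decimal(1) / Decimal(3)       # computed at 20 significant digits

print(a)
print(b)
```

Because the current context is thread-local, each thread can set its own precision without a separate context object being passed around.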
Actually, mpmath has
+1. Do you have some suggestions about naming? Maybe we should have a single factory function to produce contexts, named (surprise!) `context`.
gmpy2's way seems better to me:

```python
>>> ctx = gmpy2.context(precision=37)
>>> f = ctx.cos(1); print(f)
0.540302305868
>>> with gmpy2.local_context(precision=70):
...     print(f+f)
...
1.0806046117359073832631
>>> print(f+f)
1.0806046117359074
```

In fact,
It's possible right now, but this will be too verbose if we force users to explicitly reference the context for every arithmetic operation.
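For isolated operations, mpmath does already offer an explicit (if verbose) per-operation form: to my understanding, `fadd`, `fsub`, `fmul` and `fdiv` accept `prec`/`dps` keyword arguments, so one operation can run at a chosen precision without touching the context:

```python
# Per-operation precision with mpmath's fdiv (prec/dps keywords),
# leaving the context's precision untouched.
from mpmath import mp

mp.dps = 15
a = mp.fdiv(1, 3)             # division at the context precision
b = mp.fdiv(1, 3, dps=30)     # this single division done at 30 digits

print(mp.nstr(a, 35))
print(mp.nstr(b, 35))
```

As the comment says, forcing this style on every arithmetic operation would be far too verbose; it only helps for the occasional precision-sensitive step.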
Yeah, I see the benefits of this. IIRC, the bigfloat package has immutable contexts.
Oh, I missed those.
I am not suggesting to abandon anything or break compatibility. Adding an alternate interface for creating contexts can be done without breaking compatibility. At the same time a new interface can make other changes, such as returning an immutable context. If a new interface is considered to be better, then the documentation can be changed to suggest using it. Currently all docstrings use the global context, so the suggested usage is:

```python
from mpmath import mp
mp.dps = 50
x = mp.cos(1)
```

This is close enough to:

```python
from mpmath import context
mp = context(dps=50)
x = mp.cos(1)
```

This version has the advantage of not depending on global state.

I expect that most "end users" will not want to do much more than set a single precision and do all of their calculations with that. If there is a need to extend the precision, it could look like e.g.:

```python
mp_extra, [x_extra, y_extra] = mp.extradps(4, [x, y])
z_extra = mp_extra.cos(x_extra) + mp_extra.sin(y_extra)
print(z_extra)
```
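Something close to that proposed style can already be emulated today. A rough sketch, noting that `with_extradps` below is hypothetical illustration code and not an existing mpmath API:

```python
# Hypothetical helper (NOT mpmath API): derive a NEW context with extra
# digits instead of mutating the existing one.
from mpmath import MPContext

def with_extradps(ctx, extra):
    new = MPContext()
    new.dps = ctx.dps + extra
    return new

mp = MPContext()
mp.dps = 15
x = mp.mpf(1)
y = mp.mpf(2)

mp_extra = with_extradps(mp, 4)            # 19-digit derived context
z_extra = mp_extra.cos(x) + mp_extra.sin(y)
print(z_extra)
```

The original context is never mutated, so code holding a reference to `mp` is unaffected by the higher-precision work.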
I agree, and also there is already `mp.extraprec`:

```python
In [31]: with mp.extraprec(50):
    ...:     x = mp.mpf(1)/3
    ...:
In [32]: x
Out[32]: mpf('0.33333333333333333')
```
I think these statements are contradictory. You can't just add an additional immutable context type: after this you will have to change a lot of code to allow using this context in mpmath (most functions now alter their context variable).
Yes, I think we should adapt docstrings and sphinx docs to avoid this, but this is a slightly different issue.
I thought we had agreed that this is a horrible idea, unless we make all contexts immutable (in which case this looks like gmpy2.mpfr, where numbers carry precision settings). But even then, that "guessing" of precision settings from the arguments of arithmetic ops is fragile and will break the few remaining algebraic identities of floating-point arithmetic, e.g. commutativity.
Explicit is better than implicit. That's why I think that this notion of a "single global precision" should be transparent to the user.
Many functions do not alter the context, e.g. the libmp functions are all pure. For the higher-level functions that do alter the context, the immutable context could delegate to an internal mutable context. Another point about immutable contexts is that creating a context should be made faster than it currently is. Currently it is slow because of all of the metaprogramming, decorators, etc.
This is relatively low-level stuff. See #178 for some plans to replace mpf_/mpc_ layers with more high-level code like in functions/.
BTW, before any big change as proposed here, we should probably start with writing benchmarks for the current high-level code.
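A tiny starting point for such benchmarks might look like the following; the absolute numbers are machine-dependent, so only relative comparisons (e.g. context construction versus an ordinary high-level operation) are meaningful:

```python
# Sketch of a micro-benchmark: context construction cost versus a
# typical high-level operation. Timings are machine-dependent.
import timeit
from mpmath import mp, MPContext

t_ctx = timeit.timeit(MPContext, number=10)
t_cos = timeit.timeit(lambda: mp.cos(1), number=10)

print(f"10x MPContext(): {t_ctx:.4f}s")
print(f"10x mp.cos(1):   {t_cos:.4f}s")
```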
That discussion looks a little dated now. I'm not sure which part of it would still be relevant anymore.
Yes, and thank you for your feedback in #178. Yet it seems relevant as a comment on historical code changes, and it also has some notes about contexts. BTW, many old issues (until around #216, I think) are irrelevant now. I'm slowly sorting this out and I would appreciate a second view.
mpmath provides global context objects (mp, fp, iv). Most functions in the global mpmath namespace are actually methods of the mp context.
It would be better if we could introduce an explicit notion of the current context that would prevent issues like #657, something closer to gmpy2's context handling. (Another example worth considering is the bigfloat package.)
The current interface is also not thread-safe.
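For readers unfamiliar with the three global contexts mentioned above, a quick illustration:

```python
# The three global context objects: mp (arbitrary precision),
# fp (double-precision Python floats), iv (interval arithmetic).
from mpmath import mp, fp, iv

print(mp.cos(1))   # mpf at mp.dps digits
print(fp.cos(1))   # plain Python float
print(iv.cos(1))   # interval enclosing cos(1)
```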