# lambdify precision loss with module=mpmath from high-precision Floats #8818

Closed
opened this Issue Jan 13, 2015 · 5 comments

Contributor

### cbm755 commented Jan 13, 2015

Floats with more than 16 digits are converted to double precision somewhere. Consider:

```
In [52]: x = symbols('x')

In [53]: g = sqrt(2) - x

In [54]: h = g.evalf(64)

In [55]: g
Out[55]: -x + sqrt(2)

In [56]: h
Out[56]: -x + 1.414213562373095048801688724209698078569671875376948073176679738
```

Note `h` has a 64-digit accurate Float in it (and the value is correct). But lambdifying `g` and `h` is not the same:

```
In [57]: f1 = lambdify(x, g, modules='mpmath')

In [58]: f2 = lambdify(x, h, modules='mpmath')

In [59]: f1(N(sqrt(2), 64))
Out[59]: 1.899113549151959749494648453912391430844193166723988993255955998e-65

In [60]: f2(N(sqrt(2), 64))
Out[60]: 0.00000000000000009667293313452913037187168859825586442682332026201917202971226475
```

The help string for `f2` shows no loss:

```
In [64]: f2?
Type:       function
String form: <function <lambda> at 0x7f6a43bd92a8>
File:       Dynamically generated function. No source code available.
Definition: f2(_Dummy_22)
Docstring:
Created with lambdify. Signature:

func(x)

Expression:

-x + 1.414213562373095048801688724209698078569671875376948073176679738
```

I haven't figured out how to look at the actual code yet, but somewhere something is being converted to double precision (which might be fine for `modules='numpy'` but should not happen here).

Contributor

### cbm755 commented Jan 13, 2015

 Tracked it down a bit: `lambdify.py` line 376 calls Python's builtin `eval` on a string representation of the function. This converts the 64-digit Float into a double-precision float. Perhaps this is a design decision of `lambdify`: currently it cannot support more than double precision. But then what is `modules='mpmath'` supposed to do here?
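The mechanism described above can be reproduced in plain Python, independent of SymPy: `eval` parses the long decimal as a `float` literal, which is rounded to the nearest IEEE-754 double, so everything past about 16 significant digits is discarded. A minimal sketch:

```python
# The 64-digit value of sqrt(2), as it would appear in the generated source.
SQRT2 = "1.414213562373095048801688724209698078569671875376948073176679738"

# eval() parses the string as a Python float literal, rounding it to the
# nearest IEEE-754 double; the tail beyond ~16 significant digits is gone.
truncated = eval(SQRT2)

print(repr(truncated))             # 1.4142135623730951
print(truncated == float(SQRT2))   # True: both collapse to the same double
```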

Member

### moorepants commented Jan 13, 2015

 If you check the history of lambdify, someone refactored it long ago and added the "modules" support. I personally never understood why the sympy or mpmath modules are necessary at all, because `evalf` and `subs` already give that functionality. I'd be fine with removing them, because I've never heard of a use case.
Member

### asmeurer commented Nov 29, 2016

 `modules='mpmath'` is used in `nsolve` (the function is lambdified and passed to `mpmath.findroot`). This bug prevents you from using higher precision in `nsolve`, even if you set `mpmath.mp.dps` manually. CC @scopatz, this is the source of the problem we were having.
Contributor

### scopatz commented Nov 29, 2016

 I see, @asmeurer. So is the solution to remove that line? Or to have a kwarg to turn it off?
Member

### asmeurer commented Nov 29, 2016

 Fix is at #11862.

### Shekharrajak added a commit to Shekharrajak/sympy that referenced this issue Jan 1, 2017

```
Keep full precision when lambdifying to mpmath

We need a special lambda printer for Float for mpmath. Previously it would
just use str(Float), which gets evaled as a Python float literal, hence
limited to only ~15 digits of precision.

Note that it is still up to the caller to set mpmath.mp.dps to get the full
precision. This is unlikely to change, since mpmath keeps a global precision
(unlike SymPy which has a per-Float precision).

Fixes sympy#8818.
```

`d8112e6`

### skirpichev added a commit to skirpichev/diofant that referenced this issue Dec 17, 2018

```
Keep full precision when lambdifying to mpmath

We need a special lambda printer for Float for mpmath. Previously it would
just use str(Float), which gets evaled as a Python float literal, hence
limited to only ~15 digits of precision.

Note that it is still up to the caller to set mpmath.mp.dps to get the full
precision. This is unlikely to change, since mpmath keeps a global precision
(unlike SymPy which has a per-Float precision).

Fixes sympy/sympy#8818

// edited by skirpichev
```

`f7cdb6d`