
Update docs regarding Decimal scientific notation #377

Open
Mark90 opened this issue Dec 2, 2019 · 3 comments

Comments


Mark90 commented Dec 2, 2019

  • Include a link to the documentation section or the example

https://github.com/oracle/python-cx_Oracle/blob/master/doc/src/user_guide/sql_execution.rst#fetched-number-precision

  • Describe the confusion

Python's decimal.Decimal defaults to scientific notation when representing very small values. Since the docs suggest using Decimal to retain number precision, I think it makes sense to also mention this aspect, because depending on the developer's use case it may or may not be a problem.

Python 3.7.4 (default, Jul  9 2019, 18:13:23)
[Clang 10.0.1 (clang-1001.0.46.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from decimal import Decimal
>>> print(Decimal('0.000001'))
0.000001
>>> print(Decimal('0.0000001'))
1E-7
>>> print("{:f}".format(Decimal('0.0000001')))
0.0000001
  • Suggest changes that would help

Add a code snippet and explanation similar to the above to the corresponding documentation section.
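A note like this could be accompanied by a small helper along the following lines. This is only a sketch of the suggestion, not something from the cx_Oracle docs; the helper name format_fixed is hypothetical.

```python
from decimal import Decimal

def format_fixed(value):
    # Hypothetical helper: render a Decimal in fixed-point notation
    # instead of the scientific notation str() may produce.
    return "{:f}".format(value)

print(str(Decimal("0.0000001")))           # scientific notation: 1E-7
print(format_fixed(Decimal("0.0000001")))  # fixed-point: 0.0000001
```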

@Mark90 Mark90 added the bug label Dec 2, 2019

Mark90 commented Dec 2, 2019

I don't know why the Bug label was added; this isn't a bug. It should be labeled as a documentation improvement instead.

anthony-tuininga (Member) commented

I believe that has to do with display of the value and not specifically to do with how it is represented internally. We can add a note to the documentation, however, to clarify.
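To illustrate the distinction between display and internal representation, a quick check (a plain-Python sketch, not taken from the cx_Oracle docs) shows that the two spellings denote the exact same value; only the default string output differs:

```python
from decimal import Decimal

a = Decimal("0.0000001")
b = Decimal("1E-7")

print(a == b)        # True: the values are identical
print(a.as_tuple())  # DecimalTuple(sign=0, digits=(1,), exponent=-7)
print(b.as_tuple())  # same internal representation as a
print(str(a))        # 1E-7: only the default display uses scientific notation
```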


Mark90 commented Dec 2, 2019

Sorry, yes, I actually meant to refer to the string representation.

Thanks, a note would be great!

A bit more background: when I extract data from a database (which wasn't Oracle until recently), e.g. to load into other systems, I rarely fiddle with how data is written out, because Python and/or the DB client takes care of it. In this case, using Decimal may set the developer up for an unexpected failure - but I can also see why it's a great solution for retaining precision. I just noticed there was an attempt to use Decimal as the default, which was reverted for the same reason :)
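The kind of unexpected failure described above can be sketched with a plain-Python example (the column name and CSV scenario are illustrative assumptions, not from the thread): writing fetched Decimals straight to CSV uses str(), so a downstream parser may suddenly receive scientific notation.

```python
import csv
import io
from decimal import Decimal

rows = [("threshold", Decimal("0.0000001"))]

# Default: csv applies str() to non-string fields,
# which can produce scientific notation.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue())  # threshold,1E-7

# Formatting explicitly in fixed-point notation avoids the surprise.
buf = io.StringIO()
csv.writer(buf).writerows(
    (name, "{:f}".format(value)) for name, value in rows
)
print(buf.getvalue())  # threshold,0.0000001
```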
