Generate fixed point decimals #420

@jshprentz

Description

Add an optional places parameter to the decimals() strategy to specify the number of decimal places needed in generated values.

Many applications use Decimal numbers to perform precise calculations to a specific number of decimal places. Dollar amounts often use two decimal places, for example. Mutual fund shares may be tracked to five or six decimal places. Manufacturing tolerances are often specified to thousandths of an inch—three decimal places.

The Hypothesis decimals() strategy currently generates decimal numbers from floats or fractions. These values typically have an arbitrary, uncontrolled number of decimal places.

>>> from hypothesis.strategies import decimals
>>> decimals().example()
Decimal('0.002472471796257515368506383841')

My proof of concept, attached as test_decimals.py.txt, adds an optional places parameter to the decimals() strategy. When places is specified, decimals() generates values with the given number of digits after the decimal point. When places is not specified, the current unconstrained value generation is used.

>>> import test_decimals
>>> decimals = test_decimals.decimals
>>> decimals(places=3).example()
Decimal('5269885835422919568130741.438')
>>> decimals(places=2).example()
Decimal('27258100601131699143211881.42')
>>> decimals().example()
Decimal('0.2908554201733450571536239166')

The sample code generates fixed-point decimals from the integers() strategy rather than from the floats and fractions used currently. The integers generated by Hypothesis map directly to decimal values: with three decimal places, for example, 1 maps to 0.001, 2 maps to 0.002, and so on. For fixed-decimal-place values, I think the underlying integer shrinking behavior is more appropriate than the float or fraction shrinking used for unconstrained values.
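
As an illustration of that mapping, here is a minimal sketch (not the attached proof of concept) of a fixed-point strategy built on integers(); the fixed_point_decimals() name and its precision parameter are assumptions about the attachment's helpers:

from decimal import Decimal, getcontext
from hypothesis.strategies import integers

def fixed_point_decimals(places, precision=None):
    # Sketch only: bound the coefficient to `precision` digits so scaling
    # never triggers context rounding, then shift the exponent by `places`.
    # With places=3, the integer 1 maps to Decimal('0.001'), 2 maps to
    # Decimal('0.002'), and shrinking follows integers() toward zero.
    if precision is None:
        precision = getcontext().prec
    bound = 10 ** precision - 1
    return integers(min_value=-bound, max_value=bound).map(
        lambda n: Decimal(n).scaleb(-places))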

Python has a global decimal context containing various configuration parameters for the decimal module. Applications may change the decimal context globally or locally. My proof-of-concept code uses the context's precision value, which specifies the maximum number of digits allowed in a value. Hypothesis caching does not play well with global values that change between tests, so my sample code does not cache decimals(), but does cache fixed_point_decimals(), which takes precision as an explicit parameter. For example:

>>> from decimal import localcontext
>>> with localcontext() as context:
...     context.prec = 10
...     value = decimals(places=3).example()
...     print(value)
...
2609227.817
>>> with localcontext() as context:
...     context.prec = 20
...     value = decimals(places=3).example()
...     print(value)
...
17961664460289794.568
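
Along the same lines, here is a sketch (assumed, not taken from the attachment) of how decimals() could resolve the context's precision at call time and delegate to the explicitly parameterized strategy sketched above, falling back to the existing unconstrained behavior when places is omitted:

from decimal import getcontext
from hypothesis import strategies as st

def decimals(places=None):
    # Sketch only: read the global context's precision here and pass it as an
    # explicit argument, so any caching is keyed on (places, precision) rather
    # than on mutable global state.
    if places is None:
        return st.decimals()  # existing unconstrained generation
    return fixed_point_decimals(places, precision=getcontext().prec)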

Labels

enhancement (it's not broken, but we want it to be better)
