Support decimals #737

Open
opened this issue Apr 28, 2019 · 11 comments


Support decimals as first class citizens

F# often uses the `decimal` type, e.g. for financial calculations. Sadly, I often have to communicate with non-F#-developers, and I end up spending my time removing and adding "m" suffixes, because they make the figures messy.

Consider the following code (or 200 lines of corresponding code):

```fsharp
let slope = -0.11m
let consta = 0.5m
let doSomething score =
    let complement =
        1m - (score * slope + consta)
    if complement > 0.95m then 0.95m
    elif complement < 0.85m then 0.85m
    else complement
```

It's totally understandable to a tech/math person once someone first explains that we have this quirk in the language where every number has to end with m.

Secondly, I would like the math operations to support decimals. So instead of writing

`    let sigmoid (z:decimal) = 1.0/(1.0 + exp(float -z)) |> decimal`

I would like to write:

`    let sigmoid z = 1.0/(1.0 + exp(-z))`

I don't want implicit conversions between floats and decimals.
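For context, in current F# every crossing between `decimal` and `float` must be spelled out with the explicit conversion functions; a minimal sketch of today's behaviour:

```fsharp
let d = 1.5m          // decimal
let f = float d       // decimal -> float, explicit conversion required
let d2 = decimal f    // float -> decimal, explicit conversion required
// let bad = d + f    // does not compile: no implicit conversion between the two
```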

I propose we treat non-variable constant float literals as an either-float-or-decimal type: after being added to a decimal they are decimals, and after being added to a float they are floats.
So that this code passes:

```fsharp
let add5d (x:decimal) : decimal =
    x + 5.0

let add5f (x:float) : float =
    x + 5.0
```

Pros and Cons

Communication with non-fsharp-programmers.

More business logic code, less technical structuring code.

Please tick this by placing a cross in the box:

• [x] This is not a question (e.g. like one you might ask on stackoverflow) and I have searched stackoverflow for discussions of this issue
• [x] I have searched both open and closed suggestions on this site and believe this is not a duplicate
• [?] This is not something which has obviously "already been decided" in previous versions of F#. If you're questioning a fundamental design decision that has obviously already been taken (e.g. "Make F# untyped") then please don't submit it.
• [?] This is not a breaking change to the F# language design
• [x] I or my company would be willing to help implement and/or test this
Author

Thorium commented May 4, 2019

If this is not doable, then I would like a `#light`-style keyword to select the type of non-suffixed numbers.
Member

realvictorprm commented May 4, 2019

 I think your problem is understandable and should be tackled.

Happypig375 commented May 4, 2019

I'd like to generalise this to

Inferred literal types

F# currently requires types to be specified in literals. This easily becomes a problem when communicating with other non-fsharp-developers.

For example, decimal literals need to be littered with `m` everywhere, which is ugly and redundant. I propose that instead of limiting literals with no suffixes to a known type, their types will be inferred. For example, instead of writing

```fsharp
let slope = -0.11m
let consta = 0.5m
let doSomething score =
    let complement =
        1m - (score * slope + consta)
    if complement > 0.95m then 0.95m
    elif complement < 0.85m then 0.85m
    else complement
```

the following would be allowed:

```fsharp
let slope = -0.11  // unknown floating-point type, waiting to be inferred
let consta = 0.5   // unknown floating-point type, waiting to be inferred
let doSomething score =
    let complement =
        1. (* Still need to write the decimal dot to differentiate
              between integral types and floating-point types *)
        - (score * slope + consta)
    if complement > 0.95 then 0.95
    elif complement < 0.85 then 0.85
    else complement
let _ = doSomething 0.77m  // This is what limits the types above to decimal
```

If no limiting suffix is given, such as changing `0.77m` to `0.77`, the types of the floating-point numbers would be inferred as `float`, which is the default floating-point type.

The same would be enabled for integral literals:
Old: `let arrayOfBytes = [| 2y; 3y; 4y; 5y; 6y |]`
New: `let arrayOfBytes = [| 2y; 3; 4; 5; 6 |]`

This change means that a numeric literal without a suffix could have an arbitrary type.
To define a literal with the default integral type, int, the existing suffix `l` would now be required.

To define a literal with the default floating-point type, float, a new suffix would need to be defined. Currently, the suffixes for floating-point types include `F` or `f` for `float32`, `M` or `m` for `decimal`, `lf` for `float32` in hex form, and `LF` for `float` in hex form. It is an unfortunate fact that `F` stands for `float32`, not `float`, which is inconsistent with `lf` and `LF`. A new pair of suffixes, `ff` and `FF`, would be introduced to mean `float`.
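For reference, here is how the existing suffixes behave in current F#; the `ff` suffix is part of this proposal, not the language today:

```fsharp
let sb  = 2y                    // sbyte
let i32 = 2l                    // int32: the existing suffix for the default integral type
let f32 = 2.0f                  // float32
let dec = 2.0m                  // decimal
let one = 0x3FF0000000000000LF  // float (double) 1.0, hex (raw IEEE bits) form
// let f64 = 2.0ff              // proposed suffix for float; not valid F# today
```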

Therefore, this is now possible:

```fsharp
let inferredIntegerType = 2
let Int = 2l
let inferredFloatType = 2.0
let Float = 2.0ff
```

```fsharp
let x = 4.5
let y = x + 4.5ff
// x is float
```

```fsharp
let x = 4
let y = x - 3y
// x is byte
```

```fsharp
let x = 4
let y = ref x
// Value restriction. The type of x is the default integral type, int.
// y has the type ref<int>
```

Pros and Cons

Pros: This strengthens one of the advantages of using F#: the ability for non-developers to understand code.
This also enables developers to write more concise code.

Cons: It is a change of meaning of existing literals, which may be a breaking change.

Further considerations

This could in theory be extended to string and byte array literals, but usage of byte array literals seems too low to justify implementing this feature.

Please tick this by placing a cross in the box:

• This is not a question (e.g. like one you might ask on stackoverflow) and I have searched stackoverflow for discussions of this issue
• I have searched both open and closed suggestions on this site and believe this is not a duplicate
• [?] This is not something which has obviously "already been decided" in previous versions of F#. If you're questioning a fundamental design decision that has obviously already been taken (e.g. "Make F# untyped") then please don't submit it.
• [?] This is not a breaking change to the F# language design
• I or my company would be willing to help implement and/or test this
Member

realvictorprm commented May 4, 2019

I would limit the change to floating-point numbers. I'm in favour of this idea, as I swear often enough about having to write the suffix on every number. I think the inference rules need to be made more precise and structured so the impact can be better estimated. My understanding: numbers with a dot inside are considered to be of a not-further-specified floating-point type. If no specialisation happens during inference, at the end of the analysis all general floating-type occurrences are unified to `float` (double). If a value is used with a restricted type (e.g. decimal), the value is unified with it, and all occurrences need to be type checked again to apply this recursively. This described behaviour would limit the change to calculations only and won't allow the array syntax described by @Happypig375.

voronoipotato commented May 6, 2019 • edited

The other, simpler solution would be a pragma or project setting that sets the default floating-point type. At my work we always use decimals, always; nearly every time we are using a float, it's a mistake. Conversely, I know there are businesses who always use float, and any time they use a decimal it's a mistake. Given that we can require `let x = 3.4f` and `let y = 3.4m`, it stands to reason that there should be a "default" floating type setting. It would need to be well documented, but it might be a good solution if we only want this to apply to floating-point numbers instead of a general inference solution.

Happypig375 commented May 6, 2019

However, there's currently no way to specify a `float` using a suffix: `m` is for decimal, `f` is for float32. In contrast, even though no suffix is needed for an int literal, you can still suffix it with `l` to achieve the same effect.
Member

cartermp commented May 6, 2019

I like @Happypig375's thinking here about generic numeric types when no suffix is present. Type inference would then apply when you use a literal in a particular context (e.g., when stuffing it into a `%f` in a print statement). There would have to be a decision about the default picked when it's not realized as a specific type in your own code. I'd propose that just be what it is today (`int` and `float`).
Member

realvictorprm commented May 6, 2019

 @cartermp great to hear that, I think the default you propose would fit best with backwards compatibility
Collaborator

dsyme commented May 7, 2019 • edited

The main issues with leaving the type uninferred for `let x = 1.0` are:

The execution of a fragment like `let x = 1.0` "in isolation" (e.g. as a single-line execution in F# Interactive, or a single cell in a Jupyter notebook) will infer the default type. This will be very unexpected. It already happens with code like `let f x = x + x`, which defaults to `int` in the absence of other information. (You can argue this is a flaw in the send-text-to-interactive model of execution and that the inference information from the source document should be propagated to F# Interactive.)

There are likely to be subtle compat issues if type inference doesn't "commit" to the resolution early, e.g. later overload resolution may fail because less committed information is known. The compat issues could be overcome with a `/langversion` switch (at the expense of possible breaking source-language changes), or a `#pragma "inferred-literals"`, or perhaps by extremely careful implementation of the change in inference. We've had these kinds of compat issues before, and I'm at the stage where I think I'd rather reach for a `#pragma`.

An alternative would be to allow the use of a hard known type, e.g. `let x : decimal = 1.0`, and other places where the exact type is known from context. But this will have more limited application.
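The defaulting behaviour mentioned for `let f x = x + x` can be seen with a fragment executed in isolation:

```fsharp
// Executed "in isolation" (e.g. one line sent to F# Interactive),
// inference has nothing else to go on, so the arithmetic defaults to int.
let f x = x + x   // committed to int -> int
let a = f 2       // 4
// let b = f 2.5  // does not compile: f is already int -> int
```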

gusty commented May 12, 2019

I always thought it's a good idea to have generic numbers baked into the compiler. Currently the compiler provides this kind of support for math operations (by using SRTP and defaults) but not for literals. Haskell already has generic number literals, so in F# we would default to `int` (the existing default for math ops) unless the literal contains a `.`, in which case we would have to add a default of `float`. Additionally, a `bigint` default could be considered for big numbers.

Regarding isolated sentences, one important problem this would solve is when a person not very familiar with the language (in my experience business people, quants) writes a simple line like `(20.5 + 43.1) / 2`: the error they get is not very obvious to them, and the workaround of adding a `.` to everything is not very welcome; sometimes it's enough to convince them to switch back to Python, C# or whatever they've been using. In F#+ there is an experimental feature with generic literals (with the suffix `G`) and it works as expected with the existing F# compiler's type inference. Therefore I propose we change the name of the suggestion to Generic Literals.
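F# already has a hook for this kind of thing: user-defined numeric-literal modules reserve the suffixes `Q`, `R`, `Z`, `I`, `N` and `G`. F#+'s `G` literal is generic; the sketch below is fixed to `decimal` purely for illustration:

```fsharp
// A numeric-literal module: the compiler rewrites 5G into NumericLiteralG.FromInt32 5.
module NumericLiteralG =
    let FromZero () = 0m
    let FromOne () = 1m
    let FromInt32 (n: int) = decimal n

let total = 5G + 2G   // decimal, written without any m suffix
```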

Happypig375 commented May 12, 2019 • edited

 Also people unfamiliar with F# might write `1/2` and expect `0.5`. This is a non-issue in Haskell because `/` is always float division and the `div` function is used for integer division instead. This can't be changed because of backwards compatibility.
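For completeness, the behaviour being described:

```fsharp
let intDiv = 1 / 2         // 0: both operands are int, so / is integer division
let floatDiv = 1.0 / 2.0   // 0.5: float division requires float operands today
```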