Abstract prioritization #5927

Open
ncannasse opened this Issue Jan 5, 2017 · 17 comments


@ncannasse
Member

In HL I have the following:

package hl

@:coreType @:notNull @:runtimeValue abstract UI16 to Int from Int {}

Now the problem is that UI16 * Int = UI16, while I would like the result to be Int since it's a "larger type".
Is there a way to define that kind of priority (without having to define all the binops on UI16)?

Funnily enough, F32 * Float = Float, so it seems to depend on the core type (or is it unspecified?).

@Simn
Member
Simn commented Jan 5, 2017

Abstracts are classified as KParam and then it comes down to what you probably implemented a decade ago:

		| KParam t, KInt | KInt, KParam t ->
			if op <> OpDiv then result := t
		| KParam _, KFloat | KFloat, KParam _ ->
			result := tfloat
@ncannasse
Member

Ah, funny, I didn't think this would have survived the abstract changes.
I think we should have a notion of "priority". By default it would be, for instance, 0 for Int, 1 for abstracts, and 2 for Float, which would be compatible with current behavior; then I would add @:priority(-1) to the UI16 declaration so Int has priority over it.

This would also help resolve ambiguous cases such as (UI8 * UI16).

What do you think?

@Simn
Member
Simn commented Jan 6, 2017

My initial reaction is to think that you're crazy. :)

I'll think about it some more for Haxe 4.

@ncannasse
Member
ncannasse commented Jan 6, 2017 edited

Maybe for 3.4 we could have a faster fix that returns the underlying type instead of KParam (when a meta is defined on the abstract). That would at least solve the bugs in HL related to overflows when using basic types, and it would not affect any existing code, given that no such meta would be used for other types.

@Simn
Member
Simn commented Jan 6, 2017

Let's please not do something like this for 3.4...

@back2dos
Member
back2dos commented Jan 6, 2017

The particular issue could be solved without touching anything by defining an army of operators on the abstracts in question, specifying the desired result type.
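As a rough sketch of what that "army of operators" might look like (hypothetical function names; assuming the UI16 declaration from the issue, and assuming bodiless @:op declarations are accepted on a @:coreType abstract, which has not been verified here):

```haxe
package hl;

@:coreType @:notNull @:runtimeValue
abstract UI16 to Int from Int {
	// Pin the result type of mixed operations to Int,
	// so UI16 * Int no longer yields UI16.
	@:op(A * B) static function mulInt(a:UI16, b:Int):Int;
	@:op(A * B) static function intMul(a:Int, b:UI16):Int;
	@:op(A + B) static function addInt(a:UI16, b:Int):Int;
	@:op(A + B) static function intAdd(a:Int, b:UI16):Int;
	// ...and so on for every binop and every operand
	// combination (UI8, F32, etc.), hence the "army".
}
```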

As for dealing with it in the language, I think this @:priority idea is total overkill. I propose instead that @:coreType abstracts may define an underlying type, which will be used to determine 1. the actual operations and 2. the return type (unless top-down inference says otherwise).

So the definition would be:

@:coreType @:notNull @:runtimeValue abstract UI16(Int) to Int from Int {}

These specific numeric types are always subtypes of the larger types. For example, the result of (a : UI16) + (b : UI16) should by default be Int to capture overflows. Only ((a : UI16) + (b : UI16) : UI16) should result in UI16.

@ncannasse
Member

@back2dos I don't think so; it's also very different from what C is doing, which makes it harder to port some code.

@Justinfront
Contributor
Justinfront commented Jan 6, 2017 edited

Algebra in Haxe with abstracts has evolved to be very powerful, but any consideration of priority should also consider precedence (order of operations), as discussed here:
HaxeFoundation/haxe-evolution#13
Currently, I believe the precedence of each operator symbol is predefined ("fixed") for all abstracts. While that allows consistency when mixing with normal number types, it really limits the mathematical capabilities of Haxe. Perhaps I am wrong, but priority and order of operations seem connected when defining a consistent and flexible algebra system for Haxe. Adding something like priority without thinking about the algebra approach in general could lead to a limited solution that is less flexible long term.
While it's important to solve platform-target maths for HL and beyond, it would be a shame if a rushed fix limited Haxe's abstract symbol manipulation. Good approaches could attract the science and maths communities to Haxe if they can bend abstracts to obscure mathematics and physics; currently it seems to go only halfway, so it is unlikely to attract these communities.
So while I don't have a direct contribution, I wanted to bring up precedence, as it feels like it may be related to priority (but maybe it's not).

@Justinfront
Contributor

Also, Nicolas, surely if you want to propose @:priority it should be raised in evolution; I thought that was the point of the Evolution repository? :)

@ncannasse
Member

@Justinfront the point of evolution is to have proposals from the community.

For now I'm looking at fixing a bug that makes HL's UI16/UI8 integers overflow automatically when used with an Int. I don't care if it's just a temp change and we spend months/years discussing the "proper" solution afterwards, as long as the issue is fixed in the meanwhile.

@back2dos
Member
back2dos commented Jan 6, 2017

@ncannasse If you want a temp fix, then define the operators, no? As for my other proposal's divergence from C, I think you'd have to go into a little more detail ;)

@ncannasse
Member

@back2dos in C, doing (unsigned char) + (int) gives an (int). It's always the "largest" type (i8 < i16 < i32 < f32 < f64). Depending on the compiler you get some warnings when you mix signed/unsigned or i32/f32, because some i32 values can't be represented as f32.

The problem with defining the operators is that I have three abstracts (UI8, UI16 and F32), so that's a lot of operators to define :)

@Simn
Member
Simn commented Jan 6, 2017

I think I'd prefer you having to copy and paste a few operator definitions over changing years-old code between RC and release...

@back2dos
Member
back2dos commented Jan 6, 2017

@ncannasse I am aware of C's behavior and I'm in fact aiming for something very similar:

If you declare @:coreType abstract UChar(Int), then according to my proposal the operation will naturally result in Int. How is that different? My thinking is that the basic Int and Float types are the "larger" ones virtually all the time. The one exception I currently see is Int64, which is larger than Int, but operations on Int64 cause the other operand to be promoted to Int64 anyway, so I think it could just work. Could you construct an example where it fails?

@ncannasse
Member

@back2dos that would be a problem with UInt

@ncannasse
Member

@Simn I'm not talking about changing any code or risking any compatibility break, just adding an extra case that would not affect current behavior.

@Simn
Member
Simn commented Jan 7, 2017

Yes but I don't share your optimism regarding that extra case not affecting anything.

@Simn Simn modified the milestone: 4.0 Jan 9, 2017