
Orthogonal concepts #14

Open
shelby3 opened this issue Sep 25, 2016 · 192 comments


shelby3 commented Sep 25, 2016

I am trying to nail down in my mind the fundamental orthogonal semantic concepts of our proposed programming language.

| Concept | Description |
| --- | --- |
| `data` | Unified sum, product, recursive, and record data types. No early-bound operations other than construction and access. Record types may optionally be mutable. |
| `typeclass` | Delayed binding of operations to types at the use-site. Contrast to OOP (subclassing), which binds operations at construction of objects. For polymorphic types, typeclass objects delay binding operations until construction of the typeclass object (instead of prematurely binding operations at the construction of an instance of the implementing type); whereas my posited and proposed solution to the Expression Problem employing unions, in theory, further delays binding to the use-site for polymorphic types. Note that delaying is a positive attribute in this context, because it increases degrees-of-freedom. |
| `module` | Encapsulating data with compile-time (statically) bound operations and access control, enabling more sophisticated types. |
| monadic effect system | #4 (comment) |
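To make the `typeclass` row concrete, here is a minimal sketch in TypeScript (not our proposed syntax; all names here are illustrative) of a typeclass as a dictionary of operations supplied at the use-site, rather than baked into the object at construction:

```typescript
// Plain data: no operations attached at construction.
type Point = { x: number; y: number };

// A "typeclass" is just a record of functions for some type T.
interface Show<T> {
  show(value: T): string;
}

// An implementation (instance) is supplied separately from the data.
const showPoint: Show<Point> = {
  show: (p) => `(${p.x}, ${p.y})`,
};

// The operation is bound at the use-site by passing the dictionary.
function describe<T>(dict: Show<T>, value: T): string {
  return dict.show(value);
}

console.log(describe(showPoint, { x: 1, y: 2 })); // "(1, 2)"
```

The same `Point` value can later be given other dictionaries (`Eq`, `Ord`, ...) without touching its definition, which is the delayed-binding property the table describes.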

Thus I am wondering if module is the most apt name for the concept? I think of a module as a unit of code that is imported separately. We still need to import data and typeclass, so is this done by files? Isn't each file then a module? If so, what is the apt name for the concept of module above? I am thinking the above concept for module is really a class (without the subclassing inheritance anti-pattern). Can't modules extend other modules?

Edit: note also OCaml functors and modules.

Edit#2: on typeclasses, I replied to @keean:

Optional implicit parameters (which Scala has) provide a way of unifying runtime interfaces (passed by value) and monomorphisable interfaces (passed implicitly by type).

I like that summary (in the context of what I had written). We should make sure we frame that and use it in our documentation.

Edit#3: @shelby3 wrote:

I think it is important to remember that we want to keep implementation of interface (i.e. typeclass) orthogonal to instantiation of data, otherwise we end up with the early binding of subclassing. For example, in subclassing both a rectangle and a square would be subclasses of an interface which provides width and height. Yet a square only has one dimension, not two. Any interfaces which need to operate on the dimensions of rectangle and square need to be customized for each, which is what typeclasses do. We will still have subtyping, such as when an interface extends another interface then we can subsume to the supertype (which is why we need read-only references, and for supersuming then write-only references). Yet references will not have the type of an interface if we don't provide typeclass objects, so then the only case of subtyping will be the subset relationships created by conjunctions and disjunctions of data types.
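The rectangle/square point can be sketched in TypeScript (illustrative names, not our syntax): each data type keeps its own natural layout, and each gets its own customized typeclass implementation rather than being forced under one inherited interface.

```typescript
// Different layouts: a square stores only one dimension.
type Rectangle = { width: number; height: number };
type Square = { side: number };

// A typeclass over some type T.
interface HasArea<T> {
  area(shape: T): number;
}

// Each data type gets its own customized implementation.
const rectArea: HasArea<Rectangle> = { area: (r) => r.width * r.height };
const squareArea: HasArea<Square> = { area: (s) => s.side * s.side };

// Generic code works for both via the dictionary, with no shared supertype.
function area<T>(dict: HasArea<T>, shape: T): number {
  return dict.area(shape);
}

console.log(area(rectArea, { width: 2, height: 3 })); // 6
console.log(area(squareArea, { side: 3 })); // 9
```

Neither type had to inherit `width`/`height` accessors it doesn't need; the customization lives in the instance, which is the orthogonality being argued for.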


keean commented Sep 25, 2016

Yes, I think that is correct. We basically split objects into three orthogonal concepts: data are effectively object properties, typeclasses are interfaces, and modules control data hiding (private data etc.).

Note: Elsewhere I proposed that we can combine the three into a single syntax and generic concept, but let's leave that to one side for now.


shelby3 commented Sep 25, 2016

Did you see my edits pointing out that 'module' has two meanings? Should we use another keyword than module?


keean commented Sep 25, 2016

Extension and functors are something ML supports on modules; at the moment I don't think we should. I like the ability to have multiple modules in one file, but effectively modules are able to be separately compiled and linked later. In the simplest form they are like namespaces where you cannot access their private data from outside. A module has to export things to be visible outside, and import the external functions and data in order for them to be visible inside.


shelby3 commented Sep 26, 2016

Note I added the idea for a monadic effect system. Well, I guess it isn't orthogonal in some sense, as its implementation relies, I suppose, on the other two or three concepts, but the systemic model for handling impurity is an orthogonal concept.


shelby3 commented Oct 5, 2016

@shelby3 wrote:

Concept Description
typeclass Delayed binding of operations to types at the use-site. Contrast to OOP (subclassing), which binds operations at construction of objects. For polymorphic types, typeclass objects delay binding operations until construction of the typeclass object (instead of prematurely binding operations at the construction of an instance of the implementing type); whereas my posited and proposed solution to the Expression Problem employing unions, in theory, further delays binding to the use-site for polymorphic types. Note that delaying is a positive attribute in this context, because it increases degrees-of-freedom.

I want to raise this conceptualization of solving the Expression Problem to a matrix, so that we can see how this plays out on more levels.

Background

Dynamic polymorphism is instituted whenever we bind the interface to an object and refer to the type of that object as the interface, because whether it happens at data type instantiation for subclassing or at typeclass object instantiation (aka Rust trait objects), we must then incur dynamic dispatch: when holding a reference of said interface type at compile-time, there is no static (compile-time) knowledge of the types of data that can be referenced with the said interface type (rather the runtime dynamic dispatch handles the dynamic polymorphism). In subclassing the interface is a supertype; in typeclass objects, the interface is the typeclass bound (constraint) on data types that can be assigned to a reference with said type.

So the lesson is that once we bind an interface at compile-time before the function call site, then we forsake static polymorphism and the opportunity for monomorphizing the function. More importantly, we lose the axis of the Expression Problem to add new interfaces (operations) to the data types referenced via an interface type. We can add new data types but can't add new operations.

Subclassing binds the interface very early at data type instantiation. Typeclass objects bind later at typeclass object instantiation. And typeclass bounds bind even later at the function call site.

So ideally we wanted to always delay to typeclass bounds, but this conflicts with polymorphism of adding data types orthogonally when dealing with heterogeneous union types. Thus I pointed out that by retaining the union of data types, instead of prematurely subsuming them to the typeclass bound of a typeclass object, we could delay the binding to the function call site. Thus we can continue to add new orthogonal operations to the union type, unlike for typeclass objects. The trade-off is that adding a new type to a union type requires invariant collections (such as a List); whereas typeclass objects will interoperate fine with arrays, because the type of the array doesn't need to change when adding new data types to an existing typeclass object type. I explained this in more detail in prior linked discussion. Readers can refer to the prior explanation, so I don't need to repeat it here.
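A small TypeScript sketch of the "retain the union" side of this trade-off (illustrative types; TypeScript's discriminated unions stand in for the proposed union types): because the union is never erased to an interface type, a brand-new operation can be written at any later time over the same data.

```typescript
// The union of data types is kept, not subsumed into a trait object.
type Circle = { kind: "circle"; radius: number };
type Rect = { kind: "rect"; width: number; height: number };
type Shape = Circle | Rect;

// A new operation added ex post facto, without touching the data types:
function perimeter(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return 2 * Math.PI * s.radius;
    case "rect":
      return 2 * (s.width + s.height);
  }
}
// Any further operation is just another function over Shape. With a
// trait object, only the operations fixed at its creation would remain.
```

The flip side, as the paragraph above notes, is that adding a new member to `Shape` changes the union's type, so collections of `Shape` cannot be covariantly reused the way arrays of trait objects can.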

Matrix Conceptualization

So what we can see is that there is a tension, with trade-offs, between the competing choices of carrying around the specific data types and subsuming to the interface. So both are needed for different situations.

Recently @skaller pointed out another axis of this tension. That is the case where we don't want to discard the data types, because we want to leave them open to new operations, but we want to dictate that a specific implementation is used for a specific typeclass wherever it might be needed. Apparently OCaml's functors accomplish this by being first-class: one can pass these along with a module and they will morph the module with the specific constraints desired, while leaving the rest of the module unspecialized, and thus through the chaining of functors one can get the same effective result as early binding for one typeclass while remaining open to extension for other typeclasses.

So really what we need is to be able to attach a typeclass object to a data type instance and pass them around as a pair everywhere a data type is expected. In other words, dynamic patching of the dictionary for the dynamic dispatch, wherein if that data type is then applied to a trait object at a call site or trait object assignment, the prior binding is not overwritten and any additional dynamic dispatch bindings are augmented.

So then we no longer view data types and typeclass objects separately. A data type always has a dynamic dictionary attached to it, yet it is augmented at runtime, not with compile-time knowledge. The compiler can only guarantee the required typeclass operations will be available at runtime, but it can't guarantee which implementation was selected, because at runtime when it writes the implementation to the dynamic dictionary of the typeclass object, it can't know if a preexisting one will already exist.

So we have lifted dynamic polymorphism to a 2nd order. We have dynamism of dynamic dispatch.

The matrix is that we have two axes of data type and interface, and a third axis of implementations that conjoin the two. This third axis can't be tracked at compile-time without dependent typing, but the compiler only needs to ensure that the required interfaces are present in order to ensure soundness. So this works.
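A rough TypeScript sketch of the value-plus-dictionary pairing described above (`Dynamic` and `augment` are hypothetical names invented for this illustration, not an existing API): the dictionary travels with the value and can be augmented at runtime, but a preexisting binding is never overwritten.

```typescript
// A runtime dictionary: operation name -> implementation.
type Dict = Record<string, (self: unknown) => unknown>;

// Every value travels paired with its (possibly incomplete) dictionary.
type Dynamic<T> = { value: T; dict: Dict };

// Augment: add only the implementations that are not already present,
// so any binding chosen earlier (e.g. by the caller) wins.
function augment<T>(d: Dynamic<T>, extra: Dict): Dynamic<T> {
  const merged: Dict = { ...d.dict };
  for (const key of Object.keys(extra)) {
    if (!(key in merged)) merged[key] = extra[key]; // preexisting bindings win
  }
  return { value: d.value, dict: merged };
}

// Usage: the earlier "show" survives; only the missing "hash" is added.
const paired: Dynamic<number> = { value: 7, dict: { show: () => "caller's show" } };
const later = augment(paired, { show: () => "default show", hash: () => 42 });
```

The compiler's job in the proposal would be to guarantee statically that every operation a call site needs is in the dictionary by that point; which implementation got there first is exactly the part left dynamic.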

What this enables is, for example, @skaller's example of wanting to set the ordering of comparison operations while leaving the data type open to extension with other typeclasses. @keean's proposed solution also achieves it, but it lacks atomicity of retaining this with the data type. For example, @keean's solution is not generic in the type of algorithm that can be applied ex post facto, and thus it fails another axis of the tension of solving the Expression Problem.

What we will see generally is that higher order dynamism is required to solve higher order tensions in the Expression Problem.


keean commented Oct 5, 2016

So the lesson is that once we bind an interface at compile-time before the function call site, then we forsake static polymorphism and the opportunity for monomorphizing the function.

This is not quite right. We can still monomorphise even with compile-time interface binding - in fact, for the traditional approach this is a requirement. The point is we can statically know both the interface and the exact type passed, and therefore choose and inline the exact function. Rust does this in all cases except when dynamic polymorphism is involved.

So we can inline and monomorphise the functions when the interface is statically bound and the type is statically known. If either is false (so dynamic binding to the interface, or dynamic typing) we cannot monomorphise in languages like Rust.


keean commented Oct 5, 2016

So really what we need is to be able to attach a typeclass object to a data type instance and pass them around as a pair everywhere a data type is expected.

This is exactly what adding functions to an object's prototype does in JavaScript.


shelby3 commented Oct 5, 2016

@keean wrote:

So really what we need is to be able to attach a typeclass object to a data type instance and pass them around as a pair everywhere a data type is expected.

This is exactly what adding functions to an object's prototype does in JavaScript.

No it is not the same. The JavaScript prototype impacts all instances. I am referring to impacting only instances which are assigned to the trait bound.

@keean I have decided that it is nearly impossible to try to explain these issues like this. I will need to make detailed code examples and write a white paper. You don't seem to understand me.


keean commented Oct 5, 2016

No it is not the same. The prototype impacts all instances. I am referring to impacting only instances which are assigned to the trait bound.

So you want the implementation used to depend on the value of the object not just its type.

This is monkey-patching, adding a method directly to the object itself.
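The distinction the two of them are drawing can be shown directly in TypeScript (a sketch; `Box`, `double`, and `triple` are made-up names): a prototype patch is visible from every instance, while a monkey-patch on one object is visible from that object only.

```typescript
class Box {
  constructor(public n: number) {}
}

const a = new Box(1);
const b = new Box(2);

// Prototype patch: affects ALL instances, existing and future.
(Box.prototype as any).double = function (this: any) {
  return this.n * 2;
};

// Monkey-patch on the object itself: affects only `a`.
(a as any).triple = function (this: any) {
  return this.n * 3;
};

console.log((a as any).double(), (b as any).double()); // 2 4
console.log(typeof (b as any).triple); // "undefined"
```

Shelby's proposal is closer to the second form, except restricted to bindings introduced when an instance is assigned to a trait bound, rather than arbitrary mutation.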


shelby3 commented Oct 5, 2016

@shelby3 wrote:

So the lesson is that once we bind an interface at compile-time before the function call site, then we forsake static polymorphism and the opportunity for monomorphizing the function.

This is not quite right.

It is right. I just haven't been able to communicate my points to you.


keean commented Oct 5, 2016

You have written it badly then because

once we bind an interface at compile-time before the function call site, then we forsake static polymorphism and the opportunity for monomorphizing the function.

This is exactly the opposite of what is true.

We must bind the interface statically in order to be able to monomorphise and we must statically know the type at the call site as well.


shelby3 commented Oct 5, 2016

@keean wrote:

The point is we can statically know both the interface and the exact type passed

If you had understood me, you would understand that I already explained that we don't know the exact type passed; that was entirely the point: binding early erases the data types from further compile-time knowledge, and you only have the trait bound remaining. That you didn't even get that point from what I wrote seems really bizarre.


keean commented Oct 5, 2016

If you had understood me, you would understand that I already explained that you don't know the exact type passed, that was entirely the point. That you didn't even get that point from what I wrote seems really bizarre.

We always know the exact type passed, that is the whole point of parametric polymorphism. The only time we do not is if there is what Rust calls a trait-object or what Haskell calls an existential-type involved.


shelby3 commented Oct 5, 2016

@keean wrote:

The only time we do not is if there is what Rust calls a trait-object

Bingo. Re-read my long comment again.


keean commented Oct 5, 2016

So in a traditional system like Rust, if there is a trait object we can never monomorphise, it doesn't matter how early or late the type-class is bound.


shelby3 commented Oct 5, 2016

@keean wrote:

once we bind an interface at compile-time _before_ the function call site, then we forsake static polymorphism and the opportunity for monomorphizing the function.

This is exactly the opposite of what is true.

The word 'before' does not mean 'at'. It means 'before'. I am referring to subsuming to the trait bound before the call site, such as a trait object. But then I generalize it to partial trait objects and 2nd order dynamism.


shelby3 commented Oct 5, 2016

@keean wrote:

So in a traditional system like Rust, if there is a trait object we can never monomorphise, it doesn't matter how early or late the type-class is bound.

In a system where I have generalized it to partial trait bounds and 2nd order dynamism then we still can't monomorphise but we gain the ability to add operations ex post facto.

Your reading comprehension of my long comment was really not adequate. Was it really that badly explained, or is it that your mind is ossified and not prepared for a generalization?


keean commented Oct 5, 2016

The word 'before' does not mean 'at'. It means 'before'. I am referring to subsuming to the trait bound before the call site, such as a trait object. But then I generalize it to partial trait objects and 2nd order dynamism.

But if you have dynamic polymorphism you cannot monomorphise. It doesn't matter whether the interface is bound before or at the call site.


shelby3 commented Oct 5, 2016

@keean wrote:

But if you have dynamic polymorphism you cannot monomorphise. It doesn't matter whether the interface is bound before or at the call site.

Precisely. Early binding of the interface requires dynamic polymorphism in order to remain open to adding new data types to the interface bound, e.g. for collections.

Also, you are only considering one of the trade-offs. You forgot the other one, which my long comment states and I have reiterated:

In a system where I have generalized it to partial trait bounds and 2nd order dynamism then we still can't monomorphise but we gain the ability to add operations ex post facto.


keean commented Oct 5, 2016

But that is not what this sentence says:

So the lesson is that once we bind an interface at compile-time before the function call site, then we forsake static polymorphism and the opportunity for monomorphizing the function.

This says we cannot mono-morphise if we bind the interface at compile time, when there is dynamic polymorphism. But we cannot mono-morphise at all with dynamic polymorphism, so the logic is wrong.


shelby3 commented Oct 5, 2016

@keean wrote:

But that is not what this sentence says:

So the lesson is that once we bind an interface at compile-time before the function call site, then we forsake static polymorphism and the opportunity for monomorphizing the function.

This says we cannot mono-morphise if we bind the interface at compile time, when there is dynamic polymorphism.

Correct.

But we cannot mono-morphise at all with dynamic polymorphism,

Correct.

so the logic is wrong.

What is wrong? You just apparently didn't understand that the goal is not to achieve monomorphism, but rather to solve the Expression Problem and keep the extension open to new trait bounds even though we've partially bound to some typeclasses:

@shelby3 wrote:

Precisely. Early binding of the interface requires dynamic polymorphism in order to remain open to adding new data types to the interface bound, e.g. for collections.


shelby3 commented Oct 5, 2016

The point is that we always pass a data type along with a dictionary, but that dictionary can be incomplete, and we don't track at compile-time what may already be in that dictionary. When we are ready to bind some trait bounds at a call site, then we write some code which will add these implementations to the dictionary at runtime (ensuring they will be available as required for compile-time soundness), but will not overwrite any that already existed in the dictionary.

The point of this is that our caller can set preferential implementations (such as which direction of ordering for lessThan operation), yet the code that is called doesn't have to prematurely specialize on any trait bound signatures. Premature specialization is the antithesis of combinatorial composition as per @skaller's excellent point.
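The `lessThan` scenario can be sketched in TypeScript (illustrative API; `OrdDict` and `sortWith` are made-up names): the callee binds a default comparison only where the caller has not already supplied one, so the caller's preferred ordering survives without the callee specializing on it.

```typescript
// A possibly-incomplete dictionary: the caller MAY pre-bind lessThan.
type OrdDict<T> = { lessThan?: (a: T, b: T) => boolean };

function sortWith<T>(xs: T[], dict: OrdDict<T>): T[] {
  // Bind the default implementation only if none already exists,
  // mirroring "will not overwrite any that already existed".
  const lt = dict.lessThan ?? ((a: T, b: T) => (a as any) < (b as any));
  return [...xs].sort((a, b) => (lt(a, b) ? -1 : lt(b, a) ? 1 : 0));
}

sortWith([3, 1, 2], {}); // [1, 2, 3], the default ascending order
sortWith([3, 1, 2], { lessThan: (a, b) => a > b }); // [3, 2, 1], the caller's order
```

Here the check happens at the call with `??`; in the proposal the compiler would instead statically guarantee that `lessThan` is present by the time `sortWith` runs, while which implementation won remains a runtime fact.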


shelby3 commented Oct 5, 2016

I think we aren't actually attaching the dictionaries to the data objects. Rather the data objects are tagged (instanceof) and the dictionaries are passed in as arguments to each function. Perhaps we should model it with a monad?
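That alternative representation can be sketched too (a hypothetical layout, not a fixed design): values carry only a tag, and each typeclass has a side table from tags to implementations, with the relevant dictionaries passed in as ordinary function arguments.

```typescript
// Data carries only a runtime tag, not its dictionaries.
type Tagged = { tag: string };

// One dictionary shape per typeclass.
type ShowDict = { show: (x: any) => string };

// A side table per typeclass maps tags to implementations.
const showImpls: Record<string, ShowDict> = {
  point: { show: (p) => `(${p.x}, ${p.y})` },
};

// The table is threaded through as an argument; dispatch is by tag.
function display(x: Tagged, impls: Record<string, ShowDict>): string {
  return impls[x.tag].show(x);
}
```

This is essentially the dictionary-passing translation Haskell compilers use for typeclasses; threading the tables through every call is also what makes a monadic treatment plausible.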


keean commented Oct 5, 2016

When we are ready to bind some trait bounds at a call site, then we write some code which will add these implementations to the dictionary at runtime.

This is the same as asking if the implementations exist from the call-site. In order to add them, they must first exist in the source code, so the compiler can statically check they exist.

We can go further, we can ask, from the call site, get a list of all types for which there is an implementation of this function. Now we could put this in the type signature so that the linker knows statically what types it is safe to send to that function.


shelby3 commented Oct 5, 2016

Wow you really aren't getting it.

When we are ready to bind some trait bounds at a call site, then we write some code which will add these implementations to the dictionary at runtime.

This is the same as asking if the implementations exist from the call-site.

Read again:

When we are ready to bind some trait bounds at a call site, then we write some code which will add these implementations to the dictionary at runtime (ensuring they will be available as required for compile-time soundness), but will not overwrite any that already existed in the dictionary.

The point of this is that our caller can set preferential implementations (such as which direction of ordering for lessThan operation), yet the code that is called doesn't have to prematurely specialize on any trait bound signatures. Premature specialization is the antithesis of combinatorial composition as per @skaller's excellent point.


keean commented Oct 5, 2016

There is still something odd about the way your description comes across to me, which might be my misunderstanding...

If I have a function that calls methods on something:

f(x) =>
    x.call_widget()

We can say with 100% certainty that whatever type is passed to f, whether by dynamic or static polymorphism, must implement call_widget. Do you agree?


shelby3 commented Oct 5, 2016

@keean wrote:

We can say with 100% certainty that whatever type is passed to f, whether by dynamic or static polymorphism, must implement call_widget. Do you agree?

Agreed. And the mistake in your thinking is that you are conflating not knowing whether a preexisting implementation will be replaced at runtime with knowing at compile-time that all required implementations will be placed into the dictionary as required. But I am repeating myself.

I understand the concept is a bit confusing at first for someone not expecting it. I do tend to come out of nowhere with mind-twisting new things.


keean commented Apr 20, 2018

"Traditional" polymorphism is compile time only, and you don't know the future at runtime.

This is not as restrictive as you think. You can access runtime polymorphic values using interfaces that are common to all the possible types in the value.


sighoya commented Apr 21, 2018

Not without losing modularity. Consider two people compiling modules to binary separately: each compiler chooses a representation for the types independently, so you cannot prevent them choosing the same representation for different types. It's now unsound.

If you state that a Type is also a value of itself or a value of another Type, then the memory layout is fixed for each compiler. I don't like the fact that compilers can choose their own layouts, just like in C++.

(what the value of a recursive type would look like)

A value (instance) of a meta type is a type or even the same meta type.

what is the type of all types that are not members of themself.

The meta type "ConcreteType" which contains all the types not containing themselves, i.e.
{Int, Float,...} (But ConcreteType is also a meta type)

You can access runtime polymorphic values using interfaces that are common to all the possible types in the value.

You're referring to trait objects, right? They are already runtime polymorphic and not compile-time polymorphic, though the compiler knows how to operate on them because all implementing types provide the methods, known at compile time, required by the trait.
Trait objects, just like union types, offer you a type switch over a random decision (value), but all possible outcome types of trait objects are already known. Pure runtime polymorphism goes further and offers you random structures, i.e. types that you can't know at compile time.


keean commented Apr 21, 2018

Pure runtime polymorphism goes further and offers you random structures, i.e. types that you can't know at compile time.

Which is useless, because you cannot perform any operations on a type you do not know the memory layout of. The only operation valid on any type is the identity function.


keean commented Apr 21, 2018

If you state that a Type is also a value of itself or a value of another Type, then the memory layout is fixed for each compiler. I don't like the fact that compilers can choose their own layouts, just like in C++.

This is not true when you have user-defined (nominal) types. For each new type the user defines, you need a unique tag-id, which we can imagine is an integer. So we can start with Int=1, float=2. When the user does "struct x {Int, float}" we need a unique tag-id to distinguish between that and "struct y {Int, float}". You have to remember that in object systems the object class is a tag-id. The alternative is structural typing, but then you cannot have object-classes, and you would end up with something more like JavaScript's prototype-based type system with duck-typing.
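The struct x / struct y point can be made concrete in TypeScript (a sketch; the `tagId` field is an invented convention): two records with identical layouts become distinct nominal types once each carries its unique tag-id, and the tag is the only datum that distinguishes them at runtime.

```typescript
// Structurally identical payloads, distinguished only by the tag-id.
type StructX = { tagId: 1; i: number; f: number };
type StructY = { tagId: 2; i: number; f: number };

// A runtime check on the tag recovers the nominal identity.
function isStructX(v: StructX | StructY): v is StructX {
  return v.tagId === 1;
}

console.log(isStructX({ tagId: 1, i: 3, f: 0.5 })); // true
console.log(isStructX({ tagId: 2, i: 3, f: 0.5 })); // false
```

Without the tag, a structural type system would consider the two interchangeable, which is the duck-typing alternative the comment describes.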

The other thing to consider about runtime typing and reflection is that it is very slow; look at all the "Go" programs, and look at how much slower the code is when they have to resort to runtime reflection.


sighoya commented Apr 21, 2018

Which is useless, because you cannot perform any operations on a type you do not know the memory layout of. The only operation valid on any type is the identity function.

Or you infer the available operations over runtime reflection.


keean commented Apr 21, 2018

Or you infer the available operations over runtime reflection.

Only if you know the type. Consider I give you 127 bytes of data. What are you going to do with it?


sighoya commented Apr 21, 2018

The other thing to consider about runtime typing and reflection is that it is very slow, look at all the "Go" programs, and look at how much slower the code is when they have to resort to runtime reflection.

A general problem of runtime polymorphism

This is not true when you have user defined (nominal) types. For each new type the user defined you need a unique tag-id, which we can imagine is an integer. So we can start with Int=1, float=2. When the user does "struct x {Int, float}" we need a unique tag-id to distinguish between that and "struct y {Int, float}"

Different ids are okay, but you can store all types, for instance, as strings in memory, or you hash over them for comparison.

Only if you know the type. Consider I give you 127 bytes of data. What are you going to do with it?

Probably nothing, but you can read a type as a string and infer its components or operations.



sighoya commented Apr 21, 2018

Please refresh


keean commented Apr 21, 2018

Different ids are okay, but you can store all types, for instance, as strings in memory, or you hash over them for comparison.

Only if the type data is stored in memory; and without static types, how do we know that this is type data and not an integer or a string?


sighoya commented Apr 21, 2018

Only if the type data is stored in memory, and without static types how do we know that this is type data and not an integer or a string?

We don't. If you expect that your current io input string should be a type you interpret it like that else you throw an exception.


keean commented Apr 21, 2018

We don't. If you expect that your current io input string should be a type you interpret it like that else you throw an exception.

So even with dynamic types, you need static typing. So let's look at it the other way around. Do you need dynamic types? If we allow sum types, we effectively create a value-indexed type list; then we can use a case statement to split the different types into different execution paths, and we can prove coverage statically, so something like:

data Test = MyString String | MyInt Int
case t of
   MyString x -> ...
   MyInt x -> ...

So the restriction is we need to know which types we expect to see. Otherwise we can use an existential type with an interface, then we can cope with any unknown type that implements the interface.

These two options cover all the practical cases where we can actually do anything useful with the value because we can only write code to process the types we know.
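The first option, a closed sum type with statically provable coverage, can be expressed in TypeScript with a discriminated union, using the `never` type as the exhaustiveness proof (a sketch; the `Test` shape mirrors the Haskell example above):

```typescript
// A closed sum type: exactly two known alternatives.
type Test =
  | { kind: "str"; value: string }
  | { kind: "int"; value: number };

function handle(t: Test): string {
  switch (t.kind) {
    case "str":
      return `string: ${t.value}`;
    case "int":
      return `int: ${t.value}`;
    default: {
      // If a new alternative is added to Test without a case here,
      // this assignment fails to compile: coverage is proven statically.
      const unreachable: never = t;
      return unreachable;
    }
  }
}
```

The second option, an existential behind an interface, corresponds to accepting any object implementing a given interface, as in the earlier `call_widget` example.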


sighoya commented Apr 21, 2018

So even with dynamic types, you need static typing.

This alludes to the great misconception in the world that dynamically typed languages are uni-typed; dynamically typed languages are statically typed, but their type checking is partial (optimistic), as they always work with a union-like type Any.
There are exceptions, though, which allow for type assertions other than Any.

So let's look at the other way around. Do you need dynamic types?

No, because you can emulate it over values.
Just read in the type as a string and destructure, manipulate, or even instantiate it with a new string representing the value. This is in fact very similar to runtime reflection, except that runtime reflection deals with bytecode rather than strings.


keean commented Apr 21, 2018

No, because you can emulate it over values.
Just read in the type as string and destructure,

So whilst it seems you agree with me that we can do everything we want with static types, I don't really get why you keep going on about strings. A type is a graph; you can see this because anything with parentheses '(' and ')' is fundamentally a binary tree, and if we include recursive types, we end up with types being regular trees, not strings!


sighoya commented Apr 22, 2018

So whilst it seems you agree with me that we can do everything we want with static types, I don't really get why you keep going on about strings?

Because you can simulate types created at runtime with strings representing the types, i.e. representing the type definition.*
It is, however, less nice than runtime reflection.
You wouldn't do that with type definitions at compile time.

Note:* You didn't integrate types created at runtime into your type system, as this would mutate your type set.


keean commented Apr 22, 2018

The 'any' type is bad in any case, you don't really want it. Note this is different from an unconstrained type variable. As I said you can only perform the identity operation on the 'any' type.

Perhaps I should turn this around? What do you think you gain by having runtime types? Can you show me a short motivating example of the sort of programming you are advocating?


sighoya commented Apr 22, 2018

As I said you can only perform the identity operation on the 'any' type.

The any type is nothing other than the union type over all possible types in your program.

What do you think you gain by having runtime types? Can you show me a short motivating example of the sort of programming you are advocating?

For instance in game programming.
You create different Enemy (Sub) Types at runtime:

List<Enemy> enemies;
if (random condition)
{
    SubEnemy1 se1 = RuntimeReflection.createInstance(SubEnemy1, random args);
    enemies.add(se1);
}
...
if (random condition)
{
    SubEnemyN seN = RuntimeReflection.createInstance(SubEnemyN, random args);
    enemies.add(seN);
}
return enemies;

We assume the subtypes of Enemy all exist, but the arguments for instantiation, and the decision of which subtypes are instantiated, are randomized.

You then pass the list of enemies to a function, filter the desired subtype of enemy out of the list, and mutate properties specific to that type of enemy.

fun(List<Enemy> l)
{
    for(enemy : l)
    {
        match(enemy)
        {
            case enemy isa SubEnemyi:
                    //e.g. access a specific SubEnemyi field
        }
    }
}

Of course you can do this in Haskell via sum types, but those are runtime types too, though they are labeled.
Sum types and union types are strongly related.
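The labeled-sum-type version of the enemy example can be sketched in TypeScript with a discriminated union (all names here, `Orc`, `Dragon`, `enrageOrcs`, are hypothetical): the `kind` field is the runtime label that makes the match possible.

```typescript
// A labeled (tagged) sum type: `kind` is the runtime label.
type Orc    = { kind: "orc";    rage: number };
type Dragon = { kind: "dragon"; fire: number };
type Enemy  = Orc | Dragon;

// Filter one "subtype" out of the list and mutate a field specific
// to it, as in the fun(List<Enemy>) sketch above.
function enrageOrcs(enemies: Enemy[]): void {
  for (const e of enemies) {
    switch (e.kind) {
      case "orc":
        e.rage += 1; // only reachable once e is narrowed to Orc
        break;
      // other cases deliberately unhandled, as in the original sketch
    }
  }
}

const horde: Enemy[] = [{ kind: "orc", rage: 0 }, { kind: "dragon", fire: 3 }];
enrageOrcs(horde);
```

Unlike an untagged union, the label lets the compiler check that each branch only touches fields that actually exist on that variant.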

Edit:
Note I presented an example of runtime polymorphic types, but maybe you meant types created at runtime.

As an example of that, I could imagine creating a new subtype of Enemy and instantiating it with random choices, resulting in a random number of fields.
This is a rare case but should be possible, and it has the advantage over string representations that instances of the new subtypes, with their fields, are automatically and efficiently laid out in memory by the runtime.
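In a language with a dynamic runtime this is easy to sketch; the TypeScript below (names hypothetical) builds a record whose field set is only decided while the program runs. Note that the fields end up in a dictionary-like object rather than a compiler-chosen memory layout, which is exactly the efficiency point being made above.

```typescript
// "A type created at runtime": a record whose set of fields is
// decided only when the program runs.
function makeRandomEnemy(fieldCount: number): Record<string, number> {
  const enemy: Record<string, number> = {};
  for (let i = 0; i < fieldCount; i++) {
    // field names and values are chosen at runtime
    enemy["field" + i] = Math.floor(Math.random() * 100);
  }
  return enemy;
}

const e = makeRandomEnemy(3);
console.log(Object.keys(e)); // [ 'field0', 'field1', 'field2' ]
```

A statically typed runtime that supported this natively could instead compute an efficient layout for the new "type" once and share it among all its instances.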

@keean
Copy link
Owner

keean commented Apr 22, 2018

Or you can define an interface for Enemy, and then load unknown enemies at runtime as binary modules, where you do not know the internal data layout of the enemy's state store. I think that is a better solution to the above problem.
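A sketch of this interface approach, with hypothetical names (`Enemy`, `loadGoblinModule`): callers see only the published API, so a loaded module is free to change its internal data layout between releases.

```typescript
// The published API: the only thing importing code may rely on.
interface Enemy {
  takeDamage(amount: number): void;
  isAlive(): boolean;
}

// Stands in for a dynamically loaded binary module: its state (`hp`)
// is private, and its layout is invisible through the interface.
function loadGoblinModule(): Enemy {
  let hp = 10; // internal representation, free to change in a new release
  return {
    takeDamage(amount: number): void { hp -= amount; },
    isAlive(): boolean { return hp > 0; },
  };
}

const goblin = loadGoblinModule();
goblin.takeDamage(4);
console.log(goblin.isAlive()); // true
```

Because `hp` is captured in a closure, there is no way for the caller to poke at the module's private data, which is the property being argued for below.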

@sighoya
Copy link

sighoya commented Apr 22, 2018

Refresh please

@sighoya
Copy link

sighoya commented Apr 22, 2018

Or you can define an interface for Enemy, and then load unknown enemies at runtime as binary modules

What's the difference, and in which part is this better?

Edit: What interests me more is what an interface is: a typeclass, or something else?

@sighoya
Copy link

sighoya commented Apr 22, 2018

I guess you instantiate your required interface with different modules.

Then your interface type is a (partially) dynamic type. And how do you access fields that were not defined in the interface without runtime reflection?

@keean
Copy link
Owner

keean commented Apr 22, 2018

Then your interface type is a (partially) dynamic type. And how do you access fields that were not defined in the interface without runtime reflection?

You access it through the defined API, which is the same for all implementations of the interface. What I meant was that the 'module' can have its own private data, which could be any type.

@sighoya
Copy link

sighoya commented Apr 22, 2018

You access it through the defined API, which is the same for all implementations of the interface. What I meant was that the 'module' can have its own private data, which could be any type.

Sure, all implementations implement the methods of the interface, and all those methods can be accessed, but what happens when I try to access data which is not listed in the interface?

@keean
Copy link
Owner

keean commented Apr 22, 2018

but what happens when I try to access data which is not listed in the interface?

How do you know there is data there? How do you know how to interpret the data? Really, what can you do with data you don't understand? You may find out it's an 'int', but is it a price, a distance, a frequency?

@sighoya
Copy link

sighoya commented Apr 22, 2018

How do you know there is data there? How do you know how to interpret the data?

Runtime reflection, which scans structurally over the value.

Really, what can you do with data you don't understand? You may find out it's an 'int', but is it a price, a distance, a frequency?

For the same reason that you search for keys over JSON structures which may not exist: you try anyway because you have expectations about the structure.

You have to imagine that you have partial knowledge. You know you want a structure with a given field name and a given type, or a part thereof, but you don't know the complete structure and you aren't interested in the complete structure.
Dynamism is about handling partial knowledge.
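This style of partial-knowledge probing can be sketched in TypeScript (the `findRadius` name and the shape of the probed values are hypothetical): the code checks only for the one field it cares about, just as one searches a JSON structure for a key that may or may not exist.

```typescript
// Probe a value we have only partial knowledge of: we expect a
// numeric `radius` field, but know nothing else about the shape.
function findRadius(value: unknown): number | undefined {
  if (typeof value === "object" && value !== null && "radius" in value) {
    const r = (value as { radius: unknown }).radius;
    if (typeof r === "number") return r; // field exists with the expected type
  }
  return undefined; // our expectation was not met
}

console.log(findRadius({ radius: 32.6, color: "red" })); // 32.6
console.log(findRadius({ size: 5 }));                    // undefined
```

The structural check stands in for runtime reflection; the caller never needs, or learns, the complete structure of the value.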

@keean
Copy link
Owner

keean commented Apr 22, 2018

Dynamism is about handling partial knowledge

But to what end? What can you do if you don't know what it is? This is like downcasting (towards the subtype).

You have an API for shapes, and each shape has a 'draw' method. Great, you can now draw your shapes anywhere. You start poking around in their private data and get 32.6: is that the radius of a circle, the size of a square, the height of a triangle? Just the number is meaningless, and you cannot do anything with it. Really, you should not be allowed to see anything that is not part of the published API, otherwise all sorts of things can go wrong; for example, the next release of the module could store its data in a completely different layout. You end up with very fragile code that is going to break all the time and need a lot of maintenance and rewriting.

@sighoya
Copy link

sighoya commented Apr 23, 2018

for example, the next release of the module could store its data in a completely different layout. You end up with very fragile code that is going to break all the time and need a lot of maintenance and rewriting.

Afaics, when you compile your loadable module you include the corresponding layout in it, so the project which imports the module does not need to know it.
Another example: if you use a structure instance from a library and access a specific field, your compiler must know that this field access is still valid.
How does the compiler know this?
Because the library must contain some layout information.
The same goes for polymorphic structures which can't be completely monomorphized and removed from the binary code, so that they can be monomorphized later in the linked application.

@shelby3
Copy link
Author

shelby3 commented Sep 9, 2018
