Fast track transpiling to TypeScript? #13
I had actually planned to start with no optimisation or type checking :-) Simply parse the source, discard the typing information and output the JavaScript. You can then already start experimenting with the syntax and writing simple programs, and you will simply get runtime errors (or bugs) in the JavaScript. Type-classes are not that simple however, because they don't obviously associate with any given argument to the function. Consider a
(aside: we need to decide on where the type-class requirements list is going to go in function definitions). Cast does not provide methods on A or B but provides independent functions (all type classes do). What we actually want to do is turn the type-class into a Record data type, turn the implementation into a value of the type of that Record, and then add an extra parameter to the function. The JavaScript would be something like:
Note there is no representation of the type-class itself in the JavaScript; it just has the implementations. In the future we can monomorphise and inline the typeclass function and remove the extra argument where appropriate, but that can be viewed as an optimisation.
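The actual JavaScript output is elided above, but the dictionary-passing transformation being described might be sketched as follows (the `Cast` and `convertAll` names are assumptions based on the surrounding discussion, not the authors' actual code):

```typescript
// Hypothetical sketch: the typeclass becomes a record type, an
// implementation becomes a value of that record type, and each function
// requiring the typeclass gains an extra dictionary parameter.
interface Cast<A, B> {
  cast: (a: A) => B
}

// An "implementation of Cast<number, string>" is just a value:
const castNumberToString: Cast<number, string> = {
  cast: a => a.toString()
}

// The typeclass itself has no runtime representation; only the
// implementations and the extra argument exist in the output.
function convertAll<A, B>(dict: Cast<A, B>, xs: A[]): B[] {
  return xs.map(x => dict.cast(x))
}
```

With this encoding, `convertAll(castNumberToString, [1, 2, 3])` yields `["1", "2", "3"]`.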
@keean wrote:
If that was feasible, I would have created ZenScript a long time ago. Your plan is not feasible. The compiler has to select the dictionary for the implementation of the data type which is passed in the source code, but there is no way for ZenScript to do that without also doing full type checking. That is why I devised the hack described in the OP. My hack transforms typeclassing into subclassing within the module, so that a subclassing type checker can check everything, and it requires no type checking in ZenScript. The only downside I see is that it requires that modules don't provide conflicting implementations (which is damn easy for us to do now at this experimental stage). So again I ask: do you want to accept my proposal that we first prioritize transpiling to TypeScript? I am probably going to do it if you won't. Afaics, there is no other fast-track way.
Obviously we can't do that in my hack, but we can do higher-kinds if our transpile target supports them:
P.S. would you please stop writing this:
Since (for the 5th time I will state that) I presume we've decided (since I've seen no objections) on the following.
Or:
We unified around
@shelby wrote:
Thinking about how to emit that in a language with classes and tuples. We also need to get the declaration of covariant or contravariant for type parameters correct. Also, where is mutability declared above?
Makes one realize the unified
Nil should have the same type as Cons. The reason is we do not know how long the list is at runtime, so how do we type check:
Please, no all caps; it's like shouting in my text editor :-( ML prefixes type variables with a character (happens to be `'`). Also I note you want to get rid of the type parameter list; you do realise sometimes you need to explicitly give the parameters if they cannot be inferred, like:
This also makes me realise another ambiguity in our syntax: it is hard to tell the difference between assigning the result of a function and assigning the function itself as a value. In the above you might have to parse to the end of the line to see if there is a '=>' at the end. This requires unlimited backtracking, which you said you wanted to avoid.
@keean's prior comment has a response. Please, where reasonable to do so, let's try to keep the syntax discussions in their Issue thread. I started the tangent in this case. 😯
@keean wrote:
Disagree. And nearly certain you are incorrect. Assuming:
The But the guard doesn't change the type of I don't think you should attempt the type checker as our first step. We should gain experience with TypeScript's type checker first, before attempting our own. After that, everything will be clearer to both of us, so we don't commit major blunders on the type checker and lose precious time.
So what you are saying is that
@keean wrote:
Yes if by
Disagree.
Different type from what? And what is the problem you see? I don't see any problem. The
Where
In the case of the above datatype In the above In other words we assign
What the heck is
Do you prefer
For me and everybody I know who uses a mainstream programming language, We are targeting JavaScript. The
@keean wrote:
This is hopeless. You can stop thinking in terms of Haskell's lack of subtyping and subsumption. Mythical Man Month effect is taking over.
Ada uses
@shelby3 wrote:
I think we should avoid subtyping and subsumption, except specifically for union types where we have set-based subsumption like Edit: Remember we agree on how
@keean I need to go jogging and buy prepaid load for my mobile phone. I suggest we focus on the syntax for now and that you experiment with output to TypeScript (or Ceylon) to get a feel for how these type systems work with subtyping and as we do this transform of our simplest typeclasses. It is too much verbiage to discuss the typing now. We can't target Scala as it doesn't have the unions.
There's a reason I don't like TypeScript or Ceylon :-) Anyway, above I was thinking about bidirectional type inference, which was a mistake, as I actually want things to be compositional, which would align better with what you want. My point about not losing track of type variables, and needing more than one bottom in something like
Finished jog. Very fast. Headed to buy load. Please let us put typing strategies aside for now and focus on the syntax and transpiling to TypeScript. This is what I am going to do. I need a working language yesterday. I don't have time to wait for the perfect language. Sorry. Let's focus on what we can do quickly. The perfect language will come incrementally.
Then we are stuck with TypeScript's unsound type system?
@keean wrote:
I documented upthread why I thought we could avoid the aspects that are unsound. If not, we could try N4JS, which the authors claim is sound. The transformation to subclassing will lose some of the degrees-of-freedom that we get with, for example, multi-type-parameter typeclasses, but we can start with single-parameter, so at least I can start experimenting with the syntax and typeclass concept. You may not agree with this direction because you may think it is not correct to model with a subtyping type checker. I think we will end up with a subtyping type checker eventually anyway, and that you will eventually realize this. If I am wrong, then I am wrong. You think you know, but I doubt either of us can know for sure yet. But what are your options? You can't transpile to PureScript because it doesn't allow impurity. So your only option is to build the entire type checker. But that will take months to get to a working, stable, usable state. And there is a non-zero risk that you would realize along the way that your design was incorrect and full of corner-case issues. Then you'd start over again. So what alternative do you propose? We both agree that we need an imperative, typeclass language. It seems we mostly agree on the syntax. Our major conflict, if any, is on implementation and which strategy to prioritize.
http://www.brandonbloom.name/blog/2014/01/08/unsound-and-incomplete/
Thinking about if we really need higher-kinds, because TypeScript apparently doesn't have the feature and Ceylon's feature may or may not be adequate. Both of them offer first-class unions, but TypeScript's guards may require less boilerplate (Edit: Ceylon has flow-typing aka typeof guards).

```typescript
interface Monoid<A, SELF extends Monoid<A, SELF>> {
  identity(): SELF
  append(x: A): SELF
}

abstract class List<A> implements Monoid<A, List<A>> {
  identity(): List<A> { return Nil }
  append(x: A): List<A> { return new Cons(x, this) }
}

class _Nil extends List<never> {}
const Nil = new _Nil

class Cons<A> extends List<A> {
  head: A
  tail: List<A>
  constructor(head: A, tail: List<A>) {
    super()
    this.head = head
    this.tail = tail
  }
}
```

I thought the above code is higher-kinded, but I guess not, because nowhere did I write one. Whereas, the following attempt didn't compile, because it doesn't make that assumption.

```typescript
interface Monoid<A> {
  identity(): this
  append(x: A): this
}

abstract class List<A> implements Monoid<A> {
  identity(): this { return <this>Nil }
  append(x: A): this { return <this>(new Cons(x, this)) } // Error: Type 'Cons<A>' cannot be converted to type 'this'.
}

class _Nil extends List<never> {}
const Nil = new _Nil

class Cons<A> extends List<A> {
  head: A
  tail: List<A>
  constructor(head: A, tail: List<A>) {
    super()
    this.head = head
    this.tail = tail
  }
}
```

And the following attempt also didn't compile, because it didn't make that assumption.

```typescript
interface Monoid<A> {
  identity(): this
  append(x: A): this
}

abstract class List<A> implements Monoid<A> {
  // Error: Class 'List<A>' incorrectly implements interface 'Monoid<A>'.
  //   Types of property 'identity' are incompatible.
  //     Type '() => List<A>' is not assignable to type '() => this'.
  //       Type 'List<A>' is not assignable to type 'this'.
  identity(): List<A> { return Nil }
  append(x: A): List<A> { return new Cons(x, this) }
}

class _Nil extends List<never> {}
const Nil = new _Nil

class Cons<A> extends List<A> {
  head: A
  tail: List<A>
  constructor(head: A, tail: List<A>) {
    super()
    this.head = head
    this.tail = tail
  }
}
```
Well, both of those are having to bend a monoid to fit their objects-only systems (see my post on objects or no objects). A monoid is simply this:
Note in a non-object system it has no self-reference and no need for higher kinds; they are both free functions, like string literals and string concatenation. So using that example:
And we can use it like this:
Regarding why your implementation did not work, try:
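keean's non-object formulation of a monoid is elided above, but it might be sketched in TypeScript as a record of free functions, with the dictionary passed explicitly (`stringMonoid` and `mconcat` are assumed names, not code from the thread):

```typescript
// A monoid as a plain record of free functions: no self-reference and
// no higher kinds needed (hypothetical sketch).
interface Monoid<A> {
  identity: () => A
  append: (x: A, y: A) => A
}

// String literals and concatenation form a monoid:
const stringMonoid: Monoid<string> = {
  identity: () => "",
  append: (x, y) => x + y
}

// A generic function takes the dictionary as an ordinary argument:
function mconcat<A>(m: Monoid<A>, xs: A[]): A {
  return xs.reduce((acc, x) => m.append(acc, x), m.identity())
}
```

Here `mconcat(stringMonoid, ["a", "b", "c"])` evaluates to `"abc"`, with no class hierarchy involved.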
I tried, but it won't type check, and Monoid has incompatible semantics with a list. Readers: see also the related discussion in the Higher-kinds thread.
Indeed, I met Jeremy at a workshop on "Datatype Generic Programming" at Oxford University back around 2004, when the HList paper was published. The HList paper was all about type-indexed types, and I could see how this enabled some really elegant programming; however, trying to implement this in Haskell was fighting an uphill battle, because it was not the focus of the language nor its design committee.
@keean I want to work with you. But I am afraid we are going to repeat the same problem in our language to each other. We always do. It is difficult for you or me to change our personality or mannerisms. Maybe I could try a new tactic? Whenever you write something that offends me, I could quote it and write "could you please rephrase that for me?". But if you reply with some obstinate insistence that "wrong is wrong", then we would not have made any progress on resolving it. If I instead try to just ignore it, it does not work, because it accumulates. I really think a person's personality is take it or leave it. You are very knowledgeable. It is a shame if our personalities clash. I do not have enough precision in my work on this to possibly satisfy your precision. I gain precision over time. When you talk with those other experts, they have specialized in this field with PhDs, and they speak your language with high precision. We are perhaps mismatched.
If I do complete a simplified language, it will not hit every aspect of what you were thinking of doing, at least not at the outset. But over time it is likely to trend towards something closer to what you want. In any case, it will at least have some things you want (e.g. typeclasses), so it would be a first step towards experimentation to see what works well and what does not. As you know, my priority right now is on compatibility with transpiling to TypeScript and simplifying as much as possible. I want to start the LL(k) grammar tomorrow. If I can't make this happen quickly, then I really need to abandon it and put it on the back burner. So my available time for talking about design decisions has expired. If there are any more points, they need to be made immediately.
The nesting of I do not yet clearly see how to reconcile that with your statement, but we may just have different conceptualizations of terminology, so I am not sure if what I am thinking about is the same as what you are thinking about. Afaics, the disjointedness seems unrelated to flattening. @shelby3 wrote:
I do not see how to implement typeclasses for intersection types without having to implement every permutation manually (which is ridiculous and unacceptable). Thus I think I am pretty much decided to not allow intersection types (the programmer can use a nominal product type instead). For unions we can dispatch on the tagged option dynamically, i.e. all possibilities are covered by implementation of typeclasses for each possible option. Optionally the programmer can choose to implement a set of options. For Sum types with options as values, those values cannot be implemented for typeclasses individually, because they are not types. So instead we implement the entire Sum type for the typeclass; then we must always use type-case logic, versus the aforementioned optional design pattern for tags-as-types. Thus the options-as-types (tags-as-types aka type-indexed-types) seems to interoperate better with typeclasses. The type-indexed-types seem to have more flexibility than the tags-as-values (aka Sum types) form of co-products. We lose the origin on the Lambda Cube, but we gain the ability to not double-tag JavaScript types, and the other advantages enumerated. I am still wondering if there are any tradeoffs other than the loss of global and local inferencing already mentioned.
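The dynamic dispatch over a union's options described above might be sketched like this (the `Show` typeclass and all names are assumptions for illustration):

```typescript
// Hypothetical sketch: a typeclass implemented separately for each
// option of a union, with the dictionary selected at runtime.
interface Show<A> {
  show: (a: A) => string
}

const showNumber: Show<number> = { show: a => a.toString() }
const showString: Show<string> = { show: a => JSON.stringify(a) }

// For number | string, every option has an implementation, so all
// possibilities are covered and dispatch can use the runtime tag:
function show(a: number | string): string {
  return typeof a === "number" ? showNumber.show(a) : showString.show(a)
}
```

So `show(42)` gives `"42"`, while `show("hi")` gives `"\"hi\""`; each option's implementation is reused wherever that option appears in another union.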
Have we come to agreement on nomenclature? It seems you are looking at 'tagged unions' as opposed to 'disjoint tagged unions' (which are also known as sum types). This suggests 'non-disjoint sums' as a name that is consistent with the body of computer science publishing. Regarding precision and wording, I think it is important to focus on what is wrong. Your whole idea of 'non-disjoint sums' is not wrong; it was the naming. I suggest you focus on what specific part of what you said is wrong. Likewise, I will try and be more precise about what is wrong. However, I only have a small brain, and if I am half focused on not saying "you're wrong", I won't have enough left to solve the problems, but I will do my best to avoid it. I would appreciate it if you could cut down on the ad-hominem attacks in return, which don't help further the discussion, and probably undermine your credibility with other readers.
We don't need to use type-classes, because the 'case' statement and normal pattern matching work. In the cases where we do, Haskell cannot do this, but datatype-generic programming allows type-classes to be declared using the structure of the types. Using type-indexed coproducts might be a better solution if it was built into the language; this is what we were investigating in the HList paper. What I don't yet know is if this mechanism is sufficient to write elegant boilerplate-free programs on its own, or if the traditional sum types are needed as well. Really the only way to determine this is to create an experimental language with just the experimental feature (as design is more about what you leave out) and see what it is like to program with, without the other features.
An intersection type represents a function that is a combination of other functions, like
we can easily infer the type: What if we passed the function as a dictionary (record):
Now we can make dict an implicit argument:
So a type-class is kind of an implicit module, and a module is a nominally typed intersection type. What does it mean to have a type-class of type-classes? It seems like nonsense, but in that case, if our type system allows type-classes of types, and permits intersection types, does that mean it admits nonsense? One argument to avoid intersection types is that if a 'typeclass' is something other than a 'type', then you cannot create nonsense, because typeclasses only range over types, and not over themselves. This seems vaguely similar to set theory, and the Russell paradox. If we use intersection types, we are collapsing everything into one level, like set theory, and that probably means there are problems (in fact we know it is incomplete and some unifications never terminate). Type classes have a stratification that prevents this, and also limits us to one meta-level (that is, programs that create programs, not an infinite stack of programs creating programs).
I think to work out what we want regarding intersection type, you have to consider the bottom level, what is the data and what is the memory layout. Simple values like 'Int' and 'Float' are easy. Structs make sense. Union types do not make sense, because we cannot interpret a word from memory if we do not know whether it is a Float or an Int. We need some 'program' to interpret the data based on some 'tag' that is stored elsewhere in memory. In other words 'unions' are not a primitive type to the CPU. Intersection types do not make sense either. You do not have machine code functions that can cope with Ints or Floats. You can have generic machine code, for example 'memcpy' can copy an object of any type as long as we know its length. In other words pointers are better modelled as universally quantified types than intersection types. Structs (objects with properties) each of which can be typed make sense. Type-classes make sense, where we know the type of something we can select the correct method to use. All this assumes an 'unboxed' world like that of the CPU. Another way to think about this is that the computer memory is filled with bits like
To interpret these bits we need to know how they are encoded. Static types represent the implicit knowledge of how they are encoded based on their location (the 'static' refers to lack of motion, hence fixed location). To decode dynamic types, we need to know how to interpret the encoding of the type, so we run into the question: what is the type of a type? To avoid the Russell paradox we cannot answer "the type of a type is type", so we need something else. My answer to this is that 'tags' encode runtime type information, and the 'sum' datatype is the type of the tag + its data.
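The "sum datatype is the tag plus its data" idea can be sketched directly as a tagged union, where the runtime tag tells us how to decode the accompanying bits (the tag names here are assumptions):

```typescript
// Hypothetical sketch: a sum type encoded as a runtime tag plus its data.
type Tagged =
  | { tag: "int"; value: number }
  | { tag: "text"; value: string }

// The tag is the only thing that tells us how to interpret `value`:
function describe(v: Tagged): string {
  switch (v.tag) {
    case "int": return `an integer: ${v.value}`
    case "text": return `some text: ${v.value}`
  }
}
```

Without the tag, a bare `value` in memory would be uninterpretable, which is the point made above about unions not being primitive to the CPU.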
To make union types make sense we would need a standard encoding for all types. This means we must have at least partial structural typing. The primitive types we need to start with would be:
Maybe some bigger ones for future-proofing, and then some vector types for SIMD:
That would do for basic types. Unlike static typing, where we can represent these in the compiler, we need these to actually be written to memory for use at runtime. We then need an encoding for objects (which are tagged product types) and unions. For your solution to work you will need to work all these encodings out and include them in the language runtime. My solution above only allows static types based on location, and the implementation of unions would be layered on top of this, implemented as a DSL library in the new language.
@keean wrote:
Well, that suggests my original understanding that in math a ‘sum’ is just a co-product (as I cited, Filinski in 1989 wrote about sums versus products in the field of computer science). That is why I just assumed that the popular term Sum type applied as a general name for any co-product type. And I was thinking that Haskell’s algebraic types are disjoint tagged unions as one possible variant of a sum type. The mainstream sources do not seem to clarify this unambiguously. You cited some literature which was also somewhat vague on the historic delineation of the use of the terminology. That is why I have stated to you in private that I think “this is wrong” is not very productive. It is more productive to try to develop a very well thought out statement which recognizes what your colleague has correct and builds upon it, in a way that is not just some quip but rather imparts information for your colleague and readers. In that way, your colleague will not view it as unproductive ego contests. I would like to cite my recent post as an example of such a statement which does not bother with bluntly saying “you are wrong”: @shelby3 wrote:
Disagree. As I have stated in private, I think it is important to focus on stating what is correct. It is more than just a subtle difference. I am not saying to ignore correcting what is wrong. I am saying that it is very lazy to quip “that is wrong”. To well articulate what is correct, and also recognize what your colleague has stated correctly, requires production and effort. I prefer to see production rather than cutting each other down. I get depressed when I see negative activity, and I end up lashing out in return, because I sort of adjust to the way the people around me are. If people want to be negative, then after giving them every chance, if they insist, I let them have a boatload of negativity. But I have decided that in the future I will just put such people on Ignore. I realize that it is a flaw in my personality to argue with people who are negative. I am a Cancer zodiac sign. This means the mood and ambiance of the workplace is very important for me to be productive. I like uplifting, positive people (but not in a lack-of-substance way, i.e. I am not an air zodiac sign). I am an earth sign, meaning I need warmth of relations. I do not co-exist well with cold people. People do have different personalities, and we just have to accept that not all personalities can mesh.
I also have committed negative communication at times. Most recently I have been trying to apply the effort to be more careful about what I write in public. I will quote what I wrote to you in private: @shelby3 wrote in private:
@keean wrote:
Yeah, I agree not to let you or anyone else bait me into calling out and then lashing out when my pleas fail. As I explain below, I will learn to just walk away from situations that do not mesh. I do not know if I can ignore it if you continue bluntly pointing out wrongs without well-developed statements of what is correct, with fairness, and thus we would slide back into the same flame wars. As I wrote above, as quoted from private, I decided recently that I just have to learn to Ignore such people entirely, meaning ending all communication with them. It does not mean that they are incapable of having a conversation with others. Different people have different levels of tolerance for communication styles. Having said that, if I felt someone had a treasure map (i.e. offered me extremely valuable information), I would probably bite my tongue and be exceedingly nice while they said anything they want to say about me or my ideas. tl;dr: I do not think we solved the problem. So I will drastically reduce my interaction and try to be very judicious about the topics I discuss going forward with you in public (private communication is okay; you can say whatever you want there, lol). But I want to make it clear that in no way is this a statement of judgement about you. I am not accusing you of being wrong for your style of communication. Diversity of personalities makes the world a more fertile soil.
If I say an idea you post is wrong, there is no other useful response apart from trying to convince me you are right. I have not insulted you; I am merely stating an opinion about the correctness of an idea. You have no right to object to my opinion about your idea, or to be 'offended' by it. It is, after all, just my opinion. In fact, I should be flattered you are offended by it, because that means you hold me in such high esteem that my opinion seems like a fact to you. If you then escalate into ad-hominem attacks, it looks like you have no convincing arguments that the idea is in fact correct, and are trying to derail the discussion.
@keean your reply seems to indicate to me (IMO) why the incongruence in our personalities and communication styles will never be solved. I do not think you understand (or you disagree with) my point about how to be positive versus negative. But that is okay. We can move on. I will be very, very judicious about the topics I participate in from here forward. Thank you for taking the time to respond on all issues, including the meta ones.
Good, so if I proceed with that feature, then it will help you also by being the test bed. That is encouraging and inspiring, because I know if you were inspired to write libraries, you have a lot of knowledge about Alexander Stepanov’s Elements of Programming models.
I visualize some potential tradeoffs of your suggested design choice:
There might be some advantages as well, i.e. the analysis above may not be complete.
For targeting JavaScript, the reified type tags are always there anyway with Or even as I had proposed before upthread for JavaScript, since typeclasses should only be implemented one way for each type, these typeclass methods could be placed on the
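The proposal of placing typeclass methods directly on the class might be sketched as follows; this relies on each typeclass having at most one implementation per type (the `Point` class and `show` method are assumed names, not code from the thread):

```typescript
// Hypothetical sketch: attaching a typeclass method to a constructor's
// prototype so every instance dispatches it dynamically.
class Point {
  constructor(public x: number, public y: number) {}
}

// "implement Show for Point" becomes a prototype assignment at load time:
(Point.prototype as any).show = function (this: Point): string {
  return `Point(${this.x}, ${this.y})`
}

// Every Point now finds the method via the prototype chain:
const p = new Point(1, 2) as Point & { show(): string }
```

With that in place, `p.show()` returns `"Point(1, 2)"` without any dictionary being passed around.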
If it does add excessive boilerplate, I will have failed. The compiler should do the work, although there might be a keyword involved to control when this happens. The way I see this happening is that accessing the contents of a container should be a type-class based operation, that allows user defined containers to override the normal behaviour, so a user defined container is just as short to use as a built in type like an array.
Again I see type-classes facilitating this. Dereferencing would be overridable, so you can write a container where the type is encoded in the pointer if that is what you really want.
If you put the encoding inside the compiler, it will require patching the compiler itself to support a new target. This seems worse than just having to edit a library. It does make me wonder if WebAssembly (asm.js) would be a better target, because it would make it more like native targets and less dependent on the strangeness of JavaScript. Maybe you are right and it would be better to include native boxing in the language. I want to keep the core language as small as possible, hence why I was thinking this should be in a library, but maybe it's important enough to be in the core.
@keean wrote:
But as I wrote before, that forces you to expose low-level details in the language, e.g. pointers. I do not want any pointers in my language. K.I.S.S. principle, and also encapsulation of details which for example are sandbox security holes, among many other reasons.
The compiler could be modular, so that post-processing of the AST to different targets can be swapped. Essentially, the compiler itself can have libraries. If you put the libraries above the language instead of below, it seems to me to be in the incorrect abstraction layer.
I presume you know that WASM is not the same as ASM.js. Afaik, WASM does not yet run everywhere JavaScript does, and it is not a mature technology. ASM.js is low-level and afaik a nightmare to debug in the browser; although perhaps source maps could help with that somewhat, it still would not be as intuitive as debugging high-level JavaScript. Also we need JavaScript’s GC. Every output target has some strangeness.
I agree of course that separation-of-concerns and modularity are excellent design concepts. Yet exposing the low-level details necessary to optimize boxing in a library above the language seems to be conflating abstraction layers and thus not achieving optimal abstraction. I could also, for example, envision issues about compiler-selected optimized binary unboxed data structures versus boxed members of data structures. I am planning to have a language feature to map between them, because otherwise we need to expose low-level JavaScript details. Just because we can put/hide details in a library above the language does not prevent the low-level details exposed above the language from allowing complexity to seep into userland code. The users will take advantage of any primitives exposed in the language, as flies are to honey. So attaining simplicity is not just about what is left out of the compiler, but also what is left out of the language. Libraries (aka modularity) can be above or below the language. It is all about the abstraction layer.
The other point of view is that you don't want to lock the language into some type encoding that will prevent future extensibility. Using bits in pointers to encode things limits porting to platforms that have different alignment requirements, and is probably a bad idea; and it is not enough space to encode all the types, so there would be some types that just don't benefit. Remember structural types are of unlimited length (as they have to encode the structure), whereas nominal types can be encoded as an integer. So my approach would probably not use fancy pointer encodings, and from my experience with optimization, I don't think it will cost much performance, as CPUs are optimised for integer word performance. It's going to mess up pre-fetch and caching too. My approach would be to use static typing wherever possible, so that the use of dynamic types is restricted to where it is really needed.
Who proposed that? I certainly did not. I proposed there is no specification of these low-level details “above the language” and thus the language compiler is free to optimize for each target. My point is do not expose complex, low-level details above the language, so the compiler can optimize for each output target and so the programmers are not given access to complexity that can make the code like Scala or C++ with its 1500 page manual and abundance of corner cases.
That criticism thus does not apply.
The compiler will be free to optimize whatever is tested to be most optimal. Btw, I am not 100% sure that unused bits of 64-bit pointers affect pre-fetch and caching.
I do not know what caused you to mention this (?), as it seems so broad and not specifically related to the discussion we were having. A union or sum-like type is not statically dispatched, although its bounds are statically typed. Of course we statically type what we can that makes sense in terms of the priorities, but when you need a union then you need it. I hope we do not go on and on, just for the sake of seeing who can be the last one to reply. What is your cogent point overall?
But you would lose binary compatibility, and the ability to transfer data between machines (as they may have different versions of the compiler, or have different CPUs, and therefore type representations differ).
Not necessarily, it is common in languages to use dynamic types everywhere, even when not needed; for example Java does this (every method is automatically virtual). JavaScript also does this, and then tries to optimise it all away in the JIT compiler. The problem is it is all too easy to prevent the JIT compiler from being able to optimise by using a dynamic feature (like changing the type of a property) when you do not need to; you could for example use a separate property. The C++ "virtual" keyword may seem like boilerplate, but it serves a useful purpose: it makes it clear when you are paying the cost. So a non-virtual method always dispatches at the fastest speed, and cannot be slowed down or break the optimiser, but it also cannot be dynamically dispatched. By making it virtual there is still the possibility it could be optimised, but you are allowing it to use the slower mechanism because you need dynamic dispatch. This follows the principle of only paying for what you use, and making the cost visible in the source code. I think this is an important principle that languages I like follow.
Such communication must be serialized to a standard protocol/format.
What specifically does this have to do with the discussion we were having? I agree that one of the design priorities can be to minimize accidental dynamic typing. Do you see a specific instance that applies to our discussion? Afaics, as I already wrote in my prior comment, we were not discussing whether union types can be statically dispatched (because they cannot, in any design other than dependent typing), but rather the other aspects of the design of a “sum-like” type.
@keean wrote:
That is similar to the idea I had proposed originally for using the prototype chain to simulate typeclasses. You’re showing it is possible to get TypeScript’s typing system to somewhat integrate with my idea. Thanks!
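The prototype-chain simulation of a typeclass mentioned above can be sketched in TypeScript along these lines (`Show`, `Point`, and `describe` are illustrative names, not part of any agreed design). The static half uses declaration merging so TypeScript believes the class carries the typeclass methods; the runtime half injects the implementation onto the prototype:

```typescript
// The typeclass, as a TypeScript interface:
interface Show { show(): string; }

// A plain data type:
class Point {
  constructor(public x: number, public y: number) {}
}

// Declaration merging: tell TypeScript that Point now satisfies Show.
interface Point extends Show {}

// Runtime half: inject the implementation via the prototype chain
// (a form of monkey patching).
Point.prototype.show = function (this: Point) {
  return `(${this.x}, ${this.y})`;
};

// Any function with a Show bound can now accept a Point:
function describe(value: Show): string {
  return value.show();
}

console.log(describe(new Point(1, 2))); // "(1, 2)"
```

The merge gives module-local static checking while the injection makes it actually work at runtime, which is the integration being pointed out.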
TypeScript may be close to getting HKT typing:
Revisiting this I realized that instead of Also application of non-pure functions will not allow discarding the The advantage of discarding the Consider:
Instead of:
Even if we have block indenting:
The symbol soup reduction still applies when applying functions:
Building off the reasoning and justification in my self-rejected issue Fast track transpiling to PureScript?, I have a new idea of how we might be able to attain some of the main features proposed for ZenScript, without building a type checker, by transpiling to TypeScript.
If we can, for the time being while using this hack, presume that no modules will employ differing implementations of a specific data type for any specific typeclass, i.e. that all implementations for each data type are the same globally for each typeclass implemented (which we can check at run-time, `throw`ing an exception otherwise), then the module can at load/import ensure that all implementations it employs are set on the `prototype` chain of all the respective classes' construction functions. In other words, my original point was that JavaScript has global interface injection (a form of monkey patching) via the `prototype` chain of the construction function, and @svieira pointed out the potential for global naming (implementation) conflicts.

So the rest of the hack I have in mind is that in the emitted TypeScript we declare typeclasses as `interface`s, and in each module we declare the implemented data types as `class`es with all the implemented `interface`s in the hierarchy. These classes then have the proper type wherever they are stated nominally in the module. We compile the modules separately in TypeScript, thus each module can have differing declarations of the same `class` (because there is no type-checking linker), so that every module will type check independently and the global prototype chain is assured to contain the `interface`s that the TypeScript type system checks.

So each function argument that has a typeclass bound in our syntax will have the corresponding `interface` type in the emitted TypeScript code. Ditto typeclass objects will simply be an `interface` type.

This appears to be a clever way of hacking through the type system to get the type checking we want, along with the ability to have modules add implementations to existing data types with our typeclass syntax. And this hack requires no type checking in ZenScript. We need only a simple transformation for emitting from the AST.
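A hypothetical sketch of what the emitted TypeScript for one module might look like under this hack (the `Show` typeclass, `Pair` class, and `inject` helper are all illustrative names, not an agreed design):

```typescript
// The typeclass becomes a TypeScript interface:
interface Show { show(): string; }

// The module declares the data type as a class, with the implemented
// interfaces merged into its hierarchy via declaration merging:
class Pair {
  constructor(public first: number, public second: number) {}
}
interface Pair extends Show {}

// At module load, implementations are injected on the prototype chain,
// throwing if another module already installed a conflicting one:
function inject(ctor: Function, name: string, impl: Function): void {
  const proto = ctor.prototype as Record<string, unknown>;
  if (name in proto && proto[name] !== impl) {
    throw new Error(`conflicting implementation of '${name}' on ${ctor.name}`);
  }
  proto[name] = impl;
}

inject(Pair, "show", function (this: Pair) {
  return `Pair(${this.first}, ${this.second})`;
});

// A function argument with a typeclass bound in ZenScript syntax gets
// the corresponding interface type in the emitted code:
function display(value: Show): string {
  return value.show();
}

console.log(display(new Pair(3, 4))); // "Pair(3, 4)"
```

Each module would emit the same guard, so conflicting implementations fail fast at load/import time rather than causing silent divergence between modules.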
As for the first-class inferred structural unions, TypeScript already has them, so there is no type checking we need to do.
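For example, TypeScript checks unions like these out of the box (the `parse` function is just an illustration):

```typescript
// A structural union return type: either the parsed number or an
// error message string.
function parse(input: string): number | string {
  const n = Number(input);
  return Number.isNaN(n) ? `not a number: ${input}` : n;
}

// The union must be narrowed before use, and TypeScript enforces it:
const result = parse("42");
if (typeof result === "number") {
  console.log(result + 1); // 43
}
```

So the transpiler can emit union type annotations directly and let TypeScript do the narrowing checks.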
It can't support my complex solution to the Expression Problem, but that is okay for a starting-point hack.
I think this is a way we can have a working language in a matter of weeks, if we can agree? That will give us valuable experimentation feedback while we work on our own type checker and fully functional compiler.
TypeScript's bivariance unsoundness should be avoided, since ZenScript semantics do not allow implicit subsumption of typeclass bounds, but this won't be checked, so it is possible that bivariance unsoundness could creep in if we allow typeclasses to extend other typeclasses. Those bivariance cases don't seem to impact my hack negatively though: it is not a problem, because of course an `interface` argument type can never be assigned to any subclass. We should be able to design around this.
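For reference, the method-parameter bivariance in question can be shown with a small illustrative example (names hypothetical) that TypeScript accepts even under `--strictFunctionTypes`, because method parameters are compared bivariantly:

```typescript
interface Animal { name: string; }
interface Dog extends Animal { bark(): void; }

// The parameter is declared with method syntax, so it is checked
// bivariantly rather than contravariantly:
interface AnimalHandler { handle(a: Animal): void; }

const dogHandler = { handle(d: Dog) { d.bark(); } };

// Accepted by the type checker, yet unsound:
const unsound: AnimalHandler = dogHandler;

// Crashes at runtime: a plain Animal has no bark method.
try {
  unsound.handle({ name: "Felix" });
} catch {
  console.log("runtime TypeError, as the types should have prevented");
}
```

This is the hole to keep out of the emitted code if typeclass `interface`s ever extend one another.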
Also, a compiler flag can turn off TypeScript's unsound treatment of the `any` type.

TypeScript is structural, but there is a hack to force it to emulate nominal typing. We could instead consider transpiling to N4JS, which has nominal typing and soundness, but it is not as mature in other areas.