
Compile-time Extension Interfaces #87

Open. Wants to merge 85 commits into base: master.
Conversation

@raulraja commented Oct 2, 2017

Proposal

The goal of this proposal is to enable compile-time dependency resolution through extension syntax. Overall, we want "contract" interfaces to be declarable as program constraints in function or class constructor arguments, and the compiler to automatically resolve and inject those instances for which evidence is provided in one of a given set of scopes. The resolution scopes searched for these evidences are predefined and ordered; more details are inside the proposal.

When there is no evidence for a required interface (program constraint), the compiler would fail and provide proper error messages, both as IntelliJ IDEA inspections and as compile-time errors.

This would bring first-class named extension families to Kotlin. Extension families allow us to guarantee that a given data type (class, interface, etc.) satisfies behaviors (a group of functions or properties) that are decoupled from the type's inheritance hierarchy.

Unlike traditional subtype-style composition, where users are forced to extend and implement classes and interfaces, extension families favor horizontal composition based on compile-time resolution between types and their extensions.
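To see the difference concretely, here is a minimal sketch, in today's Kotlin, of the explicit-instance pattern that this proposal would automate; all names (Repeatable, StringRepeat, twice) are illustrative and not part of the proposal:

```kotlin
// Behavior declared as an interface, decoupled from the type's hierarchy,
// with an instance passed explicitly. The proposal would let the compiler
// resolve `repeatable` automatically from the enclosing scopes.
interface Repeatable<T> {
    fun T.repeated(times: Int): T
}

object StringRepeat : Repeatable<String> {
    override fun String.repeated(times: Int): String = this.repeat(times)
}

// Today the instance must be threaded through by hand.
fun <T> twice(value: T, repeatable: Repeatable<T>): T =
    repeatable.run { value.repeated(2) }

fun main() {
    println(twice("ab", StringRepeat)) // prints abab
}
```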

Current code status

Thanks to @truizlop and @JorgeCastilloPrz, we already have a working POC that covers a big part of KEEP-87. There are still some additional behaviors we'd need to implement that have been clarified in the KEEP, though, and we'd love to get JetBrains' help iterating on those. Instructions on how to test it are provided in the "How to try?" section of the KEEP, where we have also provided a code sample.

Rewording

We have revamped the initial proposal, since type classes are not really our aim here, but rather compile-time dependency resolution for Kotlin. The nature of the proposal has not changed much; it has just undergone a major refactor so that it is more in line with what a generic language feature like this should be, and so that it fits the compiler-implementation feedback we have received from some JetBrains developers over the last months.

We believe the compiler could support compile-time dependency resolution out of the box, as other modern languages already do. Some infrastructure patterns fit better as built-in language features, so that users get a canonical solution already provided and can focus their work on the actual domain requirements and business logic of their programs. We've seen similar things in other languages that provide built-in support for concurrency and other concerns.

How to contribute

Want to help bring compile-time extension resolution to Kotlin? A fork is being provisioned where a reference implementation based on this proposal will take place: https://github.com/arrow-kt/kotlin

@elizarov (Member) commented Oct 2, 2017

The proposed syntax for companion constraints looks a bit confusing. Take a look at this example:

typeclass Monoid {
    fun combine(b: Self): Self 
    fun Self.Companion.empty(): Self
}

It is not perfectly clear from this declaration what the receiver types of the combine and empty functions are. They are, in fact, Self for combine and Self.Companion for empty, but the syntax is not regular enough to make that clear.

I was thinking a bit on how to fit companion-extension syntax in a nicer way and came up with the following alternative syntactic approach.

Let's eschew the idea that the extension declaration is a way to group multiple extension functions together, and instead leave the regular extension function syntax intact inside both typeclass and extension scopes, like this:

typeclass Monoid {
    fun Self.combine(b: Self): Self
    fun Companion.empty(): Self
}

Here, in the context of typeclass declaration (and only there) we treat Self and Companion as context-sensitive keywords that refer, respectively, to the actual class that this typeclass applies to and its companion (regardless of how its companion is in fact named). Now, when declaring the actual extension we'll have:

extension Int : Monoid {
    fun Int.combine(b: Int): Int = this + b
    fun Int.Companion.empty(): Int = 0
}

In this approach the extension block serves just as a wrapper and an additional safety net. It does not bring any scope with it. All the functions declared inside extension are treated normally (as if they were at the top level), but the compiler will complain if they do not match the signatures of the corresponding typeclass.

Unfortunately, this proposal also opens the whole can of worms of the companion-vs-static debate. One of the issues is the fact that not all classes in Kotlin (and even in the Kotlin stdlib) have companions, and there is currently no way to declare a companion outside of the class. Moreover, this is hardly possible to fix or work around, due to the dynamic nature of the JVM.

One solution to this conundrum is to stop piggybacking onto companion objects at all. Let us, instead, declare functions on the typeclass itself, like this:

typeclass Monoid {
    fun Self.combine(b: Self): Self
    fun empty(): Self
}

In this approach typeclass does declare a scope in a similar way to interface declaration. We interpret combine as a function with two receivers (an instance of the typeclass and Self), while empty has only one receiver (an instance of typeclass). In fact, it meshes perfectly with how the typeclasses are going to be represented on JVM anyway. Now the extension declaration is going to look like this:

extension Int : Monoid {
    fun Int.combine(b: Int): Int = this + b
    fun empty(): Int = 0
}

The interesting thought direction here is that this kind of extension mechanism can work as a replacement for companion objects altogether. We might just deprecate companions, because any declaration of the form:

class Foo {
    companion object {
       /** body **/
    }
}

can be replaced, for the same effect, with

class Foo {
    extension Foo {
       /** body **/
    }
}

or, which is not possible with companion objects, if the access to private members of Foo is not needed, it can be moved outside:

class Foo {
}

extension Foo {
    /** body **/
}

It is cleaner and more modular, since you are not limited to one companion object anymore, but can define as many as you want.

However, with this alternative-to-companion approach to typeclasses, we have to work out how typeclass instances are going to be named and what the syntax is to explicitly specify this name (this is needed, at least, for JVM interop). We can also try to somehow reuse the companion modifier for extension blocks by imbuing it with additional syntax. Lots of things to think about.

@raulraja (Author) commented Oct 2, 2017

@elizarov makes perfect sense. I'll update the proposal to the new proposed syntax


Some of these examples were originally proposed by Roman Elizarov and the Kategory contributors in the thread where these features were originally discussed: https://kotlinlang.slack.com/archives/C1JMF6UDV/p1506897887000023

@mkobit commented Oct 2, 2017

Small note on seeing this link: you may want to copy/paste the snippets if you believe they provide important context, because the Kotlin Slack is on the free plan and only keeps the most recent 10k messages.

@raulraja (Author) commented Oct 2, 2017

@mkobit thanks, I copied most of the snippets from @elizarov into the proposal so I think we are good if that goes away.

@mikehearn commented Oct 2, 2017

I realise all this stuff comes from Haskell and people who love category theory, but I strongly suggest that to gain better acceptance you drop the conventional FP names for things and use more appropriate names that are less jargony and more Java-like:

Monoid -> Combinable
Functor -> Mappable

etc

@raulraja (Author) commented Oct 2, 2017

@mikehearn those are just examples and not part of the proposal. Anyone is free to name their own typeclasses whatever they want; this proposal does not address what names users will choose, nor does it propose new types for the stdlib.
Also, the examples don't come from Haskell. The names and type classes I used as examples are by now part of many libraries and languages, including libraries in Kotlin.

@dnpetrov (Contributor) commented Oct 3, 2017

How are type class implementations (extensions) resolved?
What happens when two implementations of a type class are in conflict?

Elaborating on the example with ProdConfig and TestConfig, suppose I have

package monoid

typeclass Monoid {
    fun Self.combine(b: Self): Self
    fun empty(): Self
}

and, for example,

package org.acme

extension Int : Monoid {
    fun Int.combine(b: Int): Int = this + b
    fun empty(): Int = 0
}

and

package org.fizzbuzz

extension Int : Monoid {
    fun Int.combine(b: Int): Int = this * b
    fun empty(): Int = 1
}

Now, in package myapp, I want to use one particular implementation of Monoid.
What should I import?

@raulraja (Author) commented Oct 3, 2017

@dnpetrov an import matching the contained instances would bring them into scope:

import org.fizzbuzz.*

In the case where you have two instances that are conflicting the compiler should bail because they are ambiguous and the user needs to disambiguate them.

import org.fizzbuzz.*
import org.acme.*

val x: Int = 1.combine(2) // Ambiguous instances for `Monoid: Int`. Please redefine `extension Monoid: Int` in the current scope or use tagged instances if they provide different behavior.

A natural way to disambiguate may be redefining the instance in the current scope, which would take precedence over the imported ones.
If the compiler generated synthetic names for the instances, or supported users naming them, you could just import the FQN.

import org.fizzbuzz.monoidIntAdd 

Another approach would be to place further constraints on the instances, as is done today with the <A : Any> trick, which is used to ensure that non-nullable values are always passed as args to functions.

typeclass Add : Int
typeclass Multiply : Int

extension Int : Monoid : Add {
    fun Int.combine(b: Int): Int = this + b
    fun empty(): Int = 0
}

extension Int : Monoid : Multiply {
    fun Int.combine(b: Int): Int = this * b
    fun empty(): Int = 1
}

fun <Tag, A: Monoid: Tag> combine(a: A, b: A): A = a.combine(b)

combine<Add, Int>(1, 1) // 2
combine<Multiply, Int>(1, 1) // 1
combine(1, 1) // Ambiguous instances for `Monoid: Int`. Please redefine `extension Monoid: Int` in the current scope or use tagged instances if they provide different behavior.
@elizarov (Member) commented Oct 3, 2017

Let's assume the example by @dnpetrov

package org.acme

extension Int : Monoid {
    fun Int.combine(b: Int): Int = this + b
    fun empty(): Int = 0
}

It is pretty straightforward with star imports:

import org.acme.*

To disambiguate extensions in the case of a star import from the other package, I suggest using the simple name of the class that the extension is being defined for, like this:

import org.acme.Int

This will bring in scope all Int extensions defined in this package.

This approach has a nice and quite well-intended side effect: if a class and its extensions are both defined in the same package, then a qualified import of the class automatically brings into scope all the extensions defined for that class in the same package. We could even go as far as to forbid redefining class extensions in other packages when they are defined in the original package. This would improve predictability and is quite consistent with class-member vs. extension resolution in Kotlin (a class member, being defined by the class author, always takes precedence).

However, if there is no extension in the class's original package, then different clients can define and/or import different extension instances for this class with the same typeclass.

Also note that a generic method with typeclass upper bounds should have an implicit instance of the typeclass in scope behind the scenes (actually passed to it by the invoker), and this instance should always take precedence.

@elizarov (Member) commented Oct 3, 2017

An interesting side effect of this proposal is that we could almost deprecate the reified modifier and reimplement it with typeclasses instead (not really):

typeclass Reified {
    val selfClass: KClass<Self>
}

Now a function that was doing something like:

fun <reified T> foo() { .... T::class ... }

can be replaced with:

fun <T : Reified> foo() { .... T.selfClass ... }

This analogy might help in understanding how typeclasses shall behave in various scenarios. In particular, generic classes with typeclass upper bounds shall be supported if and only if we roll out support for generic classes with reified type parameters.
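The desugaring hinted at here can be approximated in today's Kotlin by passing the KClass evidence explicitly; ReifiedEv, IntReified, and className are hypothetical names used only for illustration:

```kotlin
import kotlin.reflect.KClass

// Today's approximation of the proposed `Reified` typeclass: the KClass
// evidence is passed explicitly. Under the proposal, the compiler would
// supply `selfClass` automatically for any `T : Reified` bound.
interface ReifiedEv<T : Any> {
    val selfClass: KClass<T>
}

// Roughly what `fun <T : Reified> foo()` would desugar to: an evidence parameter.
fun <T : Any> className(ev: ReifiedEv<T>): String = ev.selfClass.simpleName ?: "?"

object IntReified : ReifiedEv<Int> {
    override val selfClass: KClass<Int> = Int::class
}

fun main() {
    println(className(IntReified)) // prints Int
}
```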

@raulraja (Author) commented Oct 3, 2017

@elizarov that will simplify most of our code a ton, since we are currently relying on inline fun <reified A> ... all over the place, and we frequently find places inside generic classes where we can't use those methods with the class's type args.

@raulraja (Author) commented Oct 4, 2017

Updated the proposal with a section on overcoming some of the limitations of inline reified with typeclass constraint evidences, as demonstrated in #87 (comment)

@dnpetrov (Contributor) commented Oct 4, 2017

What if I have multiple type class implementations for the same type in the same package?
Like

package org.acme.bigint

class BigInt { ... }

extension BigInt : Monoid { ... }
extension BigInt : Show { ... }

I'd expect extensions to have names or something. Implicit imports look somewhat fragile. You don't know what else you bring into the scope.

Maybe even just follow familiar class inheritance syntax:

typeclass Monoid<T> {
    fun T.combine(b: T): T
    fun empty(): T
}

extension AdditiveInt : Monoid<Int> {
    override fun Int.combine(b: Int): Int = this + b
    override fun empty(): Int = 0
}

Then typeclass becomes just a special kind of interface, and extension just a special kind of object with its members implicitly imported in file scope, and all the regular resolution rules just work as they should. Note that you also get multi-parameter type classes for free.

@dnpetrov (Contributor) commented Oct 4, 2017

With parametric type classes, type class dependent functions might need some different syntax, like:

fun <T> combineWith(others: Iterable<T>): T where Monoid<T> { ... }
@raulraja (Author) commented Oct 4, 2017

@dnpetrov the syntax makes sense. It is similar to what I originally proposed, since it is also necessary to support the fact that certain type classes target not a single type but a combination of many. For example, a natural transformation to go from one type to another:

typeclass FunctionK<F<_>, G<_>> {
  fun <A> invoke(fa: F<A>): G<A>
}

extension Option2List : FunctionK<Option, List> {
  fun <A> invoke(fa: Option<A>): List<A> =
    fa.fold({ emptyList() }, { listOf(it) })
}

In the case above the instance targets neither Option nor List; those are part of the type args, and folks should be able to summon the instance somehow if they can prove that such an instance exists for both types. I'm not sure how that case could be expressed with the extension syntax.

@dnpetrov In your example above an alternative way to express combineWith could be:

fun <T> combineWith(others: Iterable<T>, instance MT : Monoid<T>): T = ...

That would allow call sites to replace the instance by explicitly providing one if needed so the method could be invoked in a similar way as:

combineWith(others) //compiles if Monoid<T> instance is found and fails if not.
combineWith(others, customInstance) // always compiles

That is also an alternative way to handle ambiguity when collisions exist, besides using concrete imports.

@elizarov @dnpetrov thoughts? Thanks!

@dnpetrov (Contributor) commented Oct 4, 2017

@raulraja Looks good. Need some time to reshuffle my thoughts and probably show this to the owner of Resolution & Inference. @erokhins pls take a look...

@elizarov (Member) commented Oct 4, 2017

The latest syntactic proposal by @dnpetrov for typeclass declaration is indeed more general and is easier to adapt to the case of typeclasses binding multiple types.

However, I don't like the need to name typeclass instances, and even more so the need to declare and name them in function signatures. It is just boilerplate. IMHO, it defeats the whole purpose of having type classes as a language feature, since you can already do all that.

At this point I would recommend to (re)read Cedric's blog post on ad-hoc polymorphism. In our example, we can already write just:

interface Monoid<T> {
    fun T.combine(b: T): T
    fun empty(): T
}

then write

object AdditiveInt : Monoid<Int> {
    override fun Int.combine(b: Int): Int = this + b
    override fun empty(): Int = 0
}

then declare functions with an explicit type class instance:

fun <T> combineWith(others: Iterable<T>, MT : Monoid<T>): T = ...

It is even shorter than the latest syntactic proposal for typeclasses, since I don't have to add instance modifier before MT. So, what is the value of the type classes proposal then?

IMHO, the only valuable benefit of having type classes directly supported in the language is making them easier to grasp for beginners by capitalizing on the mental model of "interfaces on steroids" (aka Swift protocols), and eschewing as much boilerplate as we can in the process. Having to name your type class instance is just boilerplate; having to pass your type class instances explicitly is just boilerplate. Neither should be required.
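The interface-based encoding described in this comment is already valid Kotlin. A self-contained runnable version follows; the elided combineWith body is filled in here with an illustrative fold, which is an assumption, not the author's code:

```kotlin
// The existing interface-plus-object encoding of a type class in today's
// Kotlin. The instance (AdditiveInt) must be named and passed explicitly,
// which is exactly the boilerplate the proposal aims to remove.
interface Monoid<T> {
    fun T.combine(b: T): T
    fun empty(): T
}

object AdditiveInt : Monoid<Int> {
    override fun Int.combine(b: Int): Int = this + b
    override fun empty(): Int = 0
}

// Illustrative body: fold the elements starting from the monoid's empty value.
fun <T> combineWith(others: Iterable<T>, mt: Monoid<T>): T =
    mt.run { others.fold(empty()) { acc, e -> acc.combine(e) } }

fun main() {
    println(combineWith(listOf(1, 2, 3), AdditiveInt)) // prints 6
}
```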

@cbeust (Contributor) commented Oct 4, 2017

A nitpick but in the examples, I think you should be more specific about the name of the Monoid types, e.g. MonoidAddition, MonoidMultiplication, etc...

@raulraja (Author) commented Oct 4, 2017

@elizarov Kategory uses interfaces for typeclasses now and does it like that. The issue is that:

fun <T> combineWith(others: Iterable<T>, MT : Monoid<T>): T = ...

requires explicitly passing the instance manually, which defeats the purpose of type classes. Having a keyword that makes the compiler verify that call sites such as combineWith(listOf(1)) compile because AdditiveInt is in scope is the real feature, because we get compile-time verification of constraints.

I agree, though, that the variant based on extensions is much easier for beginners to grasp. We could perhaps encode all the complex cases where typeclasses require multiple types if typeclasses allowed type args:

typeclass FunctionK<G<_>> {
  fun <A> Self<A>.convert(): G<A>
}

extension Option : FunctionK<List> {
  fun <A> Option<A>.convert(): List<A> =
    this.fold({ emptyList() }, { listOf(it) })
}

val x: List<Int> = Option(1).convert() // [1] 
val y: List<Int> = None.convert() // [ ] 

And in a polymorphic context:

fun <G<_>, F<_>: FunctionK<G>, A> convert(fa: F<A>): G<A> = fa.convert()
val l: List<Int> = convert(Option(1))
val s: Set<Int> = convert(Option(1)) // No `Option: FunctionK<Set>` extension found in scope.
@elizarov (Member) commented Oct 5, 2017

Having played more with @dnpetrov's syntactic proposal, I am starting to like certain parts of it. First of all, the generic-like declaration for the interface indeed looks more concise than my original proposal with the Self keyword. Being able to eschew the "magic" nature of Self is also good.

Moreover, instead of having to reserve a new typeclass keyword, we can just be explicit about the fact that it is just an interface, but tag this interface with an extension modifier, as is customary in Kotlin's syntactic tradition:

extension interface Monoid<T> {
    fun T.combine(b: T): T
    fun empty(): T
}

This syntax also scales to multiple types (and, maybe HKTs, but I will not cover them here). For example, you can have:

extension interface Bijection<A, B> {
    fun A.toB(): B
    fun B.toA(): A
}

However, as soon as you allow applying this syntax to multiple types, the story with "standalone" (static, but not really) functions like empty becomes complicated. We do want those functions to extend "the type" and to be applicable on the type itself (like companion object functions for the type). You should be able to use the Int.empty() expression in the context of Monoid<Int>, but with multiple types it becomes unclear which type is being extended this way.

In order to overcome this difficulty, we can use some fresh syntactic construct to bind the type to its typeclass. Look at how the dot (.) can work for this purpose and how it is consistent with Kotlin's approach to extension functions:

extension interface T.Monoid {
    fun T.combine(b: T): T
    fun empty(): T
}

In this context, T will be called "an extension receiver type" (that is, the type that receives the extension).

Note, that we can allow to add additional types in <...> after the extension interface name and allow "receiver-less" extensions like Bijection, too.

Similarly, declarations of typeclass instances can readily follow the model of Kotlin's companion object. The name can be optional, too, so that an instance without a name can be written like this:

extension object : Int.Monoid {
    override fun Int.combine(b: Int): Int = this + b
    override fun empty(): Int = 0
}

Since we have indicated that Int is an "extension receiver type", it becomes clear and syntactically transparent why one can write Int.empty() in the context of this extension object. It is also clear how to provide an explicit name, if needed.

Alas, this notation makes it less obvious how to specify typeclass constraints for functions. The interface-inspired syntax of T : Monoid no longer looks consistent with the declaration of Monoid, and breaks altogether with receiver-less type classes like Bijection<A, B>. I don't think that reusing a where clause works either, but we can introduce a new clause for this purpose, for example:

fun <T> combineWith(others: Iterable<T>): T given T.Monoid = ...
@dnpetrov (Contributor) commented Oct 5, 2017

@elizarov

However, as soon as you allow applying this syntax to multiple types, the story with "standalone" (static? but not really) functions like empty becomes complicated. We do want those functions to extend "the type" and we want them being applicable on the type itself (like companion object functions for the type). You should be able to use Int.empty() expression in the context of Monoid, but with multiple types it becomes unclear which type is being extended this way.

This approach has some issues with overloading. E.g., what if I have several different type classes (extension interfaces, whatever) for T, each with a method empty()?
I'd suppose it should rather be more explicit, as in #87 (comment), where the type class implementation is passed as a named parameter with a special default-value resolution strategy.

I'd expect something like:

extension interface Monoid<T> {
    fun T.combine(b: T): T
    fun empty(): T
}

extension object AdditiveInt : Monoid<Int> {
    override fun Int.combine(b: Int): Int = this + b
    override fun empty(): Int = 0
}

extension object StringConcatenation : Monoid<String> {
    override fun String.combine(b: String): String = this + b
    override fun empty(): String = ""
}

fun <T> T.timesN(n: Int, extension monoid: Monoid<T>): T {
    var result = empty() // 'monoid' is an implicit dispatch receiver
        // or, if you need to disambiguate, 'monoid.empty()'
    for (i in 0 until n) {
        result = result.combine(this) // 'monoid' is an implicit dispatch receiver, 'combine' is a member extension
          // or 'monoid.run { result.combine(this) }'
    }
    return result
}

fun test() {
    assertEquals("abcabcabcabc", "abc".timesN(4))
}
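The timesN example above uses the proposed extension-parameter syntax, which does not compile today. A runnable approximation in current Kotlin passes the instance as an ordinary parameter; the names mirror the example, and the explicit parameter-passing style is the assumption:

```kotlin
// Explicit-instance version of timesN: `monoid` is passed by hand where the
// proposal would have the compiler supply it as an implicit dispatch receiver.
interface Monoid<T> {
    fun T.combine(b: T): T
    fun empty(): T
}

object StringConcatenation : Monoid<String> {
    override fun String.combine(b: String): String = this + b
    override fun empty(): String = ""
}

fun <T> T.timesN(n: Int, monoid: Monoid<T>): T = monoid.run {
    var result = empty()
    repeat(n) { result = result.combine(this@timesN) }
    result
}

fun main() {
    println("abc".timesN(4, StringConcatenation)) // prints abcabcabcabc
}
```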
@dnpetrov (Contributor) commented Oct 5, 2017

Now, when I look at my example above, it starts to remind me of "extension providers" from Xtend (https://www.eclipse.org/xtend/documentation/202_xtend_classes_members.html#extension-methods).
We'll probably also need 'extension val's.

@raulraja @elizarov
I think we need a more practical motivating example. Yeah, all those Monoids and BiFunctions and HKTs thrown in here and there are nice. Can we think of a concise, meaningful piece of code that demonstrates why type classes (extension interfaces) make life better? Like HTML builders for extension lambdas, for example.

@d3xter commented Oct 5, 2017

One possible motivation could be ad-hoc polymorphism as described by Cedric.
An example is limiting the possible values that can be put into a JSON object.

For example, in Klaxon, to create a new JSON object you have the following method:
fun obj(vararg args: Pair<String, *>): JsonObject (link)

This basically allows everything to be put inside.
Other JSON libraries like json-simple instead have many methods called putString/putInt and so on.

With typeclasses we can define a JSONValue:

typeclass T.JSONValue {
    fun T.toJSON(): String
    fun fromJSON(s: String): T
}

extension object : Int.JSONValue {
    fun Int.toJSON() = this.toString()
    fun fromJSON(a: String) = a.toInt()
}

Then all you need inside your JSON-Library is a method like that:

fun <T> JsonObject.put(key: String, value: T) given T.JSONValue { ... }
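A runnable approximation of this JSONValue idea in today's Kotlin, with the instance passed explicitly; the map-backed put and all names here are illustrative assumptions, not the proposed syntax:

```kotlin
// Today's encoding of the JSONValue idea: the serializer instance is explicit.
// JsonValue, IntJson, and the map-backed `put` are illustrative stand-ins.
interface JsonValue<T> {
    fun T.toJson(): String
    fun fromJson(s: String): T
}

object IntJson : JsonValue<Int> {
    override fun Int.toJson(): String = this.toString()
    override fun fromJson(s: String): Int = s.toInt()
}

// A toy "JsonObject": a mutable map of already-encoded values. Only values
// with a JsonValue instance can be put in, which is the point of the example.
fun <T> MutableMap<String, String>.put(key: String, value: T, jv: JsonValue<T>) {
    jv.run { this@put[key] = value.toJson() }
}

fun main() {
    val obj = mutableMapOf<String, String>()
    obj.put("age", 42, IntJson)
    println(obj) // prints {age=42}
}
```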

@raulraja (Author) commented Oct 5, 2017

@dnpetrov Type classes cover many use cases; syntax aside, here are a few interesting ones:

Compile-Time Dependency Injection

package common

extension interface Dependencies<T> {
    fun config(): Config
}

package prod

extension object ProdDeps : Dependencies<Service> {
    fun config(): Config = ProdConfig
}

package test

extension object TestDeps : Dependencies<Service> {
    fun config(): Config = TestConfig
}

import test.*

Service.config() // TestConfig

import prod.*

Service.config() // ProdConfig

In the example above we used config, but you can apply a similar approach to the Android Activity, whose management is one of the biggest issues for developers.

Type restrictions for safer operations

extension interface Numeric<T> {
    fun T.plus(y: T): T
    fun T.minus(y: T): T
}

extension interface Fractional<T> : Numeric<T> {
    fun T.div(y: T): T
}

extension object : Numeric<Int> {
    override fun Int.plus(y: Int): Int = this + y
    override fun Int.minus(y: Int): Int = this - y
}

extension object : Fractional<Double> {
    override fun Double.plus(y: Double): Double = this + y
    override fun Double.minus(y: Double): Double = this - y
    override fun Double.div(y: Double): Double = this / y
}

fun <T> add(x: T, y: T, extension N: Numeric<T>): T = x.plus(y)
fun <T> div(x: T, y: T, extension F: Fractional<T>): T = x.div(y)

add(1, 1) // 2
add(1.0, 1.0) // 2.0
div(3.0, 5.0) // 0.6
div(3, 5) // No `Fractional<Int>` instance found in scope.
// In Kotlin today, 3 / 5 == 0, losing precision.
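A runnable approximation of this Numeric/Fractional example in today's Kotlin, with explicit instances where the proposal would resolve them implicitly; the method names add/sub/divide replace plus/minus/div to avoid clashing with Kotlin's built-in operators, and all instance names are illustrative:

```kotlin
// Type restrictions via instances: `div` is only callable for types with a
// Fractional instance, while `add` needs only Numeric.
interface Numeric<T> {
    fun T.add(y: T): T
    fun T.sub(y: T): T
}

interface Fractional<T> : Numeric<T> {
    fun T.divide(y: T): T
}

object IntNumeric : Numeric<Int> {
    override fun Int.add(y: Int): Int = this + y
    override fun Int.sub(y: Int): Int = this - y
}

object DoubleFractional : Fractional<Double> {
    override fun Double.add(y: Double): Double = this + y
    override fun Double.sub(y: Double): Double = this - y
    override fun Double.divide(y: Double): Double = this / y
}

fun <T> add(x: T, y: T, n: Numeric<T>): T = n.run { x.add(y) }
fun <T> div(x: T, y: T, f: Fractional<T>): T = f.run { x.divide(y) }

fun main() {
    println(add(1, 1, IntNumeric))           // prints 2
    println(div(3.0, 5.0, DoubleFractional)) // prints 0.6
    // div(3, 5, IntNumeric) does not compile: IntNumeric is not a Fractional<Int>.
}
```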

Type-safe composable encoders/decoders and reflectionless introspection

Expanding on what @d3xter mentions in #87 (comment)

sealed class JValue {
    object JNull: JValue()
    class JString(s: String): JValue()
    class JDouble(num: Double): JValue()
    class JDecimal(num: BigDecimal): JValue()
    class JInt(num: Int): JValue()
    class JBool(value: Boolean): JValue()
    class JObject(obj: List<JField>): JValue()
    class JArray(arr: List<JValue>): JValue()
}

typealias JField = Pair<JString, JValue>

extension interface Writes<T> {
  fun writes(o: T): JValue
}

extension interface Reads<T> {
  fun reads(json: JValue): T
}

extension interface Format<T> : Reads<T>, Writes<T>

//Primitive to JValue instances omitted for simplicity

fun <T> T.toJson(instance tjs: Writes<T>): JValue = tjs.writes(this)

fun <T> JValue.to(instance fjs: Reads<T>): T = fjs.reads(this)

data class Person(val name: String, val age: Int)

extension object : Writes<Person> {
    fun writes(o: Person): JValue = JObject(listOf(
        "name".toJson() to o.name.toJson(), // fails to compile if no `Writes<String>` is defined
        "age".toJson() to o.age.toJson() // fails to compile if no `Writes<Int>` is defined
    ))
}

Person("William Alvin Howard", 91).toJson() // if both `Writes<String>` and `Writes<Int>` are defined: { "name": "William Alvin Howard", "age": 91 }

The most important use case of all is that users and library authors can write code against typeclasses with default instances, providing complete solutions to their users. At the same time, users have the power to override a library-exposed typeclass's behavior at any time by providing another instance in scope.
In the example above, if you need keys serialized in Pascal Case, you can just override the instance and leave the rest of the library intact.

Defined in user code (not in the JSON library), this implementation capitalizes the keys:

extension object : Writes<Person> {
    fun writes(o: Person): JValue = JObject(listOf(
        "Name".toJson() to o.name.toJson(), // fails to compile if no `Writes<String>` is defined
        "Age".toJson() to o.age.toJson() // fails to compile if no `Writes<Int>` is defined
    ))
}

Person("William Alvin Howard", 91).toJson() // { "Name": "William Alvin Howard", "Age": 91 }

The library author did not have to add specific support for Pascal Case, because overriding a tiny bit of functionality is easy enough. Entire systems and libraries can be composed with typeclass evidences, deferring dependencies to users while providing reasonable defaults that are verified at compile time, therefore making programs safer and making it easier to avoid runtime reflection, costly initialization of dependency graphs, etc.
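One way to approximate this override capability in today's Kotlin is a default argument carrying the library's instance, which callers may replace; all names below (Writes, DefaultPersonWrites, PascalPersonWrites) are illustrative assumptions, not part of the proposal:

```kotlin
// A library-provided default instance that users can swap out at the call
// site, approximating "override the instance in scope".
interface Writes<T> {
    fun writes(o: T): String
}

data class Person(val name: String)

object DefaultPersonWrites : Writes<Person> {
    override fun writes(o: Person): String = """{"name":"${o.name}"}"""
}

// A user-defined instance that capitalizes the keys.
object PascalPersonWrites : Writes<Person> {
    override fun writes(o: Person): String = """{"Name":"${o.name}"}"""
}

// The default argument plays the role of the compiler-resolved instance.
fun Person.toJson(w: Writes<Person> = DefaultPersonWrites): String = w.writes(this)

fun main() {
    println(Person("Ada").toJson())                   // prints {"name":"Ada"}
    println(Person("Ada").toJson(PascalPersonWrites)) // prints {"Name":"Ada"}
}
```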

I'll be happy to provide more use cases if needed but I believe the controlled and safe power type classes would bring to Kotlin is a great step forward in terms of compile time verification, type safety and performance for many common patterns like the ones demonstrated above.

There is also the use case of supporting typed FP with all the abstractions we used in previous examples, but I will not bring those examples here, because type classes are much more than FP: they are all about polymorphism and evidence-based compile-time verification, opening the door to many other use cases, most of them in trivial tasks users face in daily programming.

@elizarov (Member) commented Oct 5, 2017

@dnpetrov I would suggest that "type extensions" like empty in Monoid be added to the scope of the corresponding type, so the example below should just work:

fun <T> T.timesN(n: Int): T given T.Monoid {
    var result = T.empty() // because T is a Monoid.
    for (i in 0 until n) {
        result = result.combine(this) // again, because we are given that T is Monoid
    }
    return result
}
@raulraja (Author) commented Oct 5, 2017

@elizarov is :T given T.Monoid<T> needed or would it be :T given Monoid<T> ?

@elizarov (Member) commented Oct 6, 2017

@raulraja That was a typo. Fixed, thank you. You can use either :T given Monoid<T> or :T given T.Monoid depending on the definition of Monoid -- either as extension interface Monoid<T> or extension interface T.Monoid.

The problem with the former, as was discussed before, is that it is not clear how to disambiguate functions like empty without Scala-like implicit parameters, which I particularly despise, because they wreck the symmetry between declaration and use of a function.

With the latter declaration, given T.Monoid, we can use T.empty() to disambiguate. The open questions, which I cannot answer without a larger design discussion and analysis of use cases, are whether we should support unqualified empty() at all and whether a declaration of Monoid<T> (without a receiver type) with a receiver-less empty() function should even be allowed.

@sellmair commented May 17, 2019

Also, I personally would prefer implementing one extension object per extension rather than having it all implemented in one object. I cannot see a reason why this should not be allowed? Sure, I can just do one implementation object per extension in a sub-package of the type itself, but I would still prefer having the option.

@pakoito commented May 17, 2019

I see what you mean now. Your proposal would require effectively changing the language to allow introducing top-level constructs inside classes, like Java has with statics. Kotlin uses companion objects for that, and we aimed to minimize changes to the language, and keep the implementation simple and compatible with Java.

@sellmair commented May 17, 2019

Okay now I am confused:
What is the difference between

class A {
    companion object {
        object B 
    }
} 

vs

class A {
    object B
}

Because from the language perspective, it seems pretty similar.

  1. Both can be accessed via val b = A.B (Edit: this was wrong, the companion needs to be resolved as A.Companion.B)

  2. Both are just a single instance

  3. Both will be instantiated at application startup time, right?

@pakoito commented May 17, 2019

From the implementation perspective they're different. One belongs inside the companion object; the other hangs inside the class alongside other child classes and the companion object. They're easier to find in the companion, where it's infrequent to declare classes. Semantically, these instances behave more like static values.

sealed class NetworkResult {
  object Loading: NetworkResult()
  class ErrorFail: NetworkResult()
  data class Success(): NetworkResult()
  extension class EqNetwork(with val a: Eq<User>) : Eq<NetworkResult>() {
    ....
  }
  extension class CopyNetwork(with val a: Copy<User>) : Copy<NetworkResult>() {
    ....
  }
}
@sellmair commented May 17, 2019

Okay, I get it! So the benefit of constraining extensions to the companion object is that resolution is easier for the compiler, which leads e.g. to faster compile times.
I would still cast my vote to lift this constraint, allowing extension implementations to be put into the class itself as regular objects. IMO, this is more intuitive and "cleaner" to use. Thanks for the clarification. I am playing with the prototype right now and I love it!

@sellmair commented May 17, 2019

Another question that I would be interested in:
Right now, the prototype does not support splitting extension interfaces into multiple interfaces and resolving them through their super-interfaces. I think we could benefit from allowing something like this:

interface A<T>

interface B<T> : A<T>

class Model

internal extension class ModelBExtensions : B<Model>

fun f1(with a: A<Model>) {

}

fun f2() {
    f1() // <---- [UNABLE_TO_RESOLVE_EXTENSION] 'Unable to resolve parameter (a : A ).'
}
@pakoito commented May 17, 2019

Inheritance is not supported; it's a source of conflicts. IIRC it only works for the current class.

It's also contagious, you need to do this:

fun f2(with a: B<Model>)

The better way to compose it is via constructors:

extension class B(with val a: A)
@sellmair commented May 17, 2019

Let me give you an example by explaining what I was trying to do here.

I was playing around with the proposal to re-design a repository class, that we wrote. So I came across something like this

interface Persistence<T> {
    fun T.save()
    fun T.delete()
    fun get(id: Int): T
    fun getAll(): List<T>
}

After implementing some extension classes for this Persistence interface, I thought I could be sneaky by just writing something like:

extension class ListPersistence<T>(with persistence: Persistence<T>): Persistence<List<T>> {
   // ... TODO
}

Which I could use to create a Persistence for nested models like

class ParentModel(val id: Int, val children: List<ChildModel>)

internal extension class ParentModelPersistence(
    with childrenPersistence: Persistence<List<ChildModel>>): Persistence<ParentModel> {
    
    override fun ParentModel.save() {
        children.save()
        // ...
    }

    //...
}

But obviously, the get and getAll parts of Persistence no longer make sense for List. I therefore split the Persistence interface:

interface Saving<T> {
    fun T.save()
}

interface Deleting<T> {
    fun T.delete()
}

interface Loading<T> {
    fun get(id: Int): T
    fun getAll(): List<T>
}

interface Persistence<T> : Saving<T>, Deleting<T>, Loading<T>

With that, my ParentModelPersistence could just depend on Saving<List<ChildModel>> and Deleting<List<ChildModel>>, which will be available as long as I have a Saving<ChildModel> and Deleting<ChildModel>.

internal extension class ParentModelPersistence(
    with childrenSaving: Saving<List<ChildModel>>): Persistence<ParentModel> {
    
    override fun ParentModel.save() {
        children.forEach { it.save() } 
    }

    //...

Now, since I also implemented Persistence for the ChildModel, I would expect that all my dependencies are fulfilled. Do you have any idea on how to model it differently?

I also uploaded the full source files, in case somebody wants to look into.
repository2.zip
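As an aside, the `ListPersistence` idea above can already be written out in today's Kotlin with explicit instances, which shows what the compiler would be wiring up automatically. A sketch, where `InMemorySaving` is a made-up instance for demonstration:

```kotlin
interface Saving<T> {
    fun T.save()
}

// Derives a Saving<List<T>> from a Saving<T> by saving each element --
// the role the proposed `extension class ListPersistence` plays above.
class ListSaving<T>(private val element: Saving<T>) : Saving<List<T>> {
    override fun List<T>.save() = forEach { with(element) { it.save() } }
}

// A trivial instance that records what was saved, for demonstration only.
class InMemorySaving<T> : Saving<T> {
    val saved = mutableListOf<T>()
    override fun T.save() { saved.add(this) }
}

fun main() {
    val childSaving = InMemorySaving<String>()
    val listSaving = ListSaving(childSaving)
    with(listSaving) { listOf("a", "b").save() }
    println(childSaving.saved) // prints [a, b]
}
```

With KEEP-87's resolution through super-interfaces, the `ListSaving(childSaving)` wiring would be inferred from the evidence in scope.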

@pakoito commented May 17, 2019

I don't think fulfilling Persistence<ParentModel> means that you can do Saving<ParentModel>. @raulraja may know better.

@raulraja (Author) commented May 17, 2019

@sellmair the current prototype does not support resolving subtype constraints, but it's something we can add down the road once we receive feedback from JetBrains and the compiler team. We are currently waiting to see what they think of the proposal. We are trying to avoid cluttering the prototype with features we would have to ditch after feedback, as the final form of this proposal can change considerably.

@sellmair commented May 18, 2019

@raulraja Do you also have an opinion on potentially dropping the companion object constraint like proposed above?

@raulraja (Author) commented May 19, 2019

I'm not in favor of adding things to the language unless strictly necessary. I'm not motivated in this case by syntactic reasons, and companion-based resolution is common practice in other languages like Scala, so it's already a known feature to many people. In Arrow we also project type class instances over companions as extension functions, and this seems to work for the Kotlin FP community.

@sellmair commented May 20, 2019

Thanks for the answer! I honestly just have problems understanding why something like this

class TypeA {
    extension object TypeAExtension {
        
    }
}

requires changes to the language? This even compiles with the current prototype. The only thing missing would be that the compiler also looks for extension objects inside the class namespace itself, not just inside the companion. From a language perspective, it is hard to see a reason why those extensions are required to be placed inside the companion. It's not about the one level of indentation that we want to save, but more about the intuition of the language.

@fvasco commented May 20, 2019

@raulraja

the current prototype does not support resolving subtype constrains but it's something we can add down the road once we receive feedback from Jetbrains and the compiler team.

I'm not motivated in this case by syntactic reasons and companion order resolution is common practice in other langs like Scala so it's already a known feature to many people.

Kotlin should be sexy for Kotlin developers, not for Scala ones.

@sellmair

I honestly just have problems understanding why something like this ... requires changes to the language

All KEEPs require changes to the Kotlin language, including this one.

From a language perspective, it is hard to see a reason why those extensions are required to be placed inside the companion

Every instance searches for members in its companion; a class isn't a package, and nested classes do not have the same visibility as the companion.

However, I wish to consider the case where a companion object is not allowed, i.e. when the parent class is an object. How would one implement:

object A {
    private extension object B
}
@sellmair commented May 20, 2019

@fvasco

All KEEPs require changes in Kotlin language, including this one.

Sure! What I wanted to say was: "additional changes to the language". The current prototype already brought in almost all necessary changes. Granted: I did not think about the visibility. The compiler would need to check the visibility of the extension class. But I think it would be worth it. Checking the visibility of extension classes is something the compiler needs to do anyway later on since package level extension classes need to be internal as well.

However I wish to consider the case where a companion object is not allowed, ie when the parent class is an object, how implement:

In this case, I think it is even necessary to allow this kind of extension, because allowing it in one scenario but not in others (for no obvious reason) seems confusing to me.

@sellmair commented May 20, 2019

@fvasco @raulraja @pakoito
I have an implementation to support this "in-type extension resolution" feature that you can play with. I think seeing this in the IDE itself gives us a better feeling for the language. I am still convinced that this is pretty nice and I hope you guys at least consider something like that ☺️
arrow-kt/kotlin#26

@raulraja (Author) commented May 22, 2019

@sellmair sounds good. Feel free to PR the current implementation and proposal. Currently we are waiting for feedback from JetBrains and the compiler team. @fvasco this proposal is not for Scala devs; it was an example that companion-based resolution is a known topic. I don't really have a strong opinion on where instances end up. We should hear feedback from JetBrains before we keep introducing more changes to this proposal, so they are in the right direction once we have feedback from the compiler team.

Update proposals/compile-time-dependency-resolution.md
Co-Authored-By: Pablo Gonzalez Alonso <pabs87@gmail.com>
The above extension provides evidence of a `Repository<Group<A>>` as long as there is a `Repository<A>` in scope. The call site would look like:

```kotlin
fun <A> fetchGroup(with repo: GroupRepository<A>) = loadAll()

@ulrikrasmussen commented Jun 4, 2019

Shouldn't the argument be with repo: Repository<Group<A>>? I.e. it should be a requirement of some implementation of the given type, not the specific one given above.

@pakoito commented Jun 4, 2019

@raulraja this seems correct

@raulraja (Author) commented Jun 6, 2019

I'm focusing right now on something more pressing in arrow but feel free to push a change or PR for all the suggested fixes. Thanks

@fatjoem commented Jun 4, 2019

This example from the proposal is not a very good one:

fun <A> fetchById(id: Int, with repository: Repository<A>): A? {
  return loadById(id) // Repository syntax is automatically activated inside the function scope!
}

As a Kotlin developer without knowledge of typeclasses, I have a hard time understanding how this differs from the more traditional Kotlin way of writing this function:

fun <A> Repository<A>.fetchById(id: Int): A? {
  // Repository syntax is automatically activated inside the function scope, 
  // as with any other extension function!
  return loadById(id)
}

I understand that the proposal suggests ways to resolve the Repository implicitly, and that is cool with me. But why can't the syntax follow the traditional Kotlin way of declaring receiver arguments?

@pakoito commented Jun 4, 2019

@fatjoem Two main differences. Firstly, you can do any number of injections; that's in line with the other proposal to have multiple receivers. Secondly, the compiler will check transitively that those receivers are defined somewhere in the codebase, and they're implicitly passed into the function. That means that by resolving these dependencies transitively you can do dependency injection from the compiler.

@fatjoem commented Jun 4, 2019

Thanks for confirming that.

I believe that multiple receivers and implicit receiver resolving are orthogonal features, that should perhaps be made available for existing language constructs and independent of each other.

For example, I want to be able to use multiple receivers without enabling implicit passing for them. Likewise, I can imagine that implicit passing can be useful for ordinarily declared receivers, too.

Regarding multiple receivers I like the other proposal's suggested syntax more, because it feels like a natural extension of what is already available in Kotlin.

Regarding the implicit passing, isn't it sufficient that those receivers' types are declared as extension interfaces / classes? When a receiver has such a type, the compiler would try to pass it implicitly as a fallback.


## Composition and chain of evidences

Constraint interface declarations and extension evidences can encode further constraints on their type parameters so that they can be composed nicely:

@ulrikrasmussen commented Jun 5, 2019

Unless there are other mechanisms in place for ensuring termination of the resolver, I think extension classes with constraints need to be constrained. For example, if I have

extension interface Foo<A> {}
extension class Bar<A>(with val fooA: Foo<A>): Foo<A> {}

then a naive resolver will run forever. In Haskell, I think the restriction is that the types mentioned in each constraint must be a strict subterm of one of the types mentioned in the goal, or the constraint must refer to a type class with no cyclic dependency on the goal.

}
```
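The termination concern raised in the review comment above can be illustrated with a toy resolver. In this sketch (which is not the prototype's actual algorithm), goal names are plain strings standing in for types, and each goal maps to the constraints of its single candidate extension class:

```kotlin
// Without the `seen` set, a self-referential rule like "Foo" -> ["Foo"]
// (the Bar example above: Foo<A> provided by something requiring Foo<A>)
// would make this resolver recurse forever.
fun resolve(
    goal: String,
    rules: Map<String, List<String>>,
    seen: Set<String> = emptySet()
): Boolean {
    if (goal in seen) return false // cycle detected: reject instead of looping
    val constraints = rules[goal] ?: return false // no candidate at all
    return constraints.all { resolve(it, rules, seen + goal) }
}

fun main() {
    val rules = mapOf(
        "Foo" to listOf("Foo"),      // cyclic: Bar provides Foo but requires Foo
        "Baz" to emptyList<String>() // a plain, satisfiable instance
    )
    println(resolve("Foo", rules)) // prints false
    println(resolve("Baz", rules)) // prints true
}
```

Haskell's "strict subterm" restriction mentioned above guarantees termination statically, so the resolver never needs runtime cycle detection at all.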

## Extension resolution order

@ulrikrasmussen commented Jun 5, 2019

Why is a resolution order necessary? Shouldn't overlapping instances be rejected outright?

Assuming that extensions are resolved according to the resolution order, are they then resolved using backtracking search? This seems like an important detail in the presence of extension classes with extension constraints. For example:

extension interface Foo<A> {
  companion object {
    extension class MyClassFoo(): Foo<MyClass> {}
  }
}
class MyClass() {
  companion object {
    extension class PriorityFoo(with val unsatisfiable: Foo<Unit>): Foo<MyClass> {}
  }
}

fun example(with myClassFoo: Foo<MyClass>) = ...

In this example, will a call example() result in a type error or will it get resolved to example(Foo.MyClassFoo())? The former will happen if there is no backtracking, because the extension class on the companion object of the target type gets priority, but its own constraints are not satisfiable (no instance of Foo<Unit>). In the latter case, backtracking will continue the search among instances with lesser priority, and will thus find the instance on the companion object of the extension interface.
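The two behaviors being asked about can be made concrete with a toy model of prioritized candidates. This is only a sketch of the two strategies, not the prototype's actual algorithm; the candidate names mirror the example above:

```kotlin
data class Candidate(val name: String, val constraints: List<String>)

// No backtracking: commit to the highest-priority candidate and fail
// outright if its own constraints turn out to be unsatisfiable.
fun resolveCommitted(goal: String, table: Map<String, List<Candidate>>): String? {
    val c = table[goal]?.firstOrNull() ?: return null
    return if (c.constraints.all { resolveCommitted(it, table) != null }) c.name else null
}

// Backtracking: fall through to lower-priority candidates when a
// higher-priority one cannot satisfy its constraints.
fun resolveBacktracking(goal: String, table: Map<String, List<Candidate>>): String? =
    table[goal].orEmpty().firstOrNull { c ->
        c.constraints.all { resolveBacktracking(it, table) != null }
    }?.name

fun main() {
    // PriorityFoo (on the target type's companion) outranks MyClassFoo,
    // but requires the unsatisfiable Foo<Unit>.
    val table = mapOf(
        "Foo<MyClass>" to listOf(
            Candidate("PriorityFoo", listOf("Foo<Unit>")),
            Candidate("MyClassFoo", emptyList())
        )
    )
    println(resolveCommitted("Foo<MyClass>", table))    // prints null -> type error
    println(resolveBacktracking("Foo<MyClass>", table)) // prints MyClassFoo
}
```

The committed strategy turns the example into a compile error; the backtracking one silently falls through to the lower-priority instance, which is exactly the ambiguity the question is probing.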

@zeitlinger commented Jun 5, 2019

The resolution order makes it easier to understand - for the developer, and potentially faster for the compiler.

Do you have a more realistic example where backtracking would make sense?

@ulrikrasmussen commented Jun 5, 2019

I don't actually think backtracking is a good idea since it will make the process of resolving constraints very hard to follow - my comment was meant as a question to clarify whether the KEEP actually has backtracking in mind or not.

I don't understand the point about the resolution order making it easier for the programmer to understand and faster for the compiler. I understand the resolution order as a mechanism for resolving ambiguity when more than one extension class matches a given constraint. My question is: why is this ambiguity even allowed to occur? That is, shouldn't overlapping instances be rejected by the compiler? If overlapping instances are rejected, then there is no need for a resolution order because there is at most one extension class that matches.

@ulrikrasmussen commented Jun 6, 2019

(Replying to #87 (comment) in this thread)

@zeitlinger
Why allow ambiguity? Think of the Equality use case as described in https://quickbirdstudios.com/blog/keep-87-typeclasses-kotlin/ : equality based on the Dog interface (based on name), and equality based on the DatabaseDog implementation, which implements the Dog interface. Seems to make sense to me, but would be rejected according to your proposal.

That is actually an excellent example of why one would want to disallow overlapping instances :).

First of all, and this is not my main point, the premise of the article is a bit misguided, as it assumes that one would actually want to have an equals() method which compares specific implementations (DatabaseDog) according to one equivalence relation and use a "default" equivalence relation for everything else that happens to implement the Dog interface. This is a variant of the common trap of trying to implement equals() for open class hierarchies: it is impossible, in general, to implement equals() in this way because one of the required properties of symmetry or transitivity will fail. For example, we have DatabaseDog(1, "Klaus") == ApiDog("Klaus") and ApiDog("Klaus") == DatabaseDog(2, "Klaus") but DatabaseDog(1, "Klaus") != DatabaseDog(2, "Klaus"), so transitivity is broken (and it cannot be fixed).

Getting back to type classes, we put these problems aside and ignore the actual semantics of what @sellmair is trying to implement. He argues here that it is a good idea to be able to define a "default" instance for the general interface Dog and override it with a specific one for the concrete class DatabaseDog. But this is really dangerous! Remember that the target type used for looking up an extension class is the type of the object at compile time, and not the actual run-time type. This means that in the following example:

val dog1 = DatabaseDog(1, "Klaus")
val dog2 = DatabaseDog(2, "Klaus")
val b = dog1.eq(dog2) // evaluates to false

the compiler will infer DatabaseDog for the two variables dog1 and dog2, and b will get the value false because the ids of the two objects are not equal.
However, just by adding a type annotation on the first object, we can completely change the semantics of the program:

val dog1: Dog = DatabaseDog(1, "Klaus")
val dog2 = DatabaseDog(2, "Klaus")
val b = dog1.eq(dog2) //evaluates to true

This is because now the extension class for DatabaseDog no longer matches, and the fallback extension class defined for Dog is used instead. None of the actual objects changed, only the type annotation! Reified type parameters aside, this would, to my knowledge, be the only place where the semantics of a Kotlin program would be affected by the static types of the objects. Type inference means that types are rarely mentioned, making it really difficult to determine what implementation is used just by looking at the code. Worse, an apparently safe refactoring could cause an inferred type to be made more general than before, subtly changing the program in critical ways.

Haskell, another language with type classes, will fail at compile time if a constraint can be satisfied by more than one instance of a type class for the same reasons as I have stated above. And Haskell does not even have subtyping, which further increases the possible sources of errors.

I would strongly suggest disallowing overlapping instances. It can always be added back, but once it is the default behavior, it is going to be very hard to remove it again. If they are to be allowed, I suggest going in the direction of Haskell and require the programmer to judiciously add explicit annotations ({-# OVERLAPPING #-}, -XIncoherentInstances, etc.) if the design really cannot be changed to not use overlapping instances, e.g. because of legacy code.

@fatjoem commented Jun 6, 2019

Reified type parameters aside, this would, to my knowledge, be the only place where the semantics of a Kotlin program would be affected by the static types of the objects.

Not true. This is actually quite common; it is called function overloading. Overloaded functions are statically resolved, i.e. via the static compile-time types of their arguments. Example:

fun whatDog(dog: Dog) = print("dog")
fun whatDog(dog: DatabaseDog) = print("database dog")

fun test() {
    val dog = DatabaseDog(1, "Klaus")
    whatDog(dog)  // prints "database dog"
}

In this example, changing the type annotation of the dog variable will change what is printed:

fun test() {
    val dog: Dog = DatabaseDog(1, "Klaus")
    whatDog(dog)  // prints "dog"
}
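The overloading argument above can be checked end to end. A self-contained sketch, with minimal assumed definitions for Dog and DatabaseDog (the originals come from the linked blog post):

```kotlin
interface Dog { val name: String }
data class DatabaseDog(val id: Int, override val name: String) : Dog

// Overloads are chosen by the STATIC type of the argument, not its runtime type.
fun whatDog(dog: Dog): String = "dog"
fun whatDog(dog: DatabaseDog): String = "database dog"

fun main() {
    val d1 = DatabaseDog(1, "Klaus")
    println(whatDog(d1)) // prints "database dog": static type is DatabaseDog

    val d2: Dog = DatabaseDog(1, "Klaus")
    println(whatDog(d2)) // prints "dog": same object, only the declared type changed
}
```

The runtime object is identical in both calls; only the declared type of the variable decides which overload runs, which is the precedent being claimed for statically resolved extension instances.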

In my opinion this is very similar to the "issue" you are describing as your main argument. I don't think it is such a big problem.

The big difference to ordinary polymorphic equals implementations is that the concrete implementation is resolved at compile time. Nothing can change at runtime that would affect the chosen equals implementation.

This is also why the "common trap for implementing equals in open type hierarchies" is not as relevant here. In open equals implementations you cannot choose at compile-time which equals implementation to use. With statically resolved equals you can. For example you can choose the Dog-Equality, and that would lead to DatabaseDog(1, "Klaus") == DatabaseDog(2, "Klaus") being true, neglecting the transitivity issue you mentioned.

For example, you can fix the transitivity issue in your example by giving all variables the same static type. This is a fix that cannot be applied to polymorphic open equals implementations in open type hierarchies.

@ulrikrasmussen commented Jun 6, 2019

Ah, yes, I forgot about function overloading. I would personally find the kind of overload exemplified by whatDog() to be a code smell for the same reasons as for why I am against overlapping instances, but I can see how it would not be as inconsistent with the current behavior of the language as I first thought it was. In my opinion, ambiguous overloads should also be disallowed, or at least result in a warning, but I acknowledge that the discussion of that is out of scope for this KEEP.

I can see how you don't have the transitivity issue when equals is statically resolved as an overload because then you technically implement two different equivalence relations without trying to combine them into one (which is impossible). I still find it confusing though, and something to be avoided. For example:

fun eq(dog1: Dog, dog2: Dog) = dog1.name == dog2.name
fun eq(dog1: DatabaseDog, dog2: DatabaseDog) = dog1.name == dog2.name && dog1.id == dog2.id
fun main() {
    val dog1 = DatabaseDog("Klaus", 1)
    val dog2 = ApiDog("Klaus")
    val dog3 = DatabaseDog("Klaus", 2)
    println(eq(dog1, dog2)) //true
    println(eq(dog2, dog3)) //true
    println(eq(dog1, dog3)) //false -- could just as well be true if the compiler resolved the ambiguity otherwise
}

Yes, it fixes the problem because both eq functions implement proper equivalence relations. But I think the code would be much clearer if the functions were simply named differently and if it didn't rely on the compiler resolving the ambiguity. If someone in the future for some reason changes the eq overload for DatabaseDog to also take a database connection, then the third comparison in the above will silently fall back to comparing the objects as Dog, returning true. It is, in my personal opinion, an example of using abstractions as the means to be vague rather than to be precise.

@sellmair commented Jun 6, 2019

@ulrikrasmussen
I think you raised some interesting points!

@fatjoem
I am convinced that you pointed out the key: The behavior of the eq function is decided at compile time for the specified expression. This differentiates it from the "common mistakes" made with the equals function and removes (almost) all ambiguity.

I think the given example here

fun eq(dog1: Dog, dog2: Dog) = dog1.name == dog2.name
fun eq(dog1: DatabaseDog, dog2: DatabaseDog) = dog1.name == dog2.name && dog1.id == dog2.id
fun main() {
   val dog1 = DatabaseDog("Klaus", 1)
   val dog2 = ApiDog("Klaus")
   val dog3 = DatabaseDog("Klaus", 2)
   println(eq(dog1, dog2)) //true
   println(eq(dog2, dog3)) //true
   println(eq(dog1, dog3)) //false -- could just as well be true if the compiler resolved the ambiguity otherwise
}

is pretty interesting, since it has the same traits as the proposed Equality type class by being resolved entirely at compile time. Still, it is somewhat ambiguous, since it seems like transitivity is broken at the call site. But honestly, I cannot see how one would be able to forbid something like this in the current type system (and with compile-time extension interfaces), and I am convinced that it is still much better than the current Any.equals, since it is possible to resolve problems arising with this function at the call site and not deep inside some implementation of some model in some type hierarchy. It's easy here to specify the type class to use explicitly, change the type of dog (as proposed above) or specify the function's type parameter!

fun solution1() {
    val databaseDog: DatabaseDog = getSomeDatabaseDog()
    val databaseDog2: DatabaseDog = getSomeDatabaseDog()

    databaseDog.eq<Dog>(databaseDog2)
}

fun solution2() {
    val databaseDog: Dog = object : DatabaseDog {}
    val databaseDog2: Dog = object : DatabaseDog {}

    databaseDog.eq(databaseDog2)
}

fun solution3() {
    val databaseDog: DatabaseDog = getSomeDatabaseDog()
    val databaseDog2: DatabaseDog = getSomeDatabaseDog()

    databaseDog.eq(databaseDog2, Dog.Companion.DogEquality)
}

After all, I wish I had been more creative than Dog, ApiDog and DatabaseDog when naming those example entities for such discussions =)

@fatjoem commented Jun 6, 2019

The only problem with those example entities is that it is hard to follow what we are talking about without having read the blog post ;)

@ulrikrasmussen commented Jun 7, 2019

But honestly, I cannot see how one would be able to forbid something like this in the current type system (and with compile-time extension interfaces)

Wouldn't that just amount to disallowing ambiguity at the call site?

Ambiguous method overloading serves no useful purpose while increasing the complexity of the code for no good reason. Going back to the example, in the third line

println(eq(dog1, dog3))

what is the programmer actually saying here? There are two possible overloads that match this one, namely eq(Dog, Dog) and eq(DatabaseDog, DatabaseDog). The compiler picks the most specific one. So, this line actually tells the compiler "generate code that compares dog1 and dog3 by database identifier, unless the implementation disappears, in which case compare them only by name". I would argue that the "unless" part is almost never what the programmer intended, unless eq(DatabaseDog, DatabaseDog) is an optimized version of eq(Dog, Dog) with the same semantics. If eq(DatabaseDog, DatabaseDog) disappears in a refactoring, then the program should fail to compile rather than just arbitrarily change its semantics. The programmer has gained nothing but saved a few keystrokes - the program would be more robust if he had just given the two overloads distinct names (e.g. eqDog and eqDatabaseDog).

Overloaded (i.e. overlapping) extension classes are even worse, for two reasons:

  1. Unlike method overloads, the overloads for an extension class are spread over multiple files instead of being defined in the same place. This makes it much harder for the programmer to get an overview of what is shadowing what.
  2. Unlike method overloads, it is harder to provide good tooling which tells you what overload is in effect at a given place in a program. Yes, for extension classes without constraints, you can use "go to definition" to go to the extension class that matches in a given context. But suppose we had another extension interface (admittedly contrived, but not unrealistic):
extension interface EqList<A> { fun eqList(x: List<A>, y: List<A>): Boolean }
extension class EqListImpl<A>(with val constraint: Equality<A>) : EqList<A> { ... }

If I compare two lists of type List<DatabaseDog> with eqList, then there will only be one unique extension class, EqListImpl that matches. "Go to definition" in my IDE will tell me this by placing the cursor on the EqListImpl class. However, this class has a constraint itself, and this constraint is ambiguous because it can be matched by both Equality<Dog> and Equality<DatabaseDog>. But how do you actually communicate that in the IDE? When you used "go to definition" at the call site, the IDE placed the cursor at EqListImpl, so the scope of the actual call site has been left. To show what is actually in effect, the IDE would have to generate the concrete instantiation of the extension class chain, e.g. EqListImpl(DatabaseDog.Companion.DogEquality) and show you that. The compiler might even be disambiguating several layers of overlapping extension classes in a long chain.

@zeitlinger commented Jun 5, 2019

Why allow ambiguity?

Think of the Equality use case as described in https://quickbirdstudios.com/blog/keep-87-typeclasses-kotlin/

  1. Equality based on the dog interface (based on name)
  2. Equality based on the database dog implementation, which implements the dog interface

Seems to make sense to me, but would be rejected according to your proposal.

@ulrikrasmussen commented Jun 6, 2019 (comment minimized)

@zeitlinger commented Jun 6, 2019 (comment minimized; content not shown)

@fargus9 commented Jun 11, 2019

I don't understand this at the same level you all do, but I'm excited about the potential. I'm curious whether you'd considered other syntax, though. The `with` notation as a modifier on a parameter seems a little kludgy, since the intention seems to be that it will usually be inferred at compile time. Why not specify the extension contract with the type parameter, e.g. `fun <T with Foo<T>> useFoo()`, or perhaps even `fun <T with U : Foo<T>> useFoo()` in case naming it is useful later?
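For comparison, the two spellings side by side. Both use proposed syntax (from the KEEP and from this comment, respectively); neither compiles in Kotlin today, and `Equality` is borrowed from the earlier examples:

```kotlin
// KEEP-87 spelling: the constraint is a value parameter marked `with`.
fun <A> eq(x: A, y: A, with eqA: Equality<A>): Boolean = ...

// Spelling suggested here: the constraint rides on the type parameter.
fun <A with Equality<A>> eq(x: A, y: A): Boolean = ...

// With a name, in case the instance must be referenced explicitly later.
fun <A with E : Equality<A>> eq(x: A, y: A): Boolean = ...
```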

@fatjoem commented Jun 11, 2019

What exactly would that mean? All arguments that use T would be implicit receivers? What about arguments that want to use T ordinarily? I don't like the syntax proposed in this KEEP either, but adjusting type parameters in this way seems inconsistent with what type parameters are usually used for.

@fargus9 commented Jun 11, 2019

It means the method wouldn't accept instances of T for which there isn't a compile-time resolvable extension. To me it seems like a natural extension, pardon the overload, of type parameters to allow constraints based on an extension of a type.
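As a rough picture of what a constrained type parameter could desugar to in today's Kotlin: the instance becomes an explicit parameter, and the `with` scope function brings it into scope as a receiver, so its members read like implicit ones. `Monoid` and `IntSum` here are illustrative names of my own, not from the proposal:

```kotlin
// A behavior declared as an ordinary interface with a member extension.
interface Monoid<A> {
    val empty: A
    fun A.combine(other: A): A
}

object IntSum : Monoid<Int> {
    override val empty = 0
    override fun Int.combine(other: Int) = this + other
}

// The "constraint" is an explicit parameter; `with(m)` makes `empty`
// and `combine` available without naming `m` at each use site.
fun <A> sum(xs: List<A>, m: Monoid<A>): A = with(m) {
    xs.fold(empty) { acc, a -> acc.combine(a) }
}

fun main() {
    println(sum(listOf(1, 2, 3), IntSum)) // 6
}
```

Under KEEP-87 the `m` argument would be resolved and injected by the compiler rather than passed at each call site; this sketch only shows where the instance would flow.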
