This repository has been archived by the owner on Jun 5, 2023. It is now read-only.

Multiple overloads with context function arguments do not resolve #150

Open
neko-kai opened this issue Dec 2, 2020 · 21 comments


neko-kai commented Dec 2, 2020

Minimized code

object x { 
  def f[A, B](f: A ?=> B) = println("1 argument")
  def f[A, B, C](f: (A, B) ?=> C) = println("2 arguments")
}

@main def Main = {
  x.f(implicitly[Int])
  x.f(implicitly[String] + implicitly[Int])
}

Output

no implicit argument of type Int was found for parameter e of method implicitly in object Predef

Expectation

Expected success with output

1 argument
2 arguments

prolativ commented Dec 2, 2020

The error is correct. I think you have simply misunderstood how context functions work. The compiler complains about the lack of a defined given instance of type Int, which you can provide like this: given Int = 1 (but please remember that defining givens for primitive types is in general a bad idea). Similarly, you'll need given String = "abc". Then you can call e.g. x.f[Int, Unit](()) and x.f[Int, String, Unit](()) to get the expected outputs. Note that you have to specify the type parameters, because otherwise the compiler won't be able to distinguish between the two variants of the method.
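For example, a sketch of the above, reusing the object x from the report (the givens are only needed by bodies that actually summon them):

given Int = 1        // though, again, givens for primitive types are a bad idea
given String = "abc"

x.f[Int, Unit](())               // prints "1 argument"
x.f[Int, String, Unit](())       // prints "2 arguments"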


neko-kai commented Dec 2, 2020

Note that you have to specify the type parameters

Seems like I have to for multi-parameter context functions; however, single-parameter context functions do infer properly in polymorphic contexts:

scala> def provide[A, B](f: A ?=> B) = (a: A) => f(using a)
def provide[A, B](f: (A) ?=> B): A => B

scala> :t provide(implicitly[String] + implicitly[Int])
String & Int => String

Whereas a function with a 2-parameter context function argument does not infer:

scala> def provide2[A, B, C](f: (A, B) ?=> C) = (a: A, b: B) => f(using a, b)
def provide2[A, B, C](f: (A, B) ?=> C): (A, B) => C

scala> :t provide2(implicitly[String] + implicitly[Int])
1 |provide2(implicitly[String] + implicitly[Int])
  |                                             ^
  |ambiguous implicit arguments: both value evidence$2 and value evidence$1 match type Int of parameter e of method implicitly in object Predef
1 |provide2(implicitly[String] + implicitly[Int])
  |                           ^
  |ambiguous implicit arguments: both value evidence$2 and value evidence$1 match type String of parameter e of method implicitly in object Predef

(Moved to scala/scala3#10609)

because otherwise the compiler won't be able to distinguish between the two variants of the method.

Context functions do work with inference (in the single-parameter case), as shown above. Overloading doesn't work yet because it is never even attempted: overloads automatically cancel context function inference. But it doesn't have to stay that way (see e.g. scala/scala3#7790).
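For comparison, a sketch: with the second overload removed, the original call needs no annotations at all:

object y { def f[A, B](f: A ?=> B) = println("1 argument") }

y.f(implicitly[Int]) // compiles: infers A = Int and prints "1 argument"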


prolativ commented Dec 2, 2020

But in your particular case, how, in your opinion, should the compiler know that

provide2(implicitly[String] + implicitly[Int])

should be e.g.

provide2[String, Int, String](implicitly[String] + implicitly[Int])

(which returns a (String, Int) => String)
and not

provide2[Int, String, String](implicitly[String] + implicitly[Int])

(which returns a (Int, String) => String)?


neko-kai commented Dec 2, 2020

The String summon comes before the Int summon in the source code, and in my experience both scalac and dotc generally avoid rearranging that order. (Note how provide's result is String & Int => String, not Int & String => String, despite the two being equivalent.)


prolativ commented Dec 3, 2020

So you would expect that

provide2 {
  val s = implicitly[String]
  val i = implicitly[Int]
  s + i
}

would return a (String, Int) => String but

provide2 {
  val i = implicitly[Int]
  val s = implicitly[String]
  s + i
}

would return a (Int, String) => String?
I can't think of any other case where rearranging vals would influence type inference (and it would be even more confusing if we replaced the vals with defs).
@smarter any thoughts on this?


prolativ commented Dec 3, 2020

On the other hand, while the ordering of arguments is important for ordinary functions, for context functions the difference between (String, Int) ?=> String and (Int, String) ?=> String is quite small, as long as we don't try to pass the implicit arguments explicitly with f(using ...). So making no distinction between the two could potentially make sense in some situations.
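For example (a minimal sketch), the ordering only becomes observable at an explicit application site:

val h: (String, Int) ?=> String = summon[String] + summon[Int]

h(using "a", 1)    // ok: using-arguments follow the declared parameter order
// h(using 1, "a") // error: explicit application is order-sensitive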


neko-kai commented Dec 3, 2020

Rearranging already changes the type in the single-parameter case, and the difference is observable: whitebox macros can change the types down the line based on the order of &.

scala> :t provide { def x = implicitly[String] ; def y = implicitly[Int] }
String & Int => Unit

scala> :t provide { def x = implicitly[Int] ; def y = implicitly[String] }
Int & String => Unit

Order also impacts ordinary, non-implicit contravariant narrowing in a similar fashion; in Scala 2 this wasn't commutative, because it used with, and with affected type erasure.

scala> trait Reader[-R] { def <+>[R1 <: R](that: Reader[R1]): Reader[R1] }; def ask[R]: Reader[R] = ???
// defined trait Reader
def ask[R]: Reader[R]

scala> :t ask[Int] <+> ask[String]
Reader[Int & String]

scala> :t ask[String] <+> ask[Int]
Reader[String & Int]

I understand the reluctance to make context functions behave similarly, even though the counterpart contravariance behavior is similar and stable across versions, but multi-parameter context functions may be very hard to make use of without it.

So making no distinction between the two could potentially make sense in some situations

That would be ideal. Haskell does it, and it supports all the cases here without type annotations.


prolativ commented Dec 3, 2020

But is the way types are printed in the REPL the only problem, or do the two cases you mentioned have some real impact on the evaluation of code?


neko-kai commented Dec 3, 2020

The difference is observable with macros, at the minimum.
There could be cases where the typer behaves differently, but those could be classified as bugs, since intersections are supposed to be commutative.


b-studios commented Dec 3, 2020

I think there are two aspects to this issue:

  • overloading context functions in general
  • inference of type arguments A and B in your example

If we monomorphize the example (that is, drop the type parameters on f)

case class A(a: String)
case class B(b: Int)

object x { 
  def f(f: A ?=> Unit) = ??? //...
  def f(f: (A, B) ?=> Unit) = ??? //...
}

we still end up with an overloaded f that varies only in its contextual arguments. I think it is in general bad style to overload contextual functions in this way, since at the call site it might not be obvious which overload the caller desires:

given B(42)
x.f { 
  println(summon[A].a)
  println(summon[B].b)
}

Both choices for f are equally reasonable.


neko-kai commented Dec 3, 2020

@b-studios
When written that way, it's unambiguous that the (A, B) ?=> overload should be chosen, since then the lexically closest B will be the one chosen.

When written with given inside the f,

x.f { 
  given B(42)
  println(summon[A].a)
  println(summon[B].b)
}

I think it's also unambiguous that the A ?=> overload should be chosen, since the local B is lexically closer and the constraint cannot be floated out to the context function type, because it is satisfied inside the body.

I think it is in general bad style to overload contextual functions in this way since at the call site

It could be reasonable for certain DSLs; e.g. there could be a "mode-switch" statement that adds a constraint and defines the type for the statements in the same block:

def robotDsl(f: Mode1 ?=> Unit)
def robotDsl(f: Mode2 ?=> Unit)

robotDsl {
  dryRunMode; step(right); step(down, 2);
}

However, allow me to describe my specific use case for context functions: I maintain a dependency injection library, distage. The point of the framework is to represent the program fully as a first-class graph value and to make all parameters manageable and overridable. That includes implicits.

It has a persistent issue in Scala 2 that makes using functions worse than using classes, because eta-expansion causes eager resolution of implicits.

Functions with implicit parameters cannot even be referred to in a way that allows deferring the implicit arguments to the framework instead of resolving them from the lexical scope.

import distage._
import cats.effect._

def resourceCtor[F[_]: Sync]: Resource[F, String] = Resource.pure("hi")

def injectionModule[F[_]: TagK] = new ModuleDef {
  // Sync implicit is managed
  make[Sync[F]].from(...) 
  
  // implicit error: tries to resolve Sync immediately, instead of creating a reusable function that lets Sync be parameterized later
  make[String].fromResource(resourceCtor[F] _)
  
  // to eta-expand, one must list all implicit arguments _and_ their types; this does not scale
  make[String].fromResource(resourceCtor[F](_: Sync[F]))
}
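A self-contained Scala 2 sketch of the same problem, without distage (Show and greet are hypothetical stand-ins for Sync and resourceCtor):

trait Show[A] { def show(a: A): String }

def greet[A](a: A)(implicit s: Show[A]): String = "hi, " + s.show(a)

// eta-expansion resolves the implicit here and now:
// val f = greet[Int] _   // error: could not find implicit value for parameter s

// spelling out the implicit parameter and its type works, but does not scale:
val g: Show[Int] => String = greet[Int](42)(_: Show[Int])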

Instead, the current workaround is to wrap the implicit function into a class, because a class can be referred to without causing implicit resolution to be performed immediately. Macros then generate a constructor function based on the class constructor, and the issue is worked around:

// wrap the expression as a class
class ResourceCtor[F[_]: Sync]() extends Lifecycle.OfCats(resourceCtor[F])

def fixedModule[F[_]: TagK] = new ModuleDef {
  make[String].fromResource[ResourceCtor[F]] // referring to a type does not trigger implicit resolution; Sync[F] is now managed
}

But clearly this is very sub-par and intrusive.

To fix this issue, all that's required is the ability to pass an expression in a way that captures all of its requirements, including givens, floating all possible constraints out into the context function type without triggering implicit search.

It could be done without overloading if there were some kind of context function supertype that could work as a catch-all for all context functions (for example, if a union of all context function types didn't immediately cancel context function capture, or if typeclasses could work with context functions).
Or if there were a special syntax or function to turn a method with given arguments into a context function, like f _, but for context parameters.
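For illustration, a sketch of what can be done manually today (m is a hypothetical method): ascribing a context function type defers the given, but there is no generic shorthand akin to f _:

def m(using Int): String = summon[Int].toString

val deferred: Int ?=> String = m   // ok: the expected type defers the given
deferred(using 42)                 // "42"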

Otherwise the current go-to method is a magnet pattern listing all function types:

// a function with captured arguments yielding A
class Functoid[+A]

object Functoid {
  // all normal functions:
  inline implicit def apply[A, R](f: A => R): Functoid[R]
  ...
  inline implicit def apply[A .. N, R](f: (A .. N) => R): Functoid[R]
  // all context functions:
  inline implicit def apply[Ac .. Nc, R](f: (Ac .. Nc) ?=> (A .. N) => R): Functoid[R]
  inline implicit def apply[R](f: ContextFunctionN[FunctionN[R]]): Functoid[R]
}

The expected best behavior here is to choose the widest possible overload, capturing as many context parameters as possible and preferring to float out unresolved constraints, similar to Haskell's treatment of higher-rank functions with typeclass constraints.
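A fixed-arity sketch of that pattern (hypothetical and simplified; the conversions are renamed here to sidestep the very overloading problem under discussion):

class Functoid[+A](val run: Seq[Any] => A)

object Functoid {
  implicit def fromFunction[A, R](f: A => R): Functoid[R] =
    new Functoid(args => f(args(0).asInstanceOf[A]))
  implicit def fromContextFunction[A, R](f: A ?=> R): Functoid[R] =
    new Functoid(args => f(using args(0).asInstanceOf[A]))
}

// an explicit context function literal works today:
val fd: Functoid[String] = Functoid.fromContextFunction((i: Int) ?=> "got " + i)
fd.run(Seq(42)) // "got 42"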

@b-studios

I think I understand your use case. But I am not sure how one could implement such a catch-all implicit without giving up clarity in other use cases.

In particular, note that the arguments to context functions are in covariant position and we do not allow overloads that only vary in this position in other cases.


b-studios commented Dec 3, 2020

Hence my strong intuition that we want to avoid overloads that only vary in the arguments of context functions.

What do you think @smarter ?


neko-kai commented Dec 3, 2020

@b-studios

In particular, note that the arguments to context functions are in covariant position and we do not allow overloads that only vary in this position in other cases.

Hmm, I'm not sure I follow

scala> def f(f: Int => String) = 1; def f(f: Boolean => String)(using DummyImplicit) = 2

scala> def str[A](a: A) = a.toString
def str[A](a: A): String

scala> f(str[Int])
val res1: Int = 1

scala> f(str[Boolean])
val res2: Int = 2

These seem to differ only in function argument as well. Is there an example somewhere of this restriction on overloads?

@b-studios

@neko-kai Sorry, you are completely right. I shouldn't have said "covariant"; "return position" is closer. It is a bit difficult for me to explain, but arguments to contextual function parameters feel much more like return values than is the case with explicit functions.


prolativ commented Dec 3, 2020

Actually, for @b-studios' example I would find the A ?=> overload the right choice, because if B is already given in scope you don't need to require an additional parameter to provide it. Compare with this case:

case class A(a: String)

object x {
  def f(f: => Unit) = println("No A")
  def f(f: A ?=> Unit) = println("With A")
}

given A("cdcsc")

x.f {
  println(summon[A].a)
}

This does compile and prints No A.
But if you remove the given instance, you'll get a no implicit argument of type A was found for parameter x of method summon in object Predef error, even though in this case the second overload could be taken.

For the DSL example I would say something like

def robotDsl[M <: Mode](mode: M)(f: M ?=> Unit)

robotDsl(dryRunMode) {
  step(right); step(down, 2);
}

would be clearer

@b-studios

@prolativ Regarding the DSL example: I completely agree that this would be a clear way to express it. For my example above: I still think both would make sense, and I would like to see situations like this ruled out. Contextual abstractions are already quite sophisticated, and combining them with this kind of overloading (which can also be non-trivial) seems like overkill.


neko-kai commented Dec 3, 2020

@prolativ

This does compile and prints No A.
But if you remove the given instance, you'll get a no implicit argument of type A was found for parameter x of method summon in object Predef error, even though in this case the second overload could be taken.

I do not understand your example.

In current Dotty, context functions are flatly canceled by overloads: there is no way to choose the second overload whatsoever, with the exception of an explicit context function literal. The interaction is currently disabled because it still needs to be figured out and implemented, not for any profound reason.

The same applies to the interaction with unions and intersections; e.g. meaningless tautologies also cancel context capture:

scala> def f(getInt: (Int ?=> Unit) | (Int ?=> Unit)) = getInt(using 0)
def f(getInt: ((Int) ?=> Unit) | ((Int) ?=> Unit)): Unit

scala> f(implicitly[Int])
1 |f(implicitly[Int])
  |                 ^
  |no implicit argument of type Int was found for parameter e of method implicitly in object Predef

scala> def f(getInt: (Int ?=> Unit) & (Int ?=> Unit)) = getInt(using 0)
def f(getInt: ((Int) ?=> Unit) & ((Int) ?=> Unit)): Unit

scala> f(implicitly[Int])
1 |f(implicitly[Int])
  |                 ^
  |no implicit argument of type Int was found for parameter e of method implicitly in object Predef

scala> def f(getInt: (Int ?=> Unit)) = getInt(using 0)
def f(getInt: (Int) ?=> Unit): Unit

scala> f(implicitly[Int])

Simply put, the compiler is not ready yet in this area; using its current behavior as an example of how things should be is not helpful.

I would find the A ?=> overload the right choice, because if B is already given in scope you don't need to require an additional parameter to provide it.

But B is also given in the innermost scope. Implicits now nest lexically, so the innermost implicit should be chosen. Consider:

def withB(f: B ?=> Unit) = f(using B(42))

given B(1)
withB {
  println(summon[B])
}
// B(42)

Here withB overrides the outer B. Despite the outer given being right there, it's ignored in favor of the one from the context function.

Lexically, this makes sense.

Now suppose we add an empty overload: def withB(f: Any): Unit. If we follow your proposed logic and prefer a B in an outer scope, then the f: Any overload and the B(1) given will be chosen. I think this would be odd and unprecedented: a less specific type would be chosen for the overload, and an implicit from a more outer scope would be chosen as well.

In the same situation, Haskell, for example, would float the typeclass constraint out into the type of the expression instead of resolving it eagerly, and would choose the variant with more typeclass contexts, not fewer.

This makes sense if you consider the expression's type separately from all other context:

{ println(summon[B]) }

With all implicits floated out into the context function type, without attempting to resolve implicits eagerly within the body, this has the type B ?=> Unit; therefore, IMHO, the B ?=> Unit overload should be chosen.
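Indeed, when there is a single unambiguous expected type, the compiler already performs exactly this floating today (a sketch, with B as above):

val g: B ?=> Unit = println(summon[B])

g(using B(42)) // prints B(42)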


prolativ commented Dec 3, 2020

In case of

given B(1)
withB {
  println(summon[B])
}

it is quite explicit that summon[B] should use the given instance provided by the method and not the one declared above the method call. And I believe that context functions were introduced with such usage in mind.

But going back to your initial examples with provide: if I understood correctly, your intention was to make the compiler infer the types for which given instances need to be provided, so that the expression block passed as the method argument can be evaluated later. If we considered this a proper use case for context functions, my intuition would be that the compiler should try to infer the smallest set of implicit parameters needed for the evaluation. Choosing A ?=> instead of (A, B) ?=> then seems to make more sense to me if B is already given. And this plays well with my example, which already works: if an implicit is already in scope, don't require it to be provided later.

This has the general benefit that when you call an overloaded method taking different context functions as a parameter, it is known which given instances are already in scope, so the compiler could use this knowledge to decide which overload should be taken. Then for

case class A(a: String)
case class B(b: Int)

object x {
  def f(f: A ?=> Unit) = ???
  def f(f: (A, B) ?=> Unit) = ???
}

given b as B = B(1)

x.f {
  println(summon[A].a)
  println(summon[B].b)
}

the method call would desugar to

x.f { (using a: A) =>
  println(summon[A](using a).a)
  println(summon[B](using b).b)
}

so this would still be consistent with the withB example (the given instance of B is taken from outside the closure not because givens from the closure's parameters were given lower priority, but because B is simply not a parameter in this case)

@b-studios

While @neko-kai's use case is important, I am closing this issue since it is not a bug with respect to the current specification of context functions.

@b-studios b-studios transferred this issue from scala/scala3 Dec 7, 2020
@b-studios

Reopening as a feature request
