Add implicit function types #1775

Merged
merged 37 commits into lampepfl:master from dotty-staging:add-implicit-funtypes on Dec 18, 2016

@odersky
Contributor
odersky commented Dec 5, 2016 edited

This is the first step to bring contextual abstraction to Scala.

@@ -1128,8 +1128,18 @@ object Parsers {
val name = bindingName()
val t =
if (in.token == COLON && location == Location.InBlock) {
+ if (false) // Don't error yet, as the alternative syntax "implicit (x: T) => ... "
+ // is not supported by Scala2.x
+ migrationWarningOrError(s"This syntax is no longer supported; parameter needs to be enclosed in (...)")
@smarter
smarter Dec 5, 2016 Member

I think we need two levels of migration warnings: those that can be fixed in scala/dotty and those that can only be fixed (currently) in dotty. For example, the fact that we currently warn on every usage of `with` is quite annoying.

@odersky
odersky Dec 5, 2016 Contributor

Good point.

@edmundnoble

Excellently done. We'll beat Idris in no time. I have a few more questions:

  1. How does this interact with specialization of the FunctionN types for lower N?
  2. How does this interact with FunctionXXL?
+ - managing capabilities for security critical tasks,
+ - wiring components up with dependency injection,
+ - defining the meanings of operations with type classes,
+ - more generally, passing any sort of context to a computation.
@edmundnoble
edmundnoble Dec 5, 2016 Contributor

I can only see a very weak relationship between comonads and implicit functions. At best, implicit functions can replace coreader comonad stacks. Other than that, comonads are completely irrelevant.

@odersky
odersky Dec 6, 2016 Contributor

I meant to refer to the original paper "Implicit parameters: dynamic scoping with static types" (POPL 2000), which speculated that the best-fitting foundation for implicit parameters is comonads. But thinking about it, I ran into trouble nailing it down.

@odersky
odersky Dec 6, 2016 edited Contributor

I think I have worked it out now: implicit functions are both monads and comonads, with

M[A]        =  implicit X => A
unit a      =  implicit (_: X) => a
counit m    =  m (implicitly[X])
map m f     =  implicit (_: X) => f (m (implicitly[X]))
            =  unit (f (counit m))

join m      =  counit m 
duplicate m =  unit m

WDYT?

@edmundnoble
edmundnoble Dec 6, 2016 Contributor

You aren't just using the implicit function there, you're also using the implicitly[X] value in counit. The comonad you present is the Store comonad (a, a -> b), and the monad is the reader monad. I believe coeffects, which are a calculus of dependencies (http://tomasp.net/academic/papers/structural/), might provide a better theoretical underpinning.

@odersky
odersky Dec 6, 2016 Contributor

The monad is indeed close to the reader monad, but I fail to see that it's the Store comonad. Where is the tupling?

@odersky
odersky Dec 6, 2016 edited Contributor

Regarding coeffects, I think that's more involved and there is something else going on. Note that implicit functions themselves can model capabilities, but for effects we need a notion of capture prevention as well.

@odersky
odersky Dec 6, 2016 edited Contributor

I don't know category theory all that well, so I am not sure. But it seems admissible to have a structure that is both a monad and a comonad with different operators for unit and counit. Googling yielded Moore machines as an example here:

http://stackoverflow.com/questions/16551734/can-a-monad-be-a-comonad

So how would that not apply to implicit functions?

@edmundnoble
edmundnoble Dec 7, 2016 Contributor

Monads and comonads exist for the purpose of composition. The first thing you need to have either of them is a unary type constructor F[_]. You chose a different type constructor for the monad and comonad. To illustrate this, I will fix F[_] to ImplicitFunction1[I, ?] (for fixed I).
Now, the comonad instance:

def extract[A](function: ImplicitFunction1[I, A]): A

We're already stuck. We don't have an I. You get around this by assuming implicitly one exists in scope, so actually your type constructor is more like:

type FunctionAndParam[I, A] = (implicit I, ImplicitFunction[I, A])

Which explains where the Store is coming from. So `ImplicitFunction1[I, ?]` is not a comonad. Now, let's see if it's a monad:

def pure[A](a: A): ImplicitFunction1[I, A] = implicit (_: I) => a
def flatMap[A, B](fa: ImplicitFunction1[I, A])(
               f: A => ImplicitFunction1[I, B]): ImplicitFunction1[I, B]
 = implicit (i: I) => f(fa(i)).apply(i)

Remove the implicit parts, and this is just the Reader monad. Implicits do not really need much from the type system or foundations, because from another angle they are just the Reader monad with better syntax and no bind method provided (you can add your own, as you can see there, and that won't magically destroy performance). Then, this is monadic code and there can be no performance difference between Reader and implicits - assuming the same usage, that is. If both uses are closure-heavy there's nothing the compiler can do, and monadic code does tend to be heavily abstracted out as well just because of the demographics involved.
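For reference, the Reader being alluded to is nothing more than a wrapped function; a minimal sketch in plain Scala (nothing from this PR, names are illustrative):

case class Reader[I, A](run: I => A) {
  def map[B](f: A => B): Reader[I, B] =
    Reader(i => f(run(i)))
  def flatMap[B](f: A => Reader[I, B]): Reader[I, B] =
    Reader(i => f(run(i)).run(i))
}

object Reader {
  def pure[I, A](a: A): Reader[I, A] = Reader(_ => a)
  def ask[I]: Reader[I, I] = Reader(i => i)
}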

Essentially, my objection to that bimonad is that it's just a comonad and monad which already exist with "implicit" pasted at the start, so the monad will not actually teach us anything about the underlying dependency calculus that implicit params capture. All it teaches us is that implicit functions with the same param type compose, like ordinary functions.

Coeffects also include a model of implicit parameters, as you can see on the website, which may be useful as theoretical underpinnings, but I agree that otherwise the concept is very large (and I'm very excited to see capture-free closures or affine types if you're hinting that we're going that direction :D).

@odersky
odersky Dec 7, 2016 edited Contributor

I think we are talking in different contexts (forgive the pun). I am arguing that, for given X, the type implicit X => A is a bimonad with unit being lambda abstraction and counit being implicit application. You were showing that I cannot turn this into an instance of the Monad and Comonad class at the same time, because I have to represent an instance of X somehow in the type. But that's exactly the point: X is assumed to be in scope (and the types guarantee that one will be in scope), so I do not want nor need to represent it in the type.

Put in other words: If implicit functions were representable as code in the standard monad/comonad class hierarchy, there would be no point making them a language construct.

@odersky
odersky Dec 7, 2016 edited Contributor

There's something else at play here, and I am trying to wrap my head around it. If we drop the implicitness and just talk about function abstraction and application we also get a bimonad in some sense:

M[A]        =  X => A
unit a      =  (x: X) => a
counit m    =  m x
map m f     =  (x: X) => f (m x)
            =  unit (f (counit m))

join m      =  counit m 
duplicate m =  unit m

But this feels a little bit weirder because now we are dealing with open terms, with counit operations introducing free variables that are captured by unit operations. And the names of both free variables and lambda binders are fixed to be always the same name x.

By contrast, all terms in the implicit function bimonad representation are closed. I am not sure what difference it makes, though.

@odersky
odersky Dec 7, 2016 edited Contributor

Regarding composition, I believe one can explain the composability of implicit functions from the principle that they are a bimonad. Let's say you have two function types F and G, which take implicit parameters of types X and Y, but in different orders:

type F =  implicit X => implicit Y => A
type G =  implicit Y => implicit X => A

I can turn a value f: F into a value of type G like this:

implicit Y => implicit X => f(implicitly[X])(implicitly[Y])

Moreover that conversion is completely automatic. I just have to write

val x: G  = f

and the rest is produced automatically. I believe that gets us to the essence of why implicit functions compose better than general monads. Even if we disregard the aspect of implicit code insertion, what happens here is that we strip all (co)monads in the applications to the two implicitly arguments and then reapply the (co)monads in the outer lambda abstractions. The fact that this is a bimonad is essential here, because it means we can always get out of an entangled structure with counit operations, reorder at the outermost level, and get back with unit operations.
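Spelled out as a self-contained snippet in the syntax of this PR (a sketch; X, Y and A are placeholder classes):

class X; class Y; class A

type F = implicit X => implicit Y => A
type G = implicit Y => implicit X => A

def reorder(f: F): G = f
// the right-hand side is elaborated to
//   implicit Y => implicit X => f(implicitly[X])(implicitly[Y])
// as written out above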

@edmundnoble
edmundnoble Dec 7, 2016 Contributor

I think that your insight that the regular function bimonad requires the ability to introduce terms into the environment shows that the implicit function comonad also requires that ability. But yes, I agree that being able to implicitly reorder implicit arguments decreases their syntactic overhead below reader.

@Blaisorblade
Blaisorblade Dec 7, 2016 Contributor

I agree with @edmundnoble: in particular, what you get with implicits isn't closed terms, because they assume something in the context. I haven't read the paper, but maybe they had some other category in mind? (Warning: dense and unchecked ideas ahead.) Beyond the category of functions you're using, you could probably have (equivalent, I think) categories of expressions with x in scope and of expressions with some implicit in scope, and there your comonad might work. I haven't checked any of the details, though. Whether that's insightful or useful, beyond making precise that the two things are equivalent, is not clear.

@tpetricek
tpetricek Dec 8, 2016

I see a mention of my work on coeffects in the thread already, so I thought I'd add my perspective. I'm not all that familiar with Scala, so I'll stick to my Haskell/ML-style notation, but I hope that will be understandable.

What are coeffects
First of all, coeffects are, indeed, closely related to comonads (they are modelled using "indexed comonads", which are a generalisation of ordinary comonads). What coeffects add is that they track more precisely what context is needed. The comonadic (or monadic) type C a or M a tells you that there is some context requirement or some effect, but it does not tell you what precisely. Coeffects add an annotation, so that you have types such as C {?p:int, ?q:int} a, where the annotation {?p:int, ?q:int} says you need two implicit parameters ?p and ?q. So you can see coeffects as more precise comonads.

Are implicit parameters monads, comonads or coeffects
Implicit parameters are one of the motivating examples in the work on coeffects, but they are just one example (coeffects capture other interesting contextual properties). The interesting thing about implicit parameters is that they can almost be modelled by both the Reader monad and the Product comonad. With monads, you use functions a -> M b for some monad, and with comonads, you use functions C a -> b for some comonad.

With Reader monad, we have M a = State -> a and so:

a -> M b = a -> (State -> b)

With Product comonad, we have C a = a * State and so:

C a -> b = a * State -> b

Now you can see that the two functions, a * State -> b and a -> State -> b are related (via currying)! 🎉
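In Scala terms this is just currying and uncurrying; a throwaway sketch, with State standing in for the context type:

def curried[A, State, B](f: ((A, State)) => B): A => State => B =
  a => s => f((a, s))

def uncurried[A, State, B](f: A => State => B): ((A, State)) => B = {
  case (a, s) => f(a)(s)
}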

There is one thing that the Reader monad does not let you do. It does not let you model the fact that implicit parameters (in GHC) support both lexical and dynamic scoping. Take for example this:

let ?x = 1
let f = (fun n ->?x + ?y)

If we model this using the Reader monad, and ?x and ?y mean "read implicit parameter from the monad", then the type of f will be a function that needs ?x and ?y. However, if we do this using comonads, the context available in the function body can combine context provided by the caller with context provided by the declaration site - and so we can use ?x from the declaration site and end up with a function that needs just ?y. (I'd be quite curious to see what Scala is doing here - are implicits just dynamically scoped, or a mix of both?)
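Rendered in the notation of this PR, the question is roughly the following (a sketch; ?x and ?y become implicit values of made-up marker types PX and PY, since Scala selects implicits by type):

case class PX(value: Int)
case class PY(value: Int)

implicit val px: PX = PX(1)              // "let ?x = 1"

def f(n: Int): implicit PY => Int =
  implicitly[PX].value + implicitly[PY].value + n
  // is PX (?x) taken from the definition site, and PY (?y) supplied by the caller?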

Links with more information
There is a lot more about this than I can fit in a comment, so for those interested:

+ - defining the meanings of operations with type classes,
+ - more generally, passing any sort of context to a computation.
+
+Implicit function types are a suprisingly simple and general way to
@edmundnoble
edmundnoble Dec 5, 2016 Contributor

s/suprisingly/surprisingly

+make coding patterns solving these tasks abstractable, reducing
+boilerplate code and increasing applicability.
+
+*First Step* My pull request is first implementation. In solves the
@edmundnoble
edmundnoble Dec 5, 2016 Contributor

s/first/the first

+accesses. Monads don't compose in general, and therefore even simple
+combinations need to be expressed on the level of monad transformers,
+at the price of much boilerplate and complexity. Recognizing this,
+peaple have recently experimented with free monads, which alleviate
@edmundnoble
edmundnoble Dec 5, 2016 Contributor

s/peaple/people

@edmundnoble
edmundnoble Dec 5, 2016 Contributor

The free monad is almost entirely orthogonal to this issue. It solves the monad composition problem, but the reader monad can be trivially composed with itself even without it. I'd also be very interested in a citation for your statement about the inefficiency of the reader monad, because the reader monad is just a function.

@odersky
odersky Dec 6, 2016 Contributor

I don't think I need a citation. It's evident: With a monad every primitive step (a semicolon if you will) becomes a bind, which means a closure object is created and discarded. That's at least an order of magnitude in performance loss on the JVM. So I think the proof obligation is clearly to show that monadic code is somehow not as inefficient as it looks.
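To make the "semicolon becomes a bind" point concrete, here is the standard desugaring (a sketch using Option purely for illustration):

def twoSteps(fa: Option[Int], fb: Option[Int]): Option[Int] =
  for { a <- fa; b <- fb } yield a + b
// desugars to: fa.flatMap(a => fb.map(b => a + b))
// the outer closure is allocated on every call, the inner one whenever fa is non-empty,
// on top of whatever the monad's own flatMap/map allocate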

@odersky
odersky Dec 6, 2016 edited Contributor

but the reader monad can be trivially composed with itself even without it.

But what about composing two reader monads that each read a different thing? Or composing a reader monad with a transaction monad, say? I don't see a general framework to do these things, even though one could think of ad-hoc solutions to compose some specific monads.

@edmundnoble
edmundnoble Dec 6, 2016 Contributor

For your first point: monadic code and code dispatching on a monad typeclass instance are not equivalent. The concrete use of a monad in particular will likely result in all binds being inlined, seeing as scalac is so paranoid about inlining higher order functions.

Composing two different reader monads Reader[A] and Reader[B] is as simple as Reader[(A, B)]; as inefficient as tupling and projection. You are completely correct though in that there is no general framework to compose or combine monads (or rather there are quite a few but no clear choice). I suppose my comment is just that despite the fact that monads in general do not compose, the Reader monad does, so it's not an apples-to-apples comparison with implicit params, which do compose.
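Written out, with a reader over E modeled as a plain E => T (a sketch; the projections are exactly the tupling-and-projection cost mentioned above):

def widenLeft[A, B, T](ra: A => T): ((A, B)) => T = { case (a, _) => ra(a) }
def widenRight[A, B, T](rb: B => T): ((A, B)) => T = { case (_, b) => rb(b) }

Once both readers run over the combined environment (A, B), they compose like ordinary functions.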

@odersky
odersky Dec 6, 2016 edited Contributor

For the performance question, I believe we will need concrete data. Knowing the compiler well, my strong belief is that monadic composition has terrible performance (in particular, if you add trampolining, that's another order of magnitude of slowdown). I believe the proof obligation is firmly in the other camp: to show that monadic code is not as terrible as it appears after all.

@edmundnoble
edmundnoble Dec 7, 2016 Contributor

I agree entirely that we need concrete data, but at the same time Readers are just functions. It's very easy to use function objects (and Readers) in a way which does a lot of unnecessary dispatch and allocation, and I agree that it's definitely worth investigating how the monadic API contributes to that. I have not seen any literature on Reader performance, only Writer and State, because they tend to do so incredibly badly. Also: yes, trampolining can shave an order of magnitude off your performance, but it's also a large hammer. Trampolining is a brute-force tool to make things stack-safe, and the FP community in Scala has noticed its terrible performance and started to move away from it where it can (especially in the case of monadic composition).

@Blaisorblade
Blaisorblade Dec 7, 2016 Contributor

Note, @edmundnoble, that Haskell performance and Scala performance of the same construct are not related. Scala has a very minimal optimizer compared to GHC, for various reasons. Also, scalac does no automatic inlining IIRC; it's users who request inlining of higher-order functions via annotations.

+capabilities, dictionaries, or whatever contextual data the functions
+need. The only downside with this is that often there's a large
+distance in the call graph between the definition of a contextual
+element and the site where it is used. Conseuqently, it becomes
@edmundnoble
edmundnoble Dec 5, 2016 Contributor

s/Conseuqently/Consequently

+pattern because it is lightweight and can express context changes in a
+purely functional way.
+
+The main downside of implicit parameters is the verbosity of their
@edmundnoble
edmundnoble Dec 5, 2016 Contributor

Just editorializing: in my opinion, the main downside of implicit parameters is that they make code harder to refactor, because the place where your code lives changes its semantics. The verbosity is definitely an issue on its own, though.

@odersky
odersky Dec 6, 2016 Contributor

But that's nothing new. All free bindings of a piece of code affect its semantics. So the semantics of any expression that's not closed depends on its location (?)

@edmundnoble
edmundnoble Dec 6, 2016 Contributor

Fair point, free bindings impede refactoring whether they're implicit or not. My mistake.

@Blaisorblade
Blaisorblade Dec 8, 2016 Contributor

But that's nothing new. All free bindings of a piece of code affect its semantics. So the semantics of any expression that's not closed depends on its location (?)

Implicits simply hide the free variables in use, so that figuring out the context requires typechecking rather than just parsing. So, after you look up implicitly's type, you discover that implicitly[Foo] is an open term (sugar for the open term implicitly[Foo](theImplicitDefinition)). This is confusing enough that it was (indirectly) discussed above: #1775 (comment)

@odersky
Contributor
odersky commented Dec 7, 2016 edited

In light of the (very interesting!) discussion about monads and comonads here, I think it's better to talk about "contextual" abstraction, so I have changed that line in the PR explanation. It seems like something comonadic comes into play, but what implicit functions provide cannot be explained exclusively (or even primarily) by their comonadic nature.

@liufengyun
Contributor

I drafted the immature idea of dynamic parameters here, which is marginally related to this discussion.

@odersky odersky closed this Dec 7, 2016
@odersky odersky deleted the dotty-staging:add-implicit-funtypes branch Dec 7, 2016
@odersky odersky reopened this Dec 7, 2016
@odersky odersky referenced this pull request in scala/scala-lang Dec 7, 2016
Merged

Blog post on implicit functions #570

@notxcain
notxcain commented Dec 7, 2016

Could you please provide more examples where it is really useful (the one from the article is very synthetic, IMHO)? I'm not sure how it is different from using ReaderT for passing context, while this kind of (contextual) implicit makes things harder to reason about.

+
+ /** An implicit function type */
+ class ImplicitFunction(args: List[Tree], body: Tree) extends Function(args, body) {
+ override def toString = s"ImplicitFunction($args, $body"
@larsga
larsga Dec 7, 2016

Missing end-paren?

@nafg
nafg commented Dec 8, 2016

A thought while reading the blog post: what if implicitly were redefined as def implicitly[A]: implicit A => A? Then, instead of def thisTransaction: Transactional[Transaction] = implicitly[Transaction], you could just use type inference with def thisTransaction = implicitly[Transaction]
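A sketch of the suggestion, using the Transactional alias from the blog post (the helper gets a different name here to avoid clashing with Predef.implicitly; whether the inference works out this way is exactly the question):

class Transaction
type Transactional[T] = implicit Transaction => T

def demand[A]: implicit A => A = implicit (a: A) => a

// the hope: the result type is inferred as implicit Transaction => Transaction,
// i.e. Transactional[Transaction]
def thisTransaction = demand[Transaction]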

(OT -- is there any chance of getting polymorphic function values one day?)

@acjay
acjay commented Dec 8, 2016 edited

This seems potentially really handy, but the one thing that's bugging me a bit is that if this is the most common application, it seems like it requires some nontrivial setup. First of all, thisTransaction (and its analogs for other contexts) would have to be imported all over the place, to take advantage of that syntax. Secondly, every apparent method output type has got to be decorated with the context the method requires for input. I can see this being really confusing, especially for output types that might already involve parameterized types. It seems like it could be better to annotate the input instead, somehow. Also, instead of having to use implicitly, directly or indirectly, a syntax that has that effect might help.

Suppose I could annotate an implicit function in the type parameter list, and then access the context directly:

  def f1[*Transaction](x: Int): Int = {
    [Transaction].println(s"first step: $x")
    f2(x + 1)
  }

I'm sure this syntax that drops implicitly probably conflicts with some existing Scala syntax, but it's not hitting me right away. It would be a bold move, but on the other hand, if implicit functions are a winning play for passing context, they could be ubiquitous in real Dotty code.

@liufengyun's dynamic parameters also seem to address my concerns, when it comes to the application of implicit functions for context-passing. But do we know of any other killer applications of this feature? Shooting in the dark, but perhaps the ability to eta-convert methods with implicit parameter lists, preserving the implicitness?

If I understand @nafg's request for polymorphic function values, combined with implicit function types, functions and methods would almost be fully interchangeable. Default parameters and parameter names are the only things that come to mind as being left out. But now I think I've gone fully off on a tangent.

P.S. @nafg Heh, I had a very similar thought, I think: https://twitter.com/AlanJay1/status/806685555538980864, although I didn't make the connection between it and simplifying thisTransaction.

@nafg
nafg commented Dec 8, 2016
@liufengyun
Contributor

@nafg To summarize my view: I think dynamic parameters can do better than implicit function types.

Dynamic parameters are based on the following paper (renamed to avoid confusion with Scala implicits):

Implicit parameters: dynamic scoping with static types, Jeffrey R. Lewis, POPL '00

The detailed argument is in the Gist.

@nafg
nafg commented Dec 9, 2016

Here's another possible use case.
Some libraries, like Slick and Shapeless, have a lot of machinery happening based on implicits. The problem is that if I'm writing some polymorphic code, the errors about missing implicits are often very cryptic, and it's hard to know what implicit parameters my method needs; besides, it's very tedious to add things like slick.lifted.Shape[Level <: ShapeLevel, -Mixed_, Unpacked_, Packed_], or all the operations you may need on your HList. If instead they returned implicit functions, and I wrapped them via function composition, perhaps the required type classes could just be inferred.

@odersky
Contributor
odersky commented Dec 12, 2016

For discussions about the concept of implicit parameters and the blog post, let's go to:

https://contributors.scala-lang.org

Let's narrow the discussion here to the specifics of the implementation.

@odersky
Contributor
odersky commented Dec 13, 2016

Rebased to master

@odersky
Contributor
odersky commented Dec 13, 2016

The last commits represent the second step towards contextual abstraction in Scala. They make
implicit function types a close to zero-cost abstraction, by means of a phase that optimizes them
to plain methods.

+ * is expanded to two methods:
+ *
+ * def m(xs: Ts): IF = implicit (ys: Us) => m$direct(xs)(ys)
+ * def m$direct(xs: Ts)(ys: Us): R = E
@Blaisorblade
Blaisorblade Dec 13, 2016 Contributor

Any reason why this optimization is restricted to implicit functions? Done this way, this seems rather ad-hoc.

@odersky
odersky Dec 13, 2016 Contributor

The main reason is that the rewrite is based on the type of the result. For implicit function types we have a perfect match: the typing rules guarantee that every function that has an implicit function result type must be implemented by a closure. For normal functions this is not guaranteed, so we do not know whether the optimization is globally effective or not. It could make things worse, actually.
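For a concrete picture, here is the rewrite hand-expanded for a method with the blog post's Transactional result type (a sketch; in the real phase the forwarder keeps the original name, and Transaction.id is made up):

class Transaction { def id: Int = 0 }
type Transactional[T] = implicit Transaction => T

// source:
def balance(x: Int): Transactional[Int] = x + implicitly[Transaction].id

// roughly what ShortcutImplicits produces, with the $direct name spelled
// as an ordinary identifier:
def balanceDirect(x: Int)(t: Transaction): Int = x + t.id
def balanceForwarder(x: Int): Transactional[Int] =
  implicit (t: Transaction) => balanceDirect(x)(t)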

@DarkDimius
DarkDimius Dec 13, 2016 Member

@Blaisorblade, additionally note that normal function types are non-final, and in case m is overridden in a subclass by a non-closure with custom apply logic, you could have a hard time figuring out what the $direct method should do.

@Blaisorblade
Blaisorblade Dec 13, 2016 Contributor

The typing rules guarantee that every function that has an implicit function result type must be implemented by a closure.

So I can't implement an implicit function type by returning a closure? Are those non-first-class?

Quite a few things suggest the following: A => B is a value type, while implicit A => B is a non-value type (IMHO a method type). I see why you're doing that, but that doesn't sound very orthogonal to me—this is a small thing, but that's the sort of thing which gives Scala its reputation for complexity. And while this fits in the heads of compiler hackers, the heads of (advanced) users also matter.

On Discourse I started sketching, as an alternative, a (limited) form of abstraction on non-value types (in particular, method types). I expect there are questions there too, but right now it seems to me you are in fact implementing a special-case of that.
(I don't propose abstract non-value types—I'd expect them to be type aliases, according to the same rules you use here, whatever they are exactly).

@odersky
odersky Dec 14, 2016 edited Contributor

Let's say you have

 val cl: implicit A => B
 def f: implicit A => B = cl

Then, by the rules of implicit expansion, this will give you an eta-expansion on the rhs of f:

def f: implicit A => B = implicit $x: A => cl($x)

That's what guarantees that the right-hand side of every method with implicit function result type is an implicit function value. For normal functions that guarantee does not hold, so we might well be pessimizing code with a shortcutting optimization.

@Blaisorblade
Blaisorblade Dec 15, 2016 Contributor

Ah, so here we inline the eta-expansion and emit a call to cl, while with normal functions we only have a non-inlined call to cl (which will be inlined anyway by the JVM). Fair enough.
Maybe some info about this should be part of the code docs?

@odersky
Contributor
odersky commented Dec 14, 2016

/rebuild

+ sym.copy(
+ name = sym.name.directName,
+ flags = sym.flags | Synthetic,
+ info = directInfo(sym.info))
@retronym
retronym Dec 15, 2016 Contributor

This sort of transform needs to take some care about which annotations end up on the old and new methods; for things like @synchronized and @strictfp, semantics depend on it. For general annotations, having a copy on both methods might break assumptions of the consumer of the annotations.

Representative bug fix in Scala: scala/scala#5540

@retronym
retronym Dec 15, 2016 edited Contributor

The same applies to the flags. The original sym might have OVERRIDE, but that doesn't guarantee that the direct method will override.

trait T[A] { def foo: A = ??? }
class C extends T[Transactional[Int]] {
  override def foo: Transactional[Int] = ??? // direct method won't be an override
}
@retronym
retronym Dec 15, 2016 edited Contributor

I confused myself above: synchronized is a flag (on class methods, after uncurry), not an annotation.

@odersky
odersky Dec 15, 2016 Contributor

We still don't do a systematic job for annotations. We just copy them wholesale when copying symbols. This should be guided by some meta-annotation scheme but that remains to be done. As to flags, I agree it's not clean to always copy "Override". I don't think it matters because we don't test the flag after the transform, but we should avoid it anyway. I'll add a change.

+ case Block(stats, expr) => cpy.Block(tree)(stats, directQual(expr))
+ case tree: RefTree =>
+ cpy.Ref(tree)(tree.name.directName)
+ .withType(directMethod(tree.symbol).termRef)
@retronym
retronym Dec 15, 2016 Contributor

I'm guessing that this transform is (correctly) skipped for:

scala> def foo(a: Any): Transactional[Int] = 42
def foo(a: Any): Transactional[Int]
scala> (if ("".isEmpty) foo("") else foo("")).apply("")

Because tree.qualifier.symbol would be NoSymbol.

But just wanted to check that I'm reading this correctly.

@odersky
odersky Dec 15, 2016 Contributor

Yes, that's correct.

+ flags = sym.flags | Synthetic,
+ info = directInfo(sym.info))
+ if (direct.allOverriddenSymbols.isEmpty) direct.resetFlag(Override)
+ direct
@retronym
retronym Dec 16, 2016 Contributor

Does this force the current info transform on the base types? If not, the result might be non-deterministic. I found this a bit fiddly to get right in an example I once created showing how to do this in a compiler plugin.

@DarkDimius
DarkDimius Dec 16, 2016 edited Member

It will invoke infoTransforms, but this phase doesn't register one (it actually does, but it's the identity); it forcefully updates denotations instead.

@odersky
odersky Dec 16, 2016 Contributor

@retronym Any infoTransforms in phases up to this one would be forced on the basetypes, yes.

+ }
+
+ val (remappedCore, fwdClosure) = splitClosure(mdef.rhs)
+ val originalDef = cpy.DefDef(mdef)(rhs = fwdClosure)
@retronym
retronym Dec 16, 2016 Contributor

Do you need to reset the ABSTRACT flag on the original method, now that it always contains a forwarder?

@odersky
odersky Dec 16, 2016 Contributor

No, a Deferred method would not get a forwarder.

@odersky
Contributor
odersky commented Dec 16, 2016

Rebased to master

@odersky
Contributor
odersky commented Dec 17, 2016

Rebased again to master. I am going to merge as soon as tests pass because I am growing tired of this.

odersky added some commits Dec 3, 2016
@odersky odersky Add ImplicitFunctionN classes
These are always synthetic; generated on demand.
8450556
@odersky odersky Add syntax for implicit functions fd2c24c
@odersky odersky Always insert apply for expressions of implicit function type ad7edc7
@odersky odersky Refactor function operations in Definitions
Also: show implicit function types correctly.

Also: refine applications of implicit functions

 - don't do it for closure trees
 - don't do it after typer.
4fb19e4
@odersky odersky Handle erasure of implicit function types 415ff70
@odersky odersky Make implicit functions have implicit function type aa6d4fd
@odersky odersky Changes for matching and subtyping implicit methods
Implicitness is ignored for matching (otherwise
apply in ImplicitFunction could not shadow apply in Function).
And explicit trumps implicit in subtyping comparisons.
e6da213
@odersky odersky Cleanup of implicit modifiers scheme
Implicit modifiers were quite irregular compared
to the other ones. This commit does a cleanup.
63ba924
@odersky odersky Generalize syntax for implicit function values
 - allow more than one implicit binding
 - harmonize syntax in expressions and blocks
ee59c23
@odersky odersky Add code to disable old implicit closure syntax in blocks
This will no longer be supported. On the other hand, as long as
the alternative is not yet legal in Scala2.x we cannot flag this
as an error. So the migration warning/error and patch code is
currently disabled.
aecfb37
@odersky odersky Fix erasure of implicit functions
and check at runtime that it works
d5ff7e0
@odersky odersky Take nesting into account when ranking implicits
This will need a spec change. It's necessary in
order not to confuse synthetic implicits with each other
or with explicit ones in the environment.
0336785
@odersky odersky Create implicit closures to match expected implicit functions
When the expected type is an implicit function, create an
implicit closure to match it.
bcc80ad
@odersky odersky Don't look at nesting for implicit resolution under Scala2 mode. b804d91
@odersky odersky Enrich test case
Run a typical dotty compiler scenario with implicit
contexts.
43d69cc
@odersky odersky More tests and starting a blog post 4c55d2f
@odersky odersky Finished blog post c10a990
@odersky odersky Add conclusion to blog post b78150d
@odersky odersky Fix link 6ce5fb1
@odersky odersky Fixes to tests
1. I noted java_all was not running (it took 0.01s to complete); fixed by
   changing the test directory.

2. We suspected tasty_bootstrap was getting the wrong classpath and
   had a lot of problems getting it to print the classpath. Fixed
   by refactoring the options we pass to tasty_bootstrap (it has
   to be -verbose in addition to -classpath). For the moment,
   both are turned off, but we just have to swap a false to a true
   to turn them on together.
c9f666f
@odersky odersky Fix "wrong number of args" reporting
"Wrong number of args" only works for type arguments but was called also for
term arguments. Ideally we should have a WrongNumberOfArgs message that works for
both, but this will take some refactoring.
cc4c3ac
@odersky odersky Ref copier that works for Idents and Selects
The Ref copier copies Idents and Selects, changing the name
of either.
71b900f
@odersky odersky initialDenot method for symbols
This avoids denotation transforms when called at a later
phase because it cuts out current. Not needed in final
version of ShortcutImplicits, but I thought it was
good to have.
5e2f7d1
@odersky odersky New ShortcutImplicits phase
Optimizes implicit closures by avoiding closure
creation where possible.
6eb1a72
@odersky odersky Fix toString in ImplicitFunction tree df4653c
@odersky odersky Fix rebase breakage 30faa7b
@odersky odersky Add benchmarks
Benchmark code to compare compilation schemes in
different scenarios. See results.md for explanations.
04adb53
@odersky odersky Make specialization tweakable
Introduce an option to not specialize monomorphic
targets of callsites.
86dea77
@odersky odersky Fix typos in results.md c26a8c8
@odersky odersky Add link to code 65b48e0
@odersky odersky Fix more typos, add link 5ad63c7
@odersky odersky Fix typo a6ae1a7
@odersky odersky Drop Override flag for non-overriding direct methods
Also, integrate Jason's test case with the conditional.
740bd42
@odersky odersky Fix typo in comment
c18d228
@odersky odersky Fix rebase breakage
0742ba8
+
+ /** Should `sym` get a ..$direct companion?
+ * This is the case if (1) `sym` is a method with an implicit function type as final result type.
+ * However if `specializeMonoTargets` is true, we exclude symbols that are known
@smarter
smarter Dec 17, 2016 Member

I think you mean "false", not "true" here.

@odersky odersky Fix formatting
7c0b163
@odersky
Contributor
odersky commented Dec 17, 2016
@odersky odersky Fix comment
2e99511
@odersky odersky merged commit 7866bc2 into lampepfl:master Dec 18, 2016

6 checks passed

cla: @odersky signed the Scala CLA. Thanks!
continuous-integration/drone/pr: the build was successful
validate-junit [26]: SUCCESS. Took 31 min.
validate-main [26]: SUCCESS. Took 31 min.
validate-partest [26]: SUCCESS. Took 28 min.
validate-partest-bootstrapped [26]: SUCCESS. Took 17 min.