
Disallow implicit conversions between unrelated types #2060

Closed

Conversation

@odersky (Contributor) commented Mar 7, 2017

Disallow implicit conversions, unless one of the following holds:

  • language:Scala2 is set
  • the conversion is local to a block, or private, or protected
  • the conversion is defined in the same compilation unit as
    its source type or its target type

Quite a few things in the compiler and tests had to be changed to
conform to the new rule.
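
For illustration, a minimal sketch of how the rule plays out (all package and type names here are hypothetical):

// file mylib/Temperature.scala
package mylib

class Celsius(val degrees: Double)

object Celsius {
  // Allowed: defined in the same compilation unit as its target type.
  implicit def doubleToCelsius(d: Double): Celsius = new Celsius(d)
}

// file client/Client.scala
package client

object Client {
  // Disallowed under the new rule: neither Double nor mylib.Celsius is
  // defined in this compilation unit, and the conversion is public.
  // implicit def doubleToCelsius(d: Double): mylib.Celsius = ...

  def demo: mylib.Celsius = {
    // Still allowed: the conversion is local to a block.
    implicit def conv(d: Double): mylib.Celsius = new mylib.Celsius(d)
    val c: mylib.Celsius = 21.5
    c
  }
}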

@odersky (Contributor, Author) commented Mar 7, 2017

I am currently not sure under what conditions we should still allow such conversions. For the moment it's under Scala2 mode, which means: "in native dotty, never". But that might be too harsh.

On the other hand, implicit conversions are way over-used, in our code bases as well as in others. The problem is that they are a temptingly easy and powerful solution. So without strict prohibitions the short-term gain might always win over the long-term complications. And that would be a pity because from what I could see so far, refactoring to avoid such problematic conversions increased code quality in most cases.

@smarter (Member) commented Mar 7, 2017

This means that implicit class now behaves differently from being just sugar for implicit def, right?

@odersky (Contributor, Author) commented Mar 7, 2017

> This means that implicit class now behaves differently from being just sugar for implicit def, right?

That was already the case before: implicit defs needed a language import, implicit classes did not.
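
For reference, an implicit class desugars (roughly) into a plain class plus an implicit def; a minimal sketch, with RichInt as a made-up example:

object IntSyntax {
  // what you write:
  implicit class RichInt(val self: Int) extends AnyVal {
    def squared: Int = self * self
  }
}

object IntSyntaxDesugared {
  // roughly what the compiler generates:
  class RichInt(val self: Int) extends AnyVal {
    def squared: Int = self * self
  }
  implicit def RichInt(self: Int): RichInt = new RichInt(self)
}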

@sjrd (Member) commented Mar 7, 2017

Er ... Why!?

This will break most of the implicit conversions defined in the Scala.js standard library, notably those defined in js.Any.scala, including those between JS and Scala functions, as well as the pimp of the collections API onto js.Array, js.Dictionary and js.Iterable.

This breaking change seems completely unjustified to me.

@odersky (Contributor, Author) commented Mar 7, 2017

@sjrd Can't you package up the conversions with the JS function types as well as with js.Array, js.Dictionary and js.Iterable?

@sjrd (Member) commented Mar 7, 2017

No, for a number of reasons:

  • js.Array's companion object is a native JS object, in which we cannot declare methods implemented in Scala code. So they need to be defined in js.Any's companion object.
  • We need the conversion of js.Array to js.ArrayOps to have higher precedence than the one to js.IterableOps, because js.Array <: js.Iterable (see the sketch after this list).
  • We need the conversion from Scala functions to JS functions to kick in when the expected type is js.Any.
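
A self-contained sketch of the precedence point in the second bullet (all names made up; the real code lives in js.Any's companion): the general conversion sits in a parent trait, so the specific one defined directly in the object wins when both apply.

class MyIterable[A]
class MyArray[A] extends MyIterable[A]

class IterableOps[A](self: MyIterable[A]) { def kind = "iterable ops" }
class ArrayOps[A](self: MyArray[A]) { def kind = "array ops" }

trait LowPriorityConversions {
  implicit def toIterableOps[A](it: MyIterable[A]): IterableOps[A] =
    new IterableOps(it)
}

object Conversions extends LowPriorityConversions {
  // defined in the subclass, so it takes precedence over the inherited one
  implicit def toArrayOps[A](arr: MyArray[A]): ArrayOps[A] =
    new ArrayOps(arr)
}

// with Conversions._ imported, (new MyArray[Int]).kind == "array ops"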

@odersky (Contributor, Author) commented Mar 7, 2017

We might need an escape hatch then to still allow these conversions in some cases. The escape hatch could be:

  • a compiler flag (off by default), or

  • an annotation such as

     @IKnowItsDangerousButThereIsNoOtherWayToDoIt implicit def ...    😄 
    

    or,

  • a mechanism where the compiler specifically excludes some of the definitions of its stdlib from the check.

@sjrd (Member) commented Mar 7, 2017

I still don't understand why this change is desirable to begin with. There's no motivation in this PR. The last comment suggests that it's "dangerous", but I fail to see why it's more dangerous than implicit conversions in general.

@odersky (Contributor, Author) commented Mar 7, 2017

@sjrd It's more a trial balloon to see whether we can do it. It's inspired by

https://blog.rust-lang.org/2017/03/02/lang-ergonomics.html

According to their criteria (which look reasonable to me) unrestricted implicit conversions are way too dangerous. They are completely invisible, can apply anywhere, and can change anything.

It would be good to get people's comments on whether we should go ahead with this.

@jto commented Mar 7, 2017

Would adding syntax for typeclass instances like in scalaz/cats be impossible? If so I think it's indeed a bit harsh :)

@shawjef3 commented Mar 7, 2017

How does this affect imported conversions from libraries like https://github.com/shawjef3/Harmony, which provides what I think are conversions between unrelated types? My interpretation is that a user of the library would have to set language:Scala2.

@lihaoyi (Contributor) commented Mar 7, 2017

Many of my libraries (Scalatags and FastParse for example) use the following pattern

package generic
class Foo[T](t: T)
object Cake1{
  type Foo = generic.Foo[String]
  implicit def makeFoo(s: String) = new generic.Foo(s)
}
object Cake2{
  type Foo = generic.Foo[Int]
  implicit def makeFoo(s: Int) = new generic.Foo(s)
}
  • Will this be banned?

  • What if I subclassed Foo[String] and Foo[Int] instead of using a type alias? Let's assume I did subclass Foo[String] as follows:

package generic
class Foo[T](t: T)
object Cake{
  class Foo(s: String) extends generic.Foo[String](s)
}

Would I be allowed to define

implicit def makeFoo(s: String): Cake.Foo = new Cake.Foo(s)

inside Cake, but not:

implicit def makeFoo(s: String): generic.Foo[String] = new Cake.Foo(s)

? In the first case, Cake.Foo is defined in the same file as makeFoo, while in the second case, generic.Foo is not. But Cake.Foo can be safely upcast to generic.Foo[String], and the conversion to Cake.Foo is strictly more flexible (by LSP) than the conversion to generic.Foo[String]. Banning the conversion to generic.Foo[String] doesn't seem to make sense to me, but by the 3-line specification above it seems that is what would happen.

  • Would a dummy forwarder-trait in the same compilation unit work, as follows?
class Foo[T](t: T)
trait FooImplicits[T]{
  implicit def makeFoo(s: T): Foo[T] = realMakeFoo(s)
  def realMakeFoo(s: T): Foo[T]
}
object Cake1 extends FooImplicits[String]{
  def realMakeFoo(s: String) = new Foo(s)
}
object Cake2 extends FooImplicits[Int]{
  def realMakeFoo(s: Int) = new Foo(s)
}

If not, why not? If so, is it a good idea to force people to do this?

@tpolecat commented Mar 7, 2017

re: @jto's question, the pattern we use for typeclass syntax in Cats is encoded by Simulacrum, which is very clearly documented. It would be good to verify that this pattern of usage will still work, in particular the mixin traits that you can use to bundle up a bunch of syntax conversions into a single import like cats.implicits._.

Scalaz's encoding is done by hand but follows this same pattern more or less.

@smarter (Member) commented Mar 7, 2017

@tpolecat As far as I can see, Simulacrum only uses implicit def foo(a: A): Foo = new Foo { ... }, which would be disallowed, but it can be rewritten as implicit class foo(a: A) extends Foo { ... }, which is authorized; there are some examples of this in the diff of this PR.
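
Concretely, the rewrite described would look something like this (A and Foo are stand-ins):

class A
trait Foo { def bar: String }

object Before {
  // plain conversion method: disallowed when Foo lives elsewhere
  implicit def foo(a: A): Foo = new Foo { def bar = a.toString }
}

object After {
  // the same thing as an implicit class, which remains allowed
  implicit class foo(a: A) extends Foo { def bar = a.toString }
}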

@tpolecat commented Mar 7, 2017

@smarter how do you export the conversion associated with the implicit class? Or do you have to export the class itself? I believe we don't use implicit class in cats/scalaz because of the difficulty of selectively getting the conversion into scope.

@smarter (Member) commented Mar 7, 2017

@tpolecat It's not very elegant but you could always wrap the implicit class in an object Foo and do import Foo._

@tpolecat commented Mar 7, 2017

@smarter this might be adequate if it were also possible to alias an implicit class without losing implicitness; i.e.

scala> object A { implicit class Cow(n:Int) { def moo = "oink" } }
defined object A

scala> object B { type Cow = A.Cow }
defined object B

scala> import B._
import B._

scala> 1.moo
<console>:17: error: value moo is not a member of Int
       1.moo
         ^

scala> new B.Cow(1) // it's there but the implicit conversion is lost
res1: A.Cow = A$Cow@6025aa80

scala> import A._
import A._

scala> 1.moo
res2: String = oink

The work done in cats/scalaz to layer these conversions in a way that makes them available en masse or a-la-carte is the product of years of refinement and it's very useful and important to the way we do things. So something equivalent really needs to be expressible I think.
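
For readers who haven't seen it, a stripped-down sketch of that layering (names simplified from the cats/scalaz originals): each piece of syntax lives in its own trait, and the traits are mixed into either small objects (à la carte) or one umbrella object (en masse).

trait Semigroup[A] { def combine(x: A, y: A): A }

final class SemigroupOps[A](self: A)(implicit S: Semigroup[A]) {
  def |+|(other: A): A = S.combine(self, other)
}

trait SemigroupSyntax {
  implicit def toSemigroupOps[A: Semigroup](a: A): SemigroupOps[A] =
    new SemigroupOps(a)
}

// a la carte: import semigroupSyntax._
object semigroupSyntax extends SemigroupSyntax

// en masse: one object mixing in every syntax trait
object allSyntax extends SemigroupSyntax /* with MonoidSyntax with ... */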

@odersky (Contributor, Author) commented Mar 7, 2017

@lihaoyi Yes, these are good examples. Indeed you would be forced to jump through the hoops you quickly discovered to retain the functionality. Why make you do it? If you were the only Scala user I would see no need to bother 😄.

On the other hand, we have been bombarded repeatedly by experienced users who claim that code bases they inherit are train wrecks because of overzealous use of implicits. And I don't want to dismiss these comments; my suspicion is that they describe something all too real. I think we all agree that the worst misuses of implicits have to do with conversions, not parameters. In the majority of cases I have seen (and that includes my own code) implicit conversions in places unrelated to source and target types were a premature and suboptimal (and sometimes disastrous!) solution. So we should avoid them as much as possible. But it's so tempting to use them because they are both super powerful and super easy to set up! You pay the price later in maintenance, sure, but then it's too late.

So is there nothing we should do as a community to prevent people from ruining code bases with implicit conversions? We have tried language imports, but it seems they are toothless; people just ignore them. But if the community thinks we should keep things as they are I won't insist on changing them. The purpose of this PR was to explore the possibilities, that's all.

@Jasper-M (Contributor) commented Mar 7, 2017

Simulacrum generates something like this:

trait Semigroup[A] {
  def append(x: A, y: A): A
}

object Semigroup {
  def apply[A](implicit instance: Semigroup[A]): Semigroup[A] = instance

  trait Ops[A] {
    def typeClassInstance: Semigroup[A]
    def self: A
    def |+|(y: A): A = typeClassInstance.append(self, y)
  }

  trait ToSemigroupOps {
    implicit def toSemigroupOps[A](target: A)(implicit tc: Semigroup[A]): Ops[A] = new Ops[A] {
      val self = target
      val typeClassInstance = tc
    }
  }

  object nonInheritedOps extends ToSemigroupOps

  trait AllOps[A] extends Ops[A] {
    def typeClassInstance: Semigroup[A]
  }

  object ops {
    implicit def toAllSemigroupOps[A](target: A)(implicit tc: Semigroup[A]): AllOps[A] = new AllOps[A] {
      val self = target
      val typeClassInstance = tc
    }
  }
}

Aren't the target type Ops and the implicit conversion toSemigroupOps in the same compilation unit?

@odersky (Contributor, Author) commented Mar 7, 2017

> Aren't the target type Ops and the implicit conversion toSemigroupOps in the same compilation unit?

They are.

@dwijnand (Member) commented Mar 7, 2017

@Jasper-M Indeed, I'm keen to understand why @smarter would say

> Simulacrum only uses implicit def foo(a: A): Foo = new Foo { ... } which would be disallowed but can be rewritten as implicit class foo(a: A) extends Foo { ... } which is authorized

@smarter (Member) commented Mar 7, 2017

Yes, I didn't look closely enough; it would be allowed. Ignore my previous messages ;).

@vaclavsvejcar commented Mar 8, 2017

What I always liked about Scala is how powerful and expressive the language is. I agree this particular issue with implicit conversions might be quite problematic, but is permanent removal of such "problematic" features from the language really the way we want to go?

@lihaoyi (Contributor) commented Mar 8, 2017

@tpolecat wouldn't this work? Put your implicit class inside a trait and mix it in wherever you want it

trait A { implicit class Cow(n:Int) { def moo = "oink" } }

object B extends A

import B._
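
(With that in scope, 1.moo resolves through the Cow implicit class inherited into B, so the conversion travels with the trait and can be mixed in or imported as needed.)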

@odersky (Contributor, Author) commented Mar 8, 2017

See also #2065, which complements this PR.

@alexandru commented

I'm for restricting implicit conversions, but "same compilation unit" is a little restrictive - implicit conversions are very useful for providing interoperability, but interoperability packages are often optional.

Could it be the same project, which can have many sub-projects? I'm not sure how this could work for the compiler, but I like my stuff to be modular and optional.

@Ichoran commented Mar 9, 2017

I am worried that the people who are complaining about "implicits" are not actually complaining about implicit conversions. Do we have good examples rather than just "GARRRH implicits, hate them!"?

If you use Akka and care deeply about execution context, you will find that because it is an implicit parameter to many methods, it's a little more slippery to keep track of than you might wish. (If you don't care deeply about execution context it "just works".)

If you use any sort of conversions, even JavaConverters, you have at least one extension method added ("asJava"). Are people fine with extension methods, or do they get into trouble there, too?

Basically, I wonder whether this isn't overly restrictive, trying to solve a problem by further diminishing the use of a feature that isn't the problem any more for people who complain about "implicits". I agree that implicit conversions can be overused, but I don't see the evidence that "I don't like implicits" means "I love typeclasses and implicit evidence and extension methods but hate implicit conversions".

@soronpo (Contributor) commented Mar 9, 2017

My two cents: something like this should be a warning at best, not an error. You can warn and say "maybe try THIS instead". Personally I don't like limiting the language. My view is that if the compiler knows how to compile what you wanted, then it should compile and not throw an error. If it can recommend a better/safer approach, it should emit a warning but compile anyway (unless -Xfatal-warnings is set). If we had an @ignoreWarning annotation in Scala, the user would be fully responsible for either removing -Xfatal-warnings or adding the @ignoreWarning annotation. I think that's a better approach than a dedicated @IKnowItsDangerousButThereIsNoOtherWayToDoIt.

Also, if you want to limit accidental overuse of implicits, maybe add an implicit keyword to import to acknowledge that we want to import implicits as well. E.g.:

class B
package object MyPackage {
   def foo[A](a : A) : B = ???
   implicit def implicitFoo[A](a : A) : B = ???
}
import MyPackage._ //only imports foo
import implicit MyPackage._ //imports foo & implicitFoo (proposed syntax)

Or even import unrelated implicit MyPackage._

@vladap commented Mar 10, 2017

> import MyPackage._ //only imports foo
> import implicit MyPackage._ //imports foo & implicitFoo

I thought about this as well; I had access-modifier-like syntax in mind. My motivation was to quickly see which imports bring in implicits. The question is whether all or most imports would end up being import[implicits] in the end. Without the implicits modifier it would also have to ignore inherited implicits. I can't think through all the consequences, though.

> import[implicits] MyPackage._ //imports foo & implicitFoo

It also creates the chance for another problem: "Why doesn't it work? Ahhh, a forgotten [implicits] modifier on some import." But the compiler message could probably suggest a quick solution.

And I would have to know that I need the implicits modifier on an import in the first place. How would I know? By looking into source code? Docs?

@vladap commented Mar 10, 2017

> Disallow implicit conversions between unrelated types

Here unrelated types are defined as types where I don't have control over the source file of at least one of them. Is that the right definition of unrelated types?

How does the restriction help? What is the evidence that dropping 1 of 4 options (no control, control over source, control over target, control over both) will solve the problem? Do developers not create problematic implicit conversions between their own types and types they don't have control over? Or between their own types only? If they don't, what prevents them?

What if Scala implicit conversions are risky not because I can define a relation between types I have no control over, but because they allow calling any polymorphic method on the source, performing side effects, and doing heavy conversion processing? Together with the capability to make types substitutable, they allow putting arbitrary logic inside a conversion. As soon as there is more logic than a simple projection between types, basically just simple reference manipulation, it very likely shouldn't be implicit, because it probably hides something better kept explicit in general programming.
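
A hypothetical illustration of the kind of conversion this argues against, one that type-checks but hides effects and heavy work behind an invisible call site:

class CsvRow
class Report

object Questionable {
  // compiles fine, but logging and expensive work happen invisibly;
  // better exposed as an explicit, named method
  implicit def rowToReport(row: CsvRow): Report = {
    println(s"converting $row") // side effect
    Thread.sleep(1000) // stand-in for heavy conversion processing
    new Report
  }
}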

@odersky (Contributor, Author) commented Mar 13, 2017

@vladap There is evidence that the restriction helps, but it is admittedly circumstantial: The restriction is similar to the one enforced by C#. I have never heard people complain about bad surprises using implicit conversions in C#, but I constantly hear such complaints in Scala.

Of course, implicit conversions with side effects are another code smell. But I would argue that they are much less common, because the ingrained culture of Scala is to minimize side effects anyway. Once we have an effect system we could potentially outlaw conversions with side effects as well.

@nafg commented Mar 13, 2017 via email

@odersky (Contributor, Author) commented Mar 13, 2017

> The impression I got is that most C# developers don't even know about implicit conversions (not based on much data though)

Yes, and that's a good thing, no? That's the idea: reduce the temptation to use them to a level where people will forget about them :-)

@vladap commented Mar 15, 2017

> Yes, and that's a good thing, no? That's the idea: reduce the temptation to use them to a level where people will forget about them :-)

Then we should ask why we have implicit conversions in the first place, when the goal is to forget about them. What is their intended role (use cases) in Scala once people have forgotten about them? Is the final goal to slowly deprecate them?

I'm not a C# dev, but if I'm not mistaken C# implicit conversion operators are much more limited: they are not inherited (vs. searching the implicit scope), they should support symmetric equality, so they should be defined in pairs (to and back), and a conversion shouldn't lose information. These are stricter requirements than for subtyping.

The feature is limited to the point that I can imagine it is rarely used, which in turn discourages use even further, regardless of whether there is a good case for it, because whenever a forgotten feature is pulled from a hat it confuses even more.

That is my guess as to why nobody has a problem with the C# variant: it is rarely used because it has little utility. That can hardly be said about Scala implicit conversions, even with the suggested restriction applied.

However, I don't know; I can only guess. If it can be shown that a high percentage of bad uses (btw, what does that mean?) are conversions between "3rd party" types, then this could be low-hanging fruit.

I just have doubts that it is the elephant in the room. It might be just one leg.

@nafg commented Mar 15, 2017 via email

@vladap commented Mar 15, 2017

> Yes, these are good examples. Indeed you would be forced to jump through the hoops you quickly discovered to retain the functionality. Why make you do it? If you were the only Scala user I would see no need to bother 😄.

Unfortunately, if the hoops stay they will be used, and the net result might be worse; it just adds additional indirection layers. How long will it take users to discover them? How can we know that one additional hoop beats the user's motivation to use an implicit conversion? The rule would have to disallow the hoops as well, and now it starts to become more complex. It could lead to a situation where I can't figure out why I can't define an implicit conversion between my own types, because there is some 3rd-party type involved deep in the hierarchy (though the compiler could probably point it out).

@vladap commented Mar 16, 2017

Another perspective: I don't see a problem with the code below, because it actually uses an explicit conversion. Still, it would be disallowed. I can achieve something analogous in Java using runtime DI.

package thirdparty1 { class Foo }
package thirdparty2 { class Bar }

package mycode {
  import thirdparty1.Foo
  import thirdparty2.Bar

  object Conversions {
    implicit def fooToBar(foo: Foo): Bar = new Bar

    def baz(foo: Foo)(implicit convert: Foo => Bar) = {
      val bar: Bar = convert(foo) // this is an explicit conversion
    }
  }
}

In Java I would define a SAM interface (just a function) and create class implementations. Then I would use the Spring framework's @Autowired constructor dependency injection (which everybody uses) to achieve the same def baz(foo: Foo) at the call site. I should take a look one day at whether I can use Spring to inject Function<T, R> in Java 8; it has been a long time since I actually used it.

Should we be more restrictive than Java? Should I have to use runtime DI in Scala for this because it is disallowed for implicit values?

I could perform side effects in these conversion implementations in Java and nobody would care.

I would care, to the point that there is a spectrum between implicit conversion, explicit conversion, and ordinary function/method. Even using explicit conversions I can make code bad. Explicit conversions shouldn't do arbitrarily complex operations either; otherwise we could call almost any function/method a conversion. Simply said, while I can do more in an explicit conversion than in an implicit one, I still don't expect that square.toRect will launch a missile. Anything more complex should still go into a properly named function/method, separate object, etc.

The conceptual problem
Actual Scala implicit conversions are of this kind:

implicit def fooToBar(foo: Foo): Bar = new Bar

def baz(foo: Foo) = {
  val bar: Bar = foo // implicit conversion
}

// or this one 
def baz(bar: Bar) = ???
baz(new Foo) // implicit conversion

// I put extension methods aside for now

What are they? They are another form of subtyping which allows type substitution. The closest concept seems to be coercive subtyping (Wikipedia).

Unlike nominal and structural subtyping they create new instances, but that is not so important here. To use them in a manageable way, their usage should stay close to an implicit cast (upcasting), or to the mechanism which enables structural subtyping (it is implicit as well): I don't have to care how it works, it never fails, I don't have to debug it, no surprises, well-defined behaviour. Golang embedding and its type substitution have the same properties. Break any of these properties and it can manifest as a problem.

Do you ever need to step into the body of an implicit def? Then it is very likely a bad implicit conversion.

The right implicit conversion has to be actually implicit, meaning it has to be invisible and opaque. It has to be just simple boilerplate wiring which can be forgotten, just like an implicit cast and the other variants.

The problem is that both my first example and actual Scala implicit conversions can use the same source implicit def. This pretty much forces the first example to follow the same rules as implicit conversion, even though it is an explicit conversion.

Yet another perspective
Scala implicit conversions are more flexible than structural subtyping, in the sense of how many type substitutions I can express. Both nominal and structural subtyping are limited by having to match field and operation names (simply said); nominal subtyping creates a static hierarchy which limits it further. A Scala implicit conversion can define an arbitrary projection between types.

I would say that even if implicit conversions were limited to simple projections like ValueKey(value, key) => (key, value), Scala would still be more flexible than a dynamic language. I have to enable type substitutions manually, but they can be more flexible. That is how it can balance out against a dynamic language, which allows all type substitutions when writing code but is more limited and fails at runtime.
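
Spelled out, the kind of simple projection meant here (ValueKey is a made-up type):

case class ValueKey[V, K](value: V, key: K)

object Projections {
  // a pure, cheap reshuffling of references: the safe end of the spectrum
  implicit def valueKeyToPair[V, K](vk: ValueKey[V, K]): (K, V) =
    (vk.key, vk.value)
}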

I don't think we can get much further than a dynamic language. Dynamic languages are very popular, so higher flexibility in type substitutions doesn't look like a problem in itself. Hence I believe it is possible to be more flexible than strict Java.

But any step further has to be justified. I can imagine 2 kinds of use cases which could violate it: the very low-level platform kind, like Scala.js, and the very high-level DSL kind, like a DSL to write Spark notebooks directly in a web app. These could do more complex and arbitrary implicit conversions to achieve a given goal. But it shouldn't be the norm.

And some more advanced, well-known design patterns could be acceptable, like implicit typeclass derivation.

Sometimes the argument appears that implicit conversions break type safety in Scala. The typical counter is that they are fully type-safe, which is true, but that is not what the argument is about. If I had enough time I could sit down and write implicit conversions between all types, making most of them just throw. This way I would convert Scala into a kind of strictly typed dynamic language :) Whatever I wrote would compile, but it would fail at runtime.
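
That thought experiment can be sketched in a few lines: with something like the conversion below in scope (plus the implicitConversions language import), almost anything type-checks and the failures move to runtime.

object DynamicScala {
  // applicable to every requested conversion: a total type-checker bypass
  implicit def anyToAny[A, B](a: A): B =
    throw new ClassCastException(s"no real conversion from $a")
}

// import DynamicScala._
// val s: String = 42 // compiles, throws at runtime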

@vladap commented Mar 16, 2017

As an example, I have implemented an @Autowired (i.e. implicitly injected) function between String and String in Java Spring, just to try it out after a while. If you are interested you can take a look at spring-playground.

It seems Java developers have few problems with hidden @Autowired private values, @ComponentScan, or @Import of multiple @Configuration classes. They can deal with the Spring context failing on startup due to ambiguity, context inheritance, etc.

While every Java developer understands @ComponentScan, I often find that Scala developers on our team have no knowledge of the 2nd stage of implicit search, even after 1 year, if I didn't care and left them alone. Imagine that one of them somehow triggers it: he has little chance to understand what is going on. Implicits get blamed.

We could find a close analogy to Scala implicit values in Spring, but one can only get so far with the Spring analogy. What I can't do with Spring is write a highly generic function and have Spring infer the correct instances for me based on passed-in values, something that can be done in Scala with type inference and implicits working in tandem. That is at the same time what makes Scala powerful.

Take a step back and consider the Simulacrum-generated code. Ask an enterprise commodity developer who has been working with Scala for 1 year to explain the code; I guess there will be a very high failure rate, and I'm afraid that the main response will be blaming implicits. Imagine that the advanced Scala developer leaves and a less skilled one takes over. If there is this kind of code, implicits can easily be blamed.

Without further understanding we may just add additional obstacles and limit functionality, but gain very little.

EDIT: just for a bit of fun and horror I have added a Scala version to spring-playground.

@vladap commented Mar 17, 2017

> I used to think so as well, and wrote one of the earliest interop packages based on implicits: JavaConversions. But it turned out to be a bad idea. The consensus that developed later was that it's better to be explicit the way JavaConverters are. In retrospect I think it's a bad technique to base fundamental but invisible functionality on the presence or absence of an import. It's too much magic.

@odersky I wasn't there at that time and I never used JavaConversions. Was the import the only reason? After all, JavaConverters are based on an import as well. Could you summarize what the practical problems with it were and how they manifested? It wouldn't pass my criteria for an implicit conversion, because converting a collection to another collection is potentially expensive, which in itself can leak the implicit conversion when debugging a performance regression. It possibly could be fine in my book if the Java collection were embedded in the Scala source type (but that is not how Scala collections are designed).

For the record, I can't name a single dev on my team, including me, who likes Scala's interop with Java collections. It is an important feature, yet it feels bolted-on, like an afterthought, and it seems that nothing is going to change with the 2.13 collection overhaul. You wouldn't like to hear what a Java developer says on seeing the resulting Scala code.

Devs are routinely producing code like this: xs.asScala.map(_ + 1).asJava.

I don't know if it can be done any better. But when working with a Java API I would like to work with Java collections with Scala syntax, and explicitly convert to an immutable Scala collection only when passing across the boundary to Scala code. Meaning there would be no conversion if a call takes a result from Java and returns it back to Java, and only one when it goes from Java to Scala. I wouldn't mind if it were limited compared to native Scala collections, as long as it covered the most common use cases for manipulating them.
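
For concreteness, the round trip in question with the JavaConverters decorators (the Scala 2.12-era API):

import scala.collection.JavaConverters._

object Interop {
  // Java -> Scala -> Java: two explicit wrapper hops just to map over a list
  def incremented(xs: java.util.List[Int]): java.util.List[Int] =
    xs.asScala.map(_ + 1).asJava
}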

@felixmulder (Contributor) commented

@vladap - IIRC JavaConversions does the conversions "auto-magically" without the need to explicitly call toJava, whereas with JavaConverters you need to explicitly call the conversion decorator.

@vladap commented Mar 17, 2017

I don't mind if it is auto-magical. I probably understand about 1% of how all the things I use actually work internally in complete detail. Magic, abstraction, however we call it, helps us to beat complexity. The thing I care about is whether the magic is leaky or not. If it forces me at some point to dig into complex internal details I have to understand, then it is bad.

In what ways were JavaConversions leaky? E.g. how could they force me to debug implicits? How could they lead me to hard-to-debug bugs? How could they confuse me easily? A forgotten import is one thing; what else? Are there code samples of how it went bad?

I use Scala in the enterprise and I just want to know my enemy (if there actually is any) before it hunts me down. I don't buy hate posts against implicit conversions that easily, because they typically boil down to the argument that implicit conversions are bad because they convert implicitly, or that they are auto-magical, without actually explaining details and concrete failing scenarios.

And to avoid confusion: I don't deny they have issues. I prefer explicitness over implicitness, and I agree with the linked Rust ergonomics post. And yes, I think we have probably zero implicit conversions in our own code base, but we routinely use 3rd-party ones, like Spark's.

@odersky (Contributor, Author) commented Mar 17, 2017

@vladap I'll take your points under consideration. But this issue tracker is the wrong format for extended discussions and arguments. Please use scala-contributors for that.

@vladap commented Mar 20, 2017

I'm sorry to mess it up here. I will use the other option next time.

@vladap commented Mar 20, 2017

I have posted an example of a "non-deterministic" implicit conversion on scala-contributors. This suggestion doesn't help to solve it.

@nafg commented Jun 8, 2017

> Indeed you would be forced to jump through the hoops you quickly discovered to retain the functionality. Why make you do it? If you were the only Scala user I would see no need to bother 😄.

> So is there nothing we should do as a community to prevent people from ruining code bases with implicit conversions? We have tried language imports but it seems they are toothless, people just ignore them.

So is the logic that language imports aren't a big enough hoop, and that if we make the hoop bigger it will have teeth?

At what point do we say implicits don't kill [codebases], people kill?

@odersky (Contributor, Author) commented Jun 17, 2017

See also:

mesosphere/marathon@fbf7f29

@nafg commented Jun 20, 2017

Maybe if we changed the implicit conversion warning / language import to only warn / be needed if the conversion was between unrelated types, and made the warning language stronger, people wouldn't start mentally filtering them out.

Incidentally there were people that predicted, at the time of the language imports proposal, that it might have such an effect.

@nafg commented Jun 20, 2017

I wonder what people involved in the Marathon issue (the committer is @timcharper) have to say

@nafg commented Jun 20, 2017

That commit doesn't even seem to remove import mesosphere.mesos.protos.Implicits._

@Jasper-M (Contributor) commented Jun 20, 2017 via email

@OlivierBlanvillain (Contributor) commented

I propose we close this for now, WDYT @odersky?

@odersky (Contributor, Author) commented Apr 2, 2018

Superseded by #4229
