
Concurrency problem when using the subtype operator <:< #10766

Closed
akamali opened this issue Mar 8, 2018 · 20 comments


@akamali

commented Mar 8, 2018

Using Scala 2.11.6 with Java 1.8.0_111 on Ubuntu 16.04.2 LTS.

I have the following code

import scala.reflect.runtime.universe.{Type, typeOf}

object bug {

  trait PA[+AttrType]

  def main(args: Array[String]): Unit = {
      val lock = new Object()

      0 until 100 foreach { t =>
        (0 until Runtime.getRuntime.availableProcessors() * 4).par.foreach { id =>
          val someStringProperty: Type = typeOf[PA[String]]
          val someProperty: Type = typeOf[PA[_]]

          /*lock.synchronized*/ {
            if (!(someStringProperty <:< someProperty))
              println("Is not subtype in iteration " + t)
          }
        }
      }
  }
}

I get this output when I run the above in a loop:

$ while true; do scala bug.scala; done
Is not subtype in iteration 2
Is not subtype in iteration 1
Is not subtype in iteration 0
Is not subtype in iteration 11
Is not subtype in iteration 69
Is not subtype in iteration 6
Is not subtype in iteration 90
Is not subtype in iteration 16
Is not subtype in iteration 0
Is not subtype in iteration 2
Is not subtype in iteration 0

It's expected that someStringProperty be a subtype of someProperty, and someStringProperty <:< someProperty returns true most of the time. But when used from multiple threads (on a cold JVM) it appears to sometimes return false.

The problem seems to go away if I wrap the subtype check in a synchronized block.
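For reference, that workaround can be factored into a small thread-safe facade. This is only a sketch; `SafeReflect` and its method names are illustrative, not a standard API:

```scala
import scala.reflect.runtime.universe.{Type, TypeTag, typeOf}

// Sketch of a user-side workaround: funnel every reflective call
// through one shared lock. Names here are illustrative only.
object SafeReflect {
  private val lock = new Object

  // typeOf also touches the shared reflection universe, so guard it too.
  def typeFor[T: TypeTag]: Type = lock.synchronized(typeOf[T])

  def subtype(a: Type, b: Type): Boolean = lock.synchronized(a <:< b)
}
```

With this in place, the parallel loop above would call `SafeReflect.subtype` instead of `<:<` directly, matching the synchronized variant that makes the failure go away.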

@SethTisue

Member

commented Mar 8, 2018

Duplicate of #6240, it looks like. Scala reflection is known not to be thread-safe. See below.

Note that in general when filing a bug report, we ask that you try it on a current Scala version before reporting (at present 2.12.4 or 2.13.0-M3; and the latest 2.11.x release was 2.11.12).

@akamali

Author

commented Mar 8, 2018

I was able to reproduce the problem with Scala 2.11.12, 2.12.4, and 2.13.0-M3 with Java 1.8.0_151 on Windows 10.

@joroKr21


commented Mar 8, 2018

That moment when you realize <:< is not just a simple boolean check 😄

@allertonm


commented Mar 9, 2018

@SethTisue both bug #6240 and https://docs.scala-lang.org/overviews/reflection/thread-safety.html suggest that the problem was fixed as of 2.11. Based on profiling, if this really is non-thread-safe code, it spends a surprising amount of time holding locks.

@SethTisue

Member

commented Mar 9, 2018

You’re right about 6240. But I also feel like there’s further history on this, maybe on another ticket...?

@SethTisue

Member

commented Mar 9, 2018

re: holding locks, https://github.com/scala/scala/pull/3386/files gives some idea of the contortions involved trying to make this stuff threadsafe

@mwlon


commented Mar 14, 2019

Is anyone working on this? If not, any recommendations on fixing it?

This seems like a pretty major issue, and it currently affects Apache Spark, making certain methods not thread safe.

@deanwampler


commented Mar 14, 2019

@SethTisue I know the Scala team is trying not to do any more work on Scala 2.11, but it will be a long time before all Scala 2.11-based Spark disappears from The Internets, so I hope the Scala team will consider back-porting a fix to 2.11 and 2.12.

@adriaanm

Member

commented Mar 15, 2019

scala.reflect is experimental. Making it thread safe without killing performance is nearly impossible. I was against even trying.

The workaround is to synchronize access by the user, or not use it at all.

2.11 is EOL.

When you consider all this together, I don’t see how we can justify cutting another 2.11 release

@SethTisue

Member

commented Mar 15, 2019

(just my own thoughts, not a team statement:)

Quite apart from 2.11's EOL-ness, tinkering with scala.reflect in 2.11.x or even in 2.12.x, this late in their release cycles, seems quite risky to me.

The workaround is to synchronize access by the user

I was going to suggest this as well.

I'm not sure if the wording is clear. Just in case, I'll rephrase: Spark could, we suggest, add locks in their own code at the <:< use sites. Has this been tried?

or not use it at all

I'm curious to what extent Spark really needs TypeTag and <:<. TypeTag is quite heavyweight stuff; it has the entire Scala type system inside it. I have sometimes seen TypeTag used in the wild where it was overkill.

Perhaps TypeTag really is the right choice for Spark, but if so, I'd like to understand how and why. This understanding would be useful for the Scala team to have, going forward, even apart from the narrower question of 2.11.x changes.
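A sketch of what the use-site locking suggested above might look like in application code (the object and method names are illustrative, not Spark's actual code):

```scala
import scala.reflect.runtime.universe._

object AppSideLocking {
  // One lock shared by all reflective call sites in the application;
  // the reflection library itself is left untouched.
  private val reflectLock = new Object

  // Example call site: every <:< is wrapped at the point of use.
  def isCollectionLike(tpe: Type): Boolean = reflectLock.synchronized {
    tpe <:< typeOf[Seq[_]] || tpe <:< typeOf[Option[_]]
  }
}
```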

@eed3si9n

Member

commented Mar 15, 2019

@SethTisue

I'm curious to what extent Spark really needs TypeTag and <:<.

(I don't know much about Spark, but) an example usage is https://github.com/apache/spark/blob/v2.4.0/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/ScalaReflection.scala#L61-L69, which uses the Scala type information to pick the corresponding Spark SQL DataType.

Here's another. https://github.com/apache/spark/blob/v2.4.0/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala#L262-L271.

Generally speaking, it seems to be "take this datatype written in Scala, and serialize it as some other Spark datatype". To support things like Array, what other facility would you recommend?
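A simplified sketch of that pattern (the DataType hierarchy and names below are illustrative stand-ins, not Spark's actual API): the runtime Type drives a dispatch to a serializable schema, and every <:< call goes through the shared reflection universe — exactly the operation this issue shows to be non-thread-safe.

```scala
import scala.reflect.runtime.universe._

// Illustrative stand-ins for Spark's DataType hierarchy.
sealed trait DataType
case object IntType extends DataType
case object StringType extends DataType
case class ArrayType(elem: DataType) extends DataType

object SchemaFor {
  // Map a runtime Type to its serialized schema by subtype dispatch.
  def dataTypeFor(tpe: Type): DataType =
    if (tpe <:< typeOf[Int]) IntType
    else if (tpe <:< typeOf[String]) StringType
    else if (tpe <:< typeOf[Array[_]]) {
      // Recurse into the element type of the Array.
      val TypeRef(_, _, elem :: Nil) = tpe
      ArrayType(dataTypeFor(elem))
    } else sys.error(s"unsupported type: $tpe")
}
```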

@mwlon


commented Mar 15, 2019

I wasn't expecting a backport of a fix to 2.11 or 2.12, but IMO this behavior subverts expectations, and we should try to make scala reflection better for future versions.

I'm adding a workaround in Spark: apache/spark#24085. Still, I think this should be fixed.

There is ALWAYS a way to achieve near-identical performance in a thread-safe way (for all cases that were previously correct). For example, even doing the dumb solution of synchronizing on calls to <:< provides imperceptible slowdown in all currently correct (single-threaded) cases. I've done my own experiment, and you can verify this experimentally as well.

@joroKr21


commented Mar 16, 2019

It's unfortunate that Spark is using runtime reflection to generate the serialization code. The long-term solution would be to use typeclasses + compile-time typeclass derivation, but it might require some re-architecting.

Flink doesn't have this problem, because it generates the serialization code at compile time with a macro: https://github.com/apache/flink/blob/master/flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/TypeInformationGen.scala This is still not ideal compared to typeclasses, but much better than runtime reflection.

I guess another question to ask is why this code needs to be generated in parallel. Would it not be good enough to generate the serialization code beforehand and then start the job? Or am I missing something?
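For contrast, the typeclass approach resolves serializers at compile time via implicit search, so no shared runtime universe is involved. A minimal sketch with hypothetical names:

```scala
// A serializer typeclass: the compiler, not runtime reflection,
// selects the instance for each type.
trait Codec[A] { def encode(a: A): String }

object Codec {
  implicit val intCodec: Codec[Int] = i => i.toString
  implicit val stringCodec: Codec[String] = s => s

  // Derived instance: composes automatically from the element codec.
  implicit def listCodec[A](implicit c: Codec[A]): Codec[List[A]] =
    xs => xs.map(c.encode).mkString("[", ",", "]")

  def encode[A](a: A)(implicit c: Codec[A]): String = c.encode(a)
}
```

Instances compose: `Codec[List[Int]]` is derived automatically from `Codec[Int]`, and thread safety is a non-issue because nothing mutable is shared.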

@mwlon


commented Mar 16, 2019

Spark is intended to be fully thread safe, not only within a Spark job, but also across parallel jobs in the same application: https://spark.apache.org/docs/latest/job-scheduling.html#scheduling-within-an-application

Inside a given Spark application (SparkContext instance), multiple parallel jobs can run simultaneously if they were submitted from separate threads. By “job”, in this section, we mean a Spark action (e.g. save, collect) and any tasks that need to run to evaluate that action. Spark’s scheduler is fully thread-safe and supports this use case to enable applications that serve multiple requests (e.g. queries for multiple users).

One use case where this becomes important is for reading multiple data sources at once. Here the bottleneck is often waiting on each data source to return results, so it is best for the user to start multiple threads, one for each data source, and join all of them once each one is materialized into an RDD/DataFrame/Dataset or its side effect is complete.

I agree about type classes. It would be a better solution, but tricky to switch to now.

@Jasper-M

Member

commented Mar 16, 2019

It's unfortunate that Spark is using runtime reflection to generate the serialization code. The long-term solution would be to use typeclasses + compile-time typeclass derivation, but it might require some re-architecting.

The worst part is that Spark actually has an Encoder typeclass, but (1) it's hard to implement your own and (2) they don't compose at all. E.g. when I manage to define my own Encoder[MyType] and I have a case class Foo(a: Int, b: MyType) then I will still get a runtime error because Encoder[Foo] doesn't delegate to my encoder but uses a limited set of hardcoded encoders.
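For illustration, the delegation being asked for looks like this in a plain typeclass encoding (`Enc` and all names here are hypothetical, not Spark's actual Encoder):

```scala
// A composable encoder typeclass (hypothetical names throughout).
trait Enc[A] { def fields(a: A): List[String] }

object Enc {
  implicit val intEnc: Enc[Int] = i => List(i.toString)

  // A user-supplied instance for a custom type...
  case class MyType(raw: String)
  implicit val myTypeEnc: Enc[MyType] = m => List(m.raw.toUpperCase)

  // ...and a product instance that delegates to whatever field
  // encoders are in implicit scope, rather than a hardcoded set.
  case class Foo(a: Int, b: MyType)
  implicit def fooEnc(implicit ia: Enc[Int], ib: Enc[MyType]): Enc[Foo] =
    f => ia.fields(f.a) ++ ib.fields(f.b)
}
```

Because `fooEnc` takes its field encoders implicitly, the user-supplied `Enc[MyType]` is picked up automatically — the composition Spark's Encoder currently lacks.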

@joroKr21


commented Mar 17, 2019

Ah yes, I forgot about Encoder.

To be fair, it's not easy to write typeclass derivation that composes well in Scala; take shapeless and kittens, for example. But it should be possible. I've done this exercise for Flink: https://github.com/joroKr21/flink-shapeless, but haven't tried it for Spark.

@adriaanm adriaanm added the won't fix label Mar 18, 2019

@adriaanm

Member

commented Mar 18, 2019

Happy to see the workaround is doable! Just to be clear, we do not intend to provide (full) thread-safety for scala.reflect -- I don't think it's feasible. Closing this ticket to reflect that.

@adriaanm adriaanm closed this Mar 18, 2019

@SethTisue SethTisue removed this from the Backlog milestone Mar 18, 2019

@mwlon


commented Mar 18, 2019

@adriaanm @SethTisue I don't understand.

  1. What's wrong with the simple solution of internally wrapping the <:< operator in a synchronized block? Isn't it much better than doing nothing?
  2. Even if this is such a difficult issue to solve, it seems like egg on the face of Scala. I want Scala to be a language I'm happy to work in / tell my friends about. So why close the issue?
@adriaanm

Member

commented Mar 19, 2019

Apparently our communication around scala.reflect being experimental hasn't been very effective. It's a layer built on top of the compiler, and you'll just have to take my word for it (or look around in the PR queue) that it's not an easy thing to make thread-safe. If it were as easy as you suggest in 1, we'd have done it.

  2. 🤷‍♂️
@HyukjinKwon


commented Apr 2, 2019

Sorry if I misunderstand or missed something. What's wrong with approach 1? If there's a problem with it, the Spark side shouldn't take that approach either.

kaishh added a commit to 7mind/izumi that referenced this issue May 30, 2019

JoshRosen added a commit to JoshRosen/spark that referenced this issue Jun 19, 2019

[SPARK-26555][SQL] make ScalaReflection subtype checking thread safe
Make ScalaReflection subtype checking thread safe by adding a lock. There is a thread safety bug in the <:< operator in all versions of scala (scala/bug#10766).

Existing tests and a new one for the new subtype checking function.

Closes apache#24085 from mwlon/SPARK-26555.

Authored-by: mwlon <mloncaric@hmc.edu>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

JoshRosen added a commit to apache/spark that referenced this issue Jun 20, 2019

[SPARK-26555][SQL][BRANCH-2.4] make ScalaReflection subtype checking …
…thread safe

This is a Spark 2.4.x backport of #24085. Original description follows below:

## What changes were proposed in this pull request?

Make ScalaReflection subtype checking thread safe by adding a lock. There is a thread safety bug in the <:< operator in all versions of scala (scala/bug#10766).

## How was this patch tested?

Existing tests and a new one for the new subtype checking function.

Closes #24913 from JoshRosen/joshrosen/SPARK-26555-branch-2.4-backport.

Authored-by: mwlon <mloncaric@hmc.edu>
Signed-off-by: Josh Rosen <rosenville@gmail.com>

kaishh added a commit to 7mind/izumi that referenced this issue Jul 1, 2019

kaishh added a commit to 7mind/izumi that referenced this issue Jul 3, 2019

Synchronize =:= too just in case, due to strange failures... (#568)
* Synchronize =:= too just in case, due to strange failures... scala/bug#10766

* CompilerTest linux permission ro-Z

* Revert "CompilerTest linux permission ro-Z"

This reverts commit 9744a4c.

kaishh added a commit to 7mind/izumi that referenced this issue Jul 16, 2019

kaishh added a commit to 7mind/izumi that referenced this issue Jul 16, 2019
