SI-4918 Remove val in for comprehension value definitions #102


Merged
odersky merged 1 commit into scala:master on Oct 22, 2013

Conversation

@soc (Contributor) commented on Sep 24, 2013

No description provided.
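
For context, SI-4918 concerns the redundant val keyword in for-comprehension value definitions, which this change removes. A minimal before/after sketch, assuming post-merge (Scala 2.11) syntax; the object name is illustrative:

    object ForValSketch extends App {
      // Before this change, val was (deprecatedly) accepted in a value definition:
      //   for (x <- 1 to 3; val y = x * 2) yield y   // syntax error after this change
      // After, the value definition is written without val:
      val ys = for (x <- 1 to 3; y = x * 2) yield y
      println(ys) // Vector(2, 4, 6)
    }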

@soc (Contributor, Author) commented on Oct 13, 2013

Review by @odersky, please.

odersky added a commit that referenced this pull request Oct 22, 2013
SI-4918 Remove val in for comprehension value definitions
@odersky odersky merged commit f818652 into scala:master Oct 22, 2013
liancheng pushed a commit to liancheng/scala-dist that referenced this pull request Jul 20, 2014
Added new Spark Streaming operations

New operations
- transformWith which allows arbitrary 2-to-1 DStream transform, added to Scala and Java API
- StreamingContext.transform to allow arbitrary n-to-1 DStream
- leftOuterJoin and rightOuterJoin between 2 DStreams, added to Scala and Java API
- missing variations of join and cogroup, added to Scala and Java API
- missing JavaStreamingContext.union

Updated a number of Java and Scala API docs
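
As a rough illustration of two of the listed operations, here is a sketch against the Spark Streaming Scala API of that era; the clicks and users streams are hypothetical:

    import org.apache.spark.rdd.RDD
    import org.apache.spark.streaming.StreamingContext._ // pair-DStream implicits (Spark 1.x)
    import org.apache.spark.streaming.dstream.DStream

    // `clicks` and `users` are hypothetical input streams, keyed by user id.
    def sketch(clicks: DStream[(String, Long)],
               users: DStream[(String, String)]): Unit = {
      // leftOuterJoin: keep every click; attach the user record when one exists.
      val enriched: DStream[(String, (Long, Option[String]))] =
        clicks.leftOuterJoin(users)

      // transformWith: arbitrary 2-to-1 transform at the RDD level,
      // here dropping clicks whose key also appears in `users`.
      val unmatched: DStream[(String, Long)] =
        clicks.transformWith(users,
          (c: RDD[(String, Long)], u: RDD[(String, String)]) => c.subtractByKey(u))

      enriched.print()
      unmatched.print()
    }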
liancheng pushed a commit to liancheng/scala-dist that referenced this pull request Jul 20, 2014
Add clustered index on edges by source vertex
liancheng pushed a commit to liancheng/scala-dist that referenced this pull request Jul 20, 2014
This reopens PR 649 from incubator-spark against the new repo

Author: Sandy Ryza <sandy@cloudera.com>

Closes scala#102 from sryza/sandy-spark-1064 and squashes the following commits:

270e490 [Sandy Ryza] Handle different application classpath variables in different versions
88b04e0 [Sandy Ryza] SPARK-1064. Make it possible to run on YARN without bundling Hadoop jars in Spark assembly