Generic tuples improvements #41
Conversation
- as is the case for `Concat`
- to only require being defined on the element types, similar to what we have for `type FlatMap`
- refine `type IndexOf` doc
library/src/scala/Tuple.scala (Outdated)

```scala
arr(i) = this.productElement(toInclude(i).asInstanceOf[Int]).asInstanceOf[Object]

/** A tuple consisting of all elements of this tuple that satisfy the predicate `p`. */
inline def filter[This >: this.type <: Tuple, P[_ <: Union[This]] <: Boolean]
    (p: (x: Union[This]) => P[x.type]): Filter[This, P] =
```
Question: should this be a poly function (like `map`), or a function over the union of the element types? I guess with `map` the intention is to box a value and not inspect it, but with `filter` you actually need to read the value.
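For context, a minimal sketch of the two function shapes being contrasted here (illustrative values, not from the PR):

```scala
// A poly function abstracts over the element type and boxes the value
// without inspecting it:
val wrap: [t] => t => Option[t] = [t] => (x: t) => Option(x)

// A function over the union of the element types can inspect the value:
val show: Int | String => String = {
  case i: Int    => s"int: $i"
  case s: String => s"str: $s"
}

assert(wrap(42) == Some(42))
assert(show(1) == "int: 1")
assert(show("a") == "str: a")
```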
Imo `def map` should not have been a poly fun either. scala#19600 has a version of `map` with a dependent function type if you're curious about what the diffs look like.
Why do we need the term parameter of filter at all?
If `filter` only worked on types, why should it be a method? It seems more like something you would accomplish with `constValueTuple[Filter[This, P]]`. Or, if it needs to work on widened (non-precise) types, then why not inspect the runtime value?

@bishabosha do you know what was the motivation for making
This is for the same reason we changed `type Head[X <: NonEmptyTuple] = ...` to `type Head[X <: Tuple] = ...`. Also, this is no more unsafe than the other operations already defined for all tuples: `drop(1)`, for example, was always defined, even though `tail` wasn't.
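To illustrate the asymmetry mentioned above (a small sketch against the current standard library; when the receiver is widened to `Tuple`, the result type is an unreduced match type, but the runtime behaviour is still defined):

```scala
// `drop` has always been defined for every Tuple, including EmptyTuple:
val e: Tuple = EmptyTuple
assert(e.drop(1) == EmptyTuple)

val t: Tuple = (1, 2, 3)
assert(t.drop(1) == (2, 3))

// `tail`, by contrast, used to be available only on NonEmptyTuple:
// e.tail  // would not compile: tail required a NonEmptyTuple receiver
```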
```scala
/** A boolean indicating whether there is an element `y.type` in the type `X` of `x` */
transparent inline def contains(y: Any): Boolean = constValue[Contains[X, y.type]]
```
I don't think we can do that. `(a: X).contains(b)` does not necessarily have type `Contains[X, b.type]`. Equality could mix values with different types.
> Equality could mix values with different types

I'm not sure I understood, did you mean the types in `X` could be widened?

My reasoning, perhaps incorrect, was the following. For the definition to be sound, we only need to verify that the term and type level agree in the cases where the match type reduces.

If `x` is an empty tuple, then the result is immediate. Otherwise, let `X := x1 *: xs1` and `x := (hd: x1) *: (tl: xs1)`; we need to check:

- `Contains[X, y.type]` reduces to the first case (i.e. `true`), implies `x1` matches `y.type`, implies `hd == y`
- `Contains[X, y.type]` proceeds to the second case, implies `x1` is disjoint from `y.type`, implies `hd != y`

Which I believe is correct according to the match type spec. \cc @sjrd
Not directly linked, but interesting: we can implement an unoptimised version which does not require using `asInstanceOf`, provided the type `Contains` is restructured to account for limitations of GADT reasoning.

```scala
type Contains1[X <: Tuple, Y] <: Boolean = X match
  case x *: xs => x match
    case Y => true
    case _ => Contains1[xs, Y]
  case EmptyTuple => false

extension [X <: Tuple](self: X) def contains1(y: Any): Contains1[X, y.type] =
  self match
    case self: (x *: xs) =>
      val (hd: x) *: (tl: xs) = self
      hd match
        case _: y.type => true // NOTE: this uses eq rather than == (which productIterator.contains uses)
        case _ => tl.contains1(y)
    case _: EmptyTuple => false

val tup = (1, 2, 3)
assert(tup.contains1(2))
assert(!tup.contains1(0))
assert(!tup.contains1("hi"))
```
> Equality could mix values with different types

I think it means you can have a value in the tuple that always equals whatever is passed to it (e.g. `a`), so `a.type`, the singleton type of the reference `a`, might not actually be in the tuple.
> `Contains[X, y.type]` proceeds to the second case, implies `x1` is disjoint from `y.type`, implies `hd != y`

Not necessarily. For instance, `Int` is disjoint from `Float`, yet an `Int` value can `==` a `Float` value.
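That counterexample can be checked directly; Scala's cooperative numeric equality makes boxed values of disjoint numeric types compare equal:

```scala
val i: Int = 1
val f: Float = 1.0f

// Int and Float are provably disjoint types, yet their values compare equal:
assert(i == f)

// This is exactly what bites productIterator.contains: the type level
// would reduce Contains to false here, while the term level says true.
assert((1, 2, 3).productIterator.contains(1.0f))
```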
Indeed, good catch! I did not think of that.

We're coming back to the original version, but putting this here for reference in case we ever reconsider the tuple definitions. The issue can be fixed by replacing `productIterator` with `toArray` (which returns objects) and requiring `y: AnyRef`.
library/src/scala/Tuple.scala (Outdated)

```scala
arr(i) = this.productElement(toInclude(i).asInstanceOf[Int]).asInstanceOf[Object]

/** A tuple consisting of all elements of this tuple that satisfy the predicate `p`. */
inline def filter[This >: this.type <: Tuple, P[_ <: Union[This]] <: Boolean]
    (p: (x: Union[This]) => P[x.type]): Filter[This, P] =
```
> Why do we need the term parameter of `filter` at all?
The motivation for having a term level predicate / term equality is the same. For example:

```scala
(1, 2, 3).containsType[2]
// Error: Contains[(Int, Int, Int), (2 : Int)] is not a constant type
```

The result depends on inference. Even then, a more precise ascription is not always an option. An example could be:

```scala
type NonZ[N <: Int] = N match
  case 0 => false
  case _ => true

def filterTypeNonZ(tup: (Int, Int)) = tup.filterType[(Int, Int), NonZ]
// Error: Tuple element types must be known at compile time
```

Whereas with a term predicate:

```scala
def nonZ(n: Int): NonZ[n.type] = n match
  case _: 0 => false
  case _ => true

def filterNonZ(tup: (Int, Int)) = tup.filter[(Int, Int), NonZ](nonZ)

val r1: Tuple = filterNonZ((3, 0))
assert(r1 == Tuple(3)) // ok
```

Note that we still require an ascription for a precise inferred type, but the runtime result is independent of that. Of course, we can avoid the error at the definition of `filterTypeNonZ` by making it `inline`:

```scala
inline def filterTypeNonZ(tup: (Int, Int)) = tup.filterType[(Int, Int), NonZ]
val r2 = filterTypeNonZ((5, 0)) // Error
```

```
| ^^^^^^^^^^^^^^^^^^^^^^
| Tuple element types must be known at compile time
|-------------------------------------------------------------------------------------------------------------------
|Inline stack trace
|- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|This location contains code that was inlined from Tuple.scala:108
|  val toInclude = constValueTuple[IndicesWhere[This, P]].toArray
|                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|This location contains code that was inlined from Tuple.scala:108
|  inline def filterTypeNonZ(tup: (Int, Int)) = tup.filterType[(Int, Int), NonZ]
```

At which point we may be tempted to add a bounded type parameter, which then hits the limitation of match type variance:

```
| inline def filterTypeNonZ[T <: (Int, Int)](tup: T) = tup.filterType[T, NonZ]
|                                                                        ^
|Type argument NonZ does not conform to upper bound [_ <: scala.Tuple.Fold[T, scala.Nothing, [x, y] =>> x | y]] =>> scala.Boolean
|
|Note: a match type could not be fully reduced:
|
|  trying to reduce scala.Tuple.Fold[T, scala.Nothing, [x, y] =>> x | y]
|  failed since selector T
|  does not uniquely determine parameters x, xs in
|    case x *: xs => x | scala.Tuple.Fold[xs, scala.Nothing, [x, y] =>> x | y]
|  The computed bounds for the parameters are:
|    x <: scala.Int
|    xs <: scala.Int *: scala.Tuple$package.EmptyTuple.type
```

Using `constValue` essentially means limiting the usages of the term level definitions to cases where the corresponding match type reduces. Which, again, I'm not arguing is right or wrong, only pointing out there are differences.
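A small illustration of that `constValue` limitation, using `scala.compiletime.constValue` directly:

```scala
import scala.compiletime.constValue

// Works: the type argument is already a constant type.
val three = constValue[3]
assert(three == 3)

// Does not compile: for an unknown `n`, `n.type` is not a constant type,
// i.e. the corresponding type never reduces to a constant.
// def f(n: Int) = constValue[n.type]  // error: not a constant type
```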
The tuple term level definitions were not being tested before
LGTM now
Addresses scala#19174 (comment)
See scala#20285 for the CI
For reference, we are still missing the corresponding term level operations: fold, flatMap, indicesWhere, disjoint.
Not saying they are necessary to merge scala#19174 or needed in general, just for reference.
Another thing worth pointing out is that there are still a few cases which are limited by the invariance of match types in their scrutinee. The trick to get around this, used extensively in Tuple.scala, is to have a type parameter like `This >: this.type <: Tuple`. `toList`, `map`, `apply`, `take`, `drop`, and `splitAt` all have arguments which don't use said trick (and are just as eligible). But changing them isn't so straightforward, as the new definitions then rely on inference, which breaks several usages.
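For reference, a minimal sketch of the `This >: this.type <: Tuple` trick described above (the `size2` name is hypothetical, not part of the library):

```scala
import scala.compiletime.constValue

extension (t: Tuple)
  // The lower bound `t.type` forces inference to instantiate `This` to the
  // precise (widened) tuple type of the receiver, so Tuple.Size[This] can
  // reduce to a constant even though match types are invariant in their
  // scrutinee.
  inline def size2[This >: t.type <: Tuple]: Tuple.Size[This] =
    constValue[Tuple.Size[This]]

val s = (1, "a", true).size2
assert(s == 3)
```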