
Add primitive compiletime operations on singleton types #7628

Merged
merged 17 commits into lampepfl:master on Jan 8, 2020

Conversation


MaximeKjaer commented Nov 27, 2019

This PR adds support for type-level operations on singletons, making it possible to write the following:

val x: 2 + 3 = 5
val y: 3 * 4 = 12

Having these primitive operations makes it possible to define more complex operations through match types, e.g.:

type GCD[A <: Int, B <: Int] <: Int = B match {
    case 0 => A
    case _ => GCD[B, A % B]
}
val z: GCD[252, 105] = 21

Background

With scala.compiletime.S and match types, it's possible to implement a variety of operations on singletons. For instance, addition and multiplication can be implemented as follows:

type +[A <: Int, B <: Int] <: Int = A match {
    case 0 => B
    case S[aMinusOne] => S[aMinusOne + B]
}

type *[A <: Int, B <: Int] <: Int = A match {
    case 0 => 0
    case _ => MultiplyLoop[A, B, 0]
}

type MultiplyLoop[A <: Int, B <: Int, Acc <: Int] <: Int = A match {
    case 0 => Acc
    case S[aMinusOne] => MultiplyLoop[aMinusOne, B, B + Acc]
}

However, as these are implemented on the low-level primitive of scala.compiletime.S, they are highly recursive. Interpreting these match types quickly runs out of memory (for instance, 100 * 100 exceeds the recursion limit on default settings).
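With the primitive operations from this PR, these applications constant-fold in a single step instead of recursing. A minimal usage sketch, using the `scala.compiletime.ops.int` package layout where these ops eventually landed in released Scala 3 (the PR initially placed them directly under `scala.compiletime.ops`):

```scala
import scala.compiletime.ops.int.*

// Each application folds directly to a constant, so there is no
// recursion limit to hit, unlike the S-based encoding above.
val small: 2 + 3 = 5
val big: 100 * 100 = 10000
```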

Further work and discussion

A natural extension would be typing term-level operations with the precise type-level equivalent, perhaps as an opt-in, like val x: 1 + 2 = 1 + 2.
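In released Scala 3, something close to this can already be approximated with a transparent inline method, whose result type is narrowed to the folded constant. A sketch; `add` is a hypothetical helper, not part of this PR:

```scala
// Hypothetical helper: when both arguments are constants, inlining
// constant-folds `a + b` to a literal, and `transparent` lets the
// narrowed literal type escape as the result type.
transparent inline def add(inline a: Int, inline b: Int): Int = a + b

val x: 3 = add(1, 2) // result type is the literal type 3
```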

Related


dotty-bot left a comment

Hello, and thank you for opening this PR! 🎉

All contributors have signed the CLA, thank you! ❤️

Have an awesome day! ☀️


soronpo commented Nov 27, 2019

This is exciting! I contributed a lot to singleton-ops, and I wouldn't mind at all retiring it in favor of true compiler support. I will provide extensive feedback when I review the work.


soronpo commented Nov 27, 2019

Topics for discussion:

  1. What about Long, Char, Float, Double, and String operations that singleton-ops supports?
  2. If, say, we add Long operations support, what do we do for a Long + Int singleton operation?
  3. What do you think about a compile-time NumberOfLeadingZeros (java.lang.Integer.numberOfLeadingZeros)? I use it from singleton-ops.

MaximeKjaer commented Dec 1, 2019

It seems that I've broken some tests. I'll work on fixing those.

@soronpo, thank you! I think the guiding principle should be to support all primitive operations on all supported singleton types. It would seem arbitrary to decide that certain types or certain operations should not be supported, especially when adding support for a primitive operation is as trivial as it is.

So to answer your points:

  1. Those should also be supported. For now, I'm thinking of splitting the work in two: first this PR as a proof of concept, and if it's merged, I'll make a second PR aiming for completeness.

  2. The standard library does this by overloading the + method, but overloading a type alias doesn't work in Scala:

    type +[X <: Int, Y <: Int] <: Int
    type +[X <: Int, Y <: Float] <: Float // Error: + is already defined as type +

    However, it may be possible to emulate overloading by using union types and match types, as follows:

    type That = Double | Float | Long | Int | Char | Short | Byte | String
    type Result[X <: That] = X match {
    	case Double => Double
    	case Float => Float
    	case Long => Long
    	case String => String
    	case _ => Int
    }
    
    type +[X <: Int, Y <: That] <: Result[Y]

    With this solution, the types indicate something slightly more general than what will be supported by the compiler. For instance, 1 + (2 | 3L) is not reduced, but the type checker will not complain about this type application.

    This example opens up a discussion on how to deal with union types. I think it could make sense to distribute over the union, so that the resulting type would be 3 | 4L. This is not something I have implemented yet, but it's an interesting discussion.

  3. I think that should be supported too.

Copy link
Contributor

soronpo left a comment

2. However, it may be possible to emulate overloading by using union types and match types, as follows:

I think (as I mentioned in my review comment) that <: AnyVal should suffice. A union type is an interesting option, but I think it over-complicates the compiler. The match operation will throw an error anyway if the type is not supported, so I don't think the tight upper-bound limit is required.

MaximeKjaer added 4 commits Dec 1, 2019
A NoDenotation throws an AssertionError when getting its owner. Instead
of crashing when passed a NoDenotation (which may happen in certain
erroneous cases such as tests/neg/i3083.scala), return false.
Unconditionally normalizing type applications results can result in an
infinite loop (e.g. in tests/neg-custom-args/matchtype-loop.scala), so
this normalization should happen more conservatively.
As suggested in code review

soronpo commented Dec 2, 2019

What do you think about type a = Int + 1 (meaning, a non-singleton addition with a singleton)?
In singleton-ops I created something called TwoFace values, a mechanism that allows best-effort inlining. Example:

val oneCT : TwoFace.Int[1] = TwoFace.Int(1)
val one : Int = 1
val oneRT :  TwoFace.Int[Int] = TwoFace.Int(one)

val twoCT : TwoFace.Int[2] = oneCT + oneCT
val twoRT1 : TwoFace.Int[Int] = oneCT + oneRT
val twoRT2 : TwoFace.Int[Int] = oneRT + oneCT
val twoRT3 : TwoFace.Int[Int] = oneRT + oneRT

Currently the dotty inline parameters mechanism only allows compile-time values, so I'm still forced to use the TwoFace concept. So for this to work, I require Int + 1 =:= Int. If that's not an acceptable default behavior, then perhaps we can have a separate flexible_ops (or better-named) package.


soronpo commented Dec 2, 2019

To clarify further, ideally I would like to be able to write:

import scala.compiletime.ops._
object TwoFace {
  opaque type Int[T <: scala.Int] = scala.Int
  object Int {
    def apply[T <: scala.Int](value : T) : Int[T] = value
    protected def force[T <: scala.Int](value : scala.Int) : Int[T] = value
    given Ops: {
      def [L <: scala.Int, R <: scala.Int](left : Int[L]) + (right : Int[R]) : Int[L + R] = force[L + R](left + right) 
    }
  }
}
MaximeKjaer added 2 commits Dec 2, 2019
These are simply checked as the supertype. For instance, `Int + Int`
is a subtype of `Int`. This allows for more graceful failure when more
general type arguments are provided.

MaximeKjaer commented Dec 2, 2019

This is a good point, thank you! I think typing Int + 1 as an Int is acceptable. It's consistent with S[Int] (for which val x: S[Int] = 1 type-checks), and it allows for more graceful failure when non-singleton values are provided, like in your example of addition on Int.

I've implemented it in 4d73d89 by falling back to a type check with the super-type when constant folding fails.

With this change, I'm able to make the program below compile
import scala.compiletime.ops._

object TwoFace {
  opaque type Int[T <: scala.Int] = scala.Int
  object Int {
    def apply[T <: scala.Int](value : T) : Int[T] = value
    protected def force[T <: scala.Int](value : scala.Int) : Int[T] = value
    def [L <: scala.Int, R <: scala.Int](left : Int[L]) + (right : Int[R]) : Int[L + R] = force[L + R](left + right)
  }
}

object TestTwoFace {
  import TwoFace.Int.+

  val oneCT : TwoFace.Int[1] = TwoFace.Int(1)
  val one : Int = 1
  val oneRT :  TwoFace.Int[Int] = TwoFace.Int(one)

  val twoCT : TwoFace.Int[2] = oneCT + oneCT
  val twoRT1 : TwoFace.Int[Int] = oneCT + oneRT
  val twoRT2 : TwoFace.Int[Int] = oneRT + oneCT
  val twoRT3 : TwoFace.Int[Int] = oneRT + oneRT
}

soronpo commented Dec 2, 2019

This is awesome! We need to make sure we don't screw up invariance.
So:

val invarianceError : TwoFace.Int[Int] = TwoFace.Int[1](1) //error
val worksDueToWidening : TwoFace.Int[Int] = TwoFace.Int(1) //works due to widening
val weirdButOK : TwoFace.Int[Int + 0] = TwoFace.Int[1](1)

Of course, I would change the `TwoFace` implementation above to be covariant:
import scala.compiletime.ops._

object TwoFace {
  opaque type Int[+T <: scala.Int] = scala.Int
  object Int {
    def apply[T <: scala.Int](value : T) : Int[T] = value
    protected def force[T <: scala.Int](value : scala.Int) : Int[T] = value
    def [L <: scala.Int, R <: scala.Int](left : Int[L]) + (right : Int[R]) : Int[L + R] = force[L + R](left + right)
  }
}

object TestTwoFace {
  import TwoFace.Int.+

  val oneCT : TwoFace.Int[1] = TwoFace.Int(1)
  val one : Int = 1
  val oneRT :  TwoFace.Int[Int] = TwoFace.Int(one)
  val oneRT2 :  TwoFace.Int[Int] = TwoFace.Int[1](1) //works due to covariance

  val twoCT : TwoFace.Int[2] = oneCT + oneCT
  val twoRT1 : TwoFace.Int[Int] = oneCT + oneRT
  val twoRT2 : TwoFace.Int[Int] = oneRT + oneCT
  val twoRT3 : TwoFace.Int[Int] = oneRT + oneRT
}

soronpo commented Dec 2, 2019

Maybe now it will be easier to convince the SIP committee to support unary types (SIP36) so we can replace Negate[T] and ![T] with normal looking code 😄


soronpo commented Dec 2, 2019

This might be a little tougher, but what about implementing algebraic rules within the type system (for the discussed ops)?

def commutative[A <: Int, B <: Int](implicit ev : A + B =:= B + A) : Unit = {}
def associative[A <: Int, B <: Int, C <: Int](implicit ev : A + (B + C) =:= (A + B) + C) : Unit = {}
// other rules...

Of course, in case of overflows or precision issues, these rules may not apply, so I'm open to other opinions on the matter.


MaximeKjaer commented Dec 4, 2019

The covariant implementation you gave pretty much works as expected, so that's good news. The only line in your example that I cannot compile is the following:

val twoCT : TwoFace.Int[2] = oneCT + oneCT

I've narrowed down the problem to type inference (and not a bug in the scala.compiletime.ops implementation, thankfully!). Running the compiler with -Xprint:frontend tells us that the type parameters of TwoFace.Int.+ are inferred as follows:

val twoCT: TwoFace.Int[2.type] = 
      TwoFace.Int.+[Int, Int](TestTwoFace.oneCT)(TestTwoFace.oneCT)

I'm not exactly sure why it infers Int instead of 1.type, especially seeing that oneCT is correctly typed as TwoFace.Int[1.type]. Type inference seems to be a little too conservative in this instance. There may be a good reason for this, so I will try to find out; if there is no good reason, I can open an issue or submit a PR.

Helping out the type inference by explicitly writing the type parameters solves the problem, and makes the line compile:

val twoCT: TwoFace.Int[2] = TwoFace.Int.+[1, 1](oneCT)(oneCT)

MaximeKjaer commented Dec 4, 2019

What would be a good use-case for these algebraic rules?

As a small aside, the examples you gave (commutativity and associativity) behave more like tests than like laws that can be proven. Because these rules are true (for Int, at least), the implicit will always be found. However, I could also write a rule that is incorrect in general, but happens to be true when B has type 0:

def badLaw[A <: Int, B <: Int](implicit ev: A + B =:= A - B) : Unit = {}

Because Int + Int =:= Int, the rule also passes with Int type parameters, which may be a little counter-intuitive at first. All the following lines will compile:

badLaw[Int, Int]
badLaw[Int, 20]
badLaw[1, 0]
badLaw[0, 0]

It only complains if I can give it a counter-example:

badLaw[3, 1] // error, implicit not found

soronpo commented Dec 4, 2019

What would be a good use-case for these algebraic rules?

Not much of a use-case, but the goal is regularity (notice the switched positions of L and R):

import scala.compiletime.ops._

object TwoFace {
  opaque type Int[T <: scala.Int] = scala.Int
  object Int {
    def apply[T <: scala.Int](value : T) : Int[T] = value
    protected def force[T <: scala.Int](value : scala.Int) : Int[T] = value
    def [L <: scala.Int, R <: scala.Int](left : Int[L]) + (right : Int[R]) : Int[L + R] = force[R + L](left + right) // switched positions
  }
}

OlivierBlanvillain commented Dec 5, 2019

I'm not convinced about Int * 2 =:= Int, these two types do not denote the same set of values... Don't we want subtyping in one direction instead?


soronpo commented Dec 5, 2019

I'm not convinced about Int * 2 =:= Int, these two types do not denote the same set of values... Don't we want subtyping in one direction instead?

It won't work with one direction only. If Int * 2 <:< Int and Int * 3 <:< Int, how are Int * 2 and Int * 3 related?
The point is that any operation between a singleton type and its supertype always yields the supertype. I don't see the problem here.


soronpo commented Dec 5, 2019

To clarify, this is exactly what happens at the term level:

val one : Int = 1
one * 2  // : Int
one * 3 // : Int
1 * 2 // : 2
1 * 3 // : 3

smarter commented Dec 5, 2019

It won't work with one direction. If Int * 2 <:< Int and Int * 3 <:< Int, how Int * 2 and Int * 3 are related?

What do you mean by "won't work"? You can have two subtypes of a given type whose sets of values partially overlap.


soronpo commented Dec 5, 2019

What do you mean by "won't work"? You can have two subtypes of a given type whose sets of values partially overlap.

Well, this actually depends on whether or not algebraic rules are applied to the type system.

val one : Int = 1
val tfA = TwoFace.Int(one) + TwoFace.Int(1) // : TwoFace.Int[Int + 1]
val tfB = TwoFace.Int(1) + TwoFace.Int(one) // : TwoFace.Int[1 + Int]
summon[tfA.type =:= tfB.type] //error

This error is weird.
I guess subtyping might be OK, but it just aggravates the lack of algebraic rules.


smarter commented Dec 5, 2019

When A <:< B is false, it never means "A is not a subtype of B", it means "I could not prove that A is a subtype of B".


OlivierBlanvillain commented Dec 6, 2019

I think adding algebraic rules on integer operations is outside the scope of this PR. The first step, and what's most urgently missing at the moment, is a way to perform integer operations on singleton types.

One argument for limiting the scope to reducing only when both types are singletons is that doing something more elaborate might embed more knowledge in the compiler than there is in the runtime. The JVM knows about 1 + 2 = 3, but has no idea about A + B = B + A, so let's first get on par with the runtime before considering going further...


MaximeKjaer commented Dec 6, 2019

I think that @OlivierBlanvillain raises a very good point. Currently, the following type-checks:

object TwoFace {
  opaque type Int[T <: scala.Int] = scala.Int
  object Int {
    def apply[T <: scala.Int](value : T) : Int[T] = value
    protected def force[T <: scala.Int](value : scala.Int) : Int[T] = value
    def [L <: scala.Int, R <: scala.Int](left : Int[L]) / (right : Int[R]) : Int[L / R] = force[R / L](left + right) // switched positions
  }
}

This may be fine for addition, which is commutative, but it's not acceptable for division, in my opinion.

I suggest that we go back to only reducing singleton args for now, and not cases like Int + 1. I think the discussion we've already had shows that the latter should probably be tied to slightly more complex machinery (like support for algebra), which is out of the scope of this PR. It's good that we've had some discussion about it now, so that we know what's ahead of us, but I suggest we take it up again in another PR focused specifically on the topic.

So to sum up the conversation thus far:

Remains in this PR

  1. Removing handling of cases like Int + 1
  2. Adding an error when dividing by 0 (or modulo by 0)

Once I've done those two things, I think the PR is ready to be merged.

Possible extensions, for future PRs

  1. Completeness: support for Long, Float, etc (as discussed here)
  2. Algebraic rules (as discussed here)
  3. Unary types and SIP-36 (as discussed here)
  4. More precise type inference (as discussed here)

soronpo commented Dec 6, 2019

I suggest that we go back to only reducing singleton args for now, and not cases like Int + 1.

I'm inclined to agree because covariance should handle this properly.
If it's TwoFace.Int[+T], then this should still work:

val one : Int = 1
val a : TwoFace.Int[Int] = TwoFace.Int(one) + TwoFace.Int(1) 

So to sum up the conversation thus far:

The most important thing: the covariance error you mention here must be dealt with, IMO.


soronpo commented Dec 6, 2019

As proof of greater usability, I think this PR should at least add the following:

  • String + String concatenation.
  • ToString[T] conversion from Int.
  • Require[Cond <: Boolean, Msg <: String] to create logic constraints on values (if Cond is false, generate the error message Msg).

All the above will bring us much closer to letting people use this feature to replace the refined and singleton-ops libraries.

Example code:

import scala.compiletime.ops._

opaque type Positive = Int
object Positive {
  type Check[T] = Require[T > 0, "The value provided isn't positive. Found: " + ToString[T]]
  def apply[T <: Int](value : T)(given Check[T]) : Positive = value
}

val eight = Positive(8)
val negError = Positive(-1) //error: The value provided isn't positive. Found: -1

soronpo commented Dec 6, 2019

Instead of Require, we can just have Error[Msg] special case in the compiler. We can then write:

type Require[Cond <: Boolean, Msg <: String] = Cond match {
  case true => Nothing
  case false => Error[Msg]
}

soronpo commented Dec 6, 2019

All the above will bring us much closer to letting people use this feature to replace the refined and singleton-ops libraries.

Oh, it seems I forgot about inline methods 🤦‍♂
Then I don't think there is a need for this.


soronpo commented Dec 7, 2019

All the above will bring us much closer to letting people use this feature to replace the refined and singleton-ops libraries.

Oh, it seems I forgot about inline methods 🤦‍♂
Then I don't think there is a need for this.

Well, it turns out that it is currently impossible to mix opaque types and inline defs, so using the compiletime ops for this will be nice. I created an issue for the missing inline-def feature: lampepfl/dotty-feature-requests#82

MaximeKjaer added 3 commits Dec 8, 2019
Division by 0 and modulo by 0 would otherwise crash the compiler. 
Throwing an error prints an error message positioned on the operator,
and does not prevent other type errors from being found and shown.
As discussed in #7628

MaximeKjaer commented Dec 8, 2019

The most important thing- the covariance error you mention here must be dealt with, IMO.

I agree, and I think we can open an issue for now. As far as I can tell, it's a type inference bug, not a bug with this implementation or with covariance, so I'm inclined to treat them separately. If merged, this PR should provide some nice motivation to fix it, IMO.

As for the other points:

  • I haven't added String + String because that would require overloading the + type. As we discussed earlier, I think that merits its own PR.

  • I've added ToString[T <: Int]

  • I haven't added a Require or an Error on this branch just yet. I am more in favor of an Error[Msg <: String] that mirrors scala.compiletime.error, and I have a draft implementation that generates the nice error below on this program. I think it requires a bit more thorough testing; for now I've just hacked it together.

    Generated Error
    -- Error: tests/playground/Errors.scala:22:32 ----------------------------------
    22 |  val y: Positive = Positive[-1](-1)
       |                                ^
       |                                The value provided isn't positive
    one error found

Preferably, I'd like to keep changes somewhat incremental rather than have a big PR doing many different things. So I think this is ready for your review @OlivierBlanvillain, and if it's all fine and the tests pass, I think we can merge.


soronpo commented Dec 8, 2019

OK, I think it's better to have .md documentation.

MaximeKjaer added 3 commits Dec 16, 2019
* Use NoType instead of Option[Type]
* Move a Set to a val to avoid recomputing it on every call
This commit serves as a proof-of-concept of "overloaded" singleton ops.
`+` is already addition, so adding concatenation requires "overloading"
the `+` op. Scala does not support overloaded type aliases, so we must
have two + types in different objects. However:

- unqualified references (`+`) can be ambiguous (`int.+` or `string.+`)
- qualified types (e.g. `int.+`) cannot be used infix

The solution is to have a top-level `+` match type that redirects to
the qualified `int.+` or `string.+`. These are kept private, as an
implementation detail. The top-level match type is not considered as a
compiletime applied type, as we cannot do constant folding before the
match type has been applied.
As discussed IRL with @OlivierBlanvillain, the previous approach with a
match type dispatching to the correct overloaded op is not ideal, since
adding more ops will mean modifying previous ops. For now, we therefore
focus on having a good internal implementation of the ops.

In some cases, the syntax is not ideal. For instance:

    import scala.compiletime.ops.int._
    import scala.compiletime.ops.string._

    summon[1 + 2]
    //       ^ Reference to + is ambiguous
    //         it is both imported by import scala.compiletime.ops.int._
    //         and imported subsequently by import scala.compiletime.ops.string._

Or:

    import scala.compiletime.ops._
    summon[1 int.+ 2]
    //          ^ an identifier expected, but '.' found

These cases can be improved by allowing infix qualified types, or 
implementing resolution of member types (i.e. `A + B` as `A#+[B]`
instead of `+[A, B]`).
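One way around the import ambiguity in released Scala 3 is a renaming import, a sketch assuming the final `scala.compiletime.ops.int` / `scala.compiletime.ops.string` layout (the `++` name is an arbitrary choice for this example):

```scala
import scala.compiletime.ops.int.`+`
// Rename string concatenation so both ops can be used unqualified.
import scala.compiletime.ops.string.{`+` as `++`}

val n: 1 + 2 = 3          // integer addition
val s: "ab" ++ "cd" = "abcd" // string concatenation under its new name
```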

soronpo left a comment

In some cases, the syntax is not ideal. For instance:
import scala.compiletime.ops.int._
import scala.compiletime.ops.string._
summon[1 + 2]
// ^ Reference to + is ambiguous
// it is both imported by import scala.compiletime.ops.int._
// and imported subsequently by import scala.compiletime.ops.string._

IMO this split is the wrong move in terms of usability, especially if we look at it long-term, when we want to add support for other operations. There is also an irregularity in == and !=: why are they not split?
However, I believe the code you used to demonstrate how to combine +[_ <: Int, _ <: Int] and +[_ <: String, _ <: String] offers a good compromise. I think that ops.int._, ops.string._ etc. should be regarded as internal, and the match-types trick should be used to create ops._ (or ops.general._ / ops.all._), which most users will use.

@@ -6,39 +6,33 @@ package object ops {
@infix type ==[X <: AnyVal, Y <: AnyVal] <: Boolean
@infix type !=[X <: AnyVal, Y <: AnyVal] <: Boolean

@infix type +[X <: Int | String, Y <: Int | String] = (X, Y) match {

soronpo commented Dec 20, 2019

I actually like this example a lot. It's a shame to delete it. I propose changing it into test code (or providing it as an example in a document).

MaximeKjaer (Author) commented Dec 29, 2019

Good point -- I've done both in 2cab591, and generally documented the feature.

@@ -6,39 +6,33 @@ package object ops {
@infix type ==[X <: AnyVal, Y <: AnyVal] <: Boolean

soronpo commented Dec 20, 2019

I believe == and != should be split as well. Either everything is split according to the supertype or none of it is.

MaximeKjaer (Author) commented Dec 29, 2019

I moved them into scala.compiletime.ops.any in 8c117c1.

The alternative was to duplicate them into each subpackage for each supported type, and then duplicate the constant folding code... which felt like a lot of duplication. Seeing that == and != are defined on Any, I think this solution makes the most sense. It also emphasizes that equality is between two Any values, and that 1 == "1" will return false.
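For illustration, assuming the `scala.compiletime.ops.any` package as it ended up in released Scala 3:

```scala
import scala.compiletime.ops.any.*

val t: 1 == 1 = true
// Equality between values of unrelated types reduces to false
// rather than failing to compile.
val f: 1 == "1" = false
```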

MaximeKjaer added 2 commits Dec 27, 2019
+ Add test to check that code example in the documentation compiles
This is more consistent with the other ops, which are also split 
depending on the type of the argument.

frobinet commented Dec 30, 2019

This is a very interesting idea and would have many applications in anything related to numerics. From the looks of it, it could help with compile-time shape checks for something like TensorFlow!


MaximeKjaer commented Dec 30, 2019

@frobinet I haven't mentioned it in the PR, but that's actually exactly what I'm hoping to achieve with this feature! I've been working on that as my semester project at EPFL over the past few months, and I plan on open-sourcing my implementation within the next two weeks or so.

There are indeed many good applications for this feature. For instance, a paper called "Dex: array programming with typed indices" mentions the following:

Is our Hindley-Milner-based type language expressive enough to admit the programs that numerical programmers want to write? What about the dreaded reshape, which would seem to require type-level arithmetic?

Well, that's what this PR implements 🎉 It would make it possible to type operations like reshape in Scala.


frobinet commented Dec 30, 2019

Sounds great. The only place I've seen this is C++, where you can just stick arbitrary computations inside template definitions, but that's just fancy copy-pasting. I guess it would be hard to get reshape correct there, not to mention the horrendous error messages the compiler would throw at you if you got the shapes wrong. Anyway, interesting project, I'll follow it once it's open-source!

package object ops {
object any {
/** Equality comparison of two singleton types.
* ```scala

OlivierBlanvillain commented Jan 8, 2020

No need to specify scala here: that's the default inside scaladoc.

MaximeKjaer (Author) commented Jan 8, 2020

This doesn't seem to be true. In the screenshot below, && has the scala annotation on the Markdown code block, and || does not. I've therefore left the annotations in.

[screenshot: scaladoc output showing `&&` with the scala annotation on its code block and `||` without it]

}

object int {
/** Addition of two `Int` singleton types. Type-level equivalent of `scala.Int.+`

OlivierBlanvillain commented Jan 8, 2020

I would remove all the "Type-level equivalent" comments, since it's pretty obvious. Also, it's inaccurate, as + is not actually a method on object Int...

MaximeKjaer (Author) commented Jan 8, 2020

All right, fixed in 7e0a1db.

Remove unnecessary comments about singleton ops being type-level
equivalents of term-level operations
@OlivierBlanvillain OlivierBlanvillain merged commit 2aeb5ae into lampepfl:master Jan 8, 2020
2 checks passed: CLA (user signed the CLA), continuous-integration/drone/pr (build is passing)