
The Eval class proposal #27

Closed · wants to merge 11 commits

Conversation

@blamario (Contributor) commented Nov 10, 2016

This is a short proposal for a new language extension (or perhaps retraction) that tames seq by putting it back in the Eval class from Haskell versions 1.3 and 1.4.

Rendered proposal

@simonmar

Nicely written proposal.

Why does ($!) need to be a method of Eval? Since you can't write instances of Eval, that doesn't seem necessary.

The main question I have is, when is the user (or the compiler) able to assume η-equivalence? Clearly they can't assume it if they use -XUniversalEval, but what if they import functions from a module that was compiled with -XUniversalEval? Can we only assume η-equivalence if the current module and its transitive closure of imports all use -XNoUniversalEval?

@blamario (Contributor Author) commented Nov 10, 2016

Why does ($!) need to be a method of Eval? Since you can't write instances of Eval, that doesn't seem
necessary.

You're right. I guess the only reason I included it in the class is that it was already there in Haskell 1.3 under a different name:

class Eval a where
    strict :: (a -> b) -> a -> b
    seq    :: a -> b -> b
    strict f x = x `seq` f x
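For comparison, here is a minimal sketch of how such a class could be emulated in today's GHC. The name `seq'` and the hand-written instances are illustrative only; under the proposal the compiler would derive the instances itself.

```haskell
-- Hypothetical user-level emulation of the Haskell 1.3 class.
-- Under the proposal these instances would be compiler-derived.
class Eval a where
  seq'    :: a -> b -> b
  strict' :: (a -> b) -> a -> b
  seq' = seq                    -- default backed by today's primitive seq
  strict' f x = x `seq'` f x

instance Eval Int
instance Eval Bool
instance (Eval a, Eval b) => Eval (a, b)

main :: IO ()
main = print (strict' (+ 1) (41 :: Int))
```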

Can we only assume η-equivalence if the current module and its transitive closure of imports all
use -XNoUniversalEval?

That was my initial assumption, which is why I had included the EtaSafe module property. But I don't think it's necessary to know that the entire module satisfies η-equivalence for any practical purpose. For example, in

-- polymorphic seq, not eta-safe, compiled with -XUniversalEval
strictPair :: a -> b -> (a, b)
strictPair !x !y = (x, y)

strictifyList :: Eval a => [a] -> [a]
strictifyList = foldr (\x xs -> uncurry (:) (strictPair x xs)) []

it seems to me that strictifyList is perfectly safe, despite depending on the unsafe function strictPair which may have been imported from a different module. Now that you made me spell it out, I realize that there's a large hidden assumption in this version of the proposal.

I said that the inferred type of strictPair with the (Eval a, Eval b) constraint would be considered compatible with the declared one, but I omitted to mention (even to myself) that the inferred constraints would be preserved. In the above example, the inferred type of strictifyList (even without the explicit type signature) would be Eval a => [a] -> [a] even though the explicit signature of strictPair doesn't mention Eval. Would GHC be capable of this feat?
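As far as propagating inferred constraints goes, GHC's ordinary class-constraint inference already behaves this way for a user-defined class; a sketch (the class and `seq'` here are hand-rolled stand-ins for the proposed built-in, and `strictPair'` forces only its first argument for simplicity):

```haskell
-- Stand-in for the proposed class, with one hand-written instance.
class Eval a where
  seq' :: a -> b -> b

instance Eval Int where
  seq' = seq

strictPair' :: Eval a => a -> b -> (a, b)
strictPair' x y = x `seq'` (x, y)

-- Even without this signature, GHC infers
--   strictifyList' :: Eval a => [a] -> [a]
-- propagating the constraint from strictPair'.
strictifyList' :: Eval a => [a] -> [a]
strictifyList' = foldr (\x xs -> uncurry (:) (strictPair' x xs)) []

main :: IO ()
main = print (strictifyList' [1, 2, 3 :: Int])
```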

@nomeata (Contributor) commented Nov 10, 2016

What are the implementation choices here? What will Eval desugar to? Will

foo :: Eval a => Maybe a
foo = fib 10000 `seq` Nothing

be a thunk or a function?

@cartazio

Isn't there also the related question of whether or not seq should have the stronger semantics of pseq?

Would pseq as the semantics simplify understanding this example or properties in this proposal? (Or am I raising a totally orthogonal concern?)

On Thu, Nov 10, 2016 at 10:14 AM Joachim Breitner notifications@github.com
wrote:

What are the implementation choices here? What will Eval desugar to? Will

foo :: Eval a => Maybe a
foo = fib 10000 `seq` Nothing

be a thunk or a function?



@blamario (Contributor Author)

I'm not an expert on the implementation side, but I expect the Eval constraints would be completely erased by run time, perhaps as early as Core.

The only difference for pseq would be that both its arguments are strict. For the purpose of type inference, pseq a b would be treated the same as seq a (seq b b).

@goldfirere (Contributor)

I'm not entirely convinced by this proposal, though I'm more in favor of it after reading than I thought I would be.

A few technical points:

  • You say that -XUniversalEval would mean that you get an Eval a constraint for every a :: *. But what about type variables at other kinds? For example, f :: a b -> Int. I suppose you could mean this would automatically include Eval (a b), but that's not what the proposal says.
  • I think it's possible your -XUniversalEval would not be backward compatible, especially in the presence of polymorphic recursion: there's just no way that GHC (or anyone) can tell, just by looking at a type, what the Eval blah constraints should be. Perhaps you want to rephrase as saying that -XUniversalEval implies that every type signature f :: t (where t stands for any type) behaves like f :: _Eval => t, where _Eval is a magical named type hole (as in a partial type signature) that can expand only to Eval blah constraints. This semantics would allow GHC to use the function definition to fill in the right constraints.
  • The approach above still fails in the presence of polymorphic recursion, I think. For example:
type family F a
whimwham :: F a -> F (a, a)

f :: Int -> F a -> ()
f 0 !_ = ()   -- NB: bang pattern!
f n !x = f (n-1) (whimwham x)

To my eyes, you would need the type of f to be something like (forall b. Eval (F b)) => Int -> F a -> (). Note that the type now has (essentially) an implication constraint. GHC doesn't have those, even if GHC could look at the function definition.

  • There is also the problem of datatypes with existential type variables. Do these also get Eval constraints with -XUniversalEval? I suppose they would have to. But now what happens when you use a datatype compiled in one mode in a module compiled in the other? I think there would be some unexpected interactions.
  • We would not need the Eval constraints at runtime, at all. So they could be completely erased. I do think foo :: Eval blah => wurble would still have to remain a thunk, just to be consistent with other constraints. Until we get unboxed constraints, that is!
  • For debugging: we could always have an unsafeSeq that works just like today's seq. Indeed, perhaps an easier way of looking at a proposal like this is to consider introducing Eval and safeSeq as entirely new constructs. We could then consider deprecating good old seq. Note that there would be no trouble at all with adding the Eval constraints to standard classes. And we wouldn't really need the troublesome -XUniversalEval extension. Those people can just use the old, deprecated seq.
  • There would be measurable compile-time performance degradation with this, with the need for GHC to deal with the new Eval constraints. Though under my mini-proposal one bullet up, this would only happen when someone opts into the new behavior. The original proposal, with Eval constraints everywhere, would likely be quite a bit worse in this regard.

@blamario (Contributor Author)

You are correct about Eval (a b). I can't think of a better way to phrase this part of the proposal.

What do you mean by

Note that there would be no trouble at all with adding the Eval constraints to standard classes.

We still need to do something about the existing code that uses the standard classes and has explicit type signatures. I don't see how replacing seq with safeSeq helps with this problem.

Your type family example blows a big hole in the proposal as it stands; it would only work if GHC could infer the signature of all top-level bindings. I guess reading through the Haskell 1.3 specification took me back to the simpler times when this was still true.

All is not lost; the proposal can still fall back on the EtaSafe module alternative.

@goldfirere (Contributor)

Even Haskell98 has functions with an uninferrable type: polymorphic recursive ones. Indeed, my example doesn't require a type family:

f :: [a] -> ()
f [] = ()
f (x : xs) = x `seq` f (zip xs xs)

This function f is Haskell98. (I don't know about 1.3.) But it absolutely needs a type signature in order to compile, as it's polymorphic-recursive. Now perhaps we're saved because instance Eval (a,b) exists -- with no dependence on Eval a or Eval b. We would not be so lucky if we used a strict pair instead of a lazy one, however.
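For reference, the definition is accepted as-is by today's GHC and terminates; it is only the variant without the signature that fails to typecheck, because each recursive call instantiates f at a different type:

```haskell
-- Polymorphic recursion: GHC accepts f only because of the signature.
-- Comment the signature out and type inference fails.
f :: [a] -> ()
f [] = ()
f (x : xs) = x `seq` f (zip xs xs)

main :: IO ()
main = print (f "abc")   -- each recursive call halves the list, so this terminates
```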

There's a chance the situation isn't as dire as I'm describing. For example, I believe my whimwham is not implementable. There would have to be some class constraint in the mix, and writing whimwham would require the same implication constraint business as I'm worried about. And the example above with lists is also OK, as I've confirmed with this code. Do all such examples work out? Perhaps. Even if they don't, I wouldn't be surprised if there are precisely 0 instances of this problem cropping up in the wild.

You're absolutely right about adding the constraint to standard classes. We would need two versions of everything, which is quite distasteful. I fully retract that claim.

@simonpj (Contributor) commented Nov 11, 2016

Some small responses, in haste for now (PLDI is pressing)

  • Are you sure that "Haskell is exceptionally unsafe" is related to this proposal? It looks to me as if it's about the unsafeness of allowing user-defined Typeable instances (which we now do not); the post doesn't mention seq.
  • Presumably you specifically do NOT want instance Eval (a->b)? After all, the ability to distinguish bottom from (\x.bottom) is the whole point of "Haskell has no state monad". So not all types of kind * are in Eval! Is that right?
  • But if functions are not in Eval, then I have no way of deep-seq'ing a list of functions to eliminate a space leak. This is more what the original Haskell debate about seq was about, I think.

I'm frankly dubious about this business about implicitly (silently, invisibly) adding Eval constraints.

  • What type does GHCi display? With or without Evals?
  • What about higher rank types?
  • What about data types?
data T a = MkT !a

I guess MkT :: Eval a => a -> T a, but we'd better say so.

  • Higher kinds is a real issue:
f :: forall (m :: * -> *). T m -> Int

What Eval constraints are implicitly added? Well, it depends on T! For example

data T m where
  T1 :: m Int -> T m
  T2 :: m Bool -> T m

f :: T m -> Int
f (T1 x) = x `seq` 3
f (T2 y) = y `seq` 5

We need (Eval (m Int), Eval (m Bool)). What we really want is the constraint (forall a. Eval a => Eval (m a)), but that's another story.

I think this rabbit hole could get pretty deep. My instinct is to bite the bullet and say that programmers must declare Eval constraints explicitly; and provide both safe and unsafe seq.
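On the data-type point above: today's GHC already makes a strict field force its argument at construction time, which is exactly why MkT would plausibly carry the Eval a context. A small check (the context itself remains hypothetical):

```haskell
import Control.Exception (SomeException, evaluate, try)

data T a = MkT !a   -- strict field: constructing MkT forces its argument

main :: IO ()
main = do
  r <- try (evaluate (MkT (undefined :: Int)))
         :: IO (Either SomeException (T Int))
  putStrLn (either (const "strict field forced: bottom") (const "not forced") r)
```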

@simonpj (Contributor) commented Nov 11, 2016

I have found a hoard of 215 email messages in which the Haskell 1.3 committee debated these issues. Lots of interesting things are said, and anyone who is thinking seriously about this proposal should digest them.

I'm not keen on publishing them publicly because there are strongly expressed opinions, and it'd be too much work to anonymise; and I don't want to get individual permissions from everyone. But I think it would be kosher for me to send the collection to anyone who asks me individually, and undertakes to use it for the ideas it contains, without quoting who said what.

Simon

@blamario (Contributor Author)

I have found a hoard of 215 email messages in which the Haskell 1.3 committee debated these issues. Lots of interesting things are said, and anyone who is thinking seriously about this proposal should digest them.

Oooh, gossip!
Since I started the thread, I feel obliged to volunteer. I'll read through the whole thread, then extract, anonymize, and report a summary of all technical points and no others.

I assume there was another long thread when the Haskell 98 committee reverted some of the earlier decisions? Can I have that one as well?

@cartazio

I guess my naive question is: do we need this to be a type class, or just something we can interrogate?

A related sort of question comes up with exceptions and wanting to ask what exceptions might be thrown by a computation.

In both cases this is metadata that could be added to the GHC interface files, and we could add info to Haddock and commands to GHCi for asking "may seq" or "may throw".

I guess I'm trying to articulate that we may want this information composably, but not have it interfere with composition?

@simonpj (Contributor) commented Nov 11, 2016

You'll have to send me email (simonpj@microsoft.com) so that I get your email address.

@rwbarton

Like Typeable, the Eval class is implicitly derived for every data type and its instances cannot be explicitly defined.

I'm assuming that seq is always the same as our seq today, whenever it is defined. In fact it might be simpler to think of the proposal as

class Eval a
seq :: Eval a => a -> b -> b
seq = unsafeSeq

where unsafeSeq is today's seq function (which we'd probably still want to provide as an unsafe alternative). UniversalEval can be thought of as having an instance Eval a in scope for the type checking of that module. (But the question of types exported to other modules is a tricky one.)

Does "every data type" include functions? I see SPJ already asked this among other questions but without knowing this it's hard to evaluate the proposal. (FWIW, functions were in Eval in Haskell 1.3.)

If it does, then seq is still available on functions regardless of whether UniversalEval is enabled. Then without some other (difficult-to-imagine) changes, we can still distinguish \x -> undefined x from undefined using seq, so we don't gain eta equivalence, and so don't address any of the issues in the bulleted list of references, much less satisfy Bob Harper. The thing we would gain is a stronger form of parametricity (no more f :: a -> (); f x = x `seq` ()). Worth it from a practical point of view? It's not yet clear whether the cure is worse than the disease.
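The parametricity loss mentioned here is easy to exhibit with today's polymorphic seq; under the proposal, this f would instead be rejected or would pick up an Eval a context:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- With unconstrained seq, f is observably different from const ().
f :: a -> ()
f x = x `seq` ()

main :: IO ()
main = do
  r <- try (evaluate (f (undefined :: Int)))
         :: IO (Either SomeException ())
  putStrLn (either (const "f undefined = bottom") (const "f undefined = ()") r)
```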

If functions aren't to be an instance of Eval, then the ramifications are much harder to understand. Personally, I think it's a bit misguided to try to "fix" the eta law in this way. We don't have the eta law for regular lazy products either (p = (fst p, snd p)), but I've never heard of a proposal to remove seq for tuples...

Mario Blažević added 2 commits January 1, 2017 23:09
@simonmar commented Jan 9, 2017

but I've never heard of a proposal to remove seq for tuples...

Phil Wadler has argued for unlifted tuples, I managed to dig this up: http://www.mail-archive.com/haskell@haskell.org/msg01466.html

(that whole thread is very relevant to this proposal, incidentally).

@cartazio commented Jan 10, 2017 via email

@blamario (Contributor Author) commented Jan 11, 2017

I'm glad some of these e-mails are still publicly accessible, because it means I won't need to summarize and anonymize as much discussion. It's a good thing I procrastinated this long. Laziness wins again!

I've read through the e-mails simonpj gave me a couple of times. My summary would be that the Eval class got doomed in four steps:

  1. unlifted products were never added to Haskell,
  2. for consistency with products, all functions got lifted as well,
  3. Eval therefore had to apply to all data types, including functions, and thus
  4. the class became of little use and much annoyance, and got removed.

For 1, one of the proposals that had a good run was to allow multiple fields on the newtype constructor, making it an unlifted product. I cannot point to any firm decision not to do this; this proposal and some similar ones seemingly just petered out. Unlifted products would have to be kept out of the Eval class, as explained by this message:

http://www.mail-archive.com/haskell@haskell.org/msg01476.html

Decisions 2 and 3 were made, for the most part, in these three messages:

http://www.mail-archive.com/haskell@haskell.org/msg00338.html
http://www.mail-archive.com/haskell@haskell.org/msg00333.html
http://www.mail-archive.com/haskell@haskell.org/msg00334.html

The Eval class was still kept in Haskell 1.3 for the free parametricity theorems. I'm going to include in full the message that explains this below, because it cannot be found in the mail archives any more. But ultimately it was felt that this purely theoretical benefit of the Eval class could not justify all the practical difficulties it introduced.

From: jl@cse.ogi.edu (John Launchbury)
Subject: Re: Haskell 1.3
Date: Tue, 19 Sep 1995 11:44:03 -0800
To: Andrew Gordon Andrew.Gordon@cl.cam.ac.uk

Andrew,

I'll copy my reply to the Haskell list as others may be interested.

[JL]
Lifted functions are merely a logical ugliness - I don't see anything that
they will break. It was polymorphic seq that would do the breaking. Overloaded
seq is fine, even if overloaded at every type, because then uses of seq
are recorded in the type.

[AG]
I guess I don't really understand this. I'd have thought that if Haskell
has lifted functions, there'd then be no need for the Data class, as seq
would work at every type. I don't understand why overloaded seq is fine,
but polymorphic seq is not. Is there anything written up about this
anywhere?

A formal treatment of parametricity in the presence of overloading has
not been written up (Eric Meijer has talked of doing so). The problem with
writing it up is that it's too simple: it reduces to a single observation,
namely that the parametricity theorem coming from an overloaded type
is the regular parametricity theorem that arises after performing the
dictionary expansion.

So, for example, consider the list membership function. Its type is

elem :: Eq a => a -> [a] -> Bool

Ignoring the class declaration we get the patently false "theorem"
(the full theorem is slightly more general yet):

all x::A, xs::[A] .
all h::A->B . elem x xs = elem (h x) (map h xs)

Lots of things might go wrong: h may not be 1-to-1, for example. To
repair this we need to expand the "Eq a =>" part of the type using
the dictionary expansion. For simplicity we will suppose the equality
class merely had the one function (==). Then we get,

elem :: (a->a->Bool) -> a -> [a] -> Bool

The theorem for this type is:

all x::A, xs::[A], eq::A->A->Bool, eq'::B->B->Bool .
all h::A->B . (all p,q::A . eq p q = eq' (h p) (h q))
=> elem eq x xs = elem eq' (h x) (map h xs)

and, happily, this is a valid theorem for elem.

Let's do the same thing with Data. Suppose we have a highly strict version
of tail, which evaluates its head before discarding it (perhaps a contrived
example, but never mind). Its definition is

sTail (x:xs) = seq x xs

If seq is polymorphic (seq :: a -> c -> c), then tail and sTail are
indistinguishable at the type level, so

sTail :: [a] -> [a]

The parametricity theorem is

all xs::[A] .
all h::A->B . map h (sTail xs) = sTail (map h xs)

which clearly fails to be true (suppose h maps the element at the head of the
list to bottom).

On the other hand, if seq is overloaded, then

sTail :: Data a => [a] -> [a]

which by dictionary expansion is

sTail :: (all c . a -> c -> c) -> [a] -> [a]

Note the nested polymorphism that can arise in dictionary expansion, the "a"
is universally quantified by default, i.e.,

sTail :: all a . (all c . a -> c -> c) -> [a] -> [a]

The reason for this is that while seq is overloaded on its first argument,
it is parametric on its second.

The parametricity theorem is (again, rather more complicated, and this time
I will be more precise about type quantification):

All A, All B,
all xs::[A], seq::all c . A->c->c, seq'::all c . B->c->c .
all h::A->B .
(All C, All D, all p::A, q::C, g:C->D. g (seq p q) = seq (h p) (g q))
=> map h (sTail seq xs) = sTail seq' (map h xs)

Once again, this is now a true theorem for sTail.
The extra complexity in the antecedent is because it is possible for
seq to be used at more than one C instance in an expression and, while
this is not done in sTail, the type provides no guarantee of it.

In summary, overloading seq rather than claiming it is polymorphic forces
its occurrences to show up in the types, and hence the parametricity
theorems become modified appropriately. All of this is entirely independent
as to which or how many instances of an overloaded operator are declared.

Also, Phil mentioned you have a nice paper with Ross Paterson about a Rec
class. I think we discussed this briefly at FPCA. I'd be interested to see a
copy.

I only hope it lives up to Phil's accolade. You can access it from my
home page:

http://www.cse.ogi.edu/~jl

in the Functional Programming section.

John.
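The failing free theorem for sTail in the quoted message can be checked concretely with today's polymorphic seq (a [] case is added to sTail for totality):

```haskell
import Control.Exception (SomeException, evaluate, try)

sTail :: [a] -> [a]
sTail (x : xs) = x `seq` xs
sTail []       = []

h :: Int -> Int
h 1 = undefined   -- h maps the original head element to bottom
h n = n

main :: IO ()
main = do
  -- map h (sTail xs): the head 1 is forced (and discarded) before h applies
  print (map h (sTail [1, 2, 3]))
  -- sTail (map h xs): sTail forces h 1, which is bottom
  r <- try (evaluate (sTail (map h [1, 2, 3])))
         :: IO (Either SomeException [Int])
  putStrLn (either (const "diverges") (const "converges") r)
```

The two sides of the "theorem" map h (sTail xs) = sTail (map h xs) thus disagree, exactly as the message argues.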

@bgamari (Contributor) commented Jan 24, 2017

@blamario, while this proposal is technically at the end of its discussion period, it didn't see much comment from the broader Haskell community. If you would like we can extend the discussion period by a few more weeks to give you an opportunity to solicit more feedback, or we can conclude the comment period and you can opt to bring the proposal to the steering committee for consideration. Which would you rather?

@blamario (Contributor Author)

I will try to solicit some wider feedback, but I should first clarify the proposal a little bit. Give me a day or two.

@blamario (Contributor Author)

I have updated the pull request. For those that have already read it, the only changes were:

  • a clarification that function types would still be in Eval for now, as in Haskell 1.3,
  • removal of the -WetaUnsafe warning,
  • a nod to unlifted products as another motivation, and
  • reference to the INCOMPLETE_CONTEXTS proposal as a possible implementation strategy.

@nomeata (Contributor) commented Feb 27, 2017

This proposal has not attracted discussion for a while. Please consider submitting it to the Committee for discussion, if you think it is ready, or close it, if you have abandoned this proposal.

@blamario (Contributor Author)

Very well. The proposal can go before the committee.

@simonmar commented Mar 6, 2017

Correct me if I'm wrong, but if functions are an instance of Eval, doesn't that invalidate some of the motivation for this proposal? e.g. Haskell would still have no state monad (http://www.cse.chalmers.se/~nicsma/no-state-monad.html) because (\x -> _|_) is distinguishable from _|_.

@cartazio commented Mar 6, 2017 via email

@cartazio commented Mar 6, 2017

I'm also not sure the issue with bottoms is the main technical barrier to strict / unlifted tuples ... or at least I don't see how newtypes sidestep the issue.

Unboxed tuples, at least as they currently exist, are a representation of register / stack references, and unless they are used to define a field in a data type, they are never on the heap.

Or perhaps I'm not understanding how a hypothetical pair-as-newtype would work.

@blamario (Contributor Author) commented Mar 6, 2017

Correct me if I'm wrong, but if functions are an instance of Eval, doesn't that invalidate some of the motivation for this proposal?

We have to start somewhere. This proposal is the first step that has to be taken if we ever want to take functions out of Eval.

As for the next step, one thought I had this morning is that provably total functions could be allowed a trivial Eval instance. But there's no point discussing this unless the base proposal is accepted.

@blamario (Contributor Author) commented Mar 6, 2017

I'm also not sure if the issue with bottoms is the main technical barrier with strict / unlifted tuples ... or at least I don't see how newtypes side step the issue.

I don't know if they're the main technical barrier either, but bottoms are the main theoretical one. They seem to be the main reason unlifted tuples got rejected during the Haskell 1.3 design. I've already provided the link to the relevant discussion above.

Unboxed tuples are not the same thing. Implementation-wise, an unlifted tuple could contain thunks in its fields, but it would never be a thunk itself. I think an unlifted tuple whose fields are all unboxed should be semantically equivalent to an unboxed tuple of today.

@goldfirere (Contributor)

It sounds like you want this:

  • With -XUniversalEval, GHC adds precisely those constraints to functions necessary to get those functions to compile. If this is impossible, reject the function.

This is much simpler than existing descriptions. Given the predictably simple shape of Eval instances, it might even be implementable. It wouldn't fix the Functor problem, but it nabs the other examples in your earlier long list. It doesn't nab this, though:

data E m where
  MkE :: m a -> E m

f11 :: E m -> ()
f11 (MkE x) = x `seq` ()

-XUniversalEval would presumably add an Eval a context to MkE. So the use of seq in f11 has Eval a, but no Eval (m a). I can easily imagine a bit of magic that says Eval a implies Eval (m a) for any m. (When -XUniversalEval is on.) Does this break properties? I don't think it does, but I could be mistaken.

In any case, it sounds like this should be considered under review. If that's the case, please update the label. I would expect that, after this round of revisions, this proposal is ready for final consideration.

@blamario (Contributor Author)

With -XUniversalEval, GHC adds precisely those constraints to functions necessary to get those functions to compile. If this is impossible, reject the function.

I like the simplicity. :)

I don't see how to update the pull request label myself.

@int-index (Contributor)

I can easily imagine a bit of magic that says Eval a implies Eval (m a) for any m

newtype P a = MkP X

Assuming X has no Eval instance, Eval a does not imply Eval (P a), does it?

@goldfirere added the Pending committee review label (the committee needs to evaluate the proposal and make a decision) and removed the Needs revision label (the proposal needs changes in response to shepherd or committee review feedback) on Mar 10, 2017
@goldfirere (Contributor)

But how can X not have an Eval instance? It seems only abstract types (that is, types with quantified variables) might not have an Eval instance.

@Ericson2314 (Contributor)

Maybe we could subclass Eval with something without a function instance to get more bang for buck?

@blamario (Contributor Author)

Maybe we could subclass Eval with something without a function instance to get more bang for buck?

Definitely, but that would be a follow-on proposal.

@int-index (Contributor)

But how can X not have an Eval instance?

If there can be no such type, Eval is useless.

@Ericson2314 (Contributor)

@blamario

I appreciate the incrementalism, but I'm worried that unless the deal is sweetened up front, there will be less motivation to make this happen.


Have you considered making -XUniversalEval more of a hack so as not to break existing code? Can't we just, as a sledgehammer, discharge all Eval constraints (per-module) with it?

[I suppose the answer to this could relate to how much this class can be erased at runtime.]


Granted this isn't a normative part of the RFC, but you mention that unlifted types might not be Eval. I think it's easier and better to simply always consider them in weak head normal form: certainly they'll be represented like head-normalized data of lifted types with GHC (root is boxed, non-closure). This induces a difference between the single field, single constructor unlifted datatype and newtype, but I think that's fine.

@blamario (Contributor Author)

Have you considered making -XUniversalEval more of a hack so as not to break existing code? Can't we just, as a sledgehammer, discharge all Eval constraints (per-module) with it?

That is actually a workable proposal that could serve as a stepping stone to the present one and further. Another way of putting it is that with -XUniversalEval one can insert or import Eval constraints anywhere, and they'd have no effect whatsoever.

The trouble is that the flow of information would then be one way only. The explicit Eval constraints imported from modules with -XNoUniversalEval would go to die in modules with -XUniversalEval. The present proposal would be able to infer the Eval constraints from the existing modules. Still, if your idea is easier to implement and eliminates the backward compatibility problems, it would be a good first step to a proper Eval class. The second step would be to add {-# LANGUAGE NoUniversalEval #-} to all base library modules.

@cartazio commented Mar 12, 2017 via email

@blamario (Contributor Author)

Ie : the eta distinction seems to presume I'm worried about bottoms. I'm not ... either I get a program result or [t]here's a bug in my code.

Or there's a bug in GHC. Or GHC avoids the bug by conservatively refraining from some useful optimizations.

Your argument boils down to Fast and Loose Reasoning is Morally Correct. But that particular result only holds for a subset of Haskell 98. Would you be worried about bottoms in Agda, for example? Haskell appears to be evolving in that direction, among others. The lack of a proper theoretical foundation would close off some of those quite practical potential directions.

I should make clear that I am not particularly well versed in either category or domain theory. I'm a practical guy; it's just that I'm aware that in practice, theory matters. Otherwise I'd probably be programming in PHP or some other thoroughly practical language.

@rwbarton

This proposal is the first step that has to be taken if we ever want to take functions out of Eval.

I'm not sure again precisely what you mean to suggest by this.

  • Do you mean to take functions out of the Eval class, but still provide a means to force them (functionSeq :: (a -> b) -> c -> c)? I'm not sure what this would achieve, but it wouldn't gain any of the things listed as the motivation for the proposal.

  • Or do you mean to make it really impossible to use seq on functions? The problem is that seq on functions is actually useful; and making it impossible would have ramifications that we haven't even begun to discuss here.

You say that this proposal is the first step to a goal, but I'm not sure at all yet that we actually want to get to the goal.

@rwbarton

Granted this isn't a normative part of the RFC, but you mention that unlifted types might not be Eval. I think it's easier and better to simply always consider them in weak head normal form: certainly they'll be represented like head-normalized data of lifted types with GHC (root is boxed, non-closure). This induces a difference between the single field, single constructor unlifted datatype and newtype, but I think that's fine.

I think there might be a collision here between different senses of the word "unlifted". I guess the question comes down to whether these unlifted products (or functions, potentially) have kind # or *.

If #, then there's no problem in treating them as always evaluated, and even making them instances of Eval if it is levity-polymorphic.

If *, then there is a real problem with parametricity. seq (let x = x in x) () must not depend on the type of x, as long as it has kind *. It cannot consider x to be evaluated if its type is an unlifted tuple.
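
A runnable sketch of that parametricity constraint (probe and bottom are made-up names; error stands in for the non-terminating let x = x in x so the divergence is observable without hanging the program):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- bottom stands in for `let x = x in x`; an exception-throwing bottom
-- makes the divergence observable rather than looping forever.
bottom :: a
bottom = error "bottom"

-- seq a polymorphic value and report what happened.
probe :: a -> IO String
probe v = do
  r <- try (evaluate (v `seq` "evaluated")) :: IO (Either SomeException String)
  return (either (const "bottom") id r)

main :: IO ()
main = do
  -- seq must treat bottom identically at every kind-* type; it could not
  -- report "evaluated" just because the type happened to be an unlifted
  -- product of kind *.
  probe (bottom :: [Int])              >>= putStrLn  -- "bottom"
  probe (bottom :: Maybe Bool)         >>= putStrLn  -- "bottom"
  probe (Just bottom :: Maybe Bool)    >>= putStrLn  -- "evaluated": WHNF
```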

@blamario
Contributor Author

blamario commented Mar 13, 2017

This proposal is the first step that has to be taken if we ever want to take functions out of Eval.

I'm not sure again precisely what you mean to suggest by this.

There is a good reason for my lack of precision: the statement you quoted is the only thing I'm sure of. In other words, I see the Eval class re-introduction as a necessary but insufficient step to eta-safety and other goodies. I have only vague speculations about the possible steps following this proposal, as I haven't done the necessary research yet.

I think there might be a collision here between different senses of the word "unlifted". I guess the question comes down to whether these unlifted products (or functions, potentially) have kind # or *.

It would be the latter.

@Ericson2314
Contributor

@rwbarton

....eta...

Eval a => Data a is a bit bureaucratic, but does get the job done. I.e. while there's stuff to bikeshed, it certainly isn't an open question that there's at least one possible way to eta-safety leveraging Eval.

If *, then there is a real problem with parametricity...

It took me a minute, but I think I see the problem you bring up :). If x is unlifted, regardless of * or #, my understanding is that a mere let x = x in ... will diverge whether or not x is used in .... I see how the switch to call-by-value is a parametricity issue, but it thus presents itself even without seq. Put another way: in a seq-less world, if the evaluation of a term must never depend on which kind-* type is involved, then we need another kind :/.

OTOH, if the representation for unlifted types coincides with the representation of forced thunks of some type of kind *, then code that happens to be strict wrt any polymorphic types dodges this issue. In other words, practical parametricity concerns come from code, not data.

[# is a mess because it mixes up boxed and unboxed types, I assume you meant boxed. Hopefully that is dealt with by the time we get unlifted types.]

@rwbarton

Right, so this is where the terminology collision comes in I think.

In GHC # is the kind of unlifted types, in the sense that if x is a variable of type a and a :: #, then x is something that has already been evaluated; and let x :: a; x = x in ... diverges regardless of whether x is used, as you say. Basically they are the things for which we use strict evaluation. (in recent GHC # is split into many different kinds depending on whether the unlifted thing is represented by a pointer to a heap object or by a machine word or a floating-point register, etc. But the distinction between those representations isn't really relevant here so it's easier to just call them all kind #.)

Types of kind # don't cause any problem for parametricity of seq, because we only have polymorphism over types of kind * (or in recent GHC, types of the same kind/runtime representation).
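
For illustration, a tiny example of working at kind # (requires MagicHash; addUnboxed is a made-up name): an Int# is a raw machine integer with no thunk to force and no bottom inhabiting it, and kind-* type variables never range over it:

```haskell
{-# LANGUAGE MagicHash #-}

import GHC.Exts (Int (I#), Int#, (+#))

-- Arguments of kind # are passed already evaluated: there is no thunk
-- for seq to force, and no bottom inhabiting Int#.
addUnboxed :: Int# -> Int# -> Int#
addUnboxed x y = x +# y

main :: IO ()
main = print (I# (addUnboxed 2# 3#))  -- 5
```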

However, there is a second meaning of "unlifted" from domain theory. The unlifted product, for example, of two domains is the domain whose underlying set is just the product of the underlying sets of the two domains; while the lifted product (which is what models Haskell's existing (,)) has a new element ⊥ adjoined. GHC's runtime representation of products naturally corresponds to lifted products. But we could try to provide unlifted products P a b of kind * also, by making ⊥ indistinguishable from P ⊥ ⊥. That means

  • all pattern matches on P would be lazy

  • seq on P would do nothing; or else simply not be allowed at all.

The problem with the first option is that it is incompatible with an implementation

seq :: Eval a => a -> b -> b
seq = unsafeSeq

because unsafeSeq cannot do different things to different types of undefined, due to parametricity. That means we would have to carry runtime evidence for an Eval a constraint, which seems quite terrible, especially for the backwards-compatibility story.

I'm not sure whether we currently have the technology in GHC to make the constraint Eval a not require runtime evidence anyways, which is a separate issue.

The second option, of removing unlifted products P a b from Eval, seems workable.

This still leaves the problem of whether to make functions lifted or unlifted, where just providing both flavors doesn't seem that satisfactory.
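
The lifted/unlifted distinction for a single field can already be observed today with data versus newtype; a sketch (the type and helper names are made up):

```haskell
import Control.Exception (SomeException, evaluate, try)

newtype Unlifted a = Unlifted a  -- Unlifted ⊥ coincides with ⊥
data    Lifted   a = Lifted   a  -- Lifted ⊥ is a new value above ⊥

-- Force a value to WHNF and report whether that forced a bottom.
whnfStatus :: a -> IO String
whnfStatus v = do
  r <- try (evaluate (v `seq` "whnf")) :: IO (Either SomeException String)
  return (either (const "bottom") id r)

main :: IO ()
main = do
  -- seq sees through the newtype but stops at the data constructor.
  whnfStatus (Unlifted (error "b" :: Int)) >>= putStrLn  -- "bottom"
  whnfStatus (Lifted   (error "b" :: Int)) >>= putStrLn  -- "whnf"
```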

@Ericson2314
Contributor

(or in recent GHC, types of the same kind/runtime representation)

Yeah, I was thinking of this new polymorphism over boxed, unlifted things. My domain theory is pretty weak, but I do think that denotationally, boxed unlifted products are very similar to products that one can only lazily match and never seq. The difference is in programs like let x :: a; x = x in ..., but the former can recover the latter's semantics with "tuple eta": let t in (fst t, snd t) (assuming (,) :: * -> * -> #). This is analogous to how normal eta expansion delays the evaluation of an unlifted function. [I think I'd propose making a type class for such "same-type" delays for the boxed unlifted kind.]

On one hand, it's annoying that, with the "No-eval route", it's easier to accumulate chained thunks (whereas with the "boxed-unlifted route" one would need to do such a delaying eta expansion explicitly for that to happen). OTOH, it's annoying to have multiple kinds.
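
The "tuple eta" delay can be sketched with today's lifted pairs (etaPair and whnfStatus are illustrative names): re-wrapping a pair in its projections yields a value already in WHNF even when the original pair is bottom, just as eta-expanding a function delays forcing it:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Tuple eta: the re-wrapped pair is a fresh constructor application, so
-- it is in WHNF regardless of whether the original pair diverges.
etaPair :: (a, b) -> (a, b)
etaPair t = (fst t, snd t)

-- Force a value to WHNF and report whether that forced a bottom.
whnfStatus :: a -> IO String
whnfStatus v = do
  r <- try (evaluate (v `seq` "whnf")) :: IO (Either SomeException String)
  return (either (const "bottom") id r)

main :: IO ()
main = do
  let broken = error "no pair" :: (Int, Bool)
  whnfStatus broken           >>= putStrLn  -- "bottom"
  whnfStatus (etaPair broken) >>= putStrLn  -- "whnf"
```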

@rwbarton

rwbarton commented May 1, 2017

Correct me if I'm wrong, but if functions are an instance of Eval, doesn't that invalidate some of the motivation for this proposal?

We have to start somewhere. This proposal is the first step that has to be taken if we ever want to take functions out of Eval.

Okay, but that is a big 'if'. Fundamentally, the problem is that seq on functions can actually be useful, so disallowing it in the name of supporting eta expansion involves an entirely separate set of tradeoffs which haven't really been discussed much here (since they are not directly relevant to the proposal as it stands).

@goldfirere
Contributor

On behalf of the GHC steering committee:

This proposal is rejected in its current form. The general consensus was that a language with Eval might be a nice language to program in, but that the transition costs are just too high for little tangible gain. The main practical advantage cited is safety under eta-expansion, but that's still not enough. Parametricity is, of course, a wonderful thing, but that, too, isn't enough. This proposal is also often described as the first step toward some better place, but that better place isn't well-enough described to be effective motivation for the considerable pain adopting this proposal would entail.

I will thus close this PR. However, given that the decision hinged more on motivation than a fundamental flaw, do feel free to reopen, revise, and resubmit.

Lastly, (on behalf of me, not the committee) I would like to apologize for the long delay in this post. That is no one's fault but my own. Thanks for your patience, and for writing up this proposal.

@goldfirere goldfirere closed this May 30, 2017
@nomeata nomeata added Rejected The committee has decided to reject the proposal and removed Pending committee review The committee needs to evaluate the proposal and make a decision labels Nov 10, 2017