Linear types #91

Closed
wants to merge 8 commits into base: master

Conversation

@aspiwack
Contributor

aspiwack commented Nov 9, 2017

This thread has been closed; a revised version of the proposal is now under discussion at #111.

This proposal introduces a notion of linear function to GHC. Linear functions are regular functions that additionally guarantee that they will use their argument exactly once. Whether a function f is linear or not is captured by the multiplicity of f. We propose a new language extension, -XLinearTypes. When it is turned on, the user can enforce a given multiplicity for f using a type annotation.
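An illustrative sketch of the proposed surface syntax (hypothetical: ->. is the proposal's linear arrow, and none of this compiles with a released GHC):

```haskell
{-# LANGUAGE LinearTypes #-}

-- A linear function must consume its argument exactly once.
swap :: (a, b) ->. (b, a)
swap (x, y) = (y, x)      -- accepted: x and y are each used exactly once

dup :: a ->. (a, a)
dup x = (x, x)            -- rejected: x is used twice

discard :: a ->. ()
discard _ = ()            -- rejected: the argument is never used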

rendered

By @aspiwack, @facundominguez, @mboes, and @jyp

regnat and others added some commits Nov 9, 2017

Fix typos and syntax errors
A few nits found while reading the proposal.

I fixed a few things that didn't render correctly on GitHub; maybe it's just their reST renderer being overly strict, in which case I can remove them from the PR.
Merge pull request #2 from tweag/regnat-fix-typos
Fix typos and syntax errors
@phadej


Contributor

phadej commented Nov 9, 2017

This technically steals syntax as (:) is a valid type operator under the discouraged -XDataKinds syntax. But this should not be a problem in practice.

Does discouraged syntax mean the syntax without the tick? (i.e. Proxy :: Proxy (x : xs), vs. Proxy :: Proxy (x ': xs)?)
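The two spellings in question can be shown with a small hypothetical example; under -XDataKinds both forms denote the promoted list constructor:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE TypeOperators #-}

import Data.Proxy (Proxy (..))

-- The blessed spelling: the tick makes the promoted cons explicit.
ticked :: Proxy (x ': xs)
ticked = Proxy

-- The "discouraged" spelling: (:) itself is accepted as a type operator,
-- which is the syntax the proposal would repurpose for multiplicities.
unticked :: Proxy (x : xs)
unticked = Proxy
```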

@aspiwack


Contributor

aspiwack commented Nov 9, 2017

@phadej yes, that's what I meant.

@ggreif


Contributor

ggreif commented Nov 9, 2017

typo: levitiy

@AshleyYakeley


AshleyYakeley commented Nov 9, 2017

I'd like to see a comparison to GHC's previous attempt at linear types, the Linear Implicit Parameters extension, which was available in GHC versions 5.04 to 6.4. Since LIP was eventually removed from GHC, it would be useful to know what the problems were and how this new proposal fixes them.

@thoughtpolice


thoughtpolice commented Nov 9, 2017

I voiced my complaints on Reddit, but in the name of meritorious discussion I'm posting a copy here, too. I hope they don't come off as harsh, but as a serious evaluation of the potential of this proposal. This change is extremely big and will have (I am certain, from experience) unforeseen consequences and interactions down the line, and because it can change so much of the surface language, I think careful examination is important. That doesn't mean it shouldn't happen, but the bar is damn high. (And I know that I'm an old fart who isn't really involved as much anymore, so I have little weight, but I still think this is an important point to answer.)

(And if I missed something obvious that is either addressed here or in the recent papers, I'd appreciate a pointer and I'll shut up.)

Real talk: what are the expected performance gains from these features? Do you have plans to exploit the information offered to yield better programs? What do you need in order to achieve meaningful speedup? Does the current proposal introduce anything that can offer improved performance? Or is there further work needed to understand exactly how to improve the generated code (for GHC Haskell), with this as a starting point? How much work is that? What do you think it might look like?

A major motivation of the proposal is allegedly "performance of scientific applications". Yet there is not even a list of the concrete things this change enables in order to achieve that, other than hand-wavy examples! I am not suggesting you show an example of code that could be improved -- I want to know exactly how you expect the compiler to go from "this is consumed once" to "this is not tracked by the GC anymore, and thus free from the working set".


Perhaps this is not the right place to put such information, and I'm being too harsh or asking too much. But I think it is extremely relevant to the proposal in question. Why? Because there are 1000 half-baked "ideas" and PhD papers with some SystemF extension -- even ones people keep writing about frequently -- that we could add to GHC's type system that would rule out (some) incorrect programs, at the expense of excessive complexity and low usability.

The only reason people actually entertain linear types, one of those thousands of ideas that add a lot of complexity, IMO, is because it promises improved performance in many ways. "Safe file handles" are an old-hat trick Oleg did years ago, and I'm not immediately convinced this is substantially better and worth the complexity (the lack of broad usage of Rank2 Handle tricks may say something about this.) If it cannot deliver on performance, I would consider it to be in the "excessive complexity with very minimal gain" category. (Dependent types are on the other side: they offer to rule out huge amounts of incorrect programs as opposed to a 'go fast button', but they are very complicated all the same. Different tradeoffs.) And we know "industrial" implementations are never so easy or simple.

In short: do you actually expect you can deliver on what this change promises to offer, in the long run? Or is "Practical compilation of Linear types in GHC" another research paper, yet to be written? Because if it is, that is a very big factor to consider when weighing in on this proposal, IMO.

@jeltsch


jeltsch commented Nov 9, 2017

How much time is there for commenting on this before any decisions are made? I am very interested in engaging in the discussion, but do not have time immediately.

@andrewthad


Contributor

andrewthad commented Nov 9, 2017

Overall, I'm +1 on this. It has motivating use-cases and has been thought out well. Here are several unrelated thoughts as I read through the proposal:

First thought: I like that -o has been dropped in favor of the ->. arrow. To me, this is more visually appealing.

Next thought: The array functions have the following type signatures:

newMArray :: Int -> (MArray a ->. Unrestricted b) ->. Unrestricted b
write :: MArray a ->. (Int, a) -> MArray a
read :: MArray a ->. Int -> (MArray a, Unrestricted a)
freeze :: MArray a ->. Unrestricted (Array a)

Why is the result wrapped in Unrestricted? From the preceding paragraph, we learn that

the type a ->. b is the type of linear functions, Unrestricted is such that Unrestricted a ->. b is isomorphic to a -> b

So as a function argument, Unrestricted has a well-explained meaning. But, it's not used that way in any of the Array-related functions. My understanding is that the following two functions should have identical behavior:

freeze1 :: MArray a ->. Unrestricted (Array a)
freeze2 :: MArray a ->. Array a

Do they or do they not? If they do, Unrestricted should be removed from these type signatures. If they don't, the meaning of Unrestricted should be explained further.
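For reference, the quoted isomorphism can be sketched in the proposal's syntax (hypothetical and not compilable with a released GHC; Unrestricted is a datatype whose field is unrestricted, and the proposal notes it cannot be a newtype):

```haskell
-- Unrestricted, roughly as in the paper: the constructor's field has
-- multiplicity ω, so unwrapping yields an unrestricted value.
data Unrestricted a where
  Unrestricted :: a -> Unrestricted a

-- The two directions of the isomorphism (Unrestricted a ->. b) ≅ (a -> b):
to :: (Unrestricted a ->. b) -> (a -> b)
to f x = f (Unrestricted x)

from :: (a -> b) -> (Unrestricted a ->. b)
from g (Unrestricted x) = g x
```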

Next thought: I like that there are no proposed changes to base except for further changes to the accursed dollar operator.

Next thought: I like the syntax for specifying multiplicities. It's ugly, but it's less ugly than anything else I can imagine. I'm glad that the "unification of multiplicities will be performed up to the semiring laws".

Next thought: I'm not convinced that 0-multiplicity arrows carry their weight. According to the proposal:

None of our examples use 0, however, 0 turns out to be useful for implementation of type-checking. Additionally, 0 has been used by Conor McBride to handle dependent types, which may matter for Dependent Haskell. In both cases, the use of 0 could be seen as an internal use, but there is no real reason to deny access to the programmer. Hence it is included in the syntax.

Later in the proposal, a complication is noted:

Because we want to allow case_p for a variable p, this creates a small complication: variables never stand for 0; in particular, type application of multiplicity variables must prohibit 0.

That's pretty strange. I would expect something polymorphic in p to accept any value of type p. In the Alternatives section, the workarounds for this all seem cumbersome. I'm sure the designers/implementers of this extension have much better insight into this than I do, but I want to at least raise the question: Is 0 worth it?

That's all. Great paper and great proposal. I look forward to whatever form this eventually takes when it makes it into GHC.

@simonpj


Contributor

simonpj commented Nov 9, 2017

How much time is there for commenting on this before any decisions are made?

Plenty of time. Many of the GHC proposals are about very specific things with purely local effects. This one would be a big and fairly experimental change, and needs careful discussion. So you have time!

Meanwhile, I urge everyone on this thread to read the paper; it's linked from the proposal. Without the paper I doubt that you'll be able to make much sense of the proposal; it's pretty much essential to an informed discussion. (Of course it doubtless has shortcomings too, but that's partly why Arnaud has turned it into a proposal.)

@jeltsch


jeltsch commented Nov 9, 2017

I remember that there were several versions of “the paper”. Where can I find the final version, which I guess is the one I am supposed to read?

@simonpj


Contributor

simonpj commented Nov 9, 2017

You can find a link in the proposal (second para). DO NOT read our ICFP submission. The POPL version (at the above link) is a total rewrite.

Simon

@AntC2


AntC2 commented Nov 9, 2017

typo: s/scrutiny/scrutinee/

@simonpj

This one would be a big and fairly experimental change ...

The idea looks terrific. Because the impact is big, does implementing it mean freezing all other type-related development while type inference is lying on the operating table having open heart surgery?

I'm thinking particularly about the interaction with Type-in-type, Type-level type application, visible dependent quantification.

@cartazio


cartazio commented Nov 10, 2017

@aspiwack


Contributor

aspiwack commented Nov 10, 2017

@andrewthad

Re: Unrestricted a as a return type: think of the effect it has when post-composing with a linear function. It ought to be clear that a ->. b and a ->. Unrestricted b are distinct. This is explained in more detail further down the proposal (and in even more detail in the paper, if you're into the details): the definition of consuming a value of type Unrestricted b exactly once is evaluating it to head normal form, no matter what happens to the b field. Hence linearity guarantees that forcing the result of a function of type a ->. Unrestricted b (as opposed to consuming it "entirely" in the case of a ->. b) will consume the argument (a) exactly once. It is probably more intuitive to think of Unrestricted as a way to escape linearity. For instance, freeze :: MArray a ->. Array a would make the Array pointer unique (because linear functions preserve uniqueness), which is not what we want.

In the case of the array, it enforces well-scoping of the linear arguments: the linear arguments are not accessible past the scope, because they can't be returned as part of the Unrestricted result. This scoping idiom is crucial for the correctness of most of the examples we've come up with.
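A hypothetical sketch of this scoping idiom, assuming the array API quoted earlier in the thread (proposal syntax, illustrative only):

```haskell
-- Assumed API from the proposal:
--   newMArray :: Int -> (MArray a ->. Unrestricted b) ->. Unrestricted b
--   write     :: MArray a ->. (Int, a) -> MArray a
--   freeze    :: MArray a ->. Unrestricted (Array a)

-- The callback must produce an Unrestricted result, so the linear MArray
-- cannot escape its scope: the only way out is to freeze it first.
array :: Array Int
array = case newMArray 2 fill of
          Unrestricted a -> a
  where
    fill :: MArray Int ->. Unrestricted (Array Int)
    fill ma = freeze (write (write ma (0, 10)) (1, 20))
```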

@simonpj


Contributor

simonpj commented Nov 10, 2017

does implementing it mean freezing all other type-related development

No. Like any other feature, work on prototyping linearity will have to go in parallel with everything else.

@mboes


mboes commented Nov 10, 2017

@thoughtpolice if you're going to bring the Reddit thread in, then let me paste a link to your original comment thread: https://www.reddit.com/r/haskell/comments/7bulxb/linear_types_ghc_proposal/dpl6r2n/. It already includes answers to many of your points.

@aspiwack


Contributor

aspiwack commented Nov 10, 2017

@cartazio

Addressing your points (sorry about the weird formatting):

1. Linear types are not unsafe in the presence of exceptions. But exceptions certainly make it harder to produce a safe implementation of a linear API. In the case of sockets you may want either to have a finaliser, or to enclose the entire IOL computation in a runResourceT-like clean-up function. This is the sort of facility that we will include in any user-land linear-type prelude library.

2. I’m afraid you are incorrect about the restriction on ω (at least in our presentation). Also, 1 means "exactly once", so it is not compatible with 0 (as is explained in the proposal).

3. I don't yet understand the question about optimisation. Regarding type safety: on paper, exceptions are irrelevant. But of course, see (1) about what is typically needed to make APIs correctly enforce key invariants in the face of exceptions.

4. You can always pass something of multiplicity ω to a linear function, constructors included. But when you pattern-match on such a linear constructor, you will not be able to observe this (unless you are using a case_ω). Primitive tuples are like this. And of course, you can have mixed-multiplicity constructors.

5. Primitive unboxed tuples have linear constructors. If we had user-defined unboxed datatypes, they could have mixed multiplicities. We can already define IO a as RealWorld ->. (# RealWorld, Unrestricted a #). But that adds a new indirection, so until there are user-defined unboxed datatypes, I think IO ought to stick to its current definition.

6. a) Only to the argument: a :p-> b means that consuming the result exactly once will consume the argument with multiplicity p.
   b) You really can't in a lazy language, for much the same reasons that Unrestricted is not definable as a newtype.

7. We hope it will be useful in the future, but it has not been explored yet.

8. Indeed, but they don't have a good reason to be primitive. You can define, e.g., type Par k a b = (a ->. k, b ->. k) ->. k, and it is actually useful to be able to vary the k. So we prefer having these as defined constructs. (On a slightly theoretical note, I do prefer linear type systems which are nicely symmetric and have a par rather than a function arrow as the primitive. But it would not be reasonable to try to modify Haskell in this direction, and it is unclear whether it would be a good idea in practical terms.)
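Point 6a can be sketched with the proposal's multiplicity-polymorphic arrows (hypothetical syntax, not compilable with a released GHC; map' is my name for the illustration):

```haskell
-- The multiplicity-annotated arrow of the proposal, with the two
-- arrows already discussed as special cases:
--   a :1-> b  is the linear arrow       a ->. b
--   a :ω-> b  is the ordinary arrow     a ->  b

-- A multiplicity-polymorphic map: if f consumes its argument with
-- multiplicity p, then consuming the result list exactly once consumes
-- the input list with multiplicity p.
map' :: (a :p-> b) -> [a] :p-> [b]
map' _ []       = []
map' f (x : xs) = f x : map' f xs
```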

@Tritlo


Tritlo commented Nov 10, 2017

Copying over the answers to @thoughtpolice from the reddit thread for posterity:

@jyp wrote:

Already with this proposal, you can implement new abstractions such as protocols and safe resource management, enabling safe APIs to resources allocated in a foreign heap. This is demonstrated at length in the paper: https://arxiv.org/pdf/1710.09756.pdf. There is a worked-out operational semantics; we are sure that linear foreign objects can live outside of GC control.

There is a partial prototype of a type-checker (modification of GHC) and we'd like some feedback on the design before investing more effort into polishing it.

In the longer term, there can be more modifications deep into GHC:

  • use linearity to improve analyses and optimisations
  • update the RTS so that even native objects get out of GC

It is not known at this point how much performance gain to expect for these later stages.

And of course as academics we are forced to drown you in an endless stream of papers :)

@mboes wrote:

Real talk: can we discuss what's actually in the proposal?

A major motivation of the proposal is allegedly "performance of scientific applications".

I don't see the quoted phrase anywhere in the proposal...

The proposal does not in fact aim to improve the performance of scientific applications because...

Anything you can do with linear types you can do without.

Hence...

what are the expected performance gains from these features?

None! Zero. :) That's why the proposal makes no performance claims. Here's what the proposal does claim:

Linear types enable safe manual memory management for long-lived objects.

Linear types won't magically make anything you do go faster. It's about enabling you to statically check invariants about how you use resources, or how you interact with other processes. It's about providing extra safety to code patterns people already write today.

Now, if you know that the compiler has your back on more things, then it becomes feasible to do more things that would otherwise be very error prone to get right (even if it was previously possible all along). A concrete example of that is the zero-copy serialization/deserialization, which we dedicate a full 4 pages to in the paper. And yes, with performance numbers!

In short: the linear types of this proposal (and of the paper) don't improve performance; they just make code safer (including performance-sensitive code). This proposal has no ambition to change anything in the RTS or in generated code. Future proposals might.

Why? Because there are 1000 half-baked "ideas" and PhD papers with some SystemF extension

You seem to assume this is yet another PhD thesis (nothing wrong with them). I can reassure you, it's not. Half of the authors of the paper are working on this in an industrial setting and the other half have a track record of turning innovative ideas from PL research into actual practical language extensions that exist in GHC today...

I can certainly understand your skepticism about whether this is an extension worth having, but I believe the proposal is being pretty upfront about what it claims.


mboes commented Nov 22, 2017

@int-index thanks for the feedback.

  • achieve resource safety
  • avoid garbage collection

From the discussion I have got an impression that neither is provided by the proposal.

What makes you think that? We showed an example IOL / ResourceL monad earlier in this discussion thread that is exception-safe and shows how to manage system resources. Off-GC-heap memory is just another resource, so your second point above is a subset of the first. It's the type of resource for which you want low allocation/deallocation overhead, much more so than, say, for sockets or files. But that's the main difference.

The proposal also lists a third point early in the motivation section:

  • using types to control the scope of effects without forcing an unnatural sequencing of mutually independent effects [e.g. can replace the ST monad for some use cases and get better parallelism.]

rleshchinskiy commented Nov 22, 2017

After thinking more about this, I now suspect that linear types as discussed in the proposal have no fundamental benefits over affine types (but do have significant drawbacks as discussed before). In particular, it seems that affine types are sufficient to obtain the same linearity guarantees as in the proposal when desired. Consider this example:

newMArray :: Int -> (MArray a ->. Unrestricted b) ->. Unrestricted b
freeze :: MArray a ->. Unrestricted (Array a)

If ->. is the linear arrow then we can be sure (non-termination notwithstanding) that the function passed to newMArray calls freeze somewhere. Now, let's assume that ->. is affine. We can then recover the guarantee by changing the interface to:

data Used s  = <abstract>
newMArray :: Int -> (forall s. MArray s a ->. (Used s, Unrestricted b)) ->. Unrestricted b
freeze :: MArray s a ->. (Used s, Unrestricted (Array a))

That is, newMArray now requires us to provide affine evidence that we've used the MArray and freeze is the only way to obtain such evidence. Even though ->. is affine we can't silently drop the array. Given that the approach as described always requires "brackets" (i.e., continuation passing style) for pure functions it seems that it should always be possible to play this trick.
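To make the trick concrete, here is a sketch of a well-typed client of this interface (fillFrom is a hypothetical helper, and ->. is the proposal's arrow, so none of this compiles in GHC today):

```haskell
-- Build an immutable array from a list. The only way to produce the
-- `Used s` token that `newMArray` demands is to call `freeze`, so even
-- though ->. is merely affine, the mutable array cannot be silently
-- dropped: forgetting the `freeze` call is a type error, not a leak.
fromList :: [a] -> Array a
fromList xs =
  case newMArray (length xs) (\marr -> freeze (fillFrom xs marr)) of
    Unrestricted arr -> arr

-- Hypothetical helper that writes the list's elements into the array:
fillFrom :: [a] -> MArray s a ->. MArray s a
fillFrom = undefined  -- sketch only
```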

This should also work with brackets in IO-ish code. In particular, @mboes' bracketHandle works as is with affine types (including the fact that I can return a different handle from the bracket which seems suspect to me).

So it seems that affine types plus continuation passing style which is already required give us linearity when we want it. At the same time, we can define an affine IO monad with no restrictions on exception handling and we can enforce linearity through bracketing when we need it, too.


jeltsch commented Nov 22, 2017

@rleshchinskiy Very interesting! Does your trick for guaranteeing linear use also work in the watertight 3D models case? And could it also be used to guarantee relevance alone, without affinity?


rleshchinskiy commented Nov 22, 2017

@jeltsch Indeed, if we replace ->. by -> we get relevance only. I don't see why 3D models wouldn't work. Of course, we'd need an actual proof-of-concept implementation to be sure.


Yuras commented Nov 22, 2017

@aspiwack Thank you for the response.

But there is a complication: if the computation (of type (Marray a ->. Unrestricted b)) is, itself, an unrestricted variable, then the result is unrestricted as well, so, in this case, we could duplicate the Unrestricted b result before we do pattern-matching.

Interesting... Because I came to the conclusion that there is nothing to worry about (probably except the purely theoretical issue with resuming a "freezed" chunk, but it is not specific to linear types). Honestly, I didn't get what you mean by "if the computation is, itself, an unrestricted variable"; could you please give an example?


aspiwack commented Nov 22, 2017

@rleshchinskiy I’m sorry, but at this point I’m very confused.

Let’s take a step back. The conversation so far went like this:

  1. Some folks were concerned that linear types don’t mesh well with exceptions
  2. It was demonstrated that we can implement exception-safe abstractions with linear types
  3. Some folks questioned the relevance guarantee that linear types provide.
  4. Others contributed use cases where this guarantee is clearly relevant.

In short,

  1. we can implement resource-safe abstractions with linear types, and
  2. relevance guarantees matter.

You say that linear types “do have significant drawbacks as discussed before”. But I scanned through the whole thread once more and all I could really find is:

  • the absence of a linearly typed catch in a resource-safe monad (but note that weaker versions of catch are possible),
  • the absence of any story so far for a linearly typed fork.

So “significant drawbacks” sounds like an overstatement to me. At any rate, those remain to be seen. Now, in order to recover the catch that you want, you propose to abandon both resource-safety and linearity, or at any rate to recover these only via an unnatural encoding. You might not care about resource safety, but others do. I for one would rather have resource safety than a more powerful catch. Especially when I can get affine types anyway (see below) if that’s what I really want.

Your encoding of linear types requires a lot of CPS. For instance, consider split:

split :: Int -> MArray s a ->. (MArray s a, MArray s a)

If I care about relevance, I can’t define this function! I need CPS for that. But it gets worse. How do I make a list of linear resources (e.g. like in this comment)? Sure, I could make a heterogeneous list, but we’ve lost the central tenet of the proposal: to make it possible to reuse existing Haskell data types in linear code. Further, this doesn’t actually work in all cases: I need the `s` of every resource in my list (otherwise, I will not get the Used s I need at the end of each scope). It means either that the size of my list is bounded in the types or that I use some form of stack discipline to put things in my list. In particular: no linear priority queue.
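Concretely, under the Used-token encoding, split would presumably have to be CPS'd along these lines (an entirely hypothetical sketch, not part of any proposal):

```haskell
-- Each half gets its own phantom tag, and the continuation must hand
-- back a `Used` token for both halves, so neither may be silently
-- dropped. Note how a one-line direct-style function has become a
-- rank-2, continuation-passing one.
split :: Int
      -> MArray s a
      ->. (forall sl sr. MArray sl a ->. MArray sr a
             ->. (Used sl, Used sr, Unrestricted b))
      ->. (Used s, Unrestricted b)
```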

So why not let programmers have linear types to begin with? It sounds to me like you are proposing contortions to avoid them, but what’s still unclear to me is, why? Further, any problems you see with linear types will reappear as-is whether they are primitive or encoded, so I don’t quite see what an encoding solves.

Finally, linear types can encode affine types, so if you are happy about encodings, then linear types are good for affine types too. The encoding is, I hope you will agree, quite a bit less intrusive than what was proposed to go the other way round:

newtype Affine a = Affine (forall r. Either (a ->. r) r ->. r)

use :: Affine a ->. a
use (Affine x) = x $ Left id

drop :: Affine a ->. ()
drop (Affine x) = x $ Right ()
aspiwack commented Nov 22, 2017

@gbaz I’ve read the article Affine Sessions which you cited above.

I don’t think this paper supports your case: failures are explicit, and have an operational behaviour. In other words, these affine session types live in an ambient linearly typed world: it is not the case that you can silently drop a variable in this system. Really, it behaves more or less as a linearly typed language with exceptions (with some variations due to the specific domain under study), which is essentially what we are proposing for -XLinearTypes.


aspiwack commented Nov 22, 2017

@Yuras

Here is the situation I was worried about:

-- Let’s assume a function `fill` which sets all the cells of a mutable arrays
fill :: MArray Int ->. MArray Int

-- `uarr` is unrestricted because its definition does not have any free variable
let uarr = newMArray (\arr -> freeze $ fill arr)
in
  forkIO $ return $ case uarr of { Unrestricted a -> a }
  case uarr of { Unrestricted a -> a } -- I can force `uarr` twice in parallel

My worry was that if the uarr thunk was blackholed lazily, then one of the two threads could get hold of an intermediate state. But maybe it is actually alright? Maybe the lazy blackholing will just cause the computation to be restarted from the beginning? In which case a completely fresh mutable array would be allocated for the second computation. And both threads would yield the precise same result. That was my guiding intuition when we were writing the paper and developing our examples. But I don’t have a good intuition of lazy blackholing, so I’d be happy if you would share your thoughts on this.

At any rate, we are in a simpler case than ST, as we cannot interleave effects and lazy evaluation.


ghost commented Nov 23, 2017

@aspiwack Wait, are we going to have to use a different arrow for linear lambdas? I feel like that should be an option if the compiler can infer that the lambda has to be linear from the use site.


Yuras commented Nov 23, 2017

@aspiwack

In which case a completely fresh mutable array would be allocated for the second computation. And both threads would yield the precise same result.

Yes, that is right. We'll lose sharing, but it is a different story.


mboes commented Nov 23, 2017

There was another point made by @rleshchinskiy about linear types, apart from the typing of catch and fork:

I think it is highly non-obvious what those restrictions are and I think they should be investigated before an informed decision can be made about the proposal. At the moment, I suspect it's at least "no catch" and perhaps "no fork".

My other point is that affine types don't seem to have such non-obvious restrictions

It was noted already that getting the types right for primitives is a tricky business. But I want to point out that the same applies if we have affine types instead of linear types. If a primitive is not morally linear but is given a linear type, type preservation can be destroyed, as previously observed by @rleshchinskiy. If a primitive is not morally affine but is given an affine type, type preservation can likewise be destroyed. In fact this very point came up earlier in the discussion. In an earlier comment, @gelisam points out that the type for fill in @aspiwack's slide was incorrect. If we are to assume an affine space of functions ~>., then the affine type signature for this would be:

fill :: a ~>. D a ~>. ()

But that's a mistake. This particular function should have signature

fill :: a -> D a ~>. ()

In fact, with the wrong signature for fill you can implement dup (using the other functions defined in the original slide):

dup :: a ~>. (a, a)
dup x = case ul of Unrestricted l -> (head l, head l)
  where
    ul = allocList (\dlist -> let !(da, dtail) = consD dlist in nilD dtail `lseq` fill x da)
    lseq :: () ->. a ->. a
    lseq () x = x

This mistake should break type preservation for affine types.

Of course, direct-style affine typed functions offer fewer guarantees to their caller. So of course, it's harder to step into the danger zone where preservation is compromised. But that's just the same old story from PL theory: the stronger your types, the sooner you break preservation if you're not careful.


aspiwack commented Nov 23, 2017

@Sahgher no, sorry, that was a typo. I'm rather in favour of being able to declare linearity at the lambda site, but this would be a misleading syntax (as we still want \x -> foo to mean any multiplicity). Suggestions are welcome.


rleshchinskiy commented Nov 23, 2017

@aspiwack First of all, I'd like to clarify that I really like the ideas put forth in the paper; it's great work. It just seems to me that they work better with affine types than with linear ones.

Secondly, we really need actual examples of interfaces we want to be able to provide at this point. I will try to address some of your points but I do feel that we're starting to go in circles and having some actual code as a basis for the discussion would help a lot.

  1. we can implement resource-safe abstractions with linear types, and

I don't think anyone doubts that. However, they can also be implemented without linear types so the question really is what linear types buy us beyond what we have now. It seems that we get the strong guarantee that we don't use resources after they've been freed. This is very useful but it is an affine guarantee, not a linear one. What other guarantees do we get?

  2. relevance guarantees matter.

So far, we've had one example which essentially relies on relevance: 3D models. We need Haskell code for this one and we need more use cases to see what situations relevance matters in. Things are made more complicated by the fact that we aren't getting true relevance (because of exceptions) so just taking standard examples from the literature doesn't necessarily work (as we've seen in the case of resource safety).

You say that linear types “do have significant drawbacks as discussed before”. But I scanned through the whole thread once more and all I could really find is:

  • the absence of a linearly typed catch in a resource-safe monad (but note that weaker versions of catch are possible),
  • the absence of any story so far for a linearly typed fork.

So “significant drawbacks” sounds like an overstatement to me.

As far as I'm concerned, not having a catch is a deal breaker. I don't know how to write robust programs without error handling. You say that there are weaker versions - what do they look like? It's (probably) possible to handle errors outside of the linear context, in IO. But I rather suspect that programming in such a style would be awkward and introduce a lot of additional complexity. Again, we need examples.

Now, in order to recover the catch that you want, you propose to abandon both resource-safety and linearity, or at any rate to recover these only via an unnatural encoding.

I really don't propose to abandon anything, I'm not sure why you got that impression. I very explicitly said that in my view, having ->. be linear has no fundamental advantages in expressiveness over it being affine. What specifically do we need to recover for resource safety with affine types?

As to how natural expressing linearity is, I don't think that argument can be made without working through a few complete examples. However, I'd like to make two observations here.

Firstly, I used MArray as an example but it isn't a good one because really, why do we want it to be linear and not affine in the first place? I understand why I can't use the same MArray twice but why can't I drop it?

Secondly, it is well established (starting with Wadler's paper) that the MArray interface in the proposal is too restrictive, for instance because it artificially sequentialises reads. One way around this are capabilities which are typically associated with their objects via phantom types (see, e.g., the Alms paper). So in a real implementation of mutable arrays, we might want to introduce a phantom type parameter anyway. Indeed, the Used type from my example might well be seen as a form of capability.
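A very rough sketch of that capability idea (all names hypothetical, loosely after Alms-style capabilities): the array handle itself is an ordinary unrestricted value, and the affine discipline applies only to a phantom-tagged capability.

```haskell
data MArray s a   -- ordinary, freely shareable handle, tagged with region s
data Cap s        -- abstract, affine capability governing region s

write :: MArray s a -> Int -> a -> Cap s ->. Cap s
read  :: MArray s a -> Int -> Cap s ->. (Cap s, Unrestricted a)

-- Freezing consumes the capability; it plays the same role as the
-- `Used s` evidence in the earlier encoding.
freeze :: MArray s a -> Cap s ->. (Used s, Unrestricted (Array a))
```

(This particular sketch still threads reads through the single capability; a finer-grained design would split read and write capabilities to avoid sequentialising independent reads.)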

So again, this discussion needs real, fleshed out examples. I can't really address any of your points without those.

So why not let programmers have linear types to begin with? It sounds to me like you are proposing contortions to avoid them, but what’s still unclear to me is, why?

Because I think that with the approach you propose, an affine ->. will let me write better and clearer code than a linear one, especially since it wouldn't be truly linear anyway. You disagree with this and yes, it is entirely possible that I'm wrong. But in my view, the only way to solve this is to look at real examples. For instance, the Alms paper and especially Tov's thesis have a wealth of case studies (of course, Alms only has affine types so none of them need relevance).

And with my Steering Committee member's hat on: of the 3 examples given in the Motivation section, the MArray one has a too-restrictive interface and really should be affine, while the malloc and the Socket ones leak resources in the presence of exceptions, so how do I evaluate the proposal?

rleshchinskiy commented Nov 23, 2017

@aspiwack First of all, I'd like to clarify that I really like the ideas put forth in the paper, it's great work. It's just seems to me that they work better with affine types than with linear ones.

Secondly, we really need actual examples of interfaces we want to be able to provide at this point. I will try to address some of your points but I do feel that we're starting to go in circles and having some actual code as a basis for the discussion would help a lot.

  1. we can implement resource-safe abstractions with linear types, and

I don't think anyone doubts that. However, they can also be implemented without linear types so the question really is what linear types buy us beyond what we have now. It seems that we get the strong guarantee that we don't use resources after they've been freed. This is very useful but it is an affine guarantee, not a linear one. What other guarantees do we get?

  2. relevance guarantees matter.

So far, we've had one example which essentially relies on relevance: 3D models. We need Haskell code for this one and we need more use cases to see what situations relevance matters in. Things are made more complicated by the fact that we aren't getting true relevance (because of exceptions) so just taking standard examples from the literature doesn't necessarily work (as we've seen in the case of resource safety).

You say that linear types “do have significant drawbacks as discussed before”. But I scanned through the whole thread once more and all I could really find is:

  • the absence of a linearly typed catch in a resource-safe monad (but note that weaker versions of catch are possible),
  • the absence of any story so far for a linearly typed fork.

So “significant drawbacks” sounds like an overstatement to me.

As far as I'm concerned, not having a catch is a deal breaker. I don't know how to write robust programs without error handling. You say that there are weaker versions - what do they look like? It's (probably) possible to handle errors outside of the linear context, in IO. But I rather suspect that programming in such a style would be awkward and introduce a lot of additional complexity. Again, we need examples.

Now, in order to recover the catch that you want, you propose to abandon both resource-safety and linearity, or at any rate to recover these only via an unnatural encoding.

I really don't propose to abandon anything, I'm not sure why you got that impression. I very explicitly said that in my view, having ->. be linear has no fundamental advantages in expressiveness over it being affine. What specifically do we need to recover for resource safety with affine types?

As to how natural expressing linearity is, I don't think that argument can be made without working through a few complete examples. However, I'd like to make two observations here.

Firstly, I used MArray as an example but it isn't a good one because really, why do we want it to be linear and not affine in the first place? I understand why I can't use the same MArray twice but why can't I drop it?

Secondly, it is well established (starting with Wadler's paper) that the MArray interface in the proposal is too restrictive, for instance because it artificially sequentialises reads. One way around this is capabilities, which are typically associated with their objects via phantom types (see, e.g., the Alms paper). So in a real implementation of mutable arrays, we might want to introduce a phantom type parameter anyway. Indeed, the Used type from my example might well be seen as a form of capability.
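To make the capability idea concrete, here is one very rough shape such an interface might take, written in the proposal's (->.) syntax (which does not compile with any released GHC). All names here (MArray, WriteCap, withArray, ...) are invented for illustration, and how read permissions would be shared safely is deliberately left open:

```haskell
{-# LANGUAGE RankNTypes #-}
-- Hypothetical capability-style array API; illustration only.
data MArray s a          -- mutable array, tagged by a phantom region s
data WriteCap s          -- linear capability to mutate arrays in region s
data Array a             -- ordinary immutable array
data Unrestricted b = Unrestricted b

-- Allocate an array and a fresh write capability; the rank-2 type
-- keeps the capability from escaping the scope, much like runST.
withArray :: Int -> a -> (forall s. MArray s a -> WriteCap s ->. Unrestricted b) -> b
withArray = undefined

-- Writes thread the capability linearly, so mutation is sequenced...
write :: MArray s a -> Int -> a -> WriteCap s ->. WriteCap s
write = undefined

-- ...but once the capability is surrendered, the frozen array is an
-- ordinary value and reads need not be sequentialised at all.
freeze :: MArray s a -> WriteCap s ->. Unrestricted (Array a)
freeze = undefined
```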

So again, this discussion needs real, fleshed out examples. I can't really address any of your points without those.

So why not let programmers have linear types to begin with? It sounds to me like you are proposing contortions to avoid them, but what’s still unclear to me is, why?

Because I think that with the approach you propose, an affine ->. will let me write better and clearer code than a linear one, especially since it wouldn't be truly linear anyway. You disagree with this and yes, it is entirely possible that I'm wrong. But in my view, the only way to solve this is to look at real examples. For instance, the Alms paper and especially Tov's thesis have a wealth of case studies (of course, Alms only has affine types so none of them need relevance).

And with my Steering Committee member's hat on: of the 3 examples given in the Motivation section, the MArray one has a too-restrictive interface and really should be affine, while the malloc and Socket ones leak resources in the presence of exceptions. So how do I evaluate the proposal?

gelisam commented Nov 24, 2017

So far, we've had one example which essentially relies on relevance: 3D models. We need Haskell code for this one and we need more use cases to see what situations relevance matters in.

Okay, I'll download tweag's docker image, write a concrete implementation, and post it here when I'm done. Also, I have a few other substructural examples in the readme of my category-syntax repository; since you're looking for more examples, would it be worthwhile for me to attempt to implement those as well, or would it be better if the examples don't all come from the same person?

mboes commented Nov 27, 2017

@rleshchinskiy thank you for your continued input. @jyp announced here that the proposal authors are revising the proposal with extra content based on the feedback so far. Seeing as this latest round of discussion doesn't seem to have made any tangible progress, as you yourself point out (in particular the discussion about catch, the claims about leaking resources, and the linear/affine distinction based on the Alms paper - already discussed), I hope that consolidating the discussion in the proposal itself will help us make progress again.

Meanwhile,

having some actual code as a basis for the discussion would help a lot.

@aspiwack and I already invited you to collaborate on some code samples, after you said

I'd be more than happy to write some code (and would be delighted to be proven wrong as I'd really like to have linear types)

@aspiwack has since created the linear-base repo. I'd invite you to work with him there, specifically on a basic implementation of IOL (also called ResourceL in a previous comment).

@aspiwack will close this PR in the meantime and resubmit with an updated proposal, in an effort to make it easier for others to join this discussion as well.

Finally, I do have a request for clarification for the Committee. @nomeata, might you be able to bring this up with them?

  • What is the purpose of a proposal? Should it read as a diff to a hypothetically up-to-date Haskell Report, or should it delve into GHC-specific implementation details (like Core)?
  • Could the Committee define the items it expects before it can make a determination about this proposal? We've heard in this thread members of the Committee ask for a fleshed-out linear base, more examples than are already in the proposal and in the paper, and also full code from other users at large. I can certainly understand where they are coming from. But at the same time, I am concerned about scope creep. Defining now where the goal posts are would certainly help.

@aspiwack aspiwack closed this Nov 27, 2017

simonpj commented Nov 27, 2017

Should it read as a diff to a hypothetically up-to-date Haskell Report, or should it delve into GHC-specific implementation details (like Core)?

Here's my view FWIW.

  • Generally, the proposals process is about the programmer-facing language design, not the implementation.

  • However, over the last two decades, Core has served as an outstanding sanity check on language designs. Things that require zero changes to Core are essentially syntactic sugar (though they may have profound impact on type inference, impredicativity being a prime example). Things that change Core are altogether more fundamental.

  • Requiring a fully formal account of the effect of a change on the source-language type system is too high a bar. We do not have such an account of GHC Haskell as it is, let alone further extensions. But it is feasible to give a fully formal account of the effect on Core. This is exactly the position we took in our paper, where we gave a formal account of a small calculus, but argued more informally about the source language.

  • So for me it's not just "e.g. Core"; rather Core has a special place. I would suggest that any proposal that affects Core should actually specify in detail what those changes are. It's a fantastic sanity check.

The proposals process does not yet give Core a special role in this way; but I'd argue that it should.

gelisam commented Nov 27, 2017

I have finished my implementation of watertight 3D models. Having now used linear types in anger, I have quite a few pieces of feedback I would like to share.

Seeing as this latest round of discussion doesn't seem to have made any tangible progress [...], I hope that consolidating the discussion in the proposal itself will help us make progress again.

Does that mean I should post the aforementioned feedback on the new proposal once it is posted, or should I post it now so it can inform the new proposal? My feedback has nothing to do with exceptions or affine types.

@aspiwack has since created the linear-base repo.

I wasn't aware of that effort, so I wrote my own linear version of many standard things. I'll contribute my definitions there if they aren't implemented already.

aspiwack commented Nov 27, 2017

@gelisam Please share your feedback; it would be a shame to hold it back. We can copy or reference it in the new thread later.

I'll contribute my definitions there if they aren't implemented already.

Thanks

mboes commented Nov 27, 2017

But it is feasible to give a fully formal account of the effect on Core. This is exactly the position we took in our paper

Indeed, this is what the paper does. So what should the proposal do? Simply point to the paper's formal account? Name which extra fields, with what types, are added to which Core constructors? That is, from the moment that we have a paper formalization, be it of a simplified Core (one that ignores orthogonal features like coercions and polymorphism), is that all that is needed, or should we be discussing implementation details related to Core? If, say, optimizations are affected, should we be listing those too, even if this is the kind of thing that only gets discovered with an upfront implementation of the feature?

Feedback much appreciated. But I suggest turning any answers to the above questions into a PR with a diff to the ghc-proposals README. Best to have this meta-discussion elsewhere than in this (now closed!) ticket.

jeltsch commented Nov 27, 2017

@mboes

@aspiwack will close this PR in the meantime and resubmit with an updated proposal, in an effort to make it easier for others to join this discussion as well.

Does this mean that a new pull request will be created? If yes, how can I get notified about the creation of this new pull request?

gelisam commented Nov 29, 2017

All right, so here is my feedback.

First of all, I like (->.) better than -o now. I implemented linear versions of many existing concepts, and thanks to (->.) paving the way, naming them was easy: prefix names get an L suffix and infix names get a . suffix. For example, when I needed a linear version of Applicative, I named it ApplicativeL and I named its methods pureL and (<*>.).

Second, a relief: given the definition of linearity in terms of evaluating terms to head normal form and such, I expected I would have to worry about strictness a lot, but I didn't end up having to use seq nor ! at all. I now realize that due to the way in which linearity is defined, such annotations should never be required. After all, a strictness annotation could make it so that forcing a small part of the output forces a larger part of the input than strictly necessary, but if all the parts of the input are used somewhere in the output (which they must be if the input is linear), then forcing the entire output to normal form (not just WHNF) will of course force the entire input.

Third, a concern. I found it quite difficult to get linear type signatures right, and I wonder if it's simply because I am new to them or if there is a more fundamental problem. The reason I think it might be the latter is that when I got the type signature wrong, I didn't simply get a type error and had to try again, instead I managed to get quite a lot of code past the linearity checker before ending up in a dead end and having to try again with a different type signature.

The specific case I am thinking of is my StateL implementation. To demonstrate how non-obvious this is, here is an exercise: think of a version of the State monad in which variables bound by x <- ... with do notation must be used linearly. For the regular State monad, the underlying representation is s -> (s, a). Write down your best guess for the underlying representation of this StateL monad. Got it? Okay, let's compare with my implementation now.

My first attempt was s ->. (s, a), and using this representation, I managed to implement instances for FunctorL, ApplicativeL, and MonadL, and also implementations for execStateL and modifyL. But then I tried to implement getL, and I hit a dead end because getL :: StateL s s requires a dup :: s ->. (s, s) under the hood.

So I switched to the other possibility, s -> (s, a). This time I was able to implement FunctorL, execStateL, modifyL, and getL, but not ApplicativeL nor MonadL. The reason is a bit complicated. Let's try to implement a simpler function, pairL :: StateL s a ->. StateL s b ->. StateL s (a, b). Even though the underlying f :: s -> (s, a) and g :: s -> (s, b) functions are non-linear, we can only call each of those functions once. So we call f s once, and we get back an (s, a) with multiplicity one. Which is a problem, because we now need to call g on that s, but g wants to use s with multiplicity ω. Another dead end!

It turns out that (->) and (->.) are not the only two possibilities which need to be considered. The representation which did work was Unrestricted s ->. (Unrestricted s, a), or equivalently, s -> (Unrestricted s, a). This works because I can use s non-linearly even when (Unrestricted s, a) has multiplicity 1. Is that the underlying representation you wrote down? I'd be especially curious to hear if those who are already familiar with linear types managed to get that right.
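The three candidate representations from this exercise can be written down side by side. This is a sketch in the proposal's (->.) syntax (it does not compile with any released GHC); StateL1 and StateL2 are names invented here just to distinguish the two failed attempts:

```haskell
-- Sketch in the proposal's (->.) syntax; illustration only.
data Unrestricted a = Unrestricted a

-- Attempt 1: linear in the state.  getL is impossible, because it
-- would need dup :: s ->. (s, s), i.e. duplicating a linear s.
newtype StateL1 s a = StateL1 (s ->. (s, a))

-- Attempt 2: unrestricted function on the state.  pairL is impossible:
-- after calling f s, the resulting s has multiplicity 1, but g demands
-- an argument of multiplicity ω.
newtype StateL2 s a = StateL2 (s -> (s, a))

-- What works: hand the state back wrapped in Unrestricted, so the next
-- computation may reuse it freely even though the pair itself is
-- consumed linearly.
newtype StateL s a = StateL (s -> (Unrestricted s, a))

getL :: StateL s s
getL = StateL (\s -> (Unrestricted s, s))  -- s is unrestricted here, so it may be duplicated
```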

Finally, I noticed that what I want from linear types in a library like this one is to force the users of my library to use my API linearly. Inside my library's implementation, I have a specific meaning of "to consume" in mind which has nothing to do with forcing values to normal form. As a result, I expect this pattern of hiding an Unrestricted value behind an unexported constructor to be quite common. It's an annoying representation because I can't use some typeclasses within my library implementation for fear of exposing them to my users. Maybe I could use the ST trick, asking my users to write code which works for any type u, and then within my implementation I instantiate u to Unrestricted?

I have a few more usability comments, but since they are related to the state of the linear types implementation and not the proposal, I'll find another place to post them.

aspiwack commented Nov 29, 2017

@jeltsch I'll post a notification here.

@gelisam thanks for your feedback. (This is not particularly related to linear types, but for hiding, I advise using a newtype with an unexported constructor; this way, you can use the type classes that you want on the type you use in your implementation.)

jyp commented Nov 29, 2017

cartazio commented Nov 29, 2017

jeltsch commented Nov 30, 2017

@gelisam

First of all, I like ->. better than -o now.

I am also much in favor of ->.. It is very similar to the ordinary arrow (->) and thus should not irritate people uninterested in linearity. Furthermore, it is an operator according to current Haskell syntax. If -o were made an operator, we could no longer write the negation of a variable o in the natural way, since that is also -o.
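The syntactic clash is easy to demonstrate with code that is legal today: -o already means negation of, or subtraction by, a variable named o, so reusing it as an arrow would change the meaning of existing programs. A minimal example (standard Haskell, no extensions):

```haskell
-- Legal Haskell today: 'o' is an ordinary variable name.
negateO :: Int -> Int
negateO o = -o          -- unary negation of o

subO :: Int -> Int -> Int
subO x o = x -o         -- lexes as x - o (binary subtraction)
```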

Saagar-A commented Jan 15, 2018

It seems really odd that multiplicity type variables never stand for the zero multiplicity, given that for every other type, a variable of that type can hold any value of that type. Breaking this consistency doesn't seem like a good choice.

@aspiwack referenced this pull request Feb 13, 2018: Linear types #111 (open)
aspiwack commented Feb 15, 2018

If you followed this thread, you will probably be interested in #111, where the discussion continues with a revised version of the proposal.

rrnewton commented on proposals/0000-linear-types.rst in 178fead Feb 16, 2018

This should be past tense now (POPL over). @aspiwack, if you'll give me push access I'll tweak this and any other typos I come across.

aspiwack replied Feb 16, 2018

@rrnewton Just added you. Push tweaks to the linear-types2 branch so that they're immediately reflected in ghc-proposals#111.
