
React in some way to MetaOCaml #555

Open
masak opened this issue Jan 15, 2021 · 13 comments


masak commented Jan 15, 2021

So I was reading Oleg's latest about session types, and I realized a couple of things.

One was that I still really haven't dug into MetaOCaml, and... I really should.

Two: the whole approach in that paper reminded me of an offhand comment near the end of the Kernel-minus-one report:

Traditional type systems contain a fixed inference engine that fights,
and inevitably loses to, the halting problem. An alternative approach is being
developed in which inference engines are the responsibility of the programmer,
who already confronts the halting problem as a normal part of programming.

I don't claim to possess an excellent grasp of staging, the technique Oleg uses in that paper (and in MetaOCaml), but something strikes me as being similar to what Shutt is hinting at.

Not only that. Staging feels like, in some sense, what Kernel is doing, too — every definition of a $vau thing establishes an "early" stage, against which a "later" use is contrasted. I dunno. Pretty staged for a 1-phase all-dynamic language.

Not only that: my ultimate vision for Alma (and Raku) was something like staging, too. Except I don't think I've ever managed to put it into words in a good way. Imagine having a really good external source-to-source transformer... a really competent preprocessor, but AST-based of course. Now imagine that one being internal instead; that is to say, it's embedded into the environment of the very program it's transforming. That, to me, is macros.

Three: look at that code example:

let p1 y1 y2 = recv y1 Int @@ fun z1 →
               recv y2 Int @@ fun z2 →
               send y2 Bool .< .~z1 > .~z2 >. @@
               finish

If you peer closely, you'll notice that z1 and z2 are not ASTs, they're Ints. Which I found was pretty remarkable, because that whole thing is a code quote, and standard quasiquoting vocabulary is employed for it.

I don't quite know what to do with that yet. My only reaction is that unquoting values instead of ASTs might possibly make quasiquotes a lot less confusing. One of the weird things about building code with quasi is keeping track of what's an AST and what isn't. With MetaOCaml's version of quasiquotes, you're never really dealing with ASTs; only with code quotes and regular values. Refreshing!
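
To make that concrete for myself: a minimal sketch, assuming BER MetaOCaml (the .< >. brackets and .~ escapes are MetaOCaml syntax, not plain OCaml). An ordinary runtime value can be mentioned inside a bracket directly, via cross-stage persistence, with no explicit "lift this value to an AST node" step:

(* sketch: a regular int crosses into the quoted code as-is *)
let add_n (n : int) : (int -> int) code =
  .< fun x -> x + n >.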

That said, I have no idea how to make something like that work for Alma. Maybe this only works when what's being done is actual staging, not macros. More research is needed.

masak changed the title from "React in some way to MetaOcaml" to "React in some way to MetaOCaml" on Jan 15, 2021

masak commented Jan 16, 2021

For that matter, why are we constructing staged code in this example? We're in two nested funs and have access to the two Ints z1 and z2. Wouldn't it be far simpler to just compare them directly to get the Bool we're about to send, than to build a code quote to do it? I must be missing something.

@vendethiel
Collaborator

you're never really dealing with ASTs; only with code quotes and regular values. Refreshing!

Isn't that pretty similar to rakudo's current macros impl :)


masak commented Jan 17, 2021

you're never really dealing with ASTs; only with code quotes and regular values. Refreshing!

Isn't that pretty similar to rakudo's current macros impl :)

I'm out of touch, but let me point to the difference.

macro moo($z1, $z2) {
    quasi {
        say "Hear ye, hear ye! ", {{{$z1}}} > {{{$z2}}};
    }
}

moo(5, 3);

In Raku (and Alma), the $z1 carries an AST object holding an integer literal with the value 5.

In MetaOCaml, z1 carries the integer 5.

@vendethiel
Collaborator

In Raku (and Alma), the $z1 carries an AST object holding an integer literal with the value 5.

Sorry, it was in jest, referring to a non-introspectable AST being pretty much a value.


masak commented Jan 18, 2021

Ouch. 😉 But even that jest is too charitable, I think. A non-introspectable AST is effective neither as an AST nor as a runtime value.


masak commented Jan 28, 2021

I struggle to put into words what makes staging different from compilation. (And therefore also, I can't fully map out the shape of my enthusiasm for it, in this issue and in general.)

But here's a flawed attempt. Staged systems such as MetaOCaml have been accused of being too complicated and hard to understand [citation needed]; but to me, it feels like they also make some aspects a lot simpler.

Say there are two stages, S0 and S1. Your S1 stage is just your regular program; it's S0 that's the added detail. The premise of staging (again, as far as I understand) is that S0 does computation chiefly in order to output/generate the code for S1.

But wait, that could be a description of compilation! With S0 being your regular source code (Java, say), and S1 being a lower-level I'd-rather-not-write-that-by-hand run-friendly format (such as JVM bytecode). Since staging is not just a restatement of compilation, I'm curious what it has or does that distinguishes it.

Here's my current best guess at the difference: in staged programming, the main activity of the S0 program is to actively, explicitly generate the S1 code. (That's the function of the code type in BER MetaOCaml.) That's the big distinction from just regular compilation — when I code Java, neither do I explicitly build code objects with JVM bytecode, nor do I usually think of the chief task of the Java program as building the JVM bytecode; my focus is on the anticipated side-effects of that bytecode, but expressed entirely with the syntax of Java.

(Dammit! Now I want a staged Java.)

Building S1 artifacts in S0 is in some sense more complicated, because two stages are more than one stage, and the communication between them is non-trivial and leads to needing quasiquotes and escapes (unquotes).

But there is also a pleasant simplification in there that doesn't usually manifest with regular compilation, and that feels like it deeply relates to macros somehow: besides the necessary quasiquoting mechanism, the S0 stage has access to the full power of the regular (OCaml) language, and (in some sense) only the full power of the language.
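
To pin that down, here's the standard staged-power example as a sketch, assuming BER MetaOCaml (its .< >. brackets and .~ escapes). The S0 program below is ordinary OCaml whose whole job is to generate S1 code:

(* S0: an ordinary recursive function that generates S1 code *)
let rec spower (n : int) (x : int code) : int code =
  if n = 0 then .< 1 >.
  else .< .~x * .~(spower (n - 1) x) >.

(* specializing at S0 yields plain S1 code with the recursion unrolled,
   roughly .< fun x -> x * (x * (x * (x * (x * 1)))) >. *)
let power5 : (int -> int) code = .< fun x -> .~(spower 5 .< x >.) >.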

From Oleg's session abstract:

We present a system exploring a very different approach to session typing: lowering type-level sophistry to ordinary programming, while maintaining the static assurances.

(Emphasis mine.)

In a sense, thanks to the staging, S1's (session) types get to turn into fairly regular runtime values in S0. And so instead of artificially constructing a separate elaborate type system on top of the language to validate sessions, in S0 we can just use the full power of the language to achieve the same effect.

Once a few years ago, @jnthn asked me why in Raku's parser, "lexical scopes turn into dynamic scopes". He was hoping there'd be a neat category-theoretical answer that would make it obvious why it has to be like that. ("Oh, it's because your category is Cartesian closed, so you have all products and exponents, which means you have currying via an adjunction...") But now it seems to me that it's just an aspect of staging: the parser is an S0 stage acting on the S1 program as data; when it walks over the AST, the scopes are dynamic in S0, but when we later run the program in S1, they are lexical.


jnthn commented Jan 28, 2021

I struggle to put into words what makes staging different from compilation

With what I understand of it to date, I'd perhaps give it the slogan "one language, many compile times". In that:

  • One language: we're working within the syntax and semantics of the language from the very beginning. It's not like eval, where we juggle strings, drop some of them on the floor, lose one under the sofa, but eventually get to the point of having some code that, when passed to an eval function, is deemed syntactically and semantically valid. Rather, if there's a syntax error or type error in the code we're quasiquoting, that's a compile time error right off.
  • Many compile times: things that are quasiquoted are not for immediate compilation/evaluation. Indeed, they contain gaps that will need filling. They may also themselves contain quasiquoted things, so we can specialize specializers.


masak commented Jan 29, 2021

Yes, I think we're making, or reaching for, the same point.

It's not like eval, where we juggle strings, drop some of them on the floor, lose one under the sofa

😄

but eventually get to the point of having some code that, when passed to an eval function, is deemed syntactically and semantically valid.

This sounds awfully close to the "treating language as flat strings instead of rich structures" problem that plagues SQL injections and the like. The outcome is a bit different in the two cases, but the root cause feels very similar.
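
A tiny sketch of the contrast, in BER MetaOCaml terms (with the eval-by-string half left as a comment, since that's exactly the thing the brackets replace):

let ok : int code = .< 1 + 2 >.   (* checked as real code now, run later *)
(* .< 1 + "two" >. would be rejected immediately, at quote time;
   a flat string like "1 + two" only fails when something finally evaluates it *)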

I thought of this the other day, and it suddenly struck me that there is some connection — haven't made it precise yet — between injection attacks, and breaking the Single Evaluation Rule (the one that says unquoting the same AST fragment several times in a quasiquote is a bug, all other things equal). I was close to creating a separate issue about this; I still might. (Later edit: Although I notice that #479 and #234 cover a lot of that ground already, except for maybe stating outright the extreme opinion that SER breakage and injection attacks are aspects of the same problem.)
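
Here's roughly what I mean by that connection, as a sketch in BER MetaOCaml terms (print_endline stands in for any side effect or expensive computation):

let frag : int code = .< (print_endline "effect"; 42) >.
(* splicing the same fragment twice duplicates its computation and effects
   in the generated code; that's the SER breakage *)
let twice : int code = .< .~frag + .~frag >.
(* splicing once, through a let, restores single evaluation:
   .< let x = .~frag in x + x >. *)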

It's all an instance of an even broader "developer sin", of not properly representing the desired computation in the program itself, but (by omission, largely) keeping much of it inside one's human cranium. Other representatives of this class include manually unrolling loops, representing enums as untyped string values, or filling the code with unexplained magic values. It represents a kind of hubris or arrogance, where the programmer thinks she can be a better compiler than the compiler (or maybe just forgets that the compiler is there, and is typically better at being a compiler than the human). Another way to state this, a bit more abstractly, is that it's the compiler's job to denormalize/analyze the code into target code, and the developer's job to normalize/synthesize the source code. The two really pull in completely different directions... Poorly emulating the compiler's denormalization work (intentionally or unwittingly) is usually a mistake.

Many compile times: things that are quasiquoted are not for immediate compilation/evaluation. Indeed, they contain gaps that will need filling. They may also themselves contain quasiquoted things, so we can specialize specializers.

I think this is where staging meets macros, sort of. The only difference being that in the Raku/Alma model, we do macro expansion in a single stage, which may or may not be thought of as the same stage as runtime. (So it's either a 2-stage or a 1-stage/no-stage system.)

 8  macro moo() {
..      ...
12      return quasi { ... };
..  }
..
94  moo();

But "things that are quasiquoted are not for immediate compilation/evaluation" is certainly true of macro expansion, if you count the thing where the macro is defined on line 8, expanded on line 94, whereupon the macro runs and the quasiquoted value on line 12 is constructed (and hence "evaluated"), and finally the macro call on line 94 is expanded into the resulting injectile, which is "linked"/compiled into its surrounding environment. There's a gap there — though not a gap between stages — where the thing is quasiquoted but not compiled. This gap can be arbitrarily long.


masak commented Feb 3, 2021

it's the compiler's job to denormalize/analyze the code into target code, and the developer's job to normalize/synthesize the source code

In a stunning case of synchronicity, a few days later I stumbled over a Kent Dybvig talk (in honor of Dan Friedman's 60th birthday) called "The guaranteed optimization clause of the macro-writer's Bill of Rights". It's a fantastic talk, not very long, and if you're somehow reading this comment in this issue thread in this repository, then you'll probably enjoy the talk much like I did.

Having said that, here's my best summary:

  • Compilers take high-level source code to low-level target code.
  • Optimizing compilers also try to generate fast target code, with fewer memory allocations (and better resource use in general), less target code, and generally abstractions replaced by concretions whenever possible.
  • Guaranteed optimization is simply the idea that if you use an abstraction, it's effectively free — you don't pay the cost at runtime because it has been compiled away. This is an impossible ideal in general, but it's possible enough to be something to aim for.
  • Finally, guaranteed optimization is even more important for macro writers, who effectively insert themselves at an early point in the compilation process. By being able to assume/lean on guaranteed optimization, macro writers can write smaller, simpler, more high-level macros; they can be "extravagant" in their use of high-level abstractions, because they know that guaranteed optimization helps reduce overhead. Specifically, the macro writer is no longer tempted to use macros for optimization or special-cased code; the compiler does that.

This is surely something to contemplate and digest; I still want to think more about Kent's list of macro system desiderata at the beginning of the talk, for example.

(There's a dedicated page for the 2004 celebration that this talk was a part of. Many of the other talks look interesting, too.)


masak commented Mar 15, 2021

By being able to assume/lean on guaranteed optimization, macro writers can write smaller, simpler, more high-level macros; they can be "extravagant" in their use of high-level abstractions, because they know that guaranteed optimization helps reduce overhead.

This is key. In a sense, the macro writer is a "regular user" (because the meta-language is largely the language); in another sense, by consuming and producing code, they have stepped one level above/beyond/meta and have joined the compiler as a peer. Either way, it remains the compiler's job to do compiler-y things well, and the human's job to do human-y things well — oversimplifying, those are optimization and abstraction, respectively. There's a real sense in which those are opposites on the same spectrum.


masak commented Jul 26, 2021

Three: look at that code example:

let p1 y1 y2 = recv y1 Int @@ fun z1 →
    recv y2 Int @@ fun z2 →
    send y2 Bool .< .~z1 > .~z2 >. @@
    finish

If you peer closely, you'll notice that z1 and z2 are not ASTs, they're Ints. Which I found was pretty remarkable, because that whole thing is a code quote, and standard quasiquoting vocabulary is employed for it.

I don't quite know what to do with that yet. My only reaction is that unquoting values instead of ASTs might possibly make quasiquotes a lot less confusing. One of the weird things about building code with quasi is keeping track of what's an AST and what isn't. With MetaOCaml's version of quasiquotes, you're never really dealing with ASTs; only with code quotes and regular values. Refreshing!

Thinking more about this, I think the general idea of "punning" runtime values into their corresponding compile-time AST fragments is a natural/comfortable/safe thing to do.

It doesn't mean that we can get rid of Q.Literal.Int, though — at least not if there's any distinction between a regular runtime integer, and an integer literal embedded in a program. I can think of two straight off (see the sketch after the list):

  • The integer literal occurs at a particular file/line/column position, and we might be interested in modeling that, just as for other AST nodes.
  • The integer 5000 can be represented in many different ways: 5000, 5_000, 05000, and 0x1388. Usually these differences in representations don't matter, but they are a part of the program structure which just the integer itself doesn't capture.
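
Here's a sketch of what such a literal node might carry (hypothetical types, made up for illustration; not Alma's actual API). The bare integer alone simply has no room for this:

type pos = { file : string; line : int; col : int }
type int_literal = {
  value    : int;          (* 5000 *)
  spelling : string;       (* "5000", "5_000", or "0x1388" *)
  pos      : pos option;   (* None for nodes synthesized by a macro *)
}
(* punning a runtime value into a literal node: the value is known, the rest isn't *)
let synthesized (n : int) : int_literal =
  { value = n; spelling = string_of_int n; pos = None }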


masak commented Nov 23, 2021

Start with a "vanilla" object-level language. We abstract away from exactly which one. Now, either

  1. Add staged compilation, or
  2. Add macros and quasiquoting.

What's the difference between the resulting two languages? Not shallow surface-level differences, but the differences in more fundamental semantics.


masak commented May 12, 2022

Three: look at that code example:

let p1 y1 y2 = recv y1 Int @@ fun z1 →
               recv y2 Int @@ fun z2 →
               send y2 Bool .< .~z1 > .~z2 >. @@
               finish

If you peer closely, you'll notice that z1 and z2 are not ASTs, they're Ints. Which I found was pretty remarkable, because that whole thing is a code quote, and standard quasiquoting vocabulary is employed for it.

Nope; that turns out to be a misunderstanding on my part. If you read the text, you see that the type of z1 and z2 is, in fact, int code. (Which is how ML languages write a parametric type like Code<T> with a provided type argument int.)
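
So the corrected picture, as a sketch in BER MetaOCaml terms, is something like:

(* z1 and z2 are code values; splicing them builds a bool code *)
let greater (z1 : int code) (z2 : int code) : bool code =
  .< .~z1 > .~z2 >.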
