Add parse_generics! and parse_where! macros #1583
Conversation
This seems useful, fairly simple to use (at least assuming you are already writing a TT muncher) and solves a real problem. Plus it's already implemented, which is a big plus. Bringing that implementation in-tree would incur a small additional maintenance burden, but it can be replaced with procedural macros as soon as those are ready, providing a useful test case for that system. Questions/points for discussion:
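For readers unfamiliar with the pattern referred to above: a "TT muncher" is a recursive macro_rules! macro that consumes its input one token tree at a time. A minimal, self-contained sketch (illustrative only, not part of the RFC):

```rust
// A minimal "TT muncher": each recursion strips one leading token tree
// (`$head:tt`) and recurses on the rest. This toy version just counts
// token trees; real munchers accumulate parsed state the same way.
macro_rules! count_tts {
    () => { 0usize };
    ($head:tt $($tail:tt)*) => { 1usize + count_tts!($($tail)*) };
}

fn main() {
    // `(c d)` is a single parenthesised token tree, so this counts 3.
    assert_eq!(count_tts!(a b (c d)), 3);
}
```

Note that each token consumed costs one level of macro recursion, which is why hand-munching an entire item's generics is expensive; that cost is part of the motivation for these macros.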
@durka The reason for […].

As for commas: well, it has to be commas, because there's simply no way to match individual parts, which means the only thing you can really do with the output is substitute it directly, and the parser expects commas. So... commas. Even if something else were used (such as placing each constraint inside a delimited TT), I can't think of anything you could do with them. Also, doing that would complicate the overwhelmingly most common use: directly substituting the output in order to wrap a generic item. Well, I say "overwhelmingly most common", but really it's the only one at this point. That said, I'm entirely open to revisiting things if you can show me another application that these macros don't cover.

A similar reasoning goes for […].

Also, I'm a little confused by "it provides no advantage over simply hanging on to the input to […]".
Right, I missed that one of the main functions of the two macros is to separate the generics/clause from the tail, which otherwise requires munching. Still, I could imagine rare situations where you'd want the individual constraints... say you wanted to leverage specialization and generate two impls of […].

One way to make both of my complaints disappear would be to make the macros configurable, so you could request only the outputs you need. That would increase the complexity a bit, but it would reduce the strain on the macro system (i.e. compile time) of having the callback ignore irrelevant inputs.
Hmm... somehow, that never occurred to me. So, something like parse_generics!(constr, params, then callback!(), ...), which would have the same structure, but only outputting […]?
That's pretty much exactly what I was thinking.
Now using the field list idea, prompted by durka.
I've updated this with an expanded invocation syntax, as prompted by @durka. The POC implementation has also been updated. The justification is that it slightly reduces the obscurity of the output (what fields, and in what order?), and provides backward- and forward-compatibility defenses. The previous syntax is listed as an alternative.
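For concreteness, here is a rough sketch of what the field-list invocation might look like, extrapolated from the examples further down the thread. The output shape and field contents shown here are illustrative guesses, not the RFC's normative syntax:

```
parse_generics! { { constr, params }, then callback!(), <T: Clone, U> ... }
-->
{
    {
        constr: [ T: Clone, U, ],
        params: [ T, U, ],
    },
    ...
}
```

The point of naming the requested fields up front is the one discussed above: the expansion only carries the outputs the callback actually needs, and adding new field kinds later doesn't disturb existing callers.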
We discussed this in the most recent @rust-lang/lang meeting. The general feeling was that, while we recognize that there is a real need for this sort of functionality in macros, we're pretty reluctant to add such a complex macro to the language proper. Particularly since it can be implemented in a third-party crate using plugins (albeit one that only works on nightly). The typical procedure for these sorts of additions is that they are prototyped "in the wild" first and then, once they've been in use, considered for addition to the standard library.
I don't like this decision, but I did expect it :) The point is you shouldn't have to use a plugin for this. And I feel […]
I don't see a team decision, and the RFC has not had an FCP. I assume the lang team just wants to lower expectations in advance?
A problem with the "external crate" approach is that you can't re-export procedural macros (so far as I can tell, anyway). This means you can't build on or abstract the use of these macros; everyone downstream has to depend on the plugin. It also means we can't build a shim that works on both stable and nightly (where it would simply be able to do a better job than on stable).

... Let's assume I write stable equivalents to these macros that just parse a minimal subset of generics that's reasonably tractable (just type params, maybe a small fixed set of lifetimes (not sure if that's even possible), and maybe basic […]).

I really feel strongly that Rust is unnecessarily crippling itself by not being able to handle this. I know the design is more complicated than anyone wants, but after several months of chewing on this, I really don't see any other way to do it without significantly more radical upheaval... and the lack of this hurts now.
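To make the "minimal subset" idea concrete, here is a hedged sketch of the kind of stable, macro_rules-only parser being described. The macro name and output form are illustrative, not the RFC's API; it handles only bare type parameters, with no lifetimes, bounds, or where clauses:

```rust
// Illustrative only: a stable macro_rules! macro covering just the minimal
// subset of generics (bare type parameters) at the head of some item.
// It extracts the parameter names; the tail of the item is matched but unused.
macro_rules! simple_type_params {
    (< $($p:ident),* > $($tail:tt)*) => {
        vec![ $( stringify!($p) ),* ]
    };
}

fn main() {
    // The `(body)` group stands in for the rest of the item being parsed.
    let params = simple_type_params!(<T, U> (body));
    assert_eq!(params, ["T", "U"]);
}
```

The limitation is visible immediately: the moment the input contains a lifetime (`<'a, T>`) or a bound (`<T: Clone>`), this pattern fails to match, which is exactly the gap the proposed macros (or a plugin) would fill.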
Indeed, no final decision has been reached. I just wanted to convey our perspective from the team meeting. Some responses to various comments by @durka and @DanielKeep:
That's not clear to me, but regardless, the desire to prototype externally is not because we think it's necessarily better for the design to live in a plugin forever, but because we'd like to avoid committing to a design that hasn't seen much use in practice.
Yes, that is precisely the step we were looking for -- that is, an API without any consumers has not really been tested.
Fair enough, though it'd be nice to see at least proof-of-concept ports to show that the API is fulfilling the needs of the plugin. But the next point (by @DanielKeep) is interesting:
This is an interesting point. I agree it makes a prototype "in the wild" less likely to be used, though I guess one could still show that -- were the API available -- it would be possible to use it. More on this below.
Certainly any sort of buy-in would help, but I'm not sure that making a half-baked version is the best option just yet.
Yes, that makes sense; I can certainly see that it'd be nice to get more things on stable. Skimming over the RFC again, though, I wasn't entirely sure just how many existing crates or patterns you thought were blocked by specifically this capability (and only this capability). For example, you mention serde -- do you think that having parse-generics support would enable serde to work on stable? More coverage of existing crates that could make use of this macro to become stable (and how precisely they would do so) would certainly strengthen the case here.

I am also quite concerned about future-proofing: what will happen when generics are modified to support, e.g., HKT? It seems like that would break clients.

Finally, the RFC takes it as a given that macro rules can't restart a failed parse, and hence that one cannot parse generic listings both with and without lifetimes, but that seems to me to be 'just a bug' -- and one that we are actually more-or-less in a position to fix soon. Would fixing that bug change your opinion on whether the RFC is needed? (On a related note, do you think we can't fix the bug for backwards-compatibility or other reasons? I don't quite see why that would be.)
It's come up before (those are three separate links), but I never saw anyone seriously suggest fixing it! Folks just said that's how MBE/Earley is supposed to work. Allowing backtracking would make macro_rules a lot more powerful, I think!
I think this is the last missing capability for implementing serde's macros in macro_rules, yes. (And I think that should be a pretty powerful case for adding it, seeing that the serialization mess is one of the big warts that gets cited about Rust!) Though I'd be a little worried about the speed of the resulting macros, since they'll be tt munchers parsing entire structs.
The idea here would be to create a compatible, but limited, version of the macro in […].

If people want to use those macros with the proof-of-concept compiler plugin backing them, they'll have to jump through some extra hoops, but I'm hoping it should be possible to enable that without having to change the intermediate code. To put it another way: we should be able to construct the shims so that they do the best they can without the plugin, but work better if you jump through the necessary hoops to make the plugin available to them.
Well, I don't know about literally everything serde's codegen plugin does, but one of the tests is an otherwise stable implementation of […]. The problem with that is that, because of the design, you'd also need either macro-based attributes (which I have a WIP RFC for), or to actually rewrite the input to strip the offending attributes so the compiler doesn't choke on them. That said, I'm confident that these macros are a necessary step for […].

Also, those attributes not being supported is only really an issue if you need them. I don't know how often people need them, but I'd like to believe that it's relatively infrequent (though it's the sort of thing that, when you need it, you really need it).
Well, I'm not sure about that. It's hard to say anything for certain without even a prospective syntax for HKT, but: this is part of why you can specify the output fields you want. If we need to introduce a new kind of output for HKT information, we can do that and it won't affect old clients. Heck, that's why you can put a […].

Secondly, I'd argue that old code that needs to parse generics breaking if you feed it an HKT type is fine, because it's not breaking existing code; it's just a limitation of existing code. After all, it's not like anyone can write HKT-using code before HKTs exist. Even then, depending on how HKTs end up being specified and used, it's not inconceivable to think we could find a way to kludge support into the existing […].

Let's assume HKT gets encoded like this:

```
{
    {
        constr: [ Col<Elem>, ],
        params: [ Col, ],
        ltimes: [ ],
        tnames: [ Col, ],
    },
    ...
}
```

If someone is using those fields in the intended way, their code will probably continue to work. If they want to actually deconstruct the HKT parameters, here's one potential way forward:

```
parse_generics! { { hkt_params }, then callback!(), <T, Col<Elem>> ... }
-->
{
    {
        hkt_params: [ T [ ], Col [ Elem [ ] ], ],
    },
    ...
}
```
Provided that we also got a lifetime matcher, that would probably make these macros not strictly needed. However, parsing generics would remain complicated to do. A native approach would still be orders of magnitude faster, and consume only one level of recursion. When I've tried this before in […].

So, if backtracking were fixed, I'd say these macros weren't strictly necessary, but they'd still be massively preferred.
Given that the only observable outcomes of a rule needing fallback are: 1. it accepts the lexically first matching rule, or 2. it explodes; and that there's no way to react to the second one... no, I can't imagine how enabling backtracking would be an issue. You could certainly have macros unintentionally change behaviour... but that would be a case of macros that used to explode on particular inputs suddenly no longer exploding.
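As an illustrative aside (not from the thread): the "lexically first matching rule" behaviour referred to above is observable with ordinary stable macros today, since rules are tried top to bottom and the first full match wins:

```rust
// Rules in a macro_rules! macro are tried in order; the first rule whose
// pattern fully matches the input is transcribed, and later rules are
// never consulted for that input.
macro_rules! classify {
    ($x:ident) => { "ident" };
    ($x:tt) => { "other token tree" };
}

fn main() {
    assert_eq!(classify!(foo), "ident");            // first rule matches
    assert_eq!(classify!(42), "other token tree");  // falls through to rule 2
}
```

The bug under discussion is the within-rule case: once a fragment matcher (such as `$t:ty`) begins parsing and fails partway through, the expansion aborts with an error instead of falling through to the next rule, which is what prevents writing rules both with and without lifetimes.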
Hear ye, hear ye! This RFC is now entering final comment period. The @rust-lang/lang team is inclined to close. The reasoning is that these macros are too complex to really be suitable for the standard library, and that their primary utility (helping with things like serde) is better served by the Macros 1.1 proposal, so we would prefer to wait until that is available to evaluate.
For example, these macros were put to use to implement rand_derive.
Agree with motion to close |
The @rust-lang/lang and @rust-lang/libs teams have decided to close this RFC for the reasons previously given -- we'd prefer to use #1681 to address this problem. Thanks everyone for the discussion!
Introduce two macros to the standard library: parse_generics! and parse_where!. These are the "missing pieces" which will allow stable macros to completely parse and transform arbitrary Rust items. ([…] parse-macros crate, and some examples.)