New Plugins API #3
The version of plugins I am currently working on would solve this problem. It will allow an engine to delegate the lookups until the function is called. It also will give more convenient Rust syntax for writing a package:
Once defined, all that is needed will be something like
If you are wondering how it works, it uses procedural macros to generate a calling function with a uniform interface. After the
This means that registering a module results in registering one "lookup function", and that is all. Otherwise, it is static code, generated at compile time. This should be much more efficient, and more convenient for users.
Wow, this is something beyond my expectations. I think we can replace the entire packages implementation with this. One question: when you register a plugin module, I suppose you register all the functions declared within the same module, right? I don't suppose you can "pick and choose"? You'll also need a way to handle functions that have a first argument that is Also, I'm thinking about how this can handle generic functions with multiple data types. I suppose you must have some form of name mangling then... For example, a simple
I think we can replace most of it, but there might be some edge cases. See below.
In the current prototype, all It would certainly be possible to add additional attributes to the functions themselves, like
I would probably have a
I am not sure whether I can support generics. The syntax wouldn't be difficult to detect and add, but Rust's type inference may be the limitation, due to my reliance on the Let's take your example with plus on numeric types. The broadest way to implement that would be to implement a trait
As currently written, the auto-generated code would create a call to Even if T is bounded eventually -- i.e. Also, please note that my use of |
I have opened a PR for the foundational API I need for this work. Hopefully it will be quick to review and merge. |
If you can modify your PR to merge into the As to generic functions, I think that is the main difference between Rhai and Rust... in this case, Rhai is more like JS. Not only can arguments be of different types, there can also be a different number of arguments. In many Rust std lib cases, multiple functions with different names can be mapped into the same function name in Rhai. I think we need to make the Rust API more "Rhai-centric" by leveraging overloading instead of a simple one-to-one mapping. As for binding multiple generic versions, maybe the plugin can provide a generic version that an outside macro can simply loop over. For example: mod MyModule {
    #[rhai(name = "export_to_rhai")]
    #[rhai::reg(i8, f32)]
    #[rhai::reg(i32, f64)]
    pub fn export_to_rhai_3<T: Add, U, V>(i: T, f: U, t: MyStruct) -> String { /* ... */ }

    #[rhai(name = "export_to_rhai")]
    #[rhai::reg(char, INT)]
    pub fn export_to_rhai_2<T, U, V>(i: T, f: U) -> bool { /* ... */ }

    pub fn call<'e>(
        engine: &'e Engine,
        fn_name: &str,
        args_hash: u64,
        args: impl Iterator<Item = Dynamic> + 'e,
    ) -> Result<(), rhai::EvalAltResult> {
        match (fn_name, args.len()) {
            ("export_to_rhai", 3) => {
                /* Somehow use a hash to map argument types */
                match args_hash {
                    1234567 => Ok(export_to_rhai_3(args.next()?.as_ref().cast::<i8>()?,
                                                   args.next()?.as_ref().cast::<f32>()?,
                                                   args.next()?.as_ref().cast::<MyStruct>()?)),
                    98765 => Ok(export_to_rhai_3(args.next()?.as_ref().cast::<i32>()?,
                                                 args.next()?.as_ref().cast::<f64>()?,
                                                 args.next()?.as_ref().cast::<MyStruct>()?)),
                    _ => Err(/* ... argument types not matched ... */),
                }
            }
            ("export_to_rhai", 2) => /* ... */,
            _ => Err(rhai::EvalAltResult::RuntimeError(
                format!("cannot find function '{}' in 'MyModule'", fn_name),
            )),
        }
    }
}
I opened a new PR against the Please make sure to keep that feature branch updated with all your master changes. I'm relying on you to avoid big merge conflicts later.
I generally plan to handle overloading and multiple types with the flexibility in that To go back to a previous example of mine when I was thinking about variadic functions:
This is the kind of overloading you mean, right? My plan for that was to make a function attribute that indicated a "variadic" function. If called, the iterator should be passed directly, letting the function itself handle it:
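A minimal sketch of that "variadic" attribute idea: the generated shim would hand the raw argument iterator straight to the function, which resolves the types itself. Note that `Dynamic` here is a tiny stand-in defined for illustration only, not Rhai's actual type, and `concat_all` is a made-up example function.

```rust
// Stand-in for Rhai's Dynamic, for illustration only.
enum Dynamic {
    Int(i64),
    Str(String),
}

// A hypothetical "variadic" plugin function: it receives the argument
// iterator directly and performs its own per-argument resolution.
fn concat_all(args: impl Iterator<Item = Dynamic>) -> String {
    args.map(|a| match a {
        Dynamic::Int(i) => i.to_string(),
        Dynamic::Str(s) => s,
    })
    .collect::<Vec<_>>()
    .join(" ")
}
```

The point of the design is that the shim does no type checking at all for such functions; arity and type errors become the function's own responsibility.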
I also hope this problem will be somewhat mitigated by broader use of this plugin style. It will hopefully encourage Rhai to develop more |
OK, I'll make sure I keep it up-to-date. Your idea of keeping all the argument resolution inside the function implementation itself is a good solution to the problem of overloading. However, this puts the burden on the plugin author, who must then use a large number of One way we can avoid this is to pass in
In a perfect world, we should provide macro facilities to do this kind of dispatching so the author doesn't have to worry about it! |
I personally don't think this is a significant burden. As shown in my example, I originally came up with this design for algorithms which "fold types together" naturally in the course of their processing. It would be difficult to make That said, I see your concern for functions which "do the same thing, just accept different types." It may be possible to expand the procedural macro to support this, but my first idea is to expand the types on the Rust side. For example, take I would write this definition:
Rhai would just need to ensure This would work with the procedural macro I already have in mind.
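One way to picture "expanding the types on the Rust side" is a single generic implementation plus a `macro_rules!` loop that stamps out one concrete, monomorphic wrapper per supported type, which a registrar (or the procedural macro) can then see as ordinary functions. The names `plus_impl` and `expand_plus` are invented for this sketch; the real attribute syntax under discussion would differ.

```rust
use std::ops::Add;

// One generic implementation of the behavior.
fn plus_impl<T: Add<Output = T>>(a: T, b: T) -> T {
    a + b
}

// Stamp out a plain monomorphic function per supported type.
macro_rules! expand_plus {
    ($($name:ident : $ty:ty),* $(,)?) => {
        $(
            fn $name(a: $ty, b: $ty) -> $ty { plus_impl(a, b) }
        )*
    };
}

expand_plus!(plus_i64: i64, plus_f64: f64);
```

Each expanded function is a separate registrable item, so no type inference is required at registration time.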
My understanding is limited, but the impression I get is that the Both of them will end up looking like a switch statement in C: choose where to jump based on whether this long integer's value (the type ID or pair of type IDs) is A, B, C, D, or anything else. Accessing the type ID during the cast shouldn't be slow, either. The function arguments will surely be in the CPU cache -- after the first conditional check, at the latest.
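The "switch statement in C" comparison can be made concrete with std's `TypeId`: a chain of equality checks on `TypeId` values compiles down to integer comparisons. This sketch uses only `std::any` (not Rhai's actual cast machinery), and `dispatch_plus` is a hypothetical name for a generated shim.

```rust
use std::any::{Any, TypeId};

// Hypothetical generated shim: pick the concrete overload by comparing the
// arguments' TypeIds, much like a C switch over integer type IDs.
fn dispatch_plus(a: &dyn Any, b: &dyn Any) -> Option<String> {
    let tid = (a.type_id(), b.type_id());
    if tid == (TypeId::of::<i64>(), TypeId::of::<i64>()) {
        let (x, y) = (a.downcast_ref::<i64>()?, b.downcast_ref::<i64>()?);
        Some(format!("{}", x + y))
    } else if tid == (TypeId::of::<f64>(), TypeId::of::<f64>()) {
        let (x, y) = (a.downcast_ref::<f64>()?, b.downcast_ref::<f64>()?);
        Some(format!("{}", x + y))
    } else {
        None // argument types not matched
    }
}
```

Since `TypeId` is just an opaque integer internally, the comparisons themselves are cheap; the real cost question is whether the argument data is already in cache.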
I find this direct use of TypeId unintuitive. I would rather write code which has Rust doing it for me in things like if and match statements. The reason I picked a type of If you want callers to be able to know how many arguments they have -- for example, to support "wrong number of arguments" checks early on -- then I would happily change it to an iterator wrapper which implements
My goal is to have procedural macros do as much work as I can. However, I am still figuring things out. A number of your ideas I agree with in general, but I hesitate to make many promises right now. As things get beyond my first, simplest cases -- currently, all functions accept types passed by value, and return |
Also, a separate question: namespaces. Currently my
The use of Has any previous work on modules or packages nailed down a syntax I could hook into? |
Not really, so you can "modularize" the plugins. However, I'd say let's keep the namespacing/modules syntax in sync so we don't get into a conflict later on. At the least we'd need to support user-defined namespaces/modules which can be imported in the same manner. And are you going to support importing just one function inside a module? |
Unfortunately the particular design used in Rhai depends heavily on For example, you won't even know if a This is the way Rhai handles custom objects that it doesn't know about. |
Usually this is enough, if we're only supporting
Meaning that a function call may not always be possible to qualify with a namespace. Of course, you can force everybody to use namespaced calls instead and disallow method-style, but that would be very un-Rhai... For a function like Now, imagine somebody writes a new plugin handling yet another data type. He/she would obviously want to implement In your system, everything must be pre-baked-in. That is, you'll have one
The |
I am currently prototyping the simplest answer to that question: a module can only define receiver methods on types that it exports. When a type is imported by Rhai, all receiver methods come with it. This is sufficient for my use case, which is about writing types and functions in Rust, and easily exporting them to Rhai. But I recognize that this will not suffice cases where users want to extend the standard library, or do a more dynamic In other words, my current prototype allows this Rhai code:
But not something like this, which is what you seem to be describing:
While it would be easy to write a separate attribute to tie into such syntax, there is the problem of conflicts. Currently, Rhai plugin loads are presumed infallible. I wasn't going to write them this way at first, but everything they do to the Engine seems to be infallible. It's not clear what happens if two modules try to add a If there were two calls to |
In addition, let me go back to The function My off-the-cuff answer would be: there is a global
The
Code achieving that is TBD. |
I believe the second one overwrites the first right now... |
Regarding For a new type, you need to define Not only that, you'd also want to override Not the best design, I admit, but that's how it is originally structured. Any ideas on your side to really streamline/automate this would be appreciated! |
Can we just do:
if |
The only reason I wrote that was thinking about Rust's They wanted you to opt-in to procedural macros, because the macros would show up in the global namespace. That might cause conflicts you didn't want. In this case, the conflicts would be between modules, if "traits" were always imported:
The |
Good point here, about potential name conflicts. In that case, even if we force the user to We'd probably need to do something similar to JS:
or in case of conflicts:
And if we have this, might as well open this up to all other imports:
@jhwgh1968 I've kept the Now you can program your macros to the modules. Except there is no way to do the Your idea of mapping an |
The My design instinct tells me:
Does that help? I know this leaves unanswered questions, such as "how do I know this trait is for another type, rather than my type?" but I am deferring those until you can answer a higher priority question below. As for the modules implementation, it took me a while to get my head around what I need to do for plugins, but I think I can make progress. In particular, I hope to commit a basic plugins implementation soon, which can create new However, I'm only committing it because I am a big fan of the old saying, "do it first, do it well second." If I were to start on the procedural macro from that PR, it would fail to achieve some of the benefits, and might make some of the features we discussed above more difficult to implement. Everything I wrote up there relied on my low-overhead lookup strategy: resolve the namespace to a plugin ahead of time, but defer resolving namespace members until a script accesses them. That is what the macro would generate. Your current modules implementation is still incompatible with that approach. A plugin can now enable the I think my approach would require writing a In the process of writing this post, I now realize I need to separate these two related ideas in my head:
How would you rate those, @schungx? Is one worthwhile without the other? I remember you expressed praise for the procedural macro idea, but it might have just been item 1 you were thinking of. |
Wise words indeed! That's how I define "hacking".
I don't think a custom module resolver can help in this case, because it seems like you'd want to lazy-load functions and variables, deferring until the first time they're used. Correct? That means that, when the module is attached to an Engine, it doesn't contain anything. However, currently in the interest of speed, I hash up all the functions and variables in the entire tree once a module is loaded. Doing this has the great advantage of making module-qualified function calls and variable access as fast as normal. However, this also means that you'd need to have all of them ready in the very beginning. Sort of a catch-22...
If this is lazy-loading, I think that's a wonderful idea. Let's not lose it just because of the current architecture. Let's think of a way to have our cake and eat it.
Well, I must admit I was thinking of No. 1. I hadn't really thought about No. 2 as being possible at all, until you raised it now. If it is at all possible, then No. 2 is clearly the better solution. That would make loading up Engines extremely fast because none of the built-in functions need to be loaded until they are needed! However, I would really like to know more about what you're delaying. Which workload is put off until needed? Because I think you'll need a large lookup table just to know the names of all the possible functions to match for, and their parameter types... so essentially you're not loading anything less than what's done currently. You're essentially only saving a boxed function pointer, which is one allocation.
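The lazy-loading idea being debated can be sketched with std's `OnceLock`: attaching the module costs nothing, and the full function table is built only on first access. This is a toy stand-in (a `HashMap` of plain function pointers), not the real Rhai module type; `LazyModule` and its contents are invented for illustration.

```rust
use std::collections::HashMap;
use std::sync::OnceLock;

type NativeFn = fn(i64) -> i64;

// A module whose function table is built on first access, so creating or
// attaching it up front is essentially free.
struct LazyModule {
    table: OnceLock<HashMap<&'static str, NativeFn>>,
}

impl LazyModule {
    const fn new() -> Self {
        Self { table: OnceLock::new() }
    }

    fn get(&self, name: &str) -> Option<NativeFn> {
        self.table
            .get_or_init(|| {
                // All registration work is deferred to this point.
                let mut m: HashMap<&'static str, NativeFn> = HashMap::new();
                m.insert("double", |x| x * 2);
                m.insert("square", |x| x * x);
                m
            })
            .get(name)
            .copied()
    }
}
```

The objection raised above still applies to this sketch: the closure passed to `get_or_init` must eventually know every name and signature, so the only thing truly deferred is building and boxing the entries.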
Also, as a general comment since it has now scrolled off the page: this to-do list I am keeping up-to-date. If you find it useful, perhaps clean up the original issue text, and link to it? |
Today, I was going to work on supporting nested submodules, which would require significant refactoring. After that, I would do the handling of names that are not valid Rust idents. When I just looked at the However, if you want me to open a PR for it, please either fix the CI or delete the commits that broke it. |
Yes, I would pick fixing the naming as the highest priority, because that's preventing a large amount of work from being moved to plugins. Sub-modules are nice to have, but the same effect can be achieved simply by splitting into separate modules. However, I can say making feature gates
My commits are just an experiment to see if the idea works and whether it simplifies writing libraries. I wasn't even sure if it is implemented correctly. Feel free to overwrite it with something that is more consistent. Nevertheless, I'll fix the CI tests so at least it is a single unit. I wonder whether I can just open a "PR" for my own repo instead of always having to commit directly into it so I won't muck things up... maybe the right thing to do is to open up a new branch?
OK, I've tried fixing the CI, but I hit on a problem with Some of the outputs don't match simply by having different spans pointing to the error. For example:
From what I can see, the errors are identical except for the position of the arrows. I am not sure if this is a difference with the compilers we are using. I hesitate to fix them just to find out that it is failing for you... EDIT: So, OK, yes it is passing on Linux. So probably a Windows/Linux thing. The CI is now passing. |
I think PRs against your own repo are the best way to go. It would not only run CI, but let me review it. I will update you on my progress below, which will include comments that I would have put in a code review.
Yes, that is another small difference in macro behavior between the current I'm not 100% sure what causes it, but I wrote my tests to enforce the more "readable" behavior of nightly, where the entire parameter type is highlighted. That is the token span I'm really interested in making sure is preserved, when I am coding away. However, this error does not affect CI. I specifically set up the Codegen Build job to only use nightly so that it would match the expected output. I was planning to add a "trybuild" feature at some point, which would be on by default, and you could disable if you were going to do codegen library development on stable. I guess that day is here. Now, an update on the change in It turns out, your code didn't need any changes. When I open my PR, you will find your commit cherry-picked as-is. The main issue I had with your change was that the tests were not updated for the behavior change you made. At a minimum:
I have already fixed this in my soon-to-be PR. But then, I hit a second issue. This demonstrates why macro development is so test heavy. When I cherry-picked your second commit which updated Rhai to use All of them were along the same lines: a function called in a script didn't do what it was supposed to, didn't return what it was supposed to, or didn't return the type it was supposed to. Based on that, I thought: "Hmm... perhaps when @schungx did those renames, he created some name conflicts in his modules by accident, and functions overwrote each other. This tells me that I should implement rename collision detection, and throw an error on that." So that's what I am working on now. (And yes, I do remember that I have to check both the name and all the parameters before I call something a "collision".) |
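The collision rule described above ("check both the name and all the parameters") can be sketched like this. The `(name, parameter TypeIds)` representation and the `find_collision` helper are invented for illustration; the real macro would work on syntax trees, not runtime `TypeId`s.

```rust
use std::any::TypeId;
use std::collections::HashSet;

// Two exports clash only when both the exported name AND the full
// parameter-type list are identical; same name with different parameter
// types is legitimate overloading.
fn find_collision(exports: &[(&str, Vec<TypeId>)]) -> Option<String> {
    let mut seen = HashSet::new();
    for (name, params) in exports {
        if !seen.insert((*name, params.clone())) {
            return Some(format!("duplicate export: {}", name));
        }
    }
    None
}
```

Raising an error at expansion time, rather than silently letting the later registration overwrite the earlier one, is the whole point of the check.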
Do you know how to do that? I searched a bit and didn't come up with anything. Seems like I need to create a branch...
This is strange because the tests pass on my side... I suspect when my changes are merged with your existing changes, some logic gets confused in the middle... I'd advise just junking all my changes and then reapplying them on top of your existing version. However, keep my changes to the |
Oh! Yeah, sorry, I misread your question. My suggestion is, create a
I am planning to do that. However, that is what caused my issues. ... Actually, I just realized I don't have the very last commits that made the CI green. One of them made changes to the string package. I wonder if that will fix the issues I'm seeing? Even if it does, though, I still think it suggests rename duplicate detection is a good idea. I will open my PR once I am done with that. |
Yes I fixed a bug in the Strings package. It wasn't due to naming conflict, but I put in the wrong operator. |
FYI... I pushed some commits with a bug fix for method-call detection of plugins. It was not recognizing method calls from plugins correctly. |
I was thinking you would actually open a PR. That is also so I could review it, and point out things like: you should probably write a test for X case.
Thanks for catching that. I never would have! I have opened a PR with duplicate detection, and some test fixes. |
Yes, I plan to open PR's from I tried creating a PR to your fork, but it generates a godzillion commits dating all the way back to history, so I'm not sure if that's a good idea... |
To clarify, I really meant: open a pull request to merge from You are correct, my repo branches are out of date, because I don't keep my copy in very good shape except for rebasing purposes.
In fact, I would ask you to open a PR and let me review any macro change, even if it is small or fixing a bug. (Your most recent bug fix regarding methods was fine, because it did not touch files in Even if I don't have much to say, I would still like a review for two things:
At the moment, both of these are currently missing for the I have it on my personal list to back-fill, after I get submodules (including submodule attributes) finished. I would like that list to not get any longer in the mean time. |
I updated the current state of the to-do list, and it got me thinking about the eventual release. Since we are making good progress, my understanding is that this will ship for Rhai Even if the main parts of Rhai use it, I imagine end users will find new and clever ways to break the macros. They will do things that are wrong that "experts" (you and I) would not do, and cause it to While I plan to try a bunch of experiments for negative testing, I know it won't catch even half of the problems. Every time something is called "foolproof", the universe sends a bigger fool to disprove it. 😄 Given that, how would you suggest we release this? I can think of several ideas:
What do you think, @schungx? |
I would suggest release Right now the macros are restricted to implementing the standard library (and some portions are still not ported to macros yet - I'm waiting for support of feature gates in modules). As long as all the tests pass, that shouldn't affect stuff much. The section in the documentation on how to create custom packages needs to be scrapped and rewritten. It should be a new, complete section on macros and how to use them. I can start on that, if you don't think the public API is going to change much. As it stands, it is already extremely useful and quite clean.
I think as it stands right now, plugins are probably ready to merge into What do you think?
I was thinking you might be able to clean things up more after I merge submodules and EDIT: I'm presuming "your master", not "the official master" where anyone else could use it. I think it is currently for "internal use only" at the moment. |
I am confident the API I currently have will expand not much, and what does will be backward compatible. Getting started on that might not be a bad idea. |
Well, sub-modules are not as useful without feature gates. The main attraction of sub-modules is to put them under feature gates to include/exclude entire sections of functions at once.
Yes, I mean |
BTW, I have run some benchmarks on We might want to start fine-tuning the macros to generate code that has less regression. The regression might be coming from the fact that all function calls go through an additional level of function call (i.e. the |
Yes, I am working on it now.
I am a little surprised it's that high, but I have not looked at any assembly in many, many revisions.
Avoiding that runtime dispatch would require speculative devirtualization, which is being worked on for LLVM, but is not implemented yet to my knowledge. When I was designing this, my intention was to make that the one runtime call in the entire code path. Given the amount of previous indirection in packages and modules when I looked at it, I presumed this would be similar in performance, if not faster. That is why I'm surprised to see that the penalty is so high. |
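The "one runtime call in the entire code path" being discussed is a virtual call through a trait object, which current compilers cannot devirtualize. This toy sketch shows the shape of that indirection; `PluginFunction`, `AddFn`, and `invoke` are stand-ins, not the real Rhai trait (whose `call` takes `&mut [&mut Dynamic]`).

```rust
// Stand-in for the plugin trait: every scripted call funnels through this
// one virtual method.
trait PluginFunction {
    fn call(&self, args: &mut [i64]) -> i64;
}

struct AddFn;
impl PluginFunction for AddFn {
    fn call(&self, args: &mut [i64]) -> i64 {
        args[0] + args[1]
    }
}

// Dynamic dispatch: one indirect jump per scripted call, opaque to inlining.
fn invoke(f: &dyn PluginFunction, args: &mut [i64]) -> i64 {
    f.call(args)
}
```

If the benchmark regression is real, the suspects are this indirect jump plus the loss of inlining across it, rather than the work inside the function bodies.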
Could you also try adding debug symbols to your benchmark test binary and running it with That would tell us if there are instruction cache misses (and if they are tied to the dynamic lookup), or if there are data fetch misses (caused by data structures not fitting well or cold/random accesses). |
I'm still investigating the performance angle. It seems that many of the benchmarks simply do not even touch plugin functions. Therefore the regression probably has nothing to do with plugins. Could be my machine running some other tasks in the background (virus scanners can be sneaky and start scanning when you don't notice). No need to be alarmed right now... The differences are not significant enough to be over the error bars. However, they do show a systematic bias which, as I mentioned, may simply be due to my machine.
As I'm working through Currently, in the string package, there is this code: #[cfg(not(feature = "no_object"))]
#[export_fn]
pub fn format_map(x: &mut Map) -> ImmutableString {
format!("#{:?}", x).into()
} Believe it or not, that shouldn't work. When I run #[cfg(not(feature = "no_object"))]
pub fn format_map(x: &mut Map) -> ImmutableString {
{
let res = /* format macro expansion */;
res
}
.into()
}
#[allow(unused)]
pub mod rhai_fn_format_map {
use super::*;
struct Token();
impl PluginFunction for Token {
fn call(
&self,
args: &mut [&mut Dynamic],
pos: Position,
) -> Result<Dynamic, Box<EvalAltResult>> {
/* debug assert macro expansion */
let arg0: &mut _ = &mut args[0usize].write_lock::<Map>().unwrap();
Ok(Dynamic::from(format_map(arg0)))
}
/* rest of the macro output ... */
} If the Yet somehow, when the Since I don't trust this magic, I am going to forbid mixing This is the sole place in your code you did that, so I have no problem updating this to the new way to use |
Ah. I was under the impression you changed enough core parts that users couldn't avoid them. In that case, I'll ignore it for now. |
Well, as far as I can tell, feature gates and Try expanding with EDIT: I just tested it and there is no EDIT 2: If the feature gate is true, then it errors out because the function is still missing. Actually, in the strings package, I put it inside a module and put the feature gate on the module. That's why it worked. #[cfg(not(feature = "no_object"))]
mod format_map {
use super::*;
#[inline]
#[export_fn]
pub fn format_map(x: &mut Map) -> ImmutableString {
format!("#{:?}", x).into()
}
} |
I made the latter change as part of my PR. I'm glad I put that rule in! |
@jhwgh1968 there has been an open issue regarding the fact that creating an Engine is expensive: rhaiscript#142
It looks like registering all those core functions (such as basic arithmetic) each time during Engine creation is causing it to be really slow.
The solution, obviously, is to run it only once (since they never change; all scripts will need them). One solution is to use lazy_static to make this package of functions a global constant, but the downside is that an additional crate must be pulled in: lazy_static.
An alternate solution is to really start breaking down the built-ins into separate packages that are immutable, so the packages themselves can be created only once, stored around, and then passed to all new Engines created.
Something of that sort:
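A sketch of the "build the core package once, share it with every Engine" idea, using std's `OnceLock` instead of the lazy_static crate (so no extra dependency). `CorePackage` and `core_package` are illustrative stand-ins, not Rhai's actual package type.

```rust
use std::sync::{Arc, OnceLock};

// Stand-in for an immutable, pre-built package of core functions.
struct CorePackage {
    fn_count: usize, // imagine the hashed function table here
}

// The expensive registration runs exactly once; every Engine afterwards
// just clones a cheap Arc handle to the shared, immutable package.
fn core_package() -> Arc<CorePackage> {
    static CORE: OnceLock<Arc<CorePackage>> = OnceLock::new();
    CORE.get_or_init(|| Arc::new(CorePackage { fn_count: 100 })).clone()
}
```

Because the package is immutable after construction, sharing it behind an `Arc` is safe across threads and makes each subsequent Engine creation cost one reference-count bump instead of a full re-registration.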