Slow compile and cargo check #783
Really quite slow, very strange. 😂
AFAIK rustc can be very slow in some cases, but I don't know exactly why.
Thanks, I will try to narrow down which endpoint/feature makes it slow.
On my computer, with rustc 1.57.0 (f1edd0429 2021-11-29): (collapsed blocks: timings for the first run, a second run, and a run after changing the code)
That doesn't seem to be slow?
Thanks for looking into it. I just tried cargo check again and I get similar numbers on my machine (not sure if it takes longer if there are more errors). However, I noticed that rust-analyzer in VSCode seems to take longer than just running `cargo check` directly. ~6s is quite a long time for a type check though... Developing graphql becomes a bit of a pain with these long check times :( I tried some profiling as described here: https://fasterthanli.me/articles/why-is-my-rust-build-so-slow
Yes,
Do you think it could help to split an endpoint over multiple mods/crates and then use `#[derive(MergedObject, Default)]` to put them all together? (Is it actually possible to use an Object from a different crate?)
I don't think it makes much sense to do this; in fact, I never bothered with this problem, since usually I only check it once with `cargo check`.
I think this problem is caused by some bug in the compiler, not because of too much code generated by the macro. |
I really hope this is the case :-) Do you have any specific bug in mind? A small update: I just added a small endpoint and timed things after fixing all syntax errors
I've been trying to figure out which code block causes the check to be slow, by removing code.
This should reduce compilation time; try to split it.
I tried to split our big Query struct into two structs and then merge them back like in https://async-graphql.github.io/async-graphql/en/merging_objects.html, but this didn't help, i.e. editing the smaller new Query struct resulted in the same long check times. I managed to reproduce the long check times. I haven't tried to split our graphql crate into multiple crates yet.
This definitely does not reduce check time. 😂
But this is possible; splitting into multiple crates can make full use of a multi-core CPU.
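For anyone wanting to try the multi-crate split, a minimal sketch of a Cargo workspace layout (the crate names here are illustrative, not the actual repo structure):

```toml
# Illustrative only: each sub-crate holds one slice of the schema, so editing
# one slice only re-checks that crate (plus crates depending on it), and
# independent crates can be compiled in parallel on separate cores.
[workspace]
members = [
    "graphql_core",      # shared types used by the other crates
    "graphql_invoice",   # one slice of the Query/Mutation objects
    "graphql_stocktake", # another slice
]
```

The per-crate Query fragments can then be merged back into a single root with `#[derive(MergedObject)]`, as described in the merging-objects chapter of the async-graphql book.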
I also have been having very slow "cargo check" times in my Rust project: https://github.com/debate-map/app/tree/master/Packages/app-server-rs/src/db

In Rust 1.58, merely changing the string in main.rs already took a long time to re-check. In Rust 1.59 (just updated to see if it would help), this same thing now takes 17 seconds! (I cannot wait 17 seconds every time I change one letter just to see syntax/type errors!)

I eventually narrowed down one of the slowdowns to being due to the `SimpleObject` derive macro. Specifically, given these structs: (collapsed section: "Code for database structs")
If I make no changes except removing the `SimpleObject` derives, `cargo check` completes about 4.5 seconds faster.

4.5 seconds doesn't sound that bad? Well, it would be acceptable if the 4.5 seconds occurred only when running an actual build. Also keep in mind that the 4.5 seconds is only one part of the overhead; there is more overhead elsewhere as well.

If more details are desired, here is a text log I wrote detailing my cargo-check timing process: (collapsed section: "Log of slowdown investigation")
I tried updating from Rust 1.59 to Rust 1.60.0-beta.3 (as per a comment on the Discord saying 1.59 had incremental compilation broken), but the 4.5-second cargo-check cost from the `SimpleObject` derives remained at first. However, in a terminal I then ran `cargo check` directly:
So on Rust 1.60, it seems the overhead of using `SimpleObject` is much lower. When I revert all the stripping-down I did, the overall time for `cargo check` comes to about 6s.

For me, that still feels quite slow; is 6s considered acceptable for cargo-check for even smallish programs like this? (I'm thinking of TypeScript for example, where syntax/type checking generally shows up in the IDE in under 1 second, even in much larger codebases.) Anyway, I will see if I can speed it up somehow...
Haha, I think the only way is to upgrade the computer. 😂
@Venryx we don't use SimpleObject; maybe it would be worth trying to use something else there.

Small update on splitting graphql into smaller crates: it actually works very well, and developing graphql becomes fun again (even though you still have to wait some seconds for cargo check, it is much better now). I wouldn't call the issue fixed though, since it leads to some unnecessary overhead, like managing multiple crates and keeping their Cargo.toml files in sync. On the plus side, our graphql code is slightly better organized now, since cyclic dependencies are not possible between crates.
Never give up! lol. I found a way to reduce that 6.1s down to 3.5s. The trick? I'll just paste my comments from Discord:

The proof of concept works! By caching the tokens that the macros output, repeated checks skip the expensive expansion step. The approach I used is also quite generic, so it should be usable for a fair number of other macros, not just async-graphql's. As a user of the macro, basically all you need to do is wrap your macro-using code in the caching wrapper macro.
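The caching idea can be sketched in ordinary Rust (this is only an illustration of the concept, not the actual proof-of-concept's code; `ExpansionCache` and `expand_cached` are hypothetical names): hash the macro's input tokens, and only run the expensive expansion when that hash has not been seen before.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Hypothetical sketch of output-token caching: the cache is keyed by a hash
// of the macro input, so unchanged input skips the expensive expansion.
pub struct ExpansionCache {
    cache: HashMap<u64, String>,
    pub expansions_run: usize,
}

impl ExpansionCache {
    pub fn new() -> Self {
        ExpansionCache { cache: HashMap::new(), expansions_run: 0 }
    }

    fn hash_input(input: &str) -> u64 {
        let mut hasher = DefaultHasher::new();
        input.hash(&mut hasher);
        hasher.finish()
    }

    // `expand` stands in for the expensive proc-macro expansion step.
    pub fn expand_cached(&mut self, input: &str, expand: impl Fn(&str) -> String) -> String {
        let key = Self::hash_input(input);
        if let Some(cached) = self.cache.get(&key) {
            return cached.clone(); // cache hit: reuse the previous expansion
        }
        self.expansions_run += 1;
        let output = expand(input);
        self.cache.insert(key, output.clone());
        output
    }
}

fn main() {
    let mut cache = ExpansionCache::new();
    let slow_expand = |input: &str| format!("expanded({input})");
    let first = cache.expand_cached("struct CommandRun;", slow_expand);
    let second = cache.expand_cached("struct CommandRun;", slow_expand);
    assert_eq!(first, second);
    assert_eq!(cache.expansions_run, 1); // second call was served from cache
    println!("expansions run: {}", cache.expansions_run);
}
```

In a real proc-macro the cache would have to persist across compiler invocations (e.g. on disk), since each rustc run starts as a fresh process.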
The approach I used to expand the macros inside is somewhat "clumsy": it spawns a new process that runs the expansion and captures its output. After testing the bare-bones proof-of-concept, I then tried using it in my actual app-server, which was having the slow check times described earlier. And I think I should be able to bring that number down even more, if I can improve the macro to handle more situations; right now it only works for some of the macros. Anyway, I'm pretty pleased with the results so far, and will see if I can get the proof-of-concept polished up enough to be worth using long-term.
I just realized something: the execution of the async-graphql macros isn't actually needed during cargo check. It is needed during your actual build, because your GraphQL endpoints need that metadata; but it seems like it's not something even needed for the IDE type-checking stage. (Alternately, maybe it would need the same generated "shell", but could have the details of the GraphQL machinery underneath it left ungenerated.)

Anyway, what does this imply? Well, I could modify my wrapper macro to say: "If the current compilation is merely cargo-checking, then completely ignore all of async-graphql's macros, because we don't need the code they generate at this stage." That should improve performance even further!

Granted, you may lose some functionality (eg. compiler verification that the return types of the query/mutation/subscription methods are correct). But if the latency reductions for cargo check are large enough, the trade-off may be worth it. Anyway, I will try experimenting with this follow-up idea tomorrow.
It works! My crazy idea to have an "outer layer" macro that strips out all the async-graphql macros during cargo-check works.

What drawbacks does it have? Not much! Basically, it just means that when you hover over the wrapped items in your IDE, you see the stripped-down versions. That's really not a big deal though, because there are only six macros that get stripped (all of them async-graphql's).

How do you use it? Pretty straightforward: any structs where you use the async-graphql macros get wrapped in `wrap_async_graphql!`:

```rust
wrap_async_graphql!{
    #[derive(SimpleObject, Clone, Serialize, Deserialize)]
    pub struct CommandRun {
        field1: String,
        #[graphql(name = "field99")]
        field2: i64,
    }
}
```

There is one other change you must make:

```rust
// replace this
Schema::build(QueryRoot::default(), MutationRoot::default(), SubscriptionRoot::default())
// with this
wrap_agql_schema_build!{
    Schema::build(QueryRoot::default(), MutationRoot::default(), SubscriptionRoot::default())
}
```

This `wrap_agql_schema_build!` macro, during cargo-check, swaps the wrapped call for:

```rust
Schema::build(EmptyMutation, EmptyMutation, EmptySubscription)
```

(If a new macro seems overkill for this, it may be possible to use a more standard replacement approach, like the built-in cfg! macro or something...)

Anyway, while the output-token-caching approach in my previous post is more flexible, this "dumber" macro-stripping is probably better for my case for now. While I will likely work more on the output-token-caching approach eventually, for now this solves my concerns well, and lets me resume work on my actual project without frustration.

Anyway, I hope to eventually make a separate crate for this system, so that other users of async-graphql can benefit. Until then, the initial source-code for it is in my repo (linked earlier in this thread). Other than the instructions already mentioned, the only other thing needed is to have something that tells the macro when it is running in the rust-analyzer execution-context.
(I might be able to remove this requirement later on, if I figure out a way to detect the rust-analyzer execution-context automatically.)
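The stripping idea can be sketched with a plain macro_rules! wrapper. This is illustrative only: the `stub_for_check` cfg flag and the macro name are hypothetical, and the real wrapper is a proc-macro that re-emits the wrapped items with the expensive macros removed, rather than choosing between two fixed expansions.

```rust
// Illustrative sketch only: `stub_for_check` is a hypothetical cfg flag that
// a build script would set only when rust-analyzer / `cargo check` drives
// the compilation.

// Normal builds: pass the wrapped items through unchanged.
#[cfg(not(stub_for_check))]
macro_rules! wrap_slow_macros_sketch {
    ($($item:tt)*) => { $($item)* };
}

// Check-time builds: a real implementation would emit stripped-down
// stand-ins for the wrapped items here, instead of emitting nothing.
#[cfg(stub_for_check)]
macro_rules! wrap_slow_macros_sketch {
    ($($item:tt)*) => {};
}

wrap_slow_macros_sketch! {
    #[derive(Debug, Clone)]
    pub struct CommandRun {
        pub field1: String,
        pub field2: i64,
    }
}

fn main() {
    // With the cfg flag unset, the wrapped struct exists exactly as written.
    let run = CommandRun { field1: String::from("ok"), field2: 99 };
    println!("{:?}", run);
}
```

The design point is that the cfg decision happens at macro-definition time, so ordinary builds pay zero runtime or codegen cost for the wrapper.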
👏🏻 This solution looks interesting, looking forward to it. 😁 @Venryx
@clemens-msupply Rust-analyzer has a function that expands a given macro into its final code-form: click on a macro-usage in your source, then run the "Expand macro recursively" command. Doing so for the `SimpleObject` derive shows how much code it generates.

Example source: (collapsed section: "Expanded code of SimpleObject macro")

I maybe wouldn't expect the expansion above to take quite as long as it does, but it is a fair amount of code being generated.
So today I thought: could I stub out serde's Serialize/Deserialize derives during cargo-check as well?
Turns out, you absolutely can. I now have a combined wrapper macro. It strips out the regular serde derives during cargo-check and replaces them with stub impls:

```rust
impl serde::Serialize for #struct_name {
    fn serialize<__S>(&self, __serializer: __S) -> serde::__private::Result<__S::Ok, __S::Error> where __S: serde::Serializer {
        Err(serde::ser::Error::custom("This is a placeholder generated by the Serialize_Stub macro, for quick resolution during cargo-check. You should not be seeing this at runtime."))
    }
}
impl<'de> serde::Deserialize<'de> for #struct_name {
    fn deserialize<__D>(__deserializer: __D) -> serde::__private::Result<Self, __D::Error> where __D: serde::Deserializer<'de> {
        Err(serde::de::Error::custom("This is a placeholder generated by the Deserialize_Stub macro, for quick resolution during cargo-check. You should not be seeing this at runtime."))
    }
}
```

With the optimization above, my cargo-check times dropped further. I'm quite relieved! This was my biggest concern with Rust (slow macros for important crates like serde and async-graphql).
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity. |
…or gets very close to expiring), the jwt is destroyed, and the user is prompted to sign-in again.
* MS when user signs-in using google provider, the popup window automatically closes. (according to a stackoverflow comment, the approach used may not work on modern Safari though; not end of world, as user can close manually, but good to resolve in the future if possible)
* Removed sign-in panel for js backend. (will probably remove app-server-js in next commit; in any case, soon)

Note: To see how much porting the commands to rust increased the app-server-rs compile time, I did a test.

CheckAction: Changing string in main.rs, time to recompile (ie. the time listed for the main "cargo build" step)
Results:
* WithOnly2CommandsAddedToRust: Finished dev [unoptimized + debuginfo] target(s) in 12.43s <line-break, followed by docker line...> [done: 14.423s]
* WithAllCommandsAddedToRust: Finished dev [unoptimized + debuginfo] target(s) in 22.71s <line-break, followed by docker line...> [done: 25.079s]

So, adding the commands to app-server-rs did increase its "incremental compile time" by ~10 seconds. Not wonderful of course, but okay:
* The `cargo check` time is what's most important, and it's still fast.
* I'm pretty sure most of the slowdown is from the async-graphql macros (as mentioned at: async-graphql/async-graphql#783), and presumably Rust will someday have a way for macros to be incrementally compiled as well. (if not, there may be a way to cache results, even in the docker builds -- along the lines of my attempt in the thread above, except done more robustly)
* There are some workarounds to try to manage slow compilation. (eg. splitting up the project into multiple crates; granted, that probably won't be so helpful in this case, since async-graphql needs to compile "after" most code, and most likely needs to remain "in a single crate")
Dear brilliant @Venryx, are you still using those macros? Do you have a better solution?
Indeed, I am still using the macros. I tried with them off again briefly, and can confirm that they still have a significant impact on making the "cargo check" times more acceptable. (~4.2s with my `wrap_slow_macros` helper enabled)

That said, the release build times for my project remain very high (since my "trick" only works for cargo check, not for actual builds, since the macros need to all run there). You can see some of my (very slow) benchmarking here: debate-map/app#285 (comment)

That thread also shows me trying a (non-open-source atm) fork of rust that someone made that caches the result of proc-macro expansions. Oddly though, that did not yield a meaningful improvement in the release build times for my project. (It did give a slight improvement on cargo-check times, but interestingly, seemingly not as much as my `wrap_slow_macros` helper achieves, if we extrapolate based on relative time-reductions between the two [did not directly test this particular matchup, and the timings I do have are on different machines].)

So I'm not sure what the path forward is for getting faster release builds. The thread above did prove that it's possible to get much faster rebuilds than I am currently getting (the build server I was connecting to while testing the modified rust was taking only 1m20s for release builds rather than my ~5m, which is odd since its cpu is supposedly only slightly faster than mine), so maybe I can get the release builds closer to that 1m20s mark.
If you're on Windows, can you try Linux on Windows (WSL) and the mold linker?
I know you saw this already, but for others reading the thread, my timings on WSL2 (with the mold linker) can be seen here: debate-map/app#285 (comment)

Brief summary: within WSL2, my timings were ~3m for a release build, which is better than the ~5m on Windows itself (maybe because of NTFS slowness), but still a far cry from the 1m20s I was seeing on the build server.

I've bought a new development laptop which should arrive soon, which I plan to set up with dual-boot for Windows and Linux. So it will be interesting to see what timings I get there. (eg. doing a release build on the exact same hardware and no VM, but just a different OS and filesystem [NTFS vs ext4])

EDIT: I found that using Cranelift (and incremental compilation), I was able to drastically improve the speed of my release compiles (3m24s+ down to ~20s); of course, others' projects may not have the same bottlenecks. More info on it (along with a compile-times table) in my comment here: #1287 (comment)
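For anyone wanting to try the Cranelift route: as a hedged sketch (the exact keys below come from the rustc_codegen_cranelift documentation, require a nightly toolchain with the cranelift component installed, and may have changed since), the configuration looks roughly like this:

```toml
# .cargo/config.toml -- opt in to the unstable codegen-backend feature
[unstable]
codegen-backend = true

# Cargo.toml -- use Cranelift for this profile (debug-quality codegen,
# but much faster compiles), paired with incremental compilation
[profile.release]
codegen-backend = "cranelift"
incremental = true
```

Verify against the current rustc_codegen_cranelift docs before relying on this, since the feature is unstable.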
We have a relatively small graphql schema, but it takes a long time for `cargo check` to complete (~20s). Is this a common problem, or is there any known issue that can cause the long check times? Happy to provide more details if needed.

Sources are at: https://github.com/openmsupply/remote-server/tree/develop/graphql/src/schema