[NEXT-1143] Dev mode slow compilation #48748
Same here, and in a Docker env it's even worse; it seems like it's processing the same files over and over without caching them. |
Same for me, also in the dev env; navigating to different pages via the Link component is pretty slow. |
+1, it's the same here: hitting the page the first time seems fine, but routing via links gets stuck. |
The latest canary version has better cold build times; still slow, like 2-5 seconds of waiting (in a Docker env), but much better. The version I'm talking about is 13.3.2-canary.6. |
Hey, @jeengbe there have been some patch updates (13.3.1 -> 13.3.4) did it improve for you? |
Hi @denu5, unfortunately, I can't report any real performance changes since I opened this issue. You might want to check out the above linked issue in the TypeScript repo though - might be related. |
As @jeengbe mentioned, there is no performance improvement. There is also a lot of I/O and I don't know why; one request produces pretty much 1-2 GB of I/O. And it is very slow. |
Unfortunately, I can't confirm this for my case |
That's pretty good. In my case there is a lot of I/O; maybe it's because I'm using material-ui, but I think it's too much even so. |
Possibly, it would align with what your trace shows: #48407 (comment) |
I see that slow route changes in dev mode are showing a '[Fast Refresh] rebuilding' message in the browser console. Sometimes it performs a full page reload when changing routes even if no files have been edited. |
It's slowing down the development! |
Having the same issue here; in the Docker environment it's come to the point where it's almost unusable, and sometimes I even have to do a hard reload after waiting too long for navigation. This is the case both with the component from next/navigation and with router.push (the useRouter hook imported from next/navigation). We're using Next.js 13.4.2. |
Same here, it is almost unusable in Docker environments, but outside Docker it is also very slow; something is not working right. This is painfully slow. |
Yeah same for me. I used to remote develop inside our k8s cluster but dev --turbo is super slow inside a container and causes my health check endpoint to sigkill it regularly. The whole app router is super slow when containerized in Dev mode. It works perfectly fine when I run both on my local machine and connect it via reverse proxy. This way it's faster than the old setup (which was not significantly faster before) and takes advantage of preloading pages via next/link. I see inconsistencies in caching too where it's a mix of instant navigation or long builds (around 3.5k modules for some things) around 2-10 sec. Also there is this weird thing happening that a page compiles just fine and then later it grinds to a halt being stuck in waiting for compiling forever until the pod is crashed. |
I love next, but this is a complete show stopper. Sometimes it takes 10+ seconds outside of docker for me on a Mac M2 to navigate one page. This is insane. |
Yep, even worse: I sometimes get 50 seconds on a simple page; that's because it is also building other things related to it in parallel, I guess. And it's not just Docker: I just ran a test outside Docker and the timing is exactly the same, no difference. It's getting slower and slower. |
Btw webpack lazy building cold is faster than turbopack 🙂 by far |
Yes! I'm surprised this is not a more prevalent issue atm; unless turbo will somehow fix all of this in 13.5 and they're waiting to address it. What configs do you have for the faster webpack builds? I've tried quite a bit and can't lower my build time by much. I need a temporary fix for this ASAP :( |
A month later no updates on this? Makes development on appDir absolutely impossible. @timneutkens ? Linked a bunch of related issues on this: |
I confirm that next app dir on dev mode and dynamic routing are very very slow on docker now |
Changes in the past week

I've been investigating this over the past week. Made a bunch of changes, some make a small impact, some make a large impact. For example, when your application has both `pages` and `app` and you're only working on `app`, it will no longer compile the runtime for `pages` (note: this shifts the compilation of the runtime to when you first open a page). You can try them using `npm install next@canary`.

Help Investigate

In order to help me investigate this I'll ideally need an application that can be run; if you can't provide that (I understand if you can't), please provide the `.next/trace` file. If possible follow these steps, which would give me the best picture to investigate:

- `npm install next@canary` (use the package manager you're using) -- we want to make sure you're using the very latest version of Next.js, which includes the fixes mentioned earlier.
- `rm -rf .next`
- Run dev with the `NEXT_CPU_PROF=1` and `NEXT_TURBOPACK_TRACING=1` environment variables set (regardless of whether you're using Turbopack; the latter only has an effect when you do), e.g. `NEXT_TURBOPACK_TRACING=1 NEXT_CPU_PROF=1 npm run dev`.
- Stop the dev server (`ctrl+c`).
- Upload the `.next/trace` file to https://gist.github.com -- please don't run trace-to-tree yourself, as I use some other tools (e.g. Jaeger) that require the actual trace file.
- Upload `.next/trace.log` as well; if it's too large for GitHub gists you can upload it to Google Drive or Dropbox and share it through that.
- Upload your `next.config.js` (if you have one) to https://gist.github.com.

Known application-side slowdowns

To collect things I've seen before that cause slow compilation, as this is often the root cause:

- Importing icon libraries like `react-icons`, material icons, etc. Most of these libraries publish barrel files with a lot of re-exports; e.g. material-ui/icons ships 5500 module re-exports, which causes all of them to be compiled. You have to add `modularizeImports` to reduce this; here's an example: #45529 (comment)
- A `content` setting that tries to read too many files (e.g. files not relevant for the application)
This and other slowdown reports are currently the top priority for our team. We'll continue optimizing Next.js with webpack where possible. |
Changed the initial post in this issue to reflect my reply above, in order to ensure people see it first when opening the issue. I'm going to close the duplicate issues reporting similar slowdowns in favor of this one. I'll need help from you all to ensure this thread doesn't spiral into "it is slow" comments that aren't actionable, e.g. without traces / reproduction / further information. Thank you 🙏 |
I'm checking this issue every day for new comments and every day I end up having to post a variant of this comment: #48748 (comment) Unfortunately I'll have to start marking these comments as off-topic and won't be able to help you as you're not providing what was requested @useEffects. |
@KarlsMaranjs Some thoughts on your trace:
Overall I'd say you'd benefit a lot from using Turbopack given that you're creating a module tree that is both wide and deep so you end up benefitting a lot from the parallelization. |
@timneutkens if possible, could you reply to my question, #48748 (comment)? It's a genuine question; I am really wondering why Vercel wanted to reinvent the wheel when the JS community already has a powerful and fast tool like Vite, which is used by most meta-frameworks. |
@timneutkens thanks for your feedback. I will work with the team to get rid of the barrel imports as much as possible. We'll do a few optimizations and then report back if it doesn't solve it. Thanks a lot for your time. |
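For anyone following along, the barrel-import mitigation discussed here is usually done via the `modularizeImports` option in `next.config.js`. A minimal sketch (the package names are examples, not taken from this thread; adjust them to whichever barrel-exporting libraries you actually import):

```javascript
/** @type {import('next').NextConfig} */
const nextConfig = {
  modularizeImports: {
    // Rewrite `import { Add } from '@mui/icons-material'` into
    // `import Add from '@mui/icons-material/Add'`, so only the
    // icons you actually use are compiled instead of the full
    // barrel file of thousands of re-exports.
    '@mui/icons-material': {
      transform: '@mui/icons-material/{{member}}',
    },
    lodash: {
      transform: 'lodash/{{member}}',
    },
  },
};

module.exports = nextConfig;
```

Next.js rewrites matching named imports at compile time using the documented `{{member}}` placeholder, so the barrel file itself never gets pulled into the module graph.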
I will try to keep it brief as I could write / talk about this for a few hours 😄

A few years ago, before Vite had much adoption, we started seeing larger and larger web applications built on top of Next.js, including enterprise adoption on teams of 100+ developers. These codebases grow to tens of thousands of custom components, on top of importing packages from npm. In short: even though webpack, which we were using at the time (and still are if you don't opt in to Turbopack), is actually quite fast, it wasn't fast enough for these ever-increasing codebase sizes.

We also saw a trend of applications in general becoming much more compilation-heavy, largely caused by the rise of component libraries / icon libraries. Today, as you can see in this thread, it's not uncommon for a super small application to end up compiling 20K modules or more because of published design systems and icon libraries being used.

The problem we saw is that even if we optimize webpack to the maximum, there is still a cap on the number of modules it can process: if you have 20,000 modules, even spending 1 millisecond per module ends up being 20 seconds if you can't parallelize the processing. On top of that we weren't running just 1 webpack compiler, we were running 3: one for server, one for browser, one for edge runtime. This causes complexity because these separate compilers have to coordinate, as there is no shared module graph.

Around the same time we also started exploring React Server Components, App Router, and overall how we'd want Next.js development to look 5-10 years from now. One of the main topics was code that can go from server->client->server->client; in short, if you're familiar, Server Actions, and specifically that Server Actions can return JSX that holds additional client components.

In order to make that work we found that having a single unified module graph that can hold server, client, and edge code in the same bundler/compiler would be very beneficial. This is something that bundlers like Parcel had been exploring for quite a while. At the time we evaluated all existing solutions and found that each of them has trade-offs. I'm not going to "throw others under the bus" as these trade-offs all make sense; they just didn't make sense for a framework like Next.js, and especially Next.js in the future (this was around ~2020 if I remember correctly).

Overall, let's talk a bit about the goals; some of these benefit you as a user, some benefit maintenance:

- Faster HMR: webpack has a performance limit on the number of modules in the module graph. Once you hit 30K modules, every code change takes at least ~1 second of overhead, regardless of whether you're making a small CSS change.
- Faster initial compile of a route: webpack with 20-30K modules would consistently take 15-30 seconds to process because it can't parallelize across CPUs.
- No breaking changes: we want to give existing applications all these improvements. As part of that there are a lot of Next.js-specific compiler features like next/font that have to be added. Each bundler also has its own behaviors / tooling; for example, even switching the CSS parser for Turbopack to Lightning CSS has proven to be an issue, as people trying out Turbopack reported behavior differences compared to webpack. Using an off-the-shelf existing bundler would mean hundreds of those small differences; in this case we were able to change how the parsing is handled to match the webpack behavior closely.
- Scales to the largest TS/JS codebases: as said above, we're seeing larger and larger codebases, and optimizing for these requires a different architecture. I think the closest comparable bundler to the one we landed on is Parcel. For small codebases you're not going to see a big difference compared to other bundlers in initial compile time / HMR time if you set them up the same way.
- Persistent caching: Turbopack has an extensive caching mechanism comparable to Facebook's Metro bundler (used in react-native and for instagram.com). It will be able to persistently cache work that was done before, so that when you reboot the development server it only has to restore the cache of your last session; this is currently being worked on. This cache also applies to production builds: subsequent builds only have to recompile the parts you changed, significantly speeding them up.
- Production builds that closely match development: currently there are differences between dev and prod, both in Next.js with webpack and in other bundlers. We want to minimize these.
- Production optimizations that go beyond current bundlers: we've been working on advanced tree-shaking features that allow code splitting at the import/export level instead of the module level, inspired by Closure Compiler. Current bundlers operate at the module level.
- Less flakiness in the compiler / compile times: currently, because of the coordination between the server/client/edge webpack compilers, there are instances where compilation takes longer simply because it's coordinating between multiple instances of webpack. One of the main goals has been to reduce the complexity of the implementation and make the bundler output all required files in one compilation pass.
- (Later) Next.js-aware bundler tooling: i.e. much improved bundle analysis that knows about layouts/pages/routes.
- (Later) Next.js / RSC-aware bundling optimizations: for example, optimizing client components to be bundled in a way that loads as efficiently as possible.
- Full observability for maintainers: Next.js has a lot of usage, and with that comes a large number of bug reports / feature requests. One type of report that is notoriously hard to investigate relates to slowdowns (this issue is a great example) and memory usage ("Next.js is leaking memory" reports). Those are hard to investigate because they require deep profiling knowledge / memory dumps from the reporter, as people generally don't want to share runnable code (again, this issue is a great example). This is a big reason why building our own tooling for this is beneficial: it allows us to investigate reported issues without needing access to your codebase. If we were using any other bundler we'd have to say "tough luck, this is your problem now, try reporting it to that bundler's GitHub repo", which is not something we want to do and haven't done for webpack.

Personally I'm happy to see Vite is doing well in the ecosystem. They're also taking learnings from other bundlers; if you look at the recent work they've been doing with Rolldown you'll see quite some similarities, for example going back to bundling instead of "unbundling" for compiler performance.

Guess writing this up still took more time than I wanted to spend on it, but I hope it's helpful!

TLDR: other bundlers are great, but they don't fit well for a framework like Next.js. We want to bring these improvements to existing users, and in order to do that we had to build a new bundler that takes learnings from a whole lot of different approaches that have been tried before. |
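The parallelization argument above is back-of-the-envelope arithmetic; a quick sketch (module count and per-module cost are the figures from the comment, the core count is illustrative):

```javascript
// 20,000 modules at 1 ms each, processed sequentially.
const modules = 20000;
const msPerModule = 1;

const sequentialSeconds = (modules * msPerModule) / 1000;
console.log(sequentialSeconds); // 20 seconds on a single thread

// The same work spread across e.g. 10 cores, assuming perfect
// parallelism (which real bundlers only approximate):
const cores = 10;
const parallelSeconds = sequentialSeconds / cores;
console.log(parallelSeconds); // 2 seconds
```

Even with optimistic assumptions, the sequential cost grows linearly with module count, which is why a parallel-by-design architecture matters at 20-30K modules.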
Thank you for this excellent post - appreciate all the details
@timneutkens Thanks for the explanations. |
@timneutkens what do you mean? Vite has always created bundles in production builds using Rollup, and in dev it still doesn't bundle anything, instead using the plugin pipeline where esbuild is the compiler (or another one in addition, like Babel or SWC).
Meanwhile, Vite is introducing the Environment API with v6, where client/SSR/RSC environments have standalone module graphs and a single file (for example a server actions module) requires totally different output. Maybe it's just a misalignment on definitions. |
I was explaining why we're building Turbopack and why we chose to build it; you're mentioning an API that wasn't available in 2020 🙂 Regardless, let's keep this issue on topic. I'd love to receive more traces from the people that ignored my earlier message and still posted screenshots 😄 |
Sorry @timneutkens I just wanted to get a better understanding about your explanation. I hope that Turbopack will work out for Next.js and compilation fatigue will be solved. I'm struggling daily on Next.js compilation times (was not able to use Turbopack as it still has limitations) while I enjoy using other solutions with very similar feature set. I am familiar with Rolldown which will be a replacement for Rollup and Esbuild, but the architecture of Vite will remain the same, which in my opinion is the winning strategy in most use cases. |
Would love to hear more about this, is it because you're customizing webpack? Can you open a GitHub discussion about this to keep this issue with almost 200 participants on topic 👍 |
Rolldown will do bundling in development, something that current Vite doesn't do. That's what Tim was referring to with "going back to bundling". |
It'll take a little bit of work to move us over to turbopack - are people seeing markedly better dev performance with |
Just a bit; I noticed it compiles a bit faster, but it doesn't solve the problem, the compile time still takes like 40 secs. |
@pablojsx because you're not reading the posts in this issue and are posting screenshots instead 😢 #48748 (comment) Please follow the instructions, it ensures you're sharing useful information. Sharing screenshots is not helpful and will be marked as off-topic/spam going forward from the message linked above as there's new screenshots every day and each time someone posts a screenshot suddenly more people show up with screenshots. |
@followbl We shared compile time improvements for Vercel's website/dashboard/blog here: https://nextjs.org/blog/next-14-1#performance TLDR:
This is measured cold, without persistent caching as that is currently being worked on. |
@timneutkens, we have a quite large app on pages router using api routes for backend handling and a bunch of shared libraries and we do see the issue quite often. Your earlier message suggests upgrading to canary to be able to generate traces. The current canary however also forces an upgrade to react 19, which is a rather big effort to generate a trace. Is there a specific version that we could use that still has your fixes, but is before the react 19 upgrade? |
You can generate traces on all versions! The thing I was talking about in the earlier message is around the trace viewer tool, the |
Got the following error upon exiting: https://gist.github.com/dstoyanoff/f8a6904be0fb70d62095d3db7724009d Next config (stripped some sensitive parts):
Btw, one of the big issues that we have is on the api routes. We have a GraphQL route that needs to access next-auth session by calling |
@dstoyanoff I'll have a look at the trace soon, you should be able to try Turbopack with this configuration based on that being the only webpack configuration: https://x.com/timneutkens/status/1805180098703675882 |
@oljimenez it's still bundleless https://x.com/youyuxi/status/1808333856996815271 @timneutkens we have a pnpm monorepo and experimental Turbo resolve alias is not working, while running flawlessly using Webpack, without any Webpack customizations. But discussing our issue is not belonging here under this ticket, if needed we will find an existing issue or report it, totally agree on moving that to somewhere else, thanks! |
@timneutkens could you also please take some of your time to look at this trace #48748 (comment) this is a very very bare-bones nextra project and it still takes 10s+ on average (on a windows pc). |
Well said. next dev with --turbo already gives an overall better dev experience than the default. Also, it turns out I needed to upgrade to 32 GB of RAM instead of my toaster's 16, and that made it consistently much faster :) |
I have 32GB RAM and it's still slow for me. It's good with My colleagues who work on Mac have no problem with this. My Ubuntu 22.04 has. Soon I'll be upgrading to 64GB. If that does not solve this then the problem is somewhere else. |
@lazarv Please file an issue indeed! We can't fix it if it's not reported 🙏 |
@dev-badace Sorry I missed your post, I just had a look and the trace shows that your disk I/O is extremely slow, for example writing the outputs takes 10 seconds per compiler. Reading files takes between 30-200ms, usually reading files from disk should take below 10ms. Do you have anti-virus enabled or something like that? That can cause such issues as it can block disk I/O on Windows |
@Moe03 @tiriana please keep following the instructions for this issue in order to avoid pinging 200 people with a message that can't be investigated by me 🙂 See my earlier replies: #48748 (comment) Even if you're having a better experience it's still useful to share these traces, especially for Turbopack, as we track all memory usage and will be able to use these traces to optimize individual pieces. |
Hi @timneutkens, we are also facing this issue with very high memory usage (6+ GB). I'm providing the aforementioned files:
Thanks for your help ! |
Changes in the past week

I've been investigating this over the past week. Made a bunch of changes, some make a small impact, some make a large impact. For example, when your application has both `pages` and `app` and you're only working on `app`, it will no longer compile the runtime for `pages`. Note: this shifts the compilation of the runtime to when you first open a page.

You can try them using `npm install next@canary`.

Help Investigate

In order to help me investigate this I'll ideally need an application that can be run; if you can't provide that (I understand if you can't), please provide the `.next/trace` file.

If possible follow these steps, which would give me the best picture to investigate:

- `npm install next@canary` (use the package manager you're using) -- we want to make sure you're using the very latest version of Next.js, which includes the fixes mentioned earlier.
- `rm -rf .next`
- Run dev with the `NEXT_CPU_PROF=1` and `NEXT_TURBOPACK_TRACING=1` environment variables set (regardless of whether you're using Turbopack; the latter only has an effect when you do). E.g.:
  - `NEXT_TURBOPACK_TRACING=1 NEXT_CPU_PROF=1 npm run dev`
  - `NEXT_TURBOPACK_TRACING=1 NEXT_CPU_PROF=1 yarn dev`
  - `NEXT_TURBOPACK_TRACING=1 NEXT_CPU_PROF=1 pnpm dev`
- Stop the dev server (`ctrl+c`).
- Upload the `.next/trace` file to https://gist.github.com -- please don't run trace-to-tree yourself, as I use some other tools (e.g. Jaeger) that require the actual trace file.
- Upload `.next/trace.log` as well; if it's too large for GitHub gists you can upload it to Google Drive or Dropbox and share it through that.
- Upload your `next.config.js` (if you have one) to https://gist.github.com.

Known application-side slowdowns

To collect things I've seen before that cause slow compilation, as this is often the root cause:

- Importing icon libraries like `react-icons`, material icons, etc. Most of these libraries publish barrel files with a lot of re-exports; e.g. material-ui/icons ships 5500 module re-exports, which causes all of them to be compiled. You have to add `modularizeImports` to reduce this; here's an example: long compile times locally - along with "JavaScript heap out of memory" since upgrade to NextJS 13 #45529 (comment)
- A `content` setting that tries to read too many files (e.g. files not relevant for the application)

This and other slowdown reports are currently the top priority for our team. We'll continue optimizing Next.js with webpack where possible.
The Turbopack team is currently working on getting all Next.js integration tests passing when using Turbopack as we continue working towards stability of Turbopack.
Original post
Verify canary release
Provide environment information
Which area(s) of Next.js are affected? (leave empty if unsure)
No response
Link to the code that reproduces this issue
https://github.com/DigitalerSchulhof/digitaler-schulhof
To Reproduce
Note that I have been unable to replicate this issue in a demo repository.
Describe the Bug
The issue is that Next.js is generally slow in dev mode. Navigating to new pages takes several seconds:
The only somewhat reasonable time would be 600ms for the API route `/api/schulhof/auth/login/route`, even though that is still quite a lot slower than it should be given its size.

It also doesn't look right to compile ~1500 modules for each page, as most of them should be cached. The pages are not very different.
Even an empty API route takes several hundred ms. The following example contains solely type exports: |
I am not exactly sure how to read trace trees, but what stands out is that there are (over multiple runs) several `entry next-app-loader` spans that take 2+ seconds to complete.

Find both dev and build traces here: https://gist.github.com/jeengbe/46220a09846de6535c188e78fb6da03e
Note that I have modified `trace-to-tree.mjs` to include event times for all events.

It also seems unusual that none of the modules have child traces.
Expected Behavior
Initial load and navigating should be substantially faster.
Which browser are you using? (if relevant)
No response
How are you deploying your application? (if relevant)
No response
From SyncLinear.com | NEXT-1143