
[NEXT-1143] Dev mode slow compilation #48748

Open
jeengbe opened this issue Apr 23, 2023 · 454 comments
Labels
linear: next Confirmed issue that is tracked by the Next.js team.

Comments

@jeengbe

jeengbe commented Apr 23, 2023

⚠️ this original post has been edited by @timneutkens to reflect this comment ⚠️

Changes in the past week

I've been investigating this over the past week and have made a bunch of changes; some have a small impact, some a large one. Here's a list:

You can try them using npm install next@canary.

Help Investigate

To help me investigate this, I'll ideally need an application that can be run. If you can't provide that (I understand if you can't), please provide the .next/trace file.

If possible, follow these steps, which will give me the best picture to investigate:

  • npm install next@canary (use the package manager you're using) -- We want to make sure you're using the very latest version of Next.js which includes the fixes mentioned earlier.
  • rm -rf .next
  • start development using the NEXT_CPU_PROF=1 and NEXT_TURBOPACK_TRACING=1 environment variables (set NEXT_TURBOPACK_TRACING regardless of whether you're using Turbopack; it only has an effect when you do). E.g.:
    • npm: NEXT_TURBOPACK_TRACING=1 NEXT_CPU_PROF=1 npm run dev
    • yarn: NEXT_TURBOPACK_TRACING=1 NEXT_CPU_PROF=1 yarn dev
    • pnpm: NEXT_TURBOPACK_TRACING=1 NEXT_CPU_PROF=1 pnpm dev
  • Wait a few seconds
  • Open a page that you're working on
  • Wait till it's fully loaded
  • Wait a few seconds
  • Make an edit to a file that holds a component that is on the page
  • Wait for the edit to apply
  • Wait a few seconds
  • Make another edit to the same file
  • Wait a few seconds
  • Exit the dev command (ctrl+c)
  • Upload the CPU traces written to the root of the application directory to https://gist.github.com
  • Upload the .next/trace file to https://gist.github.com -- Please don't run trace-to-tree yourself, as I use some other tools (e.g. Jaeger) that require the actual trace file.
  • If you're using Turbopack, upload the .next/trace.log as well; if it's too large for GitHub gists, you can upload it to Google Drive or Dropbox and share it that way.
  • Upload next.config.js (if you have one) to https://gist.github.com
  • Share it here

Known application-side slowdowns

To collect the things I've seen before that cause slow compilation, as the application setup is often the root cause:

  • If you're on Windows, disable Windows Defender; it's a known cause of extreme slowdowns in filesystem access, as it sends each file to an external endpoint before allowing it to be read or written
  • Filesystem slowness overall is what we've seen as the cause of problems, e.g. with Docker
  • react-icons, material icons, etc. Most of these libraries publish barrel files with a lot of re-exports. E.g. material-ui/icons ships 5500 module re-exports, which causes all of them to be compiled. You have to add modularizeImports to reduce this; here's an example: long compile times locally - along with "JavaScript heap out of memory" since upgrade to NextJS 13 #45529 (comment). A minimal config sketch also follows this list.
  • Custom PostCSS config, e.g. Tailwind CSS with a content setting that tries to read too many files (e.g. files not relevant to the application)
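
As a concrete illustration of the two bullets above, here is a minimal next.config.js sketch using modularizeImports. The package names and transform patterns below are assumptions for illustration only; adapt them to the libraries your application actually imports.

/** @type {import('next').NextConfig} */
const nextConfig = {
  // Rewrite barrel imports such as `import { Add } from '@mui/icons-material'`
  // into direct per-module imports, so only the modules you use are compiled.
  modularizeImports: {
    '@mui/icons-material': {
      transform: '@mui/icons-material/{{member}}',
    },
    lodash: {
      transform: 'lodash/{{member}}',
    },
  },
};

module.exports = nextConfig;

For the PostCSS/Tailwind point, narrowing the content globs in tailwind.config.js to just your source directories has a similar effect, since far fewer files get scanned on each compile.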

This and other slowdown reports are currently the top priority for our team. We'll continue optimizing Next.js with webpack where possible.
The Turbopack team is currently working on getting all Next.js integration tests passing when using Turbopack as we continue working towards stability of Turbopack.

Original post

Verify canary release

  • I verified that the issue exists in the latest Next.js canary release

Provide environment information

    Operating System:
      Platform: linux
      Arch: x64
      Version: #1 SMP Fri Jan 27 02:56:13 UTC 2023
    Binaries:
      Node: 18.13.0
      npm: 8.19.3
      Yarn: 1.22.18
      pnpm: 7.30.5
    Relevant packages:
      next: 13.3.1
      eslint-config-next: 13.3.1
      react: 18.2.0
      react-dom: 18.2.0

Which area(s) of Next.js are affected? (leave empty if unsure)

No response

Link to the code that reproduces this issue

https://github.com/DigitalerSchulhof/digitaler-schulhof

To Reproduce

Note that I have been unable to replicate this issue in a demo repository.

Describe the Bug

The issue is that Next.js is generally slow in dev mode. Navigating to new pages takes several seconds:

[next] ready - started server on 0.0.0.0:3000, url: http://localhost:3000
[next] info  - Loaded env from /home/jeengbe/dsh/digitaler-schulhof/.env
[next] warn  - You have enabled experimental feature (appDir) in next.config.js.
[next] warn  - Experimental features are not covered by semver, and may cause unexpected or broken application behavior. Use at your own risk.
[next] info  - Thank you for testing `appDir` please leave your feedback at https://nextjs.link/app-feedback
[next] event - compiled client and server successfully in 1574 ms (267 modules)
[next] wait  - compiling...
[next] event - compiled client and server successfully in 219 ms (267 modules)
[next] wait  - compiling /(schulhof)/Schulhof/page (client and server)...
[next] event - compiled client and server successfully in 3.6s (1364 modules)
[next] wait  - compiling /(schulhof)/Schulhof/(login)/Anmeldung/page (client and server)...
[next] event - compiled client and server successfully in 1920 ms (1411 modules)
[next] wait  - compiling /api/schulhof/auth/login/route (client and server)...
[next] event - compiled client and server successfully in 625 ms (1473 modules)
[next] wait  - compiling /(schulhof)/Schulhof/Nutzerkonto/page (client and server)...
[next] event - compiled client and server successfully in 1062 ms (1482 modules)
[next] wait  - compiling /(schulhof)/Schulhof/Nutzerkonto/Profil/page (client and server)...
[next] event - compiled client and server successfully in 1476 ms (1546 modules)
[next] wait  - compiling /(schulhof)/Schulhof/Nutzerkonto/Profil/Einstellungen/page (client and server)...
[next] event - compiled client and server successfully in 2.1s (1559 modules)

The only somewhat reasonable time would be the 600 ms for the API route /api/schulhof/auth/login/route, even though that is still considerably slower than it should be given its size.

It also doesn't look right to compile ~1500 modules for each page, as most of them should be cached. The pages are not very different.

Even an empty API route takes several hundred milliseconds. The following example contains solely type exports:

[next] wait  - compiling /api/schulhof/administration/persons/persons/settings/route (client and server)...
[next] event - compiled successfully in 303 ms (107 modules)

I am not exactly sure how to read trace trees, but what stands out is that there are (over multiple runs) several entry next-app-loader spans that take 2+ seconds to complete:

│  │  ├─ entry next-app-loader?name=app/(schulhof)/Schulhof/page&page=/(schulhof)/Schulhof/page&appPaths=/(schulhof)/Schulhof/page&pagePath=private-next-app-dir/(schulhof)/Schulhof/page.tsx&appDir=/home/jeengbe/dsh/digitaler-schulhof/app&pageExtensions=tsx&pageExtensions=ts&pageExtensions=jsx&pageExtensions=js&rootDir=/home/jeengbe/dsh/digitaler-schulhof&isDev=true&tsconfigPath=tsconfig.json&assetPrefix=&nextConfigOutput=! 1.9s

Find both dev and build traces here: https://gist.github.com/jeengbe/46220a09846de6535c188e78fb6da03e

Note that I have modified trace-to-tree.mjs to include event times for all events.

It also seems unusual that none of the modules have child traces.

Expected Behavior

Initial load and navigating should be substantially faster.

Which browser are you using? (if relevant)

No response

How are you deploying your application? (if relevant)

No response

From SyncLinear.com | NEXT-1143

@jeengbe added the bug label (Issue was opened via the bug report template) on Apr 23, 2023
@joacub

joacub commented Apr 24, 2023

Same here, and in a Docker env it's even worse; it seems like it's processing the same files over and over without caching them.

@jinojacob15

Same for me, also in the dev env; navigating to different pages via the Link component is pretty slow.

@denu5

denu5 commented Apr 25, 2023

+1, same here. Hitting a page the first time seems fine, but routing via links gets stuck.

@joacub

joacub commented Apr 25, 2023

The last canary version has better cold build times; it's still slow, with waits of around 2-5 seconds (in a Docker env), but much better.

The version I'm talking about is 13.3.2-canary.6.

@denu5

denu5 commented May 2, 2023

Hey @jeengbe, there have been some patch updates (13.3.1 -> 13.3.4); did they improve things for you?

@jeengbe
Author

jeengbe commented May 2, 2023

Hi @denu5,

Unfortunately, I can't report any real performance changes since I opened this issue.

You might want to check out the above-linked issue in the TypeScript repo though; it might be related.

@joacub

joacub commented May 2, 2023

As @jeengbe mentioned, there is no performance improvement. There is also a lot of I/O, I don't know why; one request produces something like 1-2 GB of I/O. And it is very slow.

@jeengbe
Author

jeengbe commented May 2, 2023

As @jeengbe mention there is no performance improvement, there is also a lot of I/O I don’t know why, one request gets pretty much like 1gb-2gb of io. And it is very slow.

Unfortunately, I can't confirm this for my case

[screenshot]

@joacub

joacub commented May 2, 2023

As @jeengbe mention there is no performance improvement, there is also a lot of I/O I don’t know why, one request gets pretty much like 1gb-2gb of io. And it is very slow.

Unfortunately, I can't confirm this for my case

[screenshot]

That’s pretty good. In my case there is a lot of I/O; maybe it's because I’m using material-ui, but I think it's too much even so.

@jeengbe
Author

jeengbe commented May 2, 2023

As @jeengbe mention there is no performance improvement, there is also a lot of I/O I don’t know why, one request gets pretty much like 1gb-2gb of io. And it is very slow.

Unfortunately, I can't confirm this for my case
[screenshot]

That’s pretty good, in my case there is a lot of I/O, maybe is because I’m using material-ui but I think is too much even though.

Possibly, it would align with what your trace shows: #48407 (comment)

@langfordG

langfordG commented May 4, 2023

I see that slow route changes in dev mode are showing a '[Fast Refresh] rebuilding' message in the browser console. Sometimes it performs a full page reload when changing routes even if no files have been edited.

@timneutkens added the linear: next label (Confirmed issue that is tracked by the Next.js team) on May 10, 2023
@timneutkens changed the title from "Dev mode very slow navigation (Slow entry next-app-loader spans?)" to "[NEXT-1143] Dev mode very slow navigation (Slow entry next-app-loader spans?)" on May 10, 2023
@AsathalMannan

It's slowing down development!

@vajajak

vajajak commented May 21, 2023

Having the same issue here; in the Docker environment it's come to a point where it's almost unusable, and sometimes I even have to do a hard reload after waiting too long for navigation. This is the case both when navigating with the Link component and with router.push (the useRouter hook imported from next/navigation). We're using Next.js 13.4.2.

@joacub

joacub commented May 22, 2023

Having the same issue here, in the Docker environment it's come to a point where it's almost unusable, and sometimes I even have to do a hard reload, after waiting too long for navigation. This is the case both with component from next/navigation, as with the router.push (useRouter hook imported from next/navigation). We're using Next.js 13.4.2.

Same here, it is almost unusable in Docker environments, but it's also very slow outside Docker; something is not working right. This is painfully slow.

@JoshApp

JoshApp commented May 27, 2023

Yeah, same for me. I used to develop remotely inside our k8s cluster, but dev --turbo is super slow inside a container and causes my health check endpoint to SIGKILL it regularly.

The whole App Router is super slow in dev mode when containerized.

It works perfectly fine when I run both on my local machine and connect them via a reverse proxy. This way it's faster than the old setup (which was not significantly faster before) and takes advantage of preloading pages via next/link. I see inconsistencies in caching too, where it's a mix of instant navigation and long builds (around 3.5k modules for some things) taking around 2-10 seconds.

Also, there is this weird thing where a page compiles just fine and then later grinds to a halt, stuck waiting on compilation forever until the pod crashes.

@Rykuno

Rykuno commented May 27, 2023

I love next, but this is a complete show stopper. Sometimes it takes 10+ seconds outside of docker for me on a Mac M2 to navigate one page.

This is insane.

@joacub

joacub commented May 27, 2023

I love next, but this is a complete show stopper. Sometimes it takes 10+ seconds outside of docker for me on a Mac M2 to navigate one page.

This is insane.

Yep, even worse, I sometimes get 50 seconds on a simple page; I guess that's because it's also building other related things in parallel.

And it's not just Docker: I just tested working outside Docker and the timing is exactly the same, no difference… It's getting slower and slower.

@joacub

joacub commented May 27, 2023

Btw, webpack lazy building from cold is faster than Turbopack 🙂 by far

@Rykuno

Rykuno commented May 27, 2023

Btw webpack lazy building cold is faster than turbopack 🙂 by far

Yes! I'm surprised this is not more prevalent as an issue atm; unless turbo will somehow fix all of this in 13.5 and they're waiting to address it.

What configs do you have for the faster webpack builds? I've tried quite a bit and can't lower my build time by much. I need a temporary fix for this ASAP :(

@oalexdoda

A month later and no updates on this? It makes development on appDir absolutely impossible. @timneutkens?

Linked a bunch of related issues on this:
#50332

@JunkyDeLuxe

I confirm that the Next.js app dir in dev mode and dynamic routing are very, very slow in Docker now.

@timneutkens
Member

timneutkens commented Jun 6, 2023

Changes in the past week

I've been investigating this over the past week and have made a bunch of changes; some have a small impact, some a large one. Here's a list:

You can try them using npm install next@canary.

Help Investigate

To help me investigate this, I'll ideally need an application that can be run. If you can't provide that (I understand if you can't), please provide the .next/trace file.

If possible, follow these steps, which will give me the best picture to investigate:

  • npm install next@canary (use the package manager you're using) -- We want to make sure you're using the very latest version of Next.js which includes the fixes mentioned earlier.
  • rm -rf .next
  • start development using the NEXT_CPU_PROF=1 environment variable. E.g.:
    • npm: NEXT_CPU_PROF=1 npm run dev
    • yarn: NEXT_CPU_PROF=1 yarn dev
    • pnpm: NEXT_CPU_PROF=1 pnpm dev
  • Wait a few seconds
  • Open a page that you're working on
  • Wait till it's fully loaded
  • Wait a few seconds
  • Make an edit to a file that holds a component that is on the page
  • Wait for the edit to apply
  • Wait a few seconds
  • Make another edit to the same file
  • Wait a few seconds
  • Exit the dev command (ctrl+c)
  • Upload the CPU traces written to the root of the application directory to https://gist.github.com
  • Upload the .next/trace file to https://gist.github.com -- Please don't run trace-to-tree yourself, as I use some other tools (e.g. Jaeger) that require the actual trace file.
  • Share it here

Known application-side slowdowns

To collect the things I've seen before that cause slow compilation, as the application setup is often the root cause:

  • If you're on Windows, disable Windows Defender; it's a known cause of extreme slowdowns in filesystem access, as it sends each file to an external endpoint before allowing it to be read or written
  • Filesystem slowness overall is what we've seen as the cause of problems, e.g. with Docker
  • react-icons, material icons, etc. Most of these libraries publish barrel files with a lot of re-exports. E.g. material-ui/icons ships 5500 module re-exports, which causes all of them to be compiled. You have to add modularizeImports to reduce this; here's an example: long compile times locally - along with "JavaScript heap out of memory" since upgrade to NextJS 13 #45529 (comment)
  • Custom PostCSS config, e.g. Tailwind CSS with a content setting that tries to read too many files (e.g. files not relevant to the application)

This and other slowdown reports are currently the top priority for our team. We'll continue optimizing Next.js with webpack where possible.
The Turbopack team is currently working on getting all Next.js integration tests passing when using Turbopack as we continue working towards stability of Turbopack.

@timneutkens changed the title from "[NEXT-1143] Dev mode very slow navigation (Slow entry next-app-loader spans?)" to "[NEXT-1143] Dev mode slow navigation" on Jun 6, 2023
@timneutkens changed the title from "[NEXT-1143] Dev mode slow navigation" to "[NEXT-1143] Dev mode slow compilation" on Jun 6, 2023
@timneutkens
Member

Changed the initial post in this issue to reflect my reply above in order to ensure people see it as the first thing when opening the issue. I'm going to close the duplicate issues reporting similar slowdowns in favor of this one.

I'll need help from you all to ensure this thread doesn't spiral into "It is slow" comments that are not actionable, e.g. comments without traces / a reproduction / further information. Thank you 🙏

@timneutkens
Member

I'm checking this issue every day for new comments and every day I end up having to post a variant of this comment: #48748 (comment)

Unfortunately I'll have to start marking these comments as off-topic and won't be able to help you as you're not providing what was requested @useEffects.

@timneutkens
Member

@KarlsMaranjs Some thoughts on your trace:

  • You're on Windows and your filesystem seems to be quite slow; did you disable the antivirus?
  • You're importing a lot of barrel libraries (or the libraries you installed are)
    • @fpfx-technologies-llc\ui\index.ts
    • @fpfx-technologies-llc\ui\config\index.ts
    • @fpfx-technologies-llc\core\index.ts
    • @fpfx-technologies-llc\services\index.ts
    • date-fns
      • date-fns\esm\locale\index.js (imported from react-day-picker)
    • @fpfx-technologies-llc\types\index.ts
    • recharts (with barrel optimization it still takes 2 seconds because it uses most of the library)
    • @microsoft\applicationinsights-react-js
    • @microsoft\applicationinsights-web\dist-es5\applicationinsights-web.js
    • Using 2 variants of react-icons: react-icons\hi\index.mjs and react-icons\io5\index.mjs
      • Barrel optimization is optimizing this, but processing the root barrel file still takes 70ms because it's quite slow to parse given the size of the files react-icons publishes

Overall I'd say you'd benefit a lot from using Turbopack, given that you're creating a module tree that is both wide and deep, so you end up benefiting a lot from the parallelization.
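
For illustration, here is a minimal sketch of the kind of import change that avoids a barrel file; the package and icon names are examples only, not taken from the trace above, and modularizeImports or the barrel optimization mentioned above can automate the same rewrite for supported packages.

// Before: a barrel import pulls in the package's index file, which
// re-exports thousands of modules that then all get compiled in dev.
// import { Add, Delete } from '@mui/icons-material';

// After: direct (deep) imports, so only the two used modules are compiled.
import Add from '@mui/icons-material/Add';
import Delete from '@mui/icons-material/Delete';

export { Add, Delete };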

@roonie007

roonie007 commented Jun 26, 2024

I'm checking this issue every day for new comments and every day I end up having to post a variant of this comment: #48748 (comment)

Unfortunately I'll have to start marking these comments as off-topic and won't be able to help you as you're not providing what was requested @useEffects.

@timneutkens, if possible could you reply to my question, #48748 (comment)? It's a genuine question; I am really wondering why Vercel wanted to reinvent the wheel when the JS community already has a powerful and fast tool like Vite, which is used by most meta-frameworks.

@KarlsMaranjs

KarlsMaranjs commented Jun 26, 2024

@KarlsMaranjs Some thoughts on your trace:

...

@timneutkens thanks for your feedback. I will work with the team to get rid of the barrel imports as much as possible. We'll do a few optimizations and then report back if that doesn't solve it. Thanks a lot for your time.

@pablojsx

This comment was marked as spam.

@timneutkens
Member

@timneutkens if possible to reply to my question, #48748 (comment), it's a genuine question, I am really wondering why vercel wanted to reinvent the wheel while the JS community already have a powerful and fast tool like Vite which is used by most meta-frameworks ?

I will try to keep it brief as I could write / talk about this for a few hours 😄

A few years ago, before Vite had much adoption, we started seeing larger and larger web applications built on top of Next.js, including enterprise adoption on teams of 100+ developers. These codebases grow to tens of thousands of custom components, and on top of that they import packages from npm. In short: even though webpack, which we were using at the time (and still are if you don't opt in to Turbopack), is actually quite fast, it wasn't fast enough for these ever-increasing codebase sizes.

We also saw a trend of applications in general becoming much more compilation-heavy, largely caused by the rise of component libraries / icon libraries. Today, as you can see in this thread, it's not uncommon for a super small application to end up compiling 20K modules or more because of the published design systems and icon libraries being used.

The problem we saw is that even if we optimize webpack to the maximum, there is still a cap on the number of modules it can process: if you have 20,000 modules, even spending 1 millisecond per module ends up being 20 seconds when you can't parallelize the processing.

On top of that, we weren't just running one webpack compiler, we were running three: one for the server, one for the browser, one for the edge runtime. This causes complexity because these separate compilers have to coordinate, as there is no shared module graph.

Around the same time we also started exploring React Server Components, App Router, and overall how we'd want Next.js development to look in 5-10 years from now. One of the main topics of that was around code that can go from server->client->server->client, in short, if you're familiar, Server Actions, and specifically that Server Actions can return JSX that holds additional client components. In order to make that work we found that having a single unified module graph that can hold both server, client, and edge code in the same bundler/compiler would be very beneficial. This is something that bundlers like Parcel had been exploring for quite a while.

At the time we evaluated all existing solutions and found that each of them has trade-offs. I'm not going to "throw others under the bus", as these trade-offs all make sense; they just didn't make sense for a framework like Next.js, and especially not for Next.js in the future (this was around ~2020, if I remember correctly).

Overall, let's talk a bit about the goals; some of these benefit you as a user, some benefit maintenance:

  • Faster HMR

    • Webpack has a performance limit on the number of modules in the module graph. Once you hit 30K modules, every code change takes at least ~1 second of overhead to process, even if you're only making a small CSS change
  • Faster initial compile of a route

    • Webpack with 20-30K modules would consistently take 15-30 seconds to process because it can't parallelize across CPUs
  • No breaking changes

    • We want to give existing applications all these improvements. As part of that there are a lot of Next.js-specific compiler features, like next/font, that have to be added.
    • Each bundler has its own behaviors / tooling. For example, even switching the CSS parser for Turbopack to Lightning CSS has proven to be an issue, as people trying out Turbopack reported behavior differences compared to webpack; using an off-the-shelf existing bundler would mean hundreds of those small differences. In this case we were able to change how the parsing is handled to match the webpack behavior closely.
  • Scales to the largest TS/JS codebases

    • As said above, we're seeing larger and larger codebases; in order to optimize for these, a different architecture is needed. I think the closest to the one we landed on among comparable bundlers is Parcel.
    • For small codebases you're not going to see a big difference compared to other bundlers for the initial compile time / HMR time if you set them up in the same way
  • Persistent caching

    • Turbopack has an extensive caching mechanism that can be compared to Facebook's Metro bundler (used in react-native and for instagram.com). It will be able to persistently cache work that was done before, so that when you reboot the development server it only has to restore the cache of your last session; this is currently being worked on.
    • This cache also applies to production builds: when you do subsequent builds it only has to recompile the parts that you changed, significantly speeding up production builds.
  • Production builds that closely match development

    • Currently there are differences between dev/prod, both in Next.js with webpack and in other bundlers. We want to minimize these.
  • Production optimizations that go beyond current bundlers

    • We've been working on advanced tree-shaking features, inspired by Closure Compiler, that allow code splitting on the import/export level instead of on the module level. Current bundlers operate on the module level.
  • Less flakiness in the compiler / compile times

    • Currently, because of the coordination between the server/client/edge webpack compilers, there are instances where compilation takes longer because it's coordinating between multiple instances of webpack. One of the main goals has been to reduce the complexity of the implementation and make the bundler output all required files in one compilation pass.
  • (Later) Next.js aware bundler tooling

    • I.e. much improved bundle analysis that knows about layouts/pages/routes
  • (Later) Next.js / RSC aware bundling optimizations

    • For example optimizing client components to be bundled in a way to load as efficiently as possible
  • Full observability for maintainers

    • Next.js has a lot of usage, and with that comes a large number of bug reports / feature requests. One type of bug report that is notoriously hard to investigate relates to slowdowns (this issue is a great example of that) and memory usage ("Next.js is leaking memory" reports). Those types of issues are hard to investigate because they require deep knowledge of profiling / memory dumps from the reporter, as reporters generally don't want to share runnable code (again, this issue is a great example of that).
    • This is a big reason why building our own tooling for this is beneficial: it allows us to investigate reported issues without needing access to your codebase. If we were using any other bundler we'd have to say "tough luck, this is your problem now, try reporting it to that bundler's GitHub repo", which is not something we want to do and haven't done for webpack.

Personally I'm happy to see Vite is doing well in the ecosystem. They're also taking learnings from other bundlers. If you look at the recent work they've been doing with Rolldown you'll see quite a few similarities; it's going back to bundling instead of "unbundling" for compiler performance, for example.

Guess writing this up still took more time than I wanted to spend on it, but I hope it's helpful!

TLDR: other bundlers are great, but they don't fit well with a framework like Next.js. We want to bring these improvements to existing users, and in order to do that we had to build a new bundler that takes learnings from a whole lot of different approaches that have been tried before.

@jonknyc

jonknyc commented Jul 1, 2024 via email

@roonie007

@timneutkens Thanks for the explanations.

@lazarv

lazarv commented Jul 1, 2024

with Rolldown you'll see quite some similarities, it's going back to bundling instead of "unbundling" for compiler performance for example

@timneutkens what do you mean? Vite has always created bundles in production builds using Rollup, and in dev it still doesn't bundle anything, just using the plugin pipeline where esbuild is the compiler (or some other addition, like Babel or SWC).

having a single unified module graph that can hold both server, client, and edge code in the same bundler/compiler would be very beneficial

Meanwhile, Vite is introducing the Environment API with v6, where client/SSR/RSC environments have standalone module graphs and a single file (for example a server actions module) requires totally different output. Maybe it's just a misalignment on definitions.

@timneutkens
Member

@timneutkens what do you mean? Vite always created bundles in production build using Rollup and in dev it will still not bundle anything, just using the plugin pipeline where Esbuild is the compiler (or some other additional, like Babel or SWC).

https://rolldown.rs/about

While Vite is introducing Environment API with v6 where client/ssr/RSC environments has standalone module graphs and a single file (for ex. server actions module) requires totally different output. Maybe it's just a misalignment on definition.

I was explaining why we're building Turbopack and why we chose to build it; you're mentioning an API that wasn't available in 2020 🙂


Regardless, let's keep this issue on topic. I'd love to receive more traces from people who ignored my earlier message and still posted screenshots 😄

@lazarv

lazarv commented Jul 1, 2024

Sorry @timneutkens, I just wanted to get a better understanding of your explanation. I hope that Turbopack will work out for Next.js and that compilation fatigue will be solved. I'm struggling daily with Next.js compilation times (I was not able to use Turbopack as it still has limitations) while I enjoy using other solutions with a very similar feature set.

I am familiar with Rolldown, which will be a replacement for Rollup and esbuild, but the architecture of Vite will remain the same, which in my opinion is the winning strategy in most use cases.

@timneutkens
Member

I'm struggling daily on Next.js compilation times (was not able to use Turbopack as it still has limitations)

Would love to hear more about this; is it because you're customizing webpack? Can you open a GitHub discussion about it, to keep this issue with almost 200 participants on topic 👍

@oljimenez

oljimenez commented Jul 1, 2024

Sorry @timneutkens I just wanted to get a better understanding about your explanation. I hope that Turbopack will work out for Next.js and compilation fatigue will be solved. I'm struggling daily on Next.js compilation times (was not able to use Turbopack as it still has limitations) while I enjoy using other solutions with very similar feature set.

I am familiar with Rolldown which will be a replacement for Rollup and Esbuild, but the architecture of Vite will remain the same, which in my opinion is the winning strategy in most use cases.

Rolldown will do bundling in development, something that current Vite doesn't do. That's what Tim was referring to with "going back to bundling".

@pablojsx

This comment was marked as off-topic.

@followbl
Contributor

followbl commented Jul 1, 2024

It'll take a little bit of work to move us over to turbopack - are people seeing markedly better dev performance with next dev --turbo?

@pablojsx

pablojsx commented Jul 1, 2024

It'll take a little bit of work to move us over to turbopack - are people seeing markedly better dev performance with next dev --turbo?

Just a bit. I noticed it compiles a bit faster, but it doesn't solve the problem; the compile still takes around 40 seconds.

@timneutkens
Member

@pablojsx because you're not reading the posts in this issue and are posting screenshots instead 😢

#48748 (comment)
#48748 (comment)

Please follow the instructions; they ensure you're sharing useful information. Sharing screenshots is not helpful and will be marked as off-topic/spam going forward from the message linked above, as there are new screenshots every day, and each time someone posts a screenshot more people suddenly show up with screenshots.

@timneutkens
Member

@followbl We shared compile time improvements for Vercel's website/dashboard/blog here: https://nextjs.org/blog/next-14-1#performance

TLDR:

  • About 45% faster initial compile of a route
  • About 95% faster Fast Refresh
  • About 75% faster development server startup (time to the "ready" message)

This is measured cold, without persistent caching as that is currently being worked on.

@dstoyanoff

@timneutkens, we have quite a large app on the pages router, using API routes for backend handling and a bunch of shared libraries, and we see the issue quite often. Your earlier message suggests upgrading to canary to be able to generate traces. The current canary, however, also forces an upgrade to React 19, which is a rather big effort just to generate a trace. Is there a specific version that we could use that still has your fixes, but is before the React 19 upgrade?

@timneutkens
Member

You can generate traces on all versions!

The thing I was talking about in the earlier message is the trace viewer tool; the next internal turbo-trace-server command is only on canary right now, but that tool can interpret traces from all versions 👍

@dstoyanoff

dstoyanoff commented Jul 2, 2024

Got the following error upon exiting: Cannot generate CPU profiling: Error [ERR_INSPECTOR_COMMAND]: Inspector error -32000: No recording profiles found. Still adding the generated .next/trace file here.

https://gist.github.com/dstoyanoff/f8a6904be0fb70d62095d3db7724009d

Next config (stripped some sensitive parts):

const path = require('path');

const withBundleAnalyzer = require('next-bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
  html: {
    open: false,
  },
});

/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'standalone',
  experimental: {
    externalDir: true,
    outputFileTracingRoot: path.join(__dirname, '../..'),
    fallbackNodePolyfills: false,
    // its true by default, which breaks the class names (.constructor.name)
    serverMinification: false,
  },
  eslint: {
    ignoreDuringBuilds: true,
  },
  webpack: (config, { isServer }) => {
    if (!isServer) {
      config.resolve.fallback.fs = false;
    }
    return config;
  },
  images: {
    domains: [
       // ....
    ],
  },
  pageExtensions: ['page.tsx', 'route.ts', 'api.ts'],
  publicRuntimeConfig: {
    // ...
  },
  async headers() {
    return [
      // ...
    ];
  },
};

module.exports = withBundleAnalyzer(nextConfig);

Btw, one of the big issues that we have is with the API routes. We have a GraphQL route that needs to access the next-auth session by calling getServerSession (next-auth@4.22.0). Sometimes, when making a request to the GraphQL server, there are a few seconds where nothing happens in the console and then all of a sudden things start working. Not sure if that's related to Next.js at all, but it's worth mentioning.
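
For reference, here is a minimal sketch of the pattern described, assuming next-auth v4's pages-router API; the file path, authOptions location, and response shape are hypothetical.

// pages/api/graphql.api.js (hypothetical path; this app maps API routes to *.api.ts)
import { getServerSession } from 'next-auth/next';
import { authOptions } from '../../lib/auth'; // hypothetical location of the NextAuth options

export default async function handler(req, res) {
  // In the pages router, getServerSession takes the request, response and auth options.
  const session = await getServerSession(req, res, authOptions);
  if (!session) {
    res.status(401).json({ error: 'Unauthorized' });
    return;
  }
  // ...hand the request off to the GraphQL executor here...
  res.status(200).json({ user: session.user });
}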

@timneutkens
Member

@dstoyanoff I'll have a look at the trace soon. You should be able to try Turbopack with this configuration, given that this is your only webpack customization: https://x.com/timneutkens/status/1805180098703675882

@lazarv

lazarv commented Jul 3, 2024

@oljimenez it's still bundleless https://x.com/youyuxi/status/1808333856996815271

@timneutkens we have a pnpm monorepo and the experimental Turbo resolve alias is not working, while it runs flawlessly using webpack, without any webpack customizations. But our issue doesn't belong under this ticket; if needed we will find an existing issue or report it. Totally agree on moving that somewhere else, thanks!

@dev-badace

@timneutkens could you also please take some of your time to look at this trace #48748 (comment)

This is a very, very bare-bones Nextra project and it still takes 10+ seconds on average (on a Windows PC).

@Moe03

Moe03 commented Jul 3, 2024

@timneutkens if possible to reply to my question, #48748 (comment), it's a genuine question, I am really wondering why vercel wanted to reinvent the wheel while the JS community already have a powerful and fast tool like Vite which is used by most meta-frameworks ?

I will try to keep it brief as I could write / talk about this for a few hours 😄

..

TLDR: other bundlers are great, but they don't fit well for a framework like Next.js, we want to bring these improvements to existing users, in order to do that we had to build a new bundler that takes learnings from a whole lot of different approaches that have been tried before.

Well said. next dev with --turbo already gives an overall better dev experience than the default. Also, it turns out I needed to upgrade to 32 GB of RAM instead of my toaster's 16, and that made it consistently much faster :)

@tiriana

tiriana commented Jul 4, 2024

@timneutkens if possible to reply to my question, #48748 (comment), it's a genuine question, I am really wondering why vercel wanted to reinvent the wheel while the JS community already have a powerful and fast tool like Vite which is used by most meta-frameworks ?

I will try to keep it brief as I could write / talk about this for a few hours 😄
..
TLDR: other bundlers are great, but they don't fit well for a framework like Next.js, we want to bring these improvements to existing users, in order to do that we had to build a new bundler that takes learnings from a whole lot of different approaches that have been tried before.

Well said next dev with --turbo is already having an overall better dev experience than the default, also turns out i needed to upgrade to 32gbs of ram instead of my toaster's 16 and that made it consistently much faster :)

I have 32GB RAM and it's still slow for me. It's good with --turbo but then there are other downsides like lack of support for dynamic imports.

My colleagues who work on Macs have no problem with this. My Ubuntu 22.04 machine does.

Soon I'll be upgrading to 64GB. If that does not solve this then the problem is somewhere else.

@timneutkens
Member

@lazarv Please file an issue indeed! We can't fix it if it's not reported 🙏

@timneutkens
Member

@dev-badace Sorry I missed your post. I just had a look, and the trace shows that your disk I/O is extremely slow; for example, writing the outputs takes 10 seconds per compiler. Reading files takes between 30-200ms, while reading files from disk should usually take below 10ms. Do you have antivirus enabled or something like that? That can cause such issues, as it can block disk I/O on Windows.

[screenshot]

@timneutkens
Member

@Moe03 @tiriana please keep following the instructions for this issue in order to avoid pinging 200 people with a message that can't be investigated by me 🙂

See my earlier replies: #48748 (comment)

Even if you're having a better experience it's still useful to share these traces, especially for Turbopack, as we track all memory usage and will be able to use these traces to optimize individual pieces.

@PILLOWPET

PILLOWPET commented Jul 5, 2024

Hi @timneutkens ,

We are also facing this issue, with very high memory usage (6+ GB). I'm providing the aforementioned files:

Thanks for your help!
