
Turbo crashes after deleting yarn.lock or update #6715

Closed
1 task done
FrancoRATOVOSON opened this issue Dec 6, 2023 · 7 comments · Fixed by #6723
Labels: `kind: bug` · `needs: triage` · `owned-by: turborepo`

Comments

@FrancoRATOVOSON

FrancoRATOVOSON commented Dec 6, 2023

Verify canary release

  • I verified that the issue exists in the latest Turborepo canary release.

Link to code that reproduces this issue

https://github.com/FrancoRATOVOSON/e-commerce/tree/create-ui

What package manager are you using / does the bug impact?

Yarn v2/v3 (node_modules linker only)

What operating system are you using?

Linux

Which canary version will you have in your reproduction?

turbo@npm:1.11.0 (via npm:^1.11.0)

Describe the Bug

The code works when I run a script in a specific package with `yarn workspace [package-name] [script]`, but gives me this message with a turbo command:

```
$ yarn web
Oops! Turbo has crashed.

A report has been written to /tmp/report-a1019fad-095f-4bb0-a8eb-5089c85ca7c0.toml

Please open an issue at https://github.com/vercel/turbo/issues/new/choose and include this file
```

And this is the content of the .toml file:

name = "turbo"
operating_system = "Fedora 38.0.0 [64-bit]"
crate_version = "1.11.0"
explanation = """
file 'crates/turborepo-lockfiles/src/berry/mod.rs' at line 126
"""
cause = "Descriptor collision eslint-config-custom@workspace:* and npm:*"
method = "Panic"
backtrace = """

   0:  0x101cedd - <turborepo_repository[2915623b9f180db5]::package_manager::PackageManager>::parse_lockfile
   1:  0x10185cd - <turborepo_repository[2915623b9f180db5]::package_manager::PackageManager>::read_lockfile
   2:   0xc0e269 - <turborepo_repository[2915623b9f180db5]::package_graph::builder::BuildState<turborepo_repository[2915623b9f180db5]::package_graph::builder::ResolvedWorkspaces, turborepo_repository[2915623b9f180db5]::discovery::CachingPackageDiscovery<turborepo_repository[2915623b9f180db5]::discovery::LocalPackageDiscovery>>>::populate_lockfile::{closure#0}::{closure#0}
   3:   0xc0df92 - <tracing[4917b8fba0f262d1]::instrument::Instrumented<<turborepo_repository[2915623b9f180db5]::package_graph::builder::BuildState<turborepo_repository[2915623b9f180db5]::package_graph::builder::ResolvedWorkspaces, turborepo_repository[2915623b9f180db5]::discovery::CachingPackageDiscovery<turborepo_repository[2915623b9f180db5]::discovery::LocalPackageDiscovery>>>::populate_lockfile::{closure#0}::{closure#0}> as core[b651e4c64ee609a3]::future::future::Future>::poll
   4:   0xc0d0f0 - <turborepo_repository[2915623b9f180db5]::package_graph::builder::BuildState<turborepo_repository[2915623b9f180db5]::package_graph::builder::ResolvedWorkspaces, turborepo_repository[2915623b9f180db5]::discovery::CachingPackageDiscovery<turborepo_repository[2915623b9f180db5]::discovery::LocalPackageDiscovery>>>::resolve_lockfile::{closure#0}::{closure#0}
   5:   0xc0ce22 - <tracing[4917b8fba0f262d1]::instrument::Instrumented<<turborepo_repository[2915623b9f180db5]::package_graph::builder::BuildState<turborepo_repository[2915623b9f180db5]::package_graph::builder::ResolvedWorkspaces, turborepo_repository[2915623b9f180db5]::discovery::CachingPackageDiscovery<turborepo_repository[2915623b9f180db5]::discovery::LocalPackageDiscovery>>>::resolve_lockfile::{closure#0}::{closure#0}> as core[b651e4c64ee609a3]::future::future::Future>::poll
   6:   0xc0b104 - <turborepo_repository[2915623b9f180db5]::package_graph::builder::BuildState<turborepo_repository[2915623b9f180db5]::package_graph::builder::ResolvedWorkspaces, turborepo_repository[2915623b9f180db5]::discovery::CachingPackageDiscovery<turborepo_repository[2915623b9f180db5]::discovery::LocalPackageDiscovery>>>::resolve_lockfile::{closure#0}.30101
   7:   0xbfea0c - <turborepo_repository[2915623b9f180db5]::package_graph::builder::PackageGraphBuilder<turborepo_repository[2915623b9f180db5]::discovery::LocalPackageDiscovery>>::build::{closure#0}::{closure#0}
   8:   0xbfe265 - <tracing[4917b8fba0f262d1]::instrument::Instrumented<<turborepo_repository[2915623b9f180db5]::package_graph::builder::PackageGraphBuilder<turborepo_repository[2915623b9f180db5]::discovery::LocalPackageDiscovery>>::build::{closure#0}::{closure#0}> as core[b651e4c64ee609a3]::future::future::Future>::poll
   9:   0xbf1d28 - <turborepo_lib[adfb7611b6a9cc2e]::run::Run>::run_with_analytics::{closure#0}.29865
  10:   0xb57835 - <tokio[aa1a39802c796514]::future::poll_fn::PollFn<turborepo_lib[adfb7611b6a9cc2e]::commands::run::run::{closure#0}::{closure#2}> as core[b651e4c64ee609a3]::future::future::Future>::poll
  11:   0xe13755 - turborepo_lib[adfb7611b6a9cc2e]::cli::run::{closure#0}.35759
  12:   0xcd0d7d - <tokio[aa1a39802c796514]::runtime::context::blocking::BlockingRegionGuard>::block_on::<turborepo_lib[adfb7611b6a9cc2e]::cli::run::{closure#0}>
  13:   0xdffecf - turborepo_lib[adfb7611b6a9cc2e]::cli::run
  14:   0xf54488 - turborepo_lib[adfb7611b6a9cc2e]::shim::run_correct_turbo
  15:   0xf50092 - turborepo_lib[adfb7611b6a9cc2e]::main
  16:   0xa3f7e3 - turbo[dd188b14ed14ed60]::main
  17:   0xa58883 - std[222148ad30572bc6]::sys_common::backtrace::__rust_begin_short_backtrace::<fn() -> core[b651e4c64ee609a3]::result::Result<(), anyhow[fd74d4a407a9609c]::Error>, core[b651e4c64ee609a3]::result::Result<(), anyhow[fd74d4a407a9609c]::Error>>
  18:   0xa58583 - std[222148ad30572bc6]::rt::lang_start::<core[b651e4c64ee609a3]::result::Result<(), anyhow[fd74d4a407a9609c]::Error>>"""

Expected Behavior

I expect the script to work as before, and the same way with a turbo command or yarn workspace.

To Reproduce

There's nothing specific; the repo is linked above.

Additional context

I undid the last commit, but if you try to reproduce it, make sure turbo is above 1.7.4 (it should be 1.11.0, though).

@FrancoRATOVOSON added the `kind: bug`, `needs: triage`, and `owned-by: turborepo` labels Dec 6, 2023
@NicholasLYang
Contributor

Hi @FrancoRATOVOSON, thanks for the issue. Could you try with the `--go-fallback` flag? That will let us know if it's an issue specific to the Rust codepath or a general issue.

@chris-olszewski
Contributor

@FrancoRATOVOSON I noticed in the repro you linked that you're using Yarn 4, which we don't officially support yet. Could you try using Yarn 3, regenerating the yarn.lock, and seeing if the issue persists?

@chris-olszewski chris-olszewski self-assigned this Dec 6, 2023
@chris-olszewski
Contributor

@FrancoRATOVOSON We still need to solve the panic, but I believe you want to change `*` to `workspace:*` for your internal dependencies. With Yarn 4, `npm` is the default protocol when one isn't provided. This gets encoded in the lockfile like this where I think you want something like this.
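For illustration, using the `eslint-config-custom` package from the crash report, the suggestion is to make the workspace protocol explicit in each consuming package.json (exact dependency sections in the repro may differ):

```json
{
  "devDependencies": {
    "eslint-config-custom": "workspace:*"
  }
}
```

Without the protocol (`"eslint-config-custom": "*"`), Yarn 4 records the dependency as `eslint-config-custom@npm:*` in yarn.lock, which is what produces the `workspace:*` vs `npm:*` collision in the panic message.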

@FrancoRATOVOSON
Author

> Hi @FrancoRATOVOSON, thanks for the issue. Could you try with the `--go-fallback` flag? That will let us know if it's an issue specific to the Rust codepath or a general issue

$ yarn web --go-fallback
thread '<unnamed>' panicked at crates/turborepo-lockfiles/src/berry/mod.rs:126:21:
Descriptor collision eslint-config-custom@workspace:* and npm:*
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
fatal runtime error: failed to initiate panic, error 5
SIGABRT: abort
PC=0x12aa4c1 m=5 sigcode=18446744073709551610
signal arrived during cgo execution

goroutine 4 [syscall]:
runtime.cgocall(0xf3c590, 0xc00011ca50)
	runtime/cgocall.go:157 +0x5c fp=0xc00011ca28 sp=0xc00011c9f0 pc=0x91509c
github.com/vercel/turbo/cli/internal/ffi._Cfunc_transitive_closure({0x3e9d4, 0x7f30175490a0})
	_cgo_gotypes.go:213 +0x55 fp=0xc00011ca50 sp=0xc00011ca28 pc=0xd776d5
github.com/vercel/turbo/cli/internal/ffi.TransitiveDeps({0xc000400000, 0x3e2f1, 0x3e2f2}, {0x35c458, 0x5}, 0x9daa27?, 0xc0000ecc90?)
	github.com/vercel/turbo/cli/internal/ffi/ffi.go:197 +0x3a5 fp=0xc00011cc48 sp=0xc00011ca50 pc=0xd787c5
github.com/vercel/turbo/cli/internal/lockfile.rustTransitiveDeps({0xc000400000, 0x3e2f1, 0x3e2f2}, {0x35c458, 0x5}, 0xc0000ed100, 0x7f30429b6a68?)
	github.com/vercel/turbo/cli/internal/lockfile/lockfile.go:180 +0x1d1 fp=0xc00011cee0 sp=0xc00011cc48 pc=0xdaaf31
github.com/vercel/turbo/cli/internal/lockfile.AllTransitiveClosures(0xc0000ed130?, {0x459e00?, 0xc0002c6420?})
	github.com/vercel/turbo/cli/internal/lockfile/lockfile.go:77 +0xaf fp=0xc00011cff8 sp=0xc00011cee0 pc=0xdaa20f
github.com/vercel/turbo/cli/internal/context.(*Context).populateExternalDeps(0xc00007c080, {0xc000044210?, 0x35ad5a?}, 0xc000314480, 0x35ad5a?)
	github.com/vercel/turbo/cli/internal/context/context.go:384 +0x2fa fp=0xc00011d328 sp=0xc00011cff8 pc=0xf15f9a
github.com/vercel/turbo/cli/internal/context.BuildPackageGraph({0xc000044210, 0x28}, 0xc000314480, {0xc0002058e0, 0x5})
	github.com/vercel/turbo/cli/internal/context/context.go:228 +0x8de fp=0xc00011d618 sp=0xc00011d328 pc=0xf1427e
github.com/vercel/turbo/cli/internal/run.(*run).run(0xc000012048, {0x45ab58, 0xc0000ce020}, {0xc0000abf40, 0x1, 0x4}, 0xc00010cc00)
	github.com/vercel/turbo/cli/internal/run/run.go:167 +0x1be fp=0xc00011dec8 sp=0xc00011d618 pc=0xf3809e
github.com/vercel/turbo/cli/internal/run.ExecuteRun({0x45ab58, 0xc0000ce020}, 0x0?, 0x0?, 0xc00010cc00)
	github.com/vercel/turbo/cli/internal/run/run.go:50 +0x167 fp=0xc00011df70 sp=0xc00011dec8 pc=0xf376c7
github.com/vercel/turbo/cli/internal/cmd.RunWithExecutionState.func1()
	github.com/vercel/turbo/cli/internal/cmd/root.go:69 +0x75 fp=0xc00011dfe0 sp=0xc00011df70 pc=0xf3b6b5
runtime.goexit()
	runtime/asm_amd64.s:1598 +0x1 fp=0xc00011dfe8 sp=0xc00011dfe0 pc=0x97d1e1
created by github.com/vercel/turbo/cli/internal/cmd.RunWithExecutionState
	github.com/vercel/turbo/cli/internal/cmd/root.go:64 +0x2d8

goroutine 1 [select]:
runtime.gopark(0xc0001cbec0?, 0x2?, 0x0?, 0x0?, 0xc0001cbe5c?)
	runtime/proc.go:381 +0xd6 fp=0xc0001cbcd0 sp=0xc0001cbcb0 pc=0x94a276
runtime.selectgo(0xc0001cbec0, 0xc0001cbe58, 0x0?, 0x0, 0xc0001ed090?, 0x1)
	runtime/select.go:327 +0x7be fp=0xc0001cbe10 sp=0xc0001cbcd0 pc=0x95a79e
github.com/vercel/turbo/cli/internal/cmd.RunWithExecutionState(0xc00010cc00, {0x35ce65, 0x6})
	github.com/vercel/turbo/cli/internal/cmd/root.go:79 +0x328 fp=0xc0001cbef8 sp=0xc0001cbe10 pc=0xf3b4a8
main.main()
	github.com/vercel/turbo/cli/cmd/turbo/main.go:30 +0x1bb fp=0xc0001cbf80 sp=0xc0001cbef8 pc=0xf3c07b
runtime.main()
	runtime/proc.go:250 +0x207 fp=0xc0001cbfe0 sp=0xc0001cbf80 pc=0x949e47
runtime.goexit()
	runtime/asm_amd64.s:1598 +0x1 fp=0xc0001cbfe8 sp=0xc0001cbfe0 pc=0x97d1e1

goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:381 +0xd6 fp=0xc000062fb0 sp=0xc000062f90 pc=0x94a276
runtime.goparkunlock(...)
	runtime/proc.go:387
runtime.forcegchelper()
	runtime/proc.go:305 +0xb0 fp=0xc000062fe0 sp=0xc000062fb0 pc=0x94a0b0
runtime.goexit()
	runtime/asm_amd64.s:1598 +0x1 fp=0xc000062fe8 sp=0xc000062fe0 pc=0x97d1e1
created by runtime.init.6
	runtime/proc.go:293 +0x25

goroutine 18 [GC sweep wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:381 +0xd6 fp=0xc00005e780 sp=0xc00005e760 pc=0x94a276
runtime.goparkunlock(...)
	runtime/proc.go:387
runtime.bgsweep(0x0?)
	runtime/mgcsweep.go:278 +0x8e fp=0xc00005e7c8 sp=0xc00005e780 pc=0x9352ee
runtime.gcenable.func1()
	runtime/mgc.go:178 +0x26 fp=0xc00005e7e0 sp=0xc00005e7c8 pc=0x92a5a6
runtime.goexit()
	runtime/asm_amd64.s:1598 +0x1 fp=0xc00005e7e8 sp=0xc00005e7e0 pc=0x97d1e1
created by runtime.gcenable
	runtime/mgc.go:178 +0x6b

goroutine 19 [GC scavenge wait]:
runtime.gopark(0xc0000a6000?, 0x44f5f8?, 0x1?, 0x0?, 0x0?)
	runtime/proc.go:381 +0xd6 fp=0xc00005ef70 sp=0xc00005ef50 pc=0x94a276
runtime.goparkunlock(...)
	runtime/proc.go:387
runtime.(*scavengerState).park(0x1356ca0)
	runtime/mgcscavenge.go:400 +0x53 fp=0xc00005efa0 sp=0xc00005ef70 pc=0x933213
runtime.bgscavenge(0x0?)
	runtime/mgcscavenge.go:628 +0x45 fp=0xc00005efc8 sp=0xc00005efa0 pc=0x9337e5
runtime.gcenable.func2()
	runtime/mgc.go:179 +0x26 fp=0xc00005efe0 sp=0xc00005efc8 pc=0x92a546
runtime.goexit()
	runtime/asm_amd64.s:1598 +0x1 fp=0xc00005efe8 sp=0xc00005efe0 pc=0x97d1e1
created by runtime.gcenable
	runtime/mgc.go:179 +0xaa

goroutine 20 [finalizer wait]:
runtime.gopark(0x1a0?, 0x1357700?, 0x20?, 0xa8?, 0xc000062770?)
	runtime/proc.go:381 +0xd6 fp=0xc000062628 sp=0xc000062608 pc=0x94a276
runtime.runfinq()
	runtime/mfinal.go:193 +0x107 fp=0xc0000627e0 sp=0xc000062628 pc=0x9295e7
runtime.goexit()
	runtime/asm_amd64.s:1598 +0x1 fp=0xc0000627e8 sp=0xc0000627e0 pc=0x97d1e1
created by runtime.createfing
	runtime/mfinal.go:163 +0x45

goroutine 21 [select, locked to thread]:
runtime.gopark(0xc00005f7a8?, 0x2?, 0xf2?, 0xa5?, 0xc00005f7a4?)
	runtime/proc.go:381 +0xd6 fp=0xc00005f618 sp=0xc00005f5f8 pc=0x94a276
runtime.selectgo(0xc00005f7a8, 0xc00005f7a0, 0x0?, 0x0, 0x0?, 0x1)
	runtime/select.go:327 +0x7be fp=0xc00005f758 sp=0xc00005f618 pc=0x95a79e
runtime.ensureSigM.func1()
	runtime/signal_unix.go:1004 +0x1b0 fp=0xc00005f7e0 sp=0xc00005f758 pc=0x974d30
runtime.goexit()
	runtime/asm_amd64.s:1598 +0x1 fp=0xc00005f7e8 sp=0xc00005f7e0 pc=0x97d1e1
created by runtime.ensureSigM
	runtime/signal_unix.go:987 +0xbd

goroutine 34 [syscall]:
runtime.notetsleepg(0x0?, 0x0?)
	runtime/lock_futex.go:236 +0x34 fp=0xc0002927a0 sp=0xc000292768 pc=0x91d4f4
os/signal.signal_recv()
	runtime/sigqueue.go:152 +0x2f fp=0xc0002927c0 sp=0xc0002927a0 pc=0x97960f
os/signal.loop()
	os/signal/signal_unix.go:23 +0x19 fp=0xc0002927e0 sp=0xc0002927c0 pc=0xbffd39
runtime.goexit()
	runtime/asm_amd64.s:1598 +0x1 fp=0xc0002927e8 sp=0xc0002927e0 pc=0x97d1e1
created by os/signal.Notify.func1.1
	os/signal/signal.go:151 +0x2a

goroutine 3 [chan receive]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:381 +0xd6 fp=0xc000063700 sp=0xc0000636e0 pc=0x94a276
runtime.chanrecv(0xc0000aca20, 0x0, 0x1)
	runtime/chan.go:583 +0x49d fp=0xc000063790 sp=0xc000063700 pc=0x917e5d
runtime.chanrecv1(0x0?, 0x0?)
	runtime/chan.go:442 +0x18 fp=0xc0000637b8 sp=0xc000063790 pc=0x917958
github.com/vercel/turbo/cli/internal/signals.NewWatcher.func1()
	github.com/vercel/turbo/cli/internal/signals/signals.go:56 +0x28 fp=0xc0000637e0 sp=0xc0000637b8 pc=0xe857c8
runtime.goexit()
	runtime/asm_amd64.s:1598 +0x1 fp=0xc0000637e8 sp=0xc0000637e0 pc=0x97d1e1
created by github.com/vercel/turbo/cli/internal/signals.NewWatcher
	github.com/vercel/turbo/cli/internal/signals/signals.go:55 +0x12f

rax    0x0
rbx    0x0
rcx    0x12aa4c1
rdx    0x0
rdi    0x2
rsi    0x7f3019e15578
rbp    0x12de818
rsp    0x7f3019e15568
r8     0x0
r9     0x0
r10    0x8
r11    0x246
r12    0x7f3019e15c30
r13    0x7f3017513710
r14    0x7f3019e15578
r15    0x7f3019e15780
rip    0x12aa4c1
rflags 0x246
cs     0x33
fs     0x0
gs     0x0
FAIL: 2

@FrancoRATOVOSON
Author

> @FrancoRATOVOSON We still need to solve the panic, but I believe you want to change * to workspace:* for your internal dependencies. With Yarn 4 npm is the default protocol when one isn't provided. This gets encoded in the lockfile like this where I think you want something like this.

Saw this and fixed it, but nothing changed.

@FrancoRATOVOSON
Author

> @FrancoRATOVOSON I noticed in the repro you linked that you're using Yarn 4 which we don't officially support yet. Could you try using Yarn 3, regenerating the yarn.lock and see if the issue persists?

Yarn 4 was the problem, so we can now rename this issue as "support yarn 4" 😅

@piyushchauhan2011

piyushchauhan2011 commented Dec 27, 2023

Seems like this happens with the pnpm lockfile also. Using the latest turbo version (`"turbo": "^1.11.2"`), I'm seeing random crashes while running commands even after `turbo daemon clean`; sometimes it works and sometimes it crashes.

chris-olszewski added a commit that referenced this issue Jan 2, 2024
### Description

Fixes #6715

Yarn 4 now makes the default protocol of `npm` explicit in the lockfile representation (e.g. `"foo": "*"` is really `"foo": "npm:*"`). This means that workspaces that reference a package both with and without a protocol will end up with multiple protocols for a single descriptor.

For example, if one package has a dependency `"c": "*"` and another package in the workspace has a dependency `"c": "workspace:*"`:
- In Yarn 3 those result in the descriptors `c@*`, `c@workspace:*`, `c@workspace:pkgs/c`
- In Yarn 4 those result in the descriptors `c@npm:*`, `c@workspace:*`, `c@workspace:pkgs/c`

We cannot get rid of the logic for the case without a protocol, as that would break our Yarn 3 usage.
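The normalization idea can be sketched as follows. This is a hypothetical illustration, not the actual Turborepo fix: the function names `normalize_range` and `is_collision` are invented here, and the real lockfile code tracks richer descriptor state.

```rust
// Sketch: treat Yarn 3's implicit range ("*") and Yarn 4's explicit
// form ("npm:*") as the same descriptor, so they no longer collide.

/// Strip an explicit `npm:` protocol so that a Yarn 3 descriptor (`c@*`)
/// and its Yarn 4 form (`c@npm:*`) compare equal. Other protocols such
/// as `workspace:` are left untouched.
fn normalize_range(range: &str) -> &str {
    range.strip_prefix("npm:").unwrap_or(range)
}

/// Two ranges for the same package only represent a real collision if
/// they still differ after normalization.
fn is_collision(existing: &str, incoming: &str) -> bool {
    normalize_range(existing) != normalize_range(incoming)
}

fn main() {
    // Yarn 4 spells the default protocol explicitly; not a collision.
    assert!(!is_collision("*", "npm:*"));
    // A workspace descriptor genuinely differs from an npm one; the
    // real fix has to keep both forms around rather than panic.
    assert!(is_collision("workspace:*", "npm:*"));
}
```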

### Testing Instructions

Added unit test that has a lockfile with mixed protocols.

Existing unit tests verify the Yarn3 behavior is still supported.


Closes TURBO-1856

Co-authored-by: Chris Olszewski <Chris Olszewski>
Zertsov pushed a commit that referenced this issue Jan 5, 2024
(Same description as the commit above.)