
build failed on Mac #16

Closed
lotux opened this issue May 2, 2018 · 5 comments
@lotux

lotux commented May 2, 2018

build failed on Mac

Starting local Bazel server and connecting to it...
............
INFO: Analysed target //runsc:runsc (174 packages loaded).
INFO: Found 1 target...
INFO: From Compiling external/com_google_protobuf/src/google/protobuf/compiler/js/embed.cc [for host]:
external/com_google_protobuf/src/google/protobuf/compiler/js/embed.cc:37:12: warning: unused variable 'output_file' [-Wunused-const-variable]
const char output_file[] = "well_known_types_embed.cc";
           ^
1 warning generated.
ERROR: /Users/[REDACTED]/code/gvisor/vdso/BUILD:8:1: Executing genrule //vdso:vdso failed (Exit 1)
clang: error: invalid linker name in argument '-fuse-ld=gold'
Target //runsc:runsc failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 84.488s, Critical Path: 9.48s
INFO: 125 processes: 115 darwin-sandbox, 10 local.
FAILED: Build did NOT complete successfully
$ bazel build runsc --verbose_failures --sandbox_debug
INFO: Analysed target //runsc:runsc (0 packages loaded).
INFO: Found 1 target...
ERROR: /Users/[REDACTED]/code/gvisor/vdso/BUILD:8:1: Executing genrule //vdso:vdso failed (Exit 1): sandbox-exec failed: error executing command
  (cd /private/var/tmp/_bazel_[REDACTED]/2aa46d382f3da4c861c496bba67d8af8/execroot/__main__ && \
  exec env - \
    PATH=/Library/Frameworks/Python.framework/Versions/3.6/bin:/bin:/Users/[REDACTED]/code/flutter/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin:/opt/X11/bin:/usr/local/opt/go/libexec/bin:/Users/[REDACTED]/Library/Python/2.7/bin:/Users/[REDACTED]/code/gocode/bin:/Users/[REDACTED]/Library/Android/sdk/tools:/Users/[REDACTED]/Library/Android/sdk/platform-tools:/Users/[REDACTED]/.mix:/Users/[REDACTED]/.mix/escripts:/Users/[REDACTED]/.pub-cache/bin/ \
    TMPDIR=/var/folders/65/cn602n1d21z_cn3cwnjmy5lwrnkmr0/T/ \
  /usr/bin/sandbox-exec -f /private/var/tmp/_bazel_[REDACTED]/2aa46d382f3da4c861c496bba67d8af8/sandbox/565523233562725973/sandbox.sb /private/var/tmp/_bazel_[REDACTED]/2aa46d382f3da4c861c496bba67d8af8/execroot/__main__/_bin/process-wrapper '--timeout=0' '--kill_delay=15' /bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; external/local_config_cc/cc_wrapper.sh  -I. -O2 -std=c++11 -fPIC -fuse-ld=gold -m64 -shared -nostdlib -Wl,-soname=linux-vdso.so.1 -Wl,--hash-style=sysv -Wl,--no-undefined -Wl,-Bsymbolic -Wl,-z,max-page-size=4096 -Wl,-z,common-page-size=4096 -Wl,-Tvdso/vdso.lds -o bazel-out/darwin-fastbuild/genfiles/vdso/vdso.so vdso/vdso.cc vdso/vdso_time.cc && bazel-out/host/bin/vdso/check_vdso --check-data --vdso bazel-out/darwin-fastbuild/genfiles/vdso/vdso.so ')
clang: error: invalid linker name in argument '-fuse-ld=gold'
Target //runsc:runsc failed to build
INFO: Elapsed time: 0.548s, Critical Path: 0.23s
INFO: 0 processes.
FAILED: Build did NOT complete successfully
@shibukawa

I could work around this issue by using gcc7. I use MacPorts; I don't know the Homebrew way.

$ sudo port install gcc7
$ port select --list gcc
Available versions for gcc:
	mp-gcc7
	none (active)
$ sudo port select --set gcc mp-gcc7
$ bazel build runsc

But I got another issue:

/private/var/tmp/_bazel_shibu/83795306cfc830ea296723053d3de8e5/sandbox/6590907040525656452/execroot/__main__/pkg/syserror/syserror.go:44:23: undefined: syscall.ELIBBAD
GoCompile: error running compiler: exit status 1
Target //runsc:runsc failed to build

@prattmic
Member

prattmic commented May 2, 2018

gVisor only supports Linux; I suppose our README should be clearer about that.

It might be possible to do cross-compilation builds of Linux outputs on OS X hosts, but it looks like you are trying to build for OS X directly.

@chyroc

chyroc commented May 3, 2018

Need a cross-platform compilation solution

@ghost

ghost commented May 3, 2018

@prattmic Update the README... I was also tripped up by this.

@nlacasse
Collaborator

nlacasse commented May 3, 2018

Sorry for the confusion. The README has been updated.

@nlacasse nlacasse closed this as completed May 3, 2018
lemin9538 pushed a commit to lemin9538/gvisor that referenced this issue Oct 29, 2020
Currently, saving the fpsimd registers uses the following
instruction:

	# FMOVD Fx, 16*1(R0)

which compiles to:

	# str     Dx, [x0, google#16]

Dx is a 64-bit FP register, not 128-bit, so the upper 64 bits of data
are lost, causing applications to hit many random crashes. The 128-bit
registers Vx or Qx must be used to save and restore the fpsimd context.

Signed-off-by: Min Le <lemin.lm@antgroup.com>
copybara-service bot pushed a commit that referenced this issue Nov 7, 2020
Currently, saving and restoring the fpsimd registers uses the following
instruction:

	# FMOVD F0, 16*1(R0)

which compiles to:

	# str     d0, [x0, #16]

D0 is a 64-bit register, not 128-bit, so the upper 64 bits of data are
lost, which causes applications to hit many random crashes. The 128-bit
register v0 or q0 must be used to save the fpsimd context.

Go does not seem to support fpsimd well for aarch64; I could not find a related instruction to do this work, so WORD is used instead.

FUTURE_COPYBARA_INTEGRATE_REVIEW=#4683 from lemin9538:lemin_fpsmid_fix 185b88e
PiperOrigin-RevId: 341155522
adamliyi pushed a commit to adamliyi/gvisor that referenced this issue Nov 18, 2020
copybara-service bot pushed a commit that referenced this issue Oct 11, 2023
sync.SeqCount relies on the following memory orderings:

- All stores following BeginWrite() in program order happen after the atomic
  read-modify-write (RMW) of SeqCount.epoch. In the Go 1.19 memory model, this
  is implied by atomic loads being acquire-seqcst.

- All stores preceding EndWrite() in program order happen before the RMW of
  SeqCount.epoch. In the Go 1.19 memory model, this is implied by atomic stores
  being release-seqcst.

- All loads following BeginRead() in program order happen after the load of
  SeqCount.epoch. In the Go 1.19 memory model, this is implied by atomic loads
  being acquire-seqcst.

- All loads preceding ReadOk() in program order happen before the load of
  SeqCount.epoch. The Go 1.19 memory model does not imply this property.

The x86 memory model *does* imply this final property, and in practice the
current Go compiler does not reorder memory accesses around the load of
SeqCount.epoch, so sync.SeqCount behaves correctly on x86.
However, on ARM64, the instruction that is actually emitted for the atomic load
from SeqCount.epoch is LDAR:

```
gvisor/pkg/sentry/kernel/kernel.SeqAtomicTryLoadTaskGoroutineSchedInfo():
gvisor/pkg/sentry/kernel/seqatomic_taskgoroutineschedinfo_unsafe.go:34
  56371c:       f9400025        ldr     x5, [x1]
  563720:       f9400426        ldr     x6, [x1, #8]
  563724:       f9400822        ldr     x2, [x1, #16]
  563728:       f9400c23        ldr     x3, [x1, #24]
gvisor/pkg/sentry/kernel/seqatomic_taskgoroutineschedinfo_unsafe.go:36
  56372c:       d503201f        nop
gvisor/pkg/sync/sync.(*SeqCount).ReadOk():
gvisor/pkg/sync/seqcount.go:107
  563730:       88dffc07        ldar    w7, [x0]
  563734:       6b0400ff        cmp     w7, w4
```

LDAR is explicitly documented as not implying the required memory ordering:
https://developer.arm.com/documentation/den0024/latest/Memory-Ordering/Barriers/One-way-barriers
Consequently, SeqCount.ReadOk() is incorrectly memory-ordered on weakly-ordered
architectures. To fix this, we need to introduce an explicit memory fence.

On ARM64, there is no way to implement the memory fence in question without
resorting to assembly, so the implementation is straightforward. On x86, we
introduce a compiler fence, since future compilers might otherwise reorder
memory accesses to after atomic loads; the only apparent way to do so is also
by using assembly, which unfortunately introduces overhead:

- After the call to sync.MemoryFenceReads(), callers zero XMM15 and reload the
  runtime.g pointer from %fs:-8, reflecting the switch from ABI0 to
  ABIInternal. This is a relatively small cost.

- Before the call to sync.MemoryFenceReads(), callers spill all registers to
  the stack, since ABI0 function calls clobber all registers. The cost of this
  depends on the state of the caller before the call, and is not reflected in
  BenchmarkSeqCountReadUncontended (which does not read any protected state
  between the calls to BeginRead() and ReadOk()).

Both of these problems are caused by Go assembly functions being restricted to
ABI0. Go provides a way to mark assembly functions as using ABIInternal
instead, but restricts its use to functions in package runtime
(golang/go#44065). runtime.publicationBarrier(),
which is semantically "sync.MemoryFenceWrites()", is implemented as a compiler
fence on x86; defining sync.MemoryFenceReads() as an alias for that function
(using go:linkname) would mitigate the former problem, but not the latter.
Thus, for simplicity, we define sync.MemoryFenceReads() in (ABI0) assembly, and
have no choice but to eat the overhead.
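For reference, a sketch of what such an ABI0 assembly fence could look like on ARM64 (the file name is an assumption, not necessarily gVisor's source); `DMB ISHLD` orders prior loads before subsequent loads and stores, which is exactly the read-fence property ReadOk() needs:

```
// memory_fence_arm64.s (illustrative sketch)
#include "textflag.h"

// func MemoryFenceReads()
TEXT ·MemoryFenceReads(SB), NOSPLIT|NOFRAME, $0-0
	DMB	$0x9	// option 0b1001 = ISHLD: load/load and load/store barrier
	RET
```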

("Fence" and "barrier" are often used interchangeably in this context; Linux
uses "barrier" (e.g. `smp_rmb()`), while C++ uses "fence" (e.g.
`std::atomic_thread_fence()`). We choose "fence" to reduce ambiguity with
"write barriers", since Go is a GC'd language.)

PiperOrigin-RevId: 572675753
copybara-service bot pushed a commit that referenced this issue Oct 16, 2023
PiperOrigin-RevId: 573861378