During a recent profiling session, I noticed spurious non-zero samples against lines I knew to be unreachable in the test scenario. It appears that the compiler's DWARF information is incorrect. I don't know to what extent the rest of the profiler's information is reliable.
I'm using today's tip:
$ go version
go version devel +8b98498a58 Wed Jun 17 19:48:45 2020 +0000 linux/amd64
It's easy to reproduce deterministically, though the test case is far from minimal. The profiled application is go.starlark.net/cmd/starlark. The executable file is here called 'x'.
$ cd $GOROOT/src/go.starlark.net
$ git log | head -n 1
commit c6daab680f283fdf16a79c99d8e86a1562400761
$ go build -o x ./cmd/starlark
There are 49 program counter locations that purport to belong to interp.go:333. One of them is address 0x593311.

The actual code at interp.go:333 (https://github.com/google/starlark-go/blob/c6daab680f283fdf16a79c99d8e86a1562400761/starlark/interp.go#L333) is a case in a bytecode interpreter that handles the ITERPUSH bytecode, which is not reached in the profiled scenario. (I confirmed this by replacing it with a panic statement; it did not panic.)
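One way to double-check what the debug info claims for that address, independent of the profiler, is to query the binary directly. This is a sketch rather than a transcript: it assumes go tool addr2line's read-addresses-from-stdin interface and go tool objdump's usual FILE:LINE column, and the second command counts instruction lines rather than distinct sample sites.

$ echo 0x593311 | go tool addr2line x
$ go tool objdump x | grep -c 'interp.go:333'

The first should print the enclosing function and the file:line recorded for that PC; the second gives the number of instructions attributed to interp.go:333.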
But disassembling the program shows that address 0x593311 is in fact within the basic block for the body of the loop at lines 96-104 (https://github.com/google/starlark-go/blob/c6daab680f283fdf16a79c99d8e86a1562400761/starlark/interp.go#L96); the add 7 instruction is the giveaway. Here's the corresponding source:

$ nl -ba starlark/interp.go | grep -A7 ' 96'
    96  for s := uint(0); ; s += 7 {
    97      b := code[pc]
    98      pc++
    99      arg |= uint32(b&0x7f) << s
   100      if b < 0x80 {
   101          break
   102      }
   103  }
Running the program under gdb with a breakpoint on that instruction appears to confirm this inference. Of the other instructions associated with interp.go:333 that I spot-checked, several appear to be logic for the binary tree in the control-flow graph for 'switch op'.
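For concreteness, the check looks roughly like this in gdb (a sketch, not a transcript; the argument to 'run' stands in for whatever input drives the profiled scenario):

$ gdb ./x
(gdb) info line *0x593311
(gdb) x/10i 0x593311
(gdb) break *0x593311
(gdb) run <input for the profiled scenario>

'info line' reports what the line table says for that address, 'x/10i' shows the instructions actually there, and the breakpoint fires during the run even though interp.go:333 is supposedly never reached, which is consistent with the instruction really belonging to the decode loop.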
All the affected PC addresses appear to be loads of 0x130(SP). Perhaps the liveness analysis is using that stack slot as a temporary as well as for 'iterstack', and somehow location information is leaking between the two variables assigned to the same slot?
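To make the conjectured failure mode concrete, here is a hypothetical Go sketch of the general shape, not the compiler's actual behavior and not code from starlark-go: two values with disjoint lifetimes may be assigned the same frame slot, and if the debug information for one of them extends over PCs where the slot holds the other, samples on loads of that slot get charged to the wrong variable and hence the wrong line.

package main

import "fmt"

// fill stands in for any computation whose result is too large to live in
// registers, so it occupies a slot in the caller's stack frame.
//go:noinline
func fill(n int) [32]int {
	var v [32]int
	for i := range v {
		v[i] = n + i
	}
	return v
}

// 'a' is dead once the early return is not taken, so the compiler is free to
// reuse its stack slot for 'b'. The conjecture is that something analogous
// happens in the interpreter's frame between 'iterstack' and a temporary at
// 0x130(SP), with location/line info for one leaking onto PCs that belong to
// the other. (Hypothetical names and layout; this is only an illustration.)
func f(n int) int {
	a := fill(n)
	if n > 0 {
		return a[0] // last use of a
	}
	b := fill(-n) // b may be allocated to the slot a occupied
	return b[31]
}

func main() {
	fmt.Println(f(3), f(-3))
}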
I realize a perfect debug view is basically impossible to achieve, but I wonder if there is an easy fix in this case.