
[LD] Erroneous R_ARC_NONE in Relocation #282

Open
shahab-vahedi opened this issue Oct 24, 2019 · 6 comments
@shahab-vahedi
Member

In this small example, you see the following output for relocations of lib.so:

$ make reloc

arc-gnu-linux-readelf -r lib.so

Relocation section '.rela.dyn' at offset 0x2d4 contains 10 entries:
 Offset     Info    Type            Sym.Value  Sym. Name + Addend
00003f0c  00000038 R_ARC_RELATIVE               9c
00003f10  00000038 R_ARC_RELATIVE               64
00004010  00000038 R_ARC_RELATIVE               0
00000000  00000000 R_ARC_NONE                   0
00000000  00000000 R_ARC_NONE                   0
00000000  00000000 R_ARC_NONE                   0
00003fec  00000536 R_ARC_GLOB_DAT    00000000   _ITM_deregisterTMClone + 0
00003ff0  00000736 R_ARC_GLOB_DAT    00000000   _ITM_registerTMCloneTa + 0
00003ff4  00000836 R_ARC_GLOB_DAT    00000000   __cxa_finalize@GLIBC_2.30 + 0
00003ff8  00000636 R_ARC_GLOB_DAT    00000000   foo + 0

Relocation section '.rela.plt' at offset 0x34c contains 2 entries:
 Offset     Info    Type            Sym.Value  Sym. Name + Addend
00004008  00000637 R_ARC_JMP_SLOT    00000000   foo + 0
0000400c  00000837 R_ARC_JMP_SLOT    00000000   __cxa_finalize@GLIBC_2.30 + 0

There are a few R_ARC_NONE entries which should not be there.

@shahab-vahedi shahab-vahedi changed the title [LD] Extraneous R_ARC_NONE in Relocation [LD] Erroneous R_ARC_NONE in Relocation Oct 24, 2019
@claziss
Member

claziss commented Oct 24, 2019

The issue is due to hidden symbols which are defined in a different object file but are nevertheless local. Because of this, the linker was erroneously allocating dynamic relocation space for them, even though the relocations themselves were resolved later on; the reserved, unused slots then show up as zeroed R_ARC_NONE entries.

A simplified example is this one:

FileA.S:

	.data
	.hidden hidobj
	.global hidobj
	.type hidobj,@object
hidobj:
	.zero 4

FileB.S:

	.hidden	hidobj
	.section	.text
	add	r2,pcl,@hidobj@pcl

Linker command line:
arc-elf32-ld --shared -marclinux -o libx.so fileA.o fileB.o
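
For reference, the same setup can be written in C, where GCC emits the .hidden/.global directives shown above. A minimal sketch, assuming an ARC cross-compiler; the file names and the helper function are illustrative and not taken from the original example:

fileA.c:

	/* C equivalent of FileA.S: a global object with hidden visibility,
	   visible across object files inside the library, but never
	   preemptible from outside it.  */
	__attribute__((visibility("hidden"))) int hidobj;

fileB.c:

	/* C equivalent of FileB.S: reference the hidden object from a second
	   translation unit.  With -fPIC the compiler can use a PC-relative
	   access (@hidobj@pcl on ARC), since hidden symbols always resolve
	   locally at link time.  */
	extern int hidobj __attribute__((visibility("hidden")));

	int read_hidobj (void)
	{
		return hidobj;
	}

Compiling both files with -fPIC and linking with --shared should reproduce the spurious R_ARC_NONE entries in readelf -r output on an unfixed linker.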

@vineetgarc
Member

FWIW, there's STAR 9001086764: build linker to prune extraneous R_ARC_NONE dynamic relocs.

@claziss
Member

claziss commented Oct 25, 2019

A tentative patch is here: 6fb6db52d75404682fe57ffc62f339555bc0c939

@abrodkin
Member

@claziss what is our plan with this one?

shahab-vahedi referenced this issue in foss-for-synopsys-dwc-arc-processors/binutils-gdb Jun 10, 2020
There has been some breakage for aarch64-linux, arm-linux and s390-linux in
terms of inline frame unwinding. There may be other targets, but these are
the ones I'm aware of.

The following testcases started to show numerous failures and trigger internal
errors in GDB after commit 1009d92,
"Find tailcall frames before inline frames".

gdb.opt/inline-break.exp
gdb.opt/inline-cmds.exp
gdb.python/py-frame-inline.exp
gdb.reverse/insn-reverse.exp

The internal errors were of this kind:

binutils-gdb/gdb/frame.c:579: internal-error: frame_id get_frame_id(frame_info*): Assertion `fi->level == 0' failed.

After a lengthy investigation to try and find the cause of these assertions,
it seems we're dealing with fragile, poorly documented code that handles inline
frames, and we are attempting to unwind from within this fragile section of code.

Before commit 1009d92, the tailcall sniffer
was invoked from dwarf2_frame_prev_register. By the time we invoke the
dwarf2_frame_prev_register function, we've probably already calculated the
frame id (via compute_frame_id).

After said commit, the call to dwarf2_tailcall_sniffer_first was moved to
dwarf2_frame_cache. This is very early in a frame creation process, and
we're still calculating the frame ID (so compute_frame_id is in the call
stack).

This would be fine for regular frames, but the above testcases all deal
with some inline frames.

The particularity of inline frames is that their frame IDs depend on
the previous frame's ID, and the previous frame's ID relies on the inline
frame's registers. So it is a bit of a messy situation.

We have comments in various parts of the code warning about some of these
particularities.

In the case of dwarf2_tailcall_sniffer_first, we attempt to unwind the PC,
which goes through various functions until we eventually invoke
frame_unwind_got_register. This function will eventually attempt to create
a lazy value for a particular register, and this lazy value will require
a valid frame ID.  Since the inline frame doesn't have a valid frame ID
yet (remember we're still calculating the previous frame's ID so we can tell
what the inline frame ID is) we will call compute_frame_id for the inline
frame (level 0).
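
For context, this is roughly what the lazy-value creation looks like (a
simplified paraphrase of value_of_register_lazy in gdb/findvar.c from that
era, not the verbatim source):

	struct value *
	value_of_register_lazy (struct frame_info *frame, int regnum)
	{
	  struct gdbarch *gdbarch = get_frame_arch (frame);
	  struct value *reg_val
	    = allocate_value_lazy (register_type (gdbarch, regnum));

	  VALUE_LVAL (reg_val) = lval_register;
	  VALUE_REGNUM (reg_val) = regnum;
	  /* Recording which frame the value must later be read from requires
	     that frame's ID -- computing it here recurses into
	     compute_frame_id and trips the `fi->level == 0' assertion when
	     the next frame is the still-incomplete inline frame.  */
	  VALUE_NEXT_FRAME_ID (reg_val)
	    = get_frame_id (get_next_frame_sentinel_okay (frame));
	  return reg_val;
	}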

We'll eventually hit the assertion above, inside get_frame_id:

--
      /* If we haven't computed the frame id yet, then it must be that
         this is the current frame.  Compute it now, and stash the
         result.  The IDs of other frames are computed as soon as
         they're created, in order to detect cycles.  See
         get_prev_frame_if_no_cycle.  */
      gdb_assert (fi->level == 0);
--

It seems to me we shouldn't have reached this assertion without having the
inline frame ID already calculated. In fact, it seems we even start recursing
a bit when we invoke get_prev_frame_always within inline_frame_this_id. But
a check makes us quit the recursion and proceed to compute the id.

Here's the call stack for context:

 #0  get_prev_frame_always_1 (this_frame=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:2109
 RECURSION - #1  0x0000aaaaaae1d098 in get_prev_frame_always (this_frame=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:2124
 #2  0x0000aaaaaae95768 in inline_frame_this_id (this_frame=0xaaaaab85a670, this_cache=0xaaaaab85a688, this_id=0xaaaaab85a6d0)
     at ../../../repos/binutils-gdb/gdb/inline-frame.c:165
 #3  0x0000aaaaaae1916c in compute_frame_id (fi=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:550
 #4  0x0000aaaaaae19318 in get_frame_id (fi=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:582
 #5  0x0000aaaaaae13480 in value_of_register_lazy (frame=0xaaaaab85a730, regnum=30) at ../../../repos/binutils-gdb/gdb/findvar.c:296
 #6  0x0000aaaaaae16c00 in frame_unwind_got_register (frame=0xaaaaab85a730, regnum=30, new_regnum=30) at ../../../repos/binutils-gdb/gdb/frame-unwind.c:268
 #7  0x0000aaaaaad52604 in dwarf2_frame_prev_register (this_frame=0xaaaaab85a730, this_cache=0xaaaaab85a748, regnum=30)
     at ../../../repos/binutils-gdb/gdb/dwarf2/frame.c:1296
 #8  0x0000aaaaaae1ae68 in frame_unwind_register_value (next_frame=0xaaaaab85a730, regnum=30) at ../../../repos/binutils-gdb/gdb/frame.c:1229
 #9  0x0000aaaaaae1b304 in frame_unwind_register_unsigned (next_frame=0xaaaaab85a730, regnum=30) at ../../../repos/binutils-gdb/gdb/frame.c:1320
 #10 0x0000aaaaaab76574 in aarch64_dwarf2_prev_register (this_frame=0xaaaaab85a730, this_cache=0xaaaaab85a748, regnum=32)
     at ../../../repos/binutils-gdb/gdb/aarch64-tdep.c:1114
 #11 0x0000aaaaaad52724 in dwarf2_frame_prev_register (this_frame=0xaaaaab85a730, this_cache=0xaaaaab85a748, regnum=32)
     at ../../../repos/binutils-gdb/gdb/dwarf2/frame.c:1316
 #12 0x0000aaaaaae1ae68 in frame_unwind_register_value (next_frame=0xaaaaab85a730, regnum=32) at ../../../repos/binutils-gdb/gdb/frame.c:1229
 #13 0x0000aaaaaae1b304 in frame_unwind_register_unsigned (next_frame=0xaaaaab85a730, regnum=32) at ../../../repos/binutils-gdb/gdb/frame.c:1320
 #14 0x0000aaaaaae16a84 in default_unwind_pc (gdbarch=0xaaaaab81edc0, next_frame=0xaaaaab85a730) at ../../../repos/binutils-gdb/gdb/frame-unwind.c:223
 #15 0x0000aaaaaae32124 in gdbarch_unwind_pc (gdbarch=0xaaaaab81edc0, next_frame=0xaaaaab85a730) at ../../../repos/binutils-gdb/gdb/gdbarch.c:3074
 #16 0x0000aaaaaad4f15c in dwarf2_tailcall_sniffer_first (this_frame=0xaaaaab85a730, tailcall_cachep=0xaaaaab85a830, entry_cfa_sp_offsetp=0x0)
     at ../../../repos/binutils-gdb/gdb/dwarf2/frame-tailcall.c:388
 #17 0x0000aaaaaad520c0 in dwarf2_frame_cache (this_frame=0xaaaaab85a730, this_cache=0xaaaaab85a748) at ../../../repos/binutils-gdb/gdb/dwarf2/frame.c:1190
 #18 0x0000aaaaaad52204 in dwarf2_frame_this_id (this_frame=0xaaaaab85a730, this_cache=0xaaaaab85a748, this_id=0xaaaaab85a790)
     at ../../../repos/binutils-gdb/gdb/dwarf2/frame.c:1218
 #19 0x0000aaaaaae1916c in compute_frame_id (fi=0xaaaaab85a730) at ../../../repos/binutils-gdb/gdb/frame.c:550
 #20 0x0000aaaaaae1c958 in get_prev_frame_if_no_cycle (this_frame=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:1927
 #21 0x0000aaaaaae1cc44 in get_prev_frame_always_1 (this_frame=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:2006
 FIRST CALL - #22 0x0000aaaaaae1d098 in get_prev_frame_always (this_frame=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:2124
 #23 0x0000aaaaaae18f68 in skip_artificial_frames (frame=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:495
 #24 0x0000aaaaaae193e8 in get_stack_frame_id (next_frame=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:596
 #25 0x0000aaaaaae87a54 in process_event_stop_test (ecs=0xffffffffefc8) at ../../../repos/binutils-gdb/gdb/infrun.c:6857
 #26 0x0000aaaaaae86bdc in handle_signal_stop (ecs=0xffffffffefc8) at ../../../repos/binutils-gdb/gdb/infrun.c:6381
 #27 0x0000aaaaaae84fd0 in handle_inferior_event (ecs=0xffffffffefc8) at ../../../repos/binutils-gdb/gdb/infrun.c:5578
 #28 0x0000aaaaaae81588 in fetch_inferior_event (client_data=0x0) at ../../../repos/binutils-gdb/gdb/infrun.c:4020
 #29 0x0000aaaaaae5f7fc in inferior_event_handler (event_type=INF_REG_EVENT, client_data=0x0) at ../../../repos/binutils-gdb/gdb/inf-loop.c:43
 #30 0x0000aaaaaae8d768 in infrun_async_inferior_event_handler (data=0x0) at ../../../repos/binutils-gdb/gdb/infrun.c:9377
 #31 0x0000aaaaaabff970 in check_async_event_handlers () at ../../../repos/binutils-gdb/gdb/async-event.c:291
 #32 0x0000aaaaab27cbec in gdb_do_one_event () at ../../../repos/binutils-gdb/gdbsupport/event-loop.cc:194
 #33 0x0000aaaaaaef1894 in start_event_loop () at ../../../repos/binutils-gdb/gdb/main.c:356
 #34 0x0000aaaaaaef1a04 in captured_command_loop () at ../../../repos/binutils-gdb/gdb/main.c:416
 #35 0x0000aaaaaaef3338 in captured_main (data=0xfffffffff1f0) at ../../../repos/binutils-gdb/gdb/main.c:1254
 #36 0x0000aaaaaaef33a0 in gdb_main (args=0xfffffffff1f0) at ../../../repos/binutils-gdb/gdb/main.c:1269
 #37 0x0000aaaaaab6e0dc in main (argc=6, argv=0xfffffffff348) at ../../../repos/binutils-gdb/gdb/gdb.c:32

The following patch addresses this by using a function that unwinds the PC
from the next (inline) frame directly as opposed to creating a lazy value
that is bound to the next frame's ID (still not computed).
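
A rough sketch of that change, per the ChangeLog entry below (the exact
upstream patch may differ in detail; the buffer size is illustrative):

	/* In dwarf2_tailcall_sniffer_first (gdb/dwarf2/frame-tailcall.c).  */

	/* Before: goes through the generic unwind machinery, which builds a
	   lazy register value bound to a frame ID that is still being
	   computed.  */
	prev_pc = gdbarch_unwind_pc (prev_gdbarch, this_frame);

	/* After: read the PC register directly from THIS_FRAME, avoiding the
	   lazy value and the premature get_frame_id call.  */
	gdb_byte buf[8];	/* illustrative size */
	get_frame_register (this_frame, gdbarch_pc_regnum (prev_gdbarch), buf);
	prev_pc
	  = extract_typed_address (buf,
				   builtin_type (prev_gdbarch)->builtin_func_ptr);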

gdb/ChangeLog:

2020-04-23  Luis Machado  <luis.machado@linaro.org>

	* dwarf2/frame-tailcall.c (dwarf2_tailcall_sniffer_first): Use
	get_frame_register instead of gdbarch_unwind_pc.
@vineetgarc vineetgarc transferred this issue from foss-for-synopsys-dwc-arc-processors/binutils-gdb Jun 17, 2020
@cupertinomiranda

cupertinomiranda commented Jan 26, 2021

I will make a pull request with 2 commits that at least reduce the number of erroneous R_ARC_NONE relocations.
I will need that pull request to be used to fully run the DejaGnu and glibc test suites on an HSDK before I can consider it ready to be merged.

This is one of those fixes that never seems to get pulled in.
BTW, DejaGnu on a Linux HSDK should not take more than 2h of HSDK time.

@abrodkin
Member

I guess this is the PR @cupertinomiranda meant: foss-for-synopsys-dwc-arc-processors/binutils-gdb#52.
