
[LLVM][DWARF] Fix uniqueness of AccelTable Values #74391

Open · wants to merge 1 commit into main
Conversation

@ayermolo (Contributor) commented Dec 4, 2023

The finalize function sorts the values by order(), which is the DIE offset. It then uses erase in
combination with std::unique to remove duplicates, but it doesn't pass a comparison predicate,
so the default equality (comparing pointer addresses) is used. Presumably it should also
compare by order()?

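For illustration, here is a minimal standalone sketch of the behavior described above (Value, DieOffset, and the container are stand-ins, not the actual AccelTable types):

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

struct Value {                // stand-in for AccelTableData
  uint64_t DieOffset;
  uint64_t order() const { return DieOffset; }
};

int main() {
  Value A{0x10}, B{0x10}, C{0x20}; // A and B have the same DIE offset
  std::vector<Value *> Values = {&C, &A, &B};

  std::sort(Values.begin(), Values.end(),
            [](const Value *L, const Value *R) { return L->order() < R->order(); });

  // Current behavior: no predicate, so std::unique compares the pointers
  // themselves with ==; they are all distinct, so nothing gets removed.
  auto End = std::unique(Values.begin(), Values.end());
  std::cout << "without predicate: " << (End - Values.begin()) << " entries\n"; // 3

  // Presumed intent: compare by order(), so A and B collapse into one entry.
  End = std::unique(Values.begin(), Values.end(),
                    [](const Value *L, const Value *R) {
                      return L->order() == R->order();
                    });
  Values.erase(End, Values.end());
  std::cout << "with predicate: " << Values.size() << " entries\n"; // 2
}
```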
@dwblaikie (Collaborator) commented:

Seems like a bug, but would be good to understand where this fails/if this can be tested (& if not, why not)...

Like should this result in duplicate entries in the final output?

@ayermolo (Contributor, Author) commented:

I was hoping someone who originally worked on it would know. :)
I'll dig in when I have more bandwidth, probably after my oncall.

@labath (Collaborator) commented May 3, 2024

> I was hoping someone who originally worked on it would know. :)

Unfortunately, I have successfully purged all memory of this. Looking at the history, it seems like this code isn't even exactly my invention -- it was copied/adapted from the original Apple accel table implementation. I'm not sure if it was correct then (*), but I have a feeling it's not correct now. For debug_names, the order() function returns a DIE offset, and (in the presence of multiple compile and type units) I don't think that's guaranteed to uniquely identify a DIE (i.e., erasure could cause us to lose information). Maybe the erase call just needs to be deleted?

(*) It definitely was not correct in the sense that AppleAccelTableData also did not have an equality operator, but it might have been correct if one were to implement that operator (given that apple_names doesn't do type units and uses section-relative, rather than CU-relative, DIE offsets).
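To make the multiple-units concern concrete, a small hypothetical illustration (the UnitIndex field is invented for the example; it assumes order() returns only a unit-relative DIE offset, as described above):

```cpp
#include <cstdint>

struct Value {
  unsigned UnitIndex;  // which CU/TU the DIE belongs to (illustrative only)
  uint64_t DieOffset;  // unit-relative DIE offset
  uint64_t order() const { return DieOffset; }
};

int main() {
  Value InCU0{0, 0x30}; // DIE at offset 0x30 in compile unit 0
  Value InCU1{1, 0x30}; // a different DIE at the same offset in compile unit 1
  // order() compares equal even though these are distinct DIEs, so uniquing
  // keyed on order() alone would silently drop one of them.
  return InCU0.order() == InCU1.order() ? 0 : 1;
}
```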

@dwblaikie (Collaborator) commented:

So this "order" is the DIE offset - so I think this uniquing step implies that sometimes the same DIE might be inserted into the index more than once. I'm not sure where/why that'd happen (it can happen legitimately in simple cases - a single DIE might be in the index under multiple names, like its simple DW_AT_name and its DW_AT_linkage_name) - but I don't know why that'd require sorting/uniquing in DWARFv5 tables, since I don't think the DIE entry is shared between name entries (it could be in limited cases, but can't be in general, since each name entry needs to point to a contiguous list of entries that have that name - so unless two names share their entire list, I don't think they can share any part of it).

First thing might be to check the revision history for the sort/unique to see when it was introduced and if there was an explanation and test coverage.

Maybe the simplest thing to do might be to add an assertion that this uniquing operation doesn't reduce the size of the container (i.e., that the uniquing is pointless/doesn't do anything) - then run check-lldb and see if any tests fail. If they don't, then the functionality is under-tested - then try something bigger, like recompiling the entirety of clang (maybe even with LTO, if a simpler build doesn't find an assertion failure) - then you'd have some example of the problem, and can either automatically or manually reduce it to something committable.
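Something along these lines, perhaps (a rough sketch only; Values stands in for whichever container finalize() actually sorts and uniques, so the names won't match the real code):

```cpp
// Rough sketch of the suggested check; "Values" is a stand-in, not the real
// member name in AccelTable.cpp.
size_t SizeBefore = Values.size();
llvm::sort(Values, [](const AccelTableData *A, const AccelTableData *B) {
  return *A < *B; // AccelTableData::operator< compares by order()
});
Values.erase(std::unique(Values.begin(), Values.end()), Values.end());
// If this fires under check-lldb, or while building something large like
// clang with LTO, we'd have a concrete reproducer showing the unique() call
// really does drop entries.
assert(Values.size() == SizeBefore &&
       "sort+unique removed accelerator table values");
```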
