
debug info: f128 values do not display correctly in debuggers #2086

Open
andrewrk opened this issue Mar 21, 2019 · 6 comments
Labels
- bug: Observed behavior contradicts documented or intended behavior
- contributor friendly: This issue is limited in scope and/or knowledge of Zig internals.
- upstream: An issue with a third party project that Zig uses.
Comments

@andrewrk (Member) commented Mar 21, 2019

test "aoeu" {
    var x = @bitCast(f128, @as(u128, 0x40042eab345678439abcdefea5678234));
    @breakpoint();
    _ = x;
}

Expected behavior:

(gdb) p x
$1 = 37.8335959201289829757620127916183076

Actual behavior:

(gdb) p x
$1 = 1.3135418964036871286e+4336
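As a cross-check on the expected value (this decoder is my own sanity check, not part of the original report), decoding the IEEE-754 binary128 bit pattern by hand gives the same number. A minimal sketch in Python, assuming a normal (non-denormal, non-special) value:

```python
from fractions import Fraction

def decode_binary128(bits: int) -> Fraction:
    """Exactly decode a normal IEEE-754 binary128 bit pattern."""
    sign = bits >> 127
    exp = (bits >> 112) & 0x7FFF        # 15-bit biased exponent
    frac = bits & ((1 << 112) - 1)      # 112-bit fraction
    assert 0 < exp < 0x7FFF, "this sketch only handles normal numbers"
    # value = (-1)^sign * (1 + frac/2^112) * 2^(exp - 16383)
    mag = Fraction(2**112 + frac, 2**112) * Fraction(2) ** (exp - 16383)
    return -mag if sign else mag

v = decode_binary128(0x40042eab345678439abcdefea5678234)
print(float(v))  # ≈ 37.833595920129, matching the expected output above
```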

Here's where the debug info for floats is set:

add_fp_entry(g, "f128", 128, LLVMFP128Type(), &g->builtin_types.entry_f128);

@andrewrk andrewrk added bug Observed behavior contradicts documented or intended behavior stage1 The process of building from source via WebAssembly and the C backend. labels Mar 21, 2019
@andrewrk andrewrk added this to the 0.5.0 milestone Mar 21, 2019
@andrewrk andrewrk added the contributor friendly This issue is limited in scope and/or knowledge of Zig internals. label Mar 21, 2019
@shawnl (Contributor) commented Mar 21, 2019

This bug also exists when using __float128 with gcc or clang, with this source file compiled with -O0 -g on amd64:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
  __float128 f = 37.8335959201289829757620127916183076;
  printf("Hello, world!\n");
  return EXIT_SUCCESS;
}

Sometimes lldb or gdb give:

(long double) $0 = 1.35821198058130049262E+4336

and other times they give:

$1 = 37.8335959201289853126581874676048756

Both are incorrect.

@andrewrk (Member, Author) commented Mar 21, 2019

both are incorrect

That's good to know. In the original issue description, I assumed the gdb/clang output for __float128 was correct. I guess this is an upstream issue then.

So now we need to file an upstream bug against LLVM or maybe one against gdb.

@andrewrk andrewrk added the upstream An issue with a third party project that Zig uses. label Mar 21, 2019
@rGradeStd commented
I did a quick review of the lldb source code. It seems there is a lack of support for float128: many switches don't even have a float128 case.
Speaking of gdb, I found this topic; the situation is similar: http://sourceware-org.1504.n7.nabble.com/Debugger-support-for-float128-type-td348253.html
I may be wrong, but it seems that both lldb and gdb treat float128 as a long double, while it’s actually not long double. (or is it?)
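For what it's worth, the numbers support that guess. On x86-64, long double is the 80-bit x87 extended format, and reinterpreting just the low 80 bits of the f128 storage from the original report as x87 extended reproduces gdb's bogus 1.3135...e+4336 almost exactly. A rough check (my own sketch, not derived from gdb's source):

```python
import math

bits = 0x40042eab345678439abcdefea5678234  # the f128 bit pattern from the report

# x87 extended layout within the low 80 bits: sign at bit 79 (zero here, so
# ignored), 15-bit biased exponent at bits 64..78, and a 64-bit significand
# with an explicit integer bit at bits 0..63.
exp = (bits >> 64) & 0x7FFF          # 0x7843 = 30787
sig = bits & 0xFFFFFFFFFFFFFFFF      # 0x9abcdefea5678234

# value = sig / 2^63 * 2^(exp - 16383); far beyond Python float range,
# so compare via base-10 logarithms instead.
lg = math.log10(sig) + (exp - 16383 - 63) * math.log10(2)
print(f"{10 ** (lg % 1):.4f}e+{int(lg)}")  # ≈ 1.3135e+4336, gdb's wrong value
```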

@shawnl (Contributor) commented May 10, 2019

I may be wrong, but it seems that both lldb and gdb treat float128 as a long double, while it’s actually not long double. (or is it?)

This is not surprising. Types in C are weird, because of legacy considerations.

@andrewrk andrewrk modified the milestones: 0.5.0, 0.6.0 Aug 27, 2019
@andrewrk andrewrk modified the milestones: 0.6.0, 0.7.0 Dec 31, 2019
@andrewrk andrewrk modified the milestones: 0.7.0, 0.8.0 Aug 13, 2020
@andrewrk andrewrk modified the milestones: 0.8.0, 0.9.0 Nov 6, 2020
@andrewrk andrewrk modified the milestones: 0.9.0, 0.10.0 May 19, 2021
@nektro (Contributor) commented Oct 16, 2022

Confirming this affects stage2 as well:

test {
    var x = @bitCast(f128, @as(u128, 0x40042eab345678439abcdefea5678234));
    @breakpoint();
    _ = x;
}

(gdb) p x
$1 = 1.3135418964036871286e+4336

@Vexu Vexu removed the stage1 The process of building from source via WebAssembly and the C backend. label Dec 7, 2022
@mikdusan (Member) commented

Did some digging:

  1. gdb with main.c works with both gcc and clang on linux

  2. lldb with main.c does not work on linux; can't try it on macos because __float128 is not supported on that target

  3. gdb with main.zig does not work because:

!2368 = !DICompositeType(tag: DW_TAG_union_type, name: "z2.main__union_3484", size: 128, align: 128, elements: !2369)
!2369 = !{!2370, !2372, !2374}
!2370 = !DIDerivedType(tag: DW_TAG_member, name: "u", scope: !2368, baseType: !2371, size: 128, align: 64)
!2371 = !DIBasicType(name: "u128", size: 128, encoding: DW_ATE_unsigned)
!2372 = !DIDerivedType(tag: DW_TAG_member, name: "f", scope: !2368, baseType: !2373, size: 128, align: 128)
!2373 = !DIBasicType(name: "f128", size: 128, encoding: DW_ATE_float)

On !2373 the name f128 is not recognized by gdb; gdb hard-codes a list of accepted type-name strings here:

https://github.com/bminor/binutils-gdb/blob/a62320ed0818decde5f3265ebc508756f517d6f9/gdb/i386-tdep.c#L8154-L8178

A potential hack would be for Zig to emit __float128 in the debug info. For example, manually editing the .ll and then building an executable works:

*** a	2024-02-20 18:47:59.757925875 -0500
--- b	2024-02-20 18:48:15.887751324 -0500
***************
*** 1 ****
! !2373 = !DIBasicType(name: "f128", size: 128, encoding: DW_ATE_float)
--- 1 ----
! !2373 = !DIBasicType(name: "__float128", size: 128, encoding: DW_ATE_float)

--

Useful test sources:

main.c

#include <stdint.h>

int main(void) {
    typedef union {
        unsigned __int128 u;
        __float128 f;
        struct {
            uint64_t lo;
            uint64_t hi;
        } aggregate;
    } value;

    value num;
    num.aggregate.lo = 0x9abcdefea5678234;
    num.aggregate.hi = 0x40042eab34567843;

    char *p = 0;
    *p = 0; /* deliberate null dereference: crash so the debugger stops with locals in scope */
}

main.zig

pub fn main() void {
    @setRuntimeSafety(false);

    var value: union {
        u: u128,
        f: f128,
        aggregate: struct {
            lo: u64,
            hi: u64,
        },
    } = undefined;
    value.u = 0x40042eab345678439abcdefea5678234;

    @breakpoint();
}

debug sessions

# GOOD: look to `f =` value
$ gcc -g -o mc main.c
$ gdb ./mc
(gdb) run
Program received signal SIGSEGV, Segmentation fault.
(gdb) info locals
num = {u = 85092307472724069154655634854253134388, f = 37.8335959201289829757620127916183076, aggregate = {lo = 11150031962740589108, hi = 4612863231186597955}}

# BAD
$ zig build-exe main.zig -femit-bin=mz
$ gdb ./mz
(gdb) run
Program received signal SIGTRAP, Trace/breakpoint trap.
(gdb) info locals
value = {u = 85092307472724069154655634854253134388, f = 1.3135418964036871286e+4336, aggregate = {lo = 11150031962740589108, hi = 4612863231186597955}}

# GOOD
$ zig build-obj main.zig -femit-llvm-ir
$ cat main.ll | sed 's,"f128","__float128",' > main_hack.ll
$ zig build-exe main_hack.ll
$ gdb ./main_hack
(gdb) run
Program received signal SIGTRAP, Trace/breakpoint trap.
(gdb) info locals
value = {u = 85092307472724069154655634854253134388, f = 37.8335959201289829757620127916183076, aggregate = {lo = 11150031962740589108, hi = 4612863231186597955}}
