
BUG: float64 nanmin() returns NaN on little-endian MIPS (mipsel, mips64el) #23158

Closed

stefanor opened this issue Feb 3, 2023 · 24 comments

@stefanor
Contributor

stefanor commented Feb 3, 2023

Describe the issue:

Build failures of silx in the Debian build logs point to a problem with numpy.nanmin() for float64 on little-endian MIPS platforms.

Reproduce the code example:

import numpy
data = numpy.array((float('nan'), 1.0), dtype='float64')
minimum = numpy.nanmin(data)
print(f"minimum={minimum}")
assert minimum == 1.0

Error message:

minimum=nan
Traceback (most recent call last):
  File "/tmp/testcase.py", line 6, in <module>
    assert minimum == 1.0
           ^^^^^^^^^^^^^^
AssertionError

Runtime information:

import sys, numpy; print(numpy.__version__); print(sys.version)
1.24.1
3.11.1 (main, Dec 31 2022, 10:23:59) [GCC 12.2.0]
print(numpy.show_runtime())
WARNING: threadpoolctl not found in system! Install it by pip install threadpoolctl. Once installed, try np.show_runtime again for more detailed build information
[{'simd_extensions': {'baseline': [], 'found': [], 'not_found': []}}]
None

Context for the issue:

This is a fundamental bug in a data type, breaking higher-level verification tests in other packages.

@seberg seberg added this to the 1.24.3 release milestone Feb 4, 2023
@tylerjereddy
Contributor

Is there a GCC compile farm machine with the appropriate configuration to debug this? I checked https://cfarm.tetaneutral.net/machines/list/ and tried host gcc230, which also runs Debian, but it seems to be big-endian:

Architecture:        mips64
Byte Order:          Big Endian
CPU(s):              4
On-line CPU(s) list: 0-3
Thread(s) per core:  1
Core(s) per socket:  4
Socket(s):           1
NUMA node(s):        1
BogoMIPS:            2000.00
L1d cache:           32K
L1i cache:           78K
L2 cache:            512K
NUMA node0 CPU(s):   0-3

@stefanor
Contributor Author

stefanor commented Feb 5, 2023

I'm not aware of any public machines, I'm afraid. I can test things on Debian's mips porterbox, if that helps. I haven't tried to debug this, at all, beyond pinpointing the issue in numpy.

I believe that QEMU's support for both mips64el and mipsel is usable.

@seberg
Member

seberg commented Feb 15, 2023

I tried to have a bit of a look, but I need to get emulation running first. It does seem like this should just end up using fmin in C, unless it goes down some interesting SIMD path (but judging from np.show_runtime(), I don't think it does). It would be surprising for that to fail, though.
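
A minimal sketch to exercise just that ufunc loop (np.nanmin takes a fast path through np.fmin.reduce for plain float ndarrays, which is why DOUBLE_fmin comes up later in this thread; the data is the same as in the report above):

import numpy as np

data = np.array((float('nan'), 1.0), dtype='float64')
# nanmin's fast path for non-object float ndarrays reduces with np.fmin,
# so this isolates the DOUBLE_fmin inner loop discussed below.
print(np.fmin.reduce(data))  # expected 1.0; per this report, nan on mipsel/mips64el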

@seberg
Member

seberg commented Apr 12, 2023

@stefanor any chance you could make sure that fmin itself isn't broken? My guess is, something like this should do:

#include <math.h>
#include <stdio.h>

int main() {
    volatile double f1 = 1.;
    volatile double f2 = NAN;

    printf("Results are: %f, %f\n", fmin(f1, f2), fmin(f2, f1));
    return 0;
}

Compile with gcc -lm and run. I really don't see how we would not end up calling fmin.

@stefanor
Contributor Author

On both mips64el and mipsel:

Results are: 1.000000, 1.000000

@seberg
Member

seberg commented Apr 21, 2023

Whenever I return to this, I don't really see how this can happen (sorry, I didn't get to a working qemu/native setup yet).

Do you have the NumPy build setup/flags at hand, or are they the defaults? Maybe there is something simple, like someone being a bit overzealous on optimization and enabling -ffast-math?!
(On Godbolt, -ffast-math seems to replace the fmin call with __ledf2, which is not NaN-aware; there may be other ways to trigger that, I guess. EDIT: I doubt it, but I am not sure whether some of the following instructions don't handle it.)

@stefanor
Contributor Author

Looks like it's all defaults.

You can see them in a build log: https://buildd.debian.org/status/fetch.php?pkg=numpy&arch=mips64el&ver=1%3A1.24.2-1&stamp=1675927314&raw=0

@mattip
Member

mattip commented Apr 25, 2023

I see gcc12 and these flags:

mips64el-linux-gnuabi64-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -O2 -ffile-prefix-map=/<<PKGBUILDDIR>>=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC

Do we have any known successful little-endian systems?

@stefanor
Contributor Author

Do we have any known successful little-endian systems?

We don't see the issue on any other Debian platforms: https://buildd.debian.org/status/logs.php?pkg=silx&ver=1.1.0%2Bdfsg-4 (there is a test failure on s390x, but that's unrelated)

@mattip
Member

mattip commented Apr 26, 2023

Thanks. Looking, for instance, at the successful ppc64el build log, I see gcc12 but not the specific flags used. That said, I do not see any flag in the mips64el build that should cause problems.

@seiko2plus
Member

Maybe disabling all optimizations, including optimization level 3, can work around this issue, via the following command:

python setup.py build --disable-optimization install --user 

or through pip:

pip install --no-use-pep517 --global-option=build \
--global-option="--disable-optimization" ./

This issue may be related to the following manual unroll, which can be disabled by the --disable-optimization build option:

#ifndef NPY_DISABLE_OPTIMIZATION
    // scalar unrolls
    if (IS_BINARY_REDUCE) {
        // Note, 8x unroll was chosen for best results on Apple M1
        npy_intp elemPerLoop = 8;
        if((i+elemPerLoop) <= len){
            @type@ m0 = *((@type@ *)(ip2 + (i + 0) * is2));
            @type@ m1 = *((@type@ *)(ip2 + (i + 1) * is2));
            @type@ m2 = *((@type@ *)(ip2 + (i + 2) * is2));
            @type@ m3 = *((@type@ *)(ip2 + (i + 3) * is2));
            @type@ m4 = *((@type@ *)(ip2 + (i + 4) * is2));
            @type@ m5 = *((@type@ *)(ip2 + (i + 5) * is2));
            @type@ m6 = *((@type@ *)(ip2 + (i + 6) * is2));
            @type@ m7 = *((@type@ *)(ip2 + (i + 7) * is2));

            i += elemPerLoop;
            for(; (i+elemPerLoop)<=len; i+=elemPerLoop){
                @type@ v0 = *((@type@ *)(ip2 + (i + 0) * is2));
                @type@ v1 = *((@type@ *)(ip2 + (i + 1) * is2));
                @type@ v2 = *((@type@ *)(ip2 + (i + 2) * is2));
                @type@ v3 = *((@type@ *)(ip2 + (i + 3) * is2));
                @type@ v4 = *((@type@ *)(ip2 + (i + 4) * is2));
                @type@ v5 = *((@type@ *)(ip2 + (i + 5) * is2));
                @type@ v6 = *((@type@ *)(ip2 + (i + 6) * is2));
                @type@ v7 = *((@type@ *)(ip2 + (i + 7) * is2));

                m0 = SCALAR_OP(m0, v0);
                m1 = SCALAR_OP(m1, v1);
                m2 = SCALAR_OP(m2, v2);
                m3 = SCALAR_OP(m3, v3);
                m4 = SCALAR_OP(m4, v4);
                m5 = SCALAR_OP(m5, v5);
                m6 = SCALAR_OP(m6, v6);
                m7 = SCALAR_OP(m7, v7);
            }

            m0 = SCALAR_OP(m0, m1);
            m2 = SCALAR_OP(m2, m3);
            m4 = SCALAR_OP(m4, m5);
            m6 = SCALAR_OP(m6, m7);

            m0 = SCALAR_OP(m0, m2);
            m4 = SCALAR_OP(m4, m6);

            m0 = SCALAR_OP(m0, m4);

            *((@type@ *)op1) = SCALAR_OP(*((@type@ *)op1), m0);
        }
    } else{
        // Note, 4x unroll was chosen for best results on Apple M1
        npy_intp elemPerLoop = 4;
        for(; (i+elemPerLoop)<=len; i+=elemPerLoop){
            /* Note, we can't just load all, do all ops, then store all here.
             * Sometimes ufuncs are called with `accumulate`, which makes the
             * assumption that previous iterations have finished before next
             * iteration. For example, the output of iteration 2 depends on the
             * result of iteration 1.
             */

            /**begin repeat2
             * #unroll = 0, 1, 2, 3#
             */
            @type@ v@unroll@ = *((@type@ *)(ip1 + (i + @unroll@) * is1));
            @type@ u@unroll@ = *((@type@ *)(ip2 + (i + @unroll@) * is2));
            *((@type@ *)(op1 + (i + @unroll@) * os1)) = SCALAR_OP(v@unroll@, u@unroll@);
            /**end repeat2**/
        }
    }
#endif // NPY_DISABLE_OPTIMIZATION

@seiko2plus
Member

Note that we are using C99 fmin/fmax at optimization level 3, which gives a strong hint to the compiler to generate SIMD paths:

#define scalar_maxp_f fmaxf
#define scalar_maxp_d fmax
#define scalar_maxp_l fmaxl
#define scalar_minp_f fminf
#define scalar_minp_d fmin
#define scalar_minp_l fminl

@stefanor
Contributor Author

Maybe disabling all optimizations, including optimization level 3, can work around this issue, via the following command:
python setup.py build --disable-optimization

I tried a build with --disable-optimization and it still reproduces.

From the end of the build log:

INFO: C compiler: mips64el-linux-gnuabi64-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC

@seberg
Member

seberg commented Apr 27, 2023

I am not sure that it makes sense to spend much time on this, TBH, but just to see... one more question. Given:

import numpy as np
print(np.core._multiarray_umath.__file__)

as <file_path>, what does:

 % gdb -batch -ex 'file <file_path>' -ex 'disassemble DOUBLE_fmin'

give you? On x86 (without optimization) it gives, for example:

Dump of assembler code for function DOUBLE_fmin:
   0x0000000000147860 <+0>:	push   %r15
   0x0000000000147862 <+2>:	push   %r14
   0x0000000000147864 <+4>:	push   %r13
   0x0000000000147866 <+6>:	push   %r12
   0x0000000000147868 <+8>:	push   %rbp
   0x0000000000147869 <+9>:	push   %rbx
   0x000000000014786a <+10>:	xor    %ebx,%ebx
   0x000000000014786c <+12>:	sub    $0x28,%rsp
   0x0000000000147870 <+16>:	mov    0x10(%rdx),%rcx
   0x0000000000147874 <+20>:	mov    (%rsi),%rax
   0x0000000000147877 <+23>:	mov    (%rdi),%r12
   0x000000000014787a <+26>:	mov    0x8(%rdi),%rbp
   0x000000000014787e <+30>:	mov    %rsi,0x18(%rsp)
   0x0000000000147883 <+35>:	mov    0x10(%rdi),%r15
   0x0000000000147887 <+39>:	mov    (%rdx),%r14
   0x000000000014788a <+42>:	mov    %rcx,0x8(%rsp)
   0x000000000014788f <+47>:	mov    %rax,0x10(%rsp)
   0x0000000000147894 <+52>:	mov    0x8(%rdx),%r13
   0x0000000000147898 <+56>:	test   %rax,%rax
   0x000000000014789b <+59>:	jle    0x1478cb <DOUBLE_fmin+107>
   0x000000000014789d <+61>:	nopl   (%rax)
   0x00000000001478a0 <+64>:	movsd  0x0(%rbp),%xmm1
   0x00000000001478a5 <+69>:	movsd  (%r12),%xmm0
   0x00000000001478ab <+75>:	add    $0x1,%rbx
   0x00000000001478af <+79>:	add    %r14,%r12
   0x00000000001478b2 <+82>:	add    %r13,%rbp
   0x00000000001478b5 <+85>:	call   0x27610 <fmin@plt>
   0x00000000001478ba <+90>:	movsd  %xmm0,(%r15)
   0x00000000001478bf <+95>:	add    0x8(%rsp),%r15
   0x00000000001478c4 <+100>:	cmp    %rbx,0x10(%rsp)
   0x00000000001478c9 <+105>:	jne    0x1478a0 <DOUBLE_fmin+64>
   0x00000000001478cb <+107>:	mov    0x18(%rsp),%rdi
   0x00000000001478d0 <+112>:	add    $0x28,%rsp
   0x00000000001478d4 <+116>:	pop    %rbx
   0x00000000001478d5 <+117>:	pop    %rbp
   0x00000000001478d6 <+118>:	pop    %r12
   0x00000000001478d8 <+120>:	pop    %r13
   0x00000000001478da <+122>:	pop    %r14
   0x00000000001478dc <+124>:	pop    %r15
   0x00000000001478de <+126>:	jmp    0x20fcc0 <npy_clear_floatstatus_barrier>
End of assembler dump.

That calls fmin in there, which we suspect is fine based on the C program (but who knows...).

(objdump <file> -t | grep fmin should show that there is only the DOUBLE_fmin version around)

@stefanor
Contributor Author

stefanor commented Apr 27, 2023

FWIW, a bisect pointed to #20131 as the origin point of this bug (which may not be that surprising, as it was a massive rewrite of the relevant code).

The disassembly (from 1.24.2):

$ gdb -batch -ex 'file /home/stefanor/numpy-1.24.2/build/lib.linux-mips64-cpython-311/numpy/core/_multiarray_umath.cpython-311-mips64el-linux-gnuabi64.so' -ex 'disassemble DOUBLE_fmin' 
Dump of assembler code for function DOUBLE_fmin:
   0x00000000001a2c90 <+0>:     daddiu  sp,sp,-96
   0x00000000001a2c94 <+4>:     sd      gp,72(sp)
   0x00000000001a2c98 <+8>:     sd      s4,40(sp)
   0x00000000001a2c9c <+12>:    lui     gp,0x16
   0x00000000001a2ca0 <+16>:    ld      s4,0(a1)
   0x00000000001a2ca4 <+20>:    daddu   gp,gp,t9
   0x00000000001a2ca8 <+24>:    sd      s8,80(sp)
   0x00000000001a2cac <+28>:    sd      s7,64(sp)
   0x00000000001a2cb0 <+32>:    sd      s6,56(sp)
   0x00000000001a2cb4 <+36>:    sd      s5,48(sp)
   0x00000000001a2cb8 <+40>:    sd      s3,32(sp)
   0x00000000001a2cbc <+44>:    sd      s2,24(sp)
   0x00000000001a2cc0 <+48>:    sd      s1,16(sp)
   0x00000000001a2cc4 <+52>:    sd      s0,8(sp)
   0x00000000001a2cc8 <+56>:    ld      s3,0(a0)
   0x00000000001a2ccc <+60>:    ld      s2,8(a0)
   0x00000000001a2cd0 <+64>:    ld      s1,16(a0)
   0x00000000001a2cd4 <+68>:    ld      s7,0(a2)
   0x00000000001a2cd8 <+72>:    ld      s6,8(a2)
   0x00000000001a2cdc <+76>:    ld      s5,16(a2)
   0x00000000001a2ce0 <+80>:    sd      ra,88(sp)
   0x00000000001a2ce4 <+84>:    daddiu  gp,gp,-5904
   0x00000000001a2ce8 <+88>:    move    s8,a1
   0x00000000001a2cec <+92>:    blez    s4,0x1a2d20 <DOUBLE_fmin+144>
   0x00000000001a2cf0 <+96>:    move    s0,zero
   0x00000000001a2cf4 <+100>:   nop
   0x00000000001a2cf8 <+104>:   ldc1    $f13,0(s2)
   0x00000000001a2cfc <+108>:   ldc1    $f12,0(s3)
   0x00000000001a2d00 <+112>:   ld      t9,-23064(gp)
   0x00000000001a2d04 <+116>:   daddiu  s0,s0,1
   0x00000000001a2d08 <+120>:   jalr    t9
   0x00000000001a2d0c <+124>:   daddu   s3,s3,s7
   0x00000000001a2d10 <+128>:   daddu   s2,s2,s6
   0x00000000001a2d14 <+132>:   sdc1    $f0,0(s1)
   0x00000000001a2d18 <+136>:   bne     s4,s0,0x1a2cf8 <DOUBLE_fmin+104>
   0x00000000001a2d1c <+140>:   daddu   s1,s1,s5
   0x00000000001a2d20 <+144>:   ld      t9,-31096(gp)
   0x00000000001a2d24 <+148>:   jalr    t9
   0x00000000001a2d28 <+152>:   move    a0,s8
   0x00000000001a2d2c <+156>:   ld      ra,88(sp)
   0x00000000001a2d30 <+160>:   ld      s8,80(sp)
   0x00000000001a2d34 <+164>:   ld      gp,72(sp)
   0x00000000001a2d38 <+168>:   ld      s7,64(sp)
   0x00000000001a2d3c <+172>:   ld      s6,56(sp)
   0x00000000001a2d40 <+176>:   ld      s5,48(sp)
   0x00000000001a2d44 <+180>:   ld      s4,40(sp)
   0x00000000001a2d48 <+184>:   ld      s3,32(sp)
   0x00000000001a2d4c <+188>:   ld      s2,24(sp)
   0x00000000001a2d50 <+192>:   ld      s1,16(sp)
   0x00000000001a2d54 <+196>:   ld      s0,8(sp)
   0x00000000001a2d58 <+200>:   jr      ra
   0x00000000001a2d5c <+204>:   daddiu  sp,sp,96
End of assembler dump.

And yes, only one DOUBLE_fmin:

$ objdump /home/stefanor/numpy-1.24.2/build/lib.linux-mips64-cpython-311/numpy/core/_multiarray_umath.cpython-311-mips64el-linux-gnuabi64.so -t | grep fmin
00000000002f61b8 l     O .data  00000000000000a8              fmin_functions
000000000031bc50 l     O .bss   00000000000000a8              fmin_data
00000000002f4258 l     O .data  000000000000003f              fmin_signatures
00000000001a30a0 l     F .text  00000000000000dc              LONGDOUBLE_fmin
00000000001a29c0 l     F .text  00000000000000d0              FLOAT_fmin
0000000000199e58 l     F .text  00000000000000e8              CFLOAT_fmin
00000000001a2c90 l     F .text  00000000000000d0              DOUBLE_fmin
000000000019d8b8 l     F .text  00000000000001dc              CLONGDOUBLE_fmin
000000000019af28 l     F .text  00000000000000e8              CDOUBLE_fmin
0000000000197fc8 l     F .text  00000000000000f8              HALF_fmin
00000000001917d0 l     F .text  0000000000000018              TIMEDELTA_fmin
00000000001912c8 l     F .text  0000000000000084              DATETIME_fmin
0000000000280d70       F *UND*  0000000000000000              fminl@GLIBC_2.2
0000000000281a80       F *UND*  0000000000000000              fmin@GLIBC_2.2
0000000000281db0       F *UND*  0000000000000000              fminf@GLIBC_2.2

@seberg
Member

seberg commented Apr 28, 2023

Honestly, I see nothing weird. I still suspect it is a compiler issue somehow. But I have no idea; I guess it jumps to t9,-23064(gp), which should be the call s1 = fmin(s2, s3) (s1, s2, s3 being the registers)? But is that a jump into an fmin version that is included in NumPy rather than an external one?

We could put a workaround in place if necessary, but right now I don't even know what that would look like, since the code looks perfectly fine.

@wzssyqa

wzssyqa commented Apr 30, 2023

Is it about the encoding of nan?

If so, I guess it is due to the NaN-legacy vs NaN2008 problem?

For NaN-legacy hardware, the encoding of qNaN differs from that of other hardware.

I will try to dig it out.

@seberg
Member

seberg commented May 2, 2023

Is it about the encoding of nan?

I don't really see why/how, but considering that there is nothing obvious here, maybe? If the float("nan") representation has something to do with it, that might at least explain why the NumPy test suite apparently doesn't see this (much).

I.e. maybe code using np.nan works, or at least doing np.fmin.reduce(np.array([0.0, 1.0]) / np.array([0., 2.])) works?!

@stefanor
Contributor Author

stefanor commented May 4, 2023

maybe code using np.nan works,

It seems to work, and yes it's different:

>>> import struct, math, numpy
>>> print("float('nan')", struct.pack('d', float('nan')))
float('nan') b'\x00\x00\x00\x00\x00\x00\xf8\x7f'
>>> print("math.nan", struct.pack('d', math.nan))
math.nan b'\x00\x00\x00\x00\x00\x00\xf8\x7f'
>>> print("numpy.nan", struct.pack('d', numpy.nan))
numpy.nan b'\x00\x00\x00\x00\x00\x00\xf4\x7f'
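
A minimal sketch (standard library plus NumPy only) that extracts the bit these encodings disagree on, bit 51, the most significant mantissa bit:

import math
import struct

import numpy

def nan_bits(x):
    # Reinterpret the double's bits as a 64-bit unsigned integer.
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    quiet_bit = (bits >> 51) & 1  # most significant mantissa bit
    return hex(bits), quiet_bit

# IEEE 754-2008 (and MIPS in NaN2008 mode): quiet_bit == 1 means quiet NaN.
# The legacy MIPS encoding inverts this, so 0x7ff8... is the signalling one there.
for name, value in [("float('nan')", float('nan')), ("math.nan", math.nan), ("numpy.nan", numpy.nan)]:
    print(name, *nan_bits(value))

In the output above, float('nan') and math.nan carry that bit (0x7ff8...), while numpy.nan does not (0x7ff4...).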

@seberg
Member

seberg commented May 5, 2023

Fine, so I guess you get: https://sourceware.org/binutils/docs/as/MIPS-NaN-Encodings.html

So @wzssyqa is right and MIPS is weird. Supposedly you can specify which NaN encoding you want, but I have no idea what helps; maybe you should just compile everything with -mnan=2008.

There are layers of problems here I think:

  • NumPy uses the "right" legacy NaN. This looks like an accident to me, because I think it is generated by casting a wrong single-precision NaN.
  • Python uses __builtin_nan(""); that can probably be tweaked.
  • Using the C99 NAN would probably work for everyone nowadays, but Python may want to support systems that don't have quiet NaNs, so it would probably need an #ifdef NAN guard.
  • I suspect fmin is technically also wrong for not ignoring the signalling NaN, although signalling NaNs seem pretty useless to me and we do not really bother with them anyway.

Do you/we care enough to push fixes into Python for this? Maybe we should just add that MIPS -mnan=2008 config if that is pragmatic. If Python uses it, NumPy would presumably inherit it.

Or, you fix Python's NaN definition to use the C99 NAN when available (for NumPy I would be happy to just use it always until someone dares to complain, but Python may have different ideas about supporting strange platforms; I find it hard to believe that a platform would have no quiet NaNs but implement signalling ones...).

@stefanor
Contributor Author

stefanor commented May 5, 2023

Python uses __builtin_nan(""), that can probably be tweaked.

Interestingly, using that instead of NAN doesn't seem to trigger the fmin bug (in a trivial C example). So there's something deeper going on there.
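
A sketch of what such a trivial comparison might look like, assuming GCC on the affected target (bits_of is just a helper added here for printing the raw encodings; compile with gcc -lm as above):

#include <inttypes.h>
#include <math.h>
#include <stdio.h>
#include <string.h>

static uint64_t bits_of(double x) {
    uint64_t u;
    memcpy(&u, &x, sizeof u);
    return u;
}

int main(void) {
    volatile double one = 1.0;
    volatile double from_macro = NAN;                  /* C99 macro */
    volatile double from_builtin = __builtin_nan("");  /* GCC builtin, as used by CPython */

    printf("NAN bits:                %016" PRIx64 "\n", bits_of(from_macro));
    printf("__builtin_nan(\"\") bits:  %016" PRIx64 "\n", bits_of(from_builtin));
    printf("fmin(NAN, 1.0)             = %f\n", fmin(from_macro, one));
    printf("fmin(__builtin_nan, 1.0)   = %f\n", fmin(from_builtin, one));
    return 0;
}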

But yeah, I think I'm happy to call this the MIPS port's problem, and let them drive a solution. I can't even compile with -mnan=2008 on current Debian unstable...

Thanks for your time :(

@seberg
Member

seberg commented May 5, 2023

I may have misread the Python code. In the dark, dark depths, there is this function: _Py_dg_stdnan, which is used if and only if _PY_SHORT_FLOAT_REPR == 1. I have no clue what that means! But that function would return the wrong NaN (whether fmin does the right thing or not, it's not a quiet one on MIPS).

This code was recently touched here: python/cpython#31171, but neither before nor after can I figure out if/why that path might actually be taken on MIPS (I guess due to a configure mistake?). I suppose python -m sysconfig | grep DOUBLE might shed light on whether it is taken.


But here is the thing about all of this: below the nice, neat comment (in one place where it is used):

/* Constant nan value, generated in the same way as float('nan'). */
/* We don't currently assume that Py_NAN is defined everywhere. */

is an unguarded function that also uses Py_NAN. So, maybe do have a look at the sysconfig output, and then maybe we should just delete that function from CPython, because deleting 100 lines of dead code is always nice :).

@seberg
Member

seberg commented May 5, 2023

I misunderstood the _PY_SHORT_FLOAT_REPR var and that path is always taken. So it definitely generates the wrong NaN.

The minimal Python fix would be to just modify _Py_dg_stdnan to return Py_NAN, I think. I suspect Py_NAN is guaranteed to be defined, and this is all just a huge mess that was never cleaned up in Python, mixing up proper IEEE floats and the definition of NaN. Maybe that is fine for deciding whether math.nan is exposed, but for the rest it seems like unnecessary complexity.
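
A standalone sketch of that idea (hypothetical, not CPython's actual code: stdnan_like stands in for _Py_dg_stdnan, and the C99 NAN macro stands in for Py_NAN). Compile with gcc -lm:

#include <math.h>
#include <stdio.h>

/* Hypothetical stand-in for _Py_dg_stdnan: return the platform's own quiet
 * NaN (optionally with the sign bit set) instead of hand-assembling a bit
 * pattern, so the encoding matches what the FPU and libm expect. */
static double stdnan_like(int sign)
{
    return copysign(NAN, sign ? -1.0 : 1.0);
}

int main(void)
{
    printf("%f %f\n", stdnan_like(0), stdnan_like(1));
    printf("fmin(nan, 1.0) = %f\n", fmin(stdnan_like(0), 1.0));  /* C99 fmin: expect 1.0 */
    return 0;
}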

@seberg
Member

seberg commented May 5, 2023

Closing this issue; it clearly isn't NumPy here. It's MIPS, and Python, and the compiler, and potentially all three working together, with each doing something wrong.

@seberg seberg closed this as completed May 5, 2023
jessecomeau87 pushed a commit to jessecomeau87/Python that referenced this issue May 20, 2024
It seems to me code all around relies on both being correct anyway.
The actual value for Py_NAN is subtly incorrect on MIPS (depending
on settings) or at least nonstandard, which seems to confuse some
builtin functions.
(Probably it is signalling, but NumPy saw this with fmin, which probably
should also ignore signalling NaNs, see also numpy/numpy#23158).

The guards about `_PY_SHORT_FLOAT_REPR` making sense are relatively
unrelated to NAN and INF being available.

Nevertheless, I currently hide the Py_NAN definition if that is not
set, since I am not sure what good alternative there is to be certain
that Py_NAN is well defined.
OTOH, I do suspect there is no platform where it is not and it should
probably be changed?!