#485 reported an issue with loss of precision for floating-point (and integer) literals. It was addressed in 0d3e1ea, but many similar issues can still arise.
The core of the issue is that Python's `float` type, which is 64-bit double-precision in Python code, is interpreted in Warp kernels the same way as a CUDA `float`, i.e. 32-bit single-precision. This is fine for strongly typed variables: it maps to `wp.float32`, and we have `wp.float64` to represent double-precision variables. But literals are also of type `float` and thus lose precision compared to Python literals. The expression `wp.float64(1.00000005)` now preserves precision with the above fix in place, but e.g. `wp.float64(1.0 + 0.00000005)` does not, nor does it when the literal is first assigned to a variable. Likewise, during function overload resolution a literal will match the `float32` type, and this may go unnoticed when the other parameters and the return type are the same. It's also very verbose to have to write `wp.vec3h(wp.float16(1.0), wp.float16(2.0), wp.float16(3.0))` when `wp.vec3h(1.0, 2.0, 3.0)` would be unambiguous.
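A minimal sketch of the variable case (assuming a recent Warp build; the exact rounding may vary by version):

```python
import warp as wp

wp.init()

@wp.kernel
def literal_precision(out: wp.array(dtype=wp.float64)):
    # The intermediate expression is evaluated as a 32-bit float, so the
    # 0.00000005 increment (below float32 epsilon) is rounded away before
    # the conversion to float64 ever happens.
    x = 1.0 + 0.00000005
    out[0] = wp.float64(x)

out = wp.zeros(1, dtype=wp.float64)
wp.launch(literal_precision, dim=1, inputs=[out])
print(out.numpy())  # ~1.0 rather than 1.00000005
```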
Note that NumPy experienced similar issues prior to formalizing the handling of these precision differences with the implementation of NEP 50.
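For reference, a small illustration of NumPy's resolution (NEP 50 is the default promotion behavior as of NumPy 2.0): Python scalars are treated as weakly typed and adopt the dtype of the strongly typed operand instead of changing the result type.

```python
import numpy as np

# Under NEP 50, the weak Python float does not upcast the float32 operand.
a = np.float32(1.0)
print((a + 1e-8).dtype)                 # float32
print(np.result_type(np.float32, 1.0))  # float32
```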
My proposal is to separate the weakly typed Python `float` from Warp's `float` dtype early on. The latter still gets interpreted as `float32`, but literals and constant expressions that can be evaluated at compile time remain backed by double-precision storage. Only in the context of strongly typed variables or constructors should they convert to a matching strong type. Likewise for integers, except that an overflow error is raised if the value cannot be represented in the target type.
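Under this proposal, the earlier examples would behave as follows. This is a sketch of the intended semantics, not how Warp currently behaves:

```python
import warp as wp

@wp.kernel
def proposed(out: wp.array(dtype=wp.float64)):
    # Weak literals: the constant expression is folded in double precision
    # and only converted at the strongly typed boundary.
    x = 1.0 + 0.00000005
    out[0] = wp.float64(x)  # would preserve the full double-precision value

    # Weak literals would match any floating-point overload unambiguously:
    v = wp.vec3h(1.0, 2.0, 3.0)  # no wp.float16(...) wrappers needed

    # Integer literals convert the same way, except that overflow is an error:
    # b = wp.int8(300)  # would raise at compile time instead of wrapping
```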