This repository has been archived by the owner on Dec 22, 2021. It is now read-only.

Double-precision SIMD conversions #348

Open
Maratyszcza opened this issue Sep 17, 2020 · 5 comments

Comments

@Maratyszcza
Contributor

Currently the WebAssembly SIMD specification is missing double-precision (f64x2) conversions entirely. To be usable, the instruction set needs at least the following conversions:

  • SIMD double-precision to single-precision floating-point
  • SIMD single-precision to double-precision floating-point
  • SIMD double-precision to 32-bit integer
  • SIMD 32-bit integer to double-precision
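
For context, here is roughly what the double-to-int case has to look like today when emulated with Emscripten's wasm_simd128.h intrinsics (a minimal sketch; the helper name is invented for illustration):

    #include <wasm_simd128.h>
    #include <stdint.h>

    // Sketch only: without a dedicated instruction, converting the two f64
    // lanes to 32-bit integers goes through scalar code, lane by lane, with
    // the upper two i32 lanes filled with zeros.
    static inline v128_t f64x2_to_i32x4_scalarized(v128_t v) {
      int32_t lo = (int32_t) wasm_f64x2_extract_lane(v, 0);
      int32_t hi = (int32_t) wasm_f64x2_extract_lane(v, 1);
      return wasm_i32x4_make(lo, hi, 0, 0);
    }

A single dedicated instruction would replace the two extracts and the rebuild of the vector.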
@tlively
Member

tlively commented Sep 19, 2020

Can you clarify what you mean by "to be usable"? Are there real-world applications that require these conversions and for which scalarizing them would be insufficient?

@Maratyszcza
Contributor Author

Without these instructions the SIMD specification is functionally incomplete: it doesn't cover many basic C/C++ operations (conversions between the double type and other types). Double-precision computations are mostly used in scientific computing, and so are conversions between double and int/float:

  • Conversions from double to int are used when storing data in a quantized representation and in table-based algorithms (a real-valued index must be converted to an int index for the table lookup).
  • Conversions from int to double are used when loading data in a quantized representation and in random number generation (the raw output of RNGs is typically int, but numerical algorithms need real values); a small sketch of this case follows the list.
  • Conversions between double and float are used in mixed-precision algorithms, where the less sensitive parts of the application are computed in single precision and the more sensitive parts in double precision. For example, an iterative algorithm may first run in single precision until convergence and then, starting from the single-precision solution, run double-precision iterations.
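
To make the RNG case above concrete, a rough sketch (names invented, using Emscripten's wasm_simd128.h intrinsics) of turning raw 32-bit RNG output into doubles in [0, 1); the int-to-double step in the middle is what currently has to be scalarized:

    #include <wasm_simd128.h>

    // Sketch only: two raw 32-bit lanes of RNG output become doubles in [0, 1).
    // Each lane is extracted and converted in scalar code today.
    static inline v128_t rng_u32_to_unit_f64x2(v128_t raw) {
      double d0 = (double) wasm_u32x4_extract_lane(raw, 0);
      double d1 = (double) wasm_u32x4_extract_lane(raw, 1);
      return wasm_f64x2_make(d0 * 0x1p-32, d1 * 0x1p-32);
    }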

@ngzhian
Member

ngzhian commented Sep 24, 2020

What have you been doing intrinsics-wise in the absence of these instructions? Are you scalarizing them?

I took a quick peek at the suggested conversions; it looks like on x86 they map to pretty straightforward instructions (the cvt* family), and we already have the scalar conversion instructions in Wasm. So symmetry-wise and codegen-wise they seem pretty okay.
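
For reference, those mappings with the standard SSE2 intrinsics from emmintrin.h look like this (just a sketch of the corresponding cvt* forms; how an engine actually lowers the proposed Wasm instructions is up to the engine):

    #include <emmintrin.h>

    void cvt_examples(__m128d pd, __m128 ps, __m128i pi) {
      __m128  demote   = _mm_cvtpd_ps(pd);      // f64x2 -> f32x4, upper lanes zeroed (CVTPD2PS)
      __m128d promote  = _mm_cvtps_pd(ps);      // low two f32 lanes -> f64x2 (CVTPS2PD)
      __m128i to_i32   = _mm_cvttpd_epi32(pd);  // f64x2 -> i32, truncating (CVTTPD2DQ)
      __m128d from_i32 = _mm_cvtepi32_pd(pi);   // low two i32 lanes -> f64x2 (CVTDQ2PD)
      (void) demote; (void) promote; (void) to_i32; (void) from_i32;
    }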

@Maratyszcza
Contributor Author

XNNPACK doesn't use double precision, so I didn't need to find a workaround. The emmintrin.h header in Emscripten implements these conversions through scalarization.
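
For the record, such a scalarized fallback looks roughly like this (a sketch, not the actual Emscripten code), e.g. for the double-to-float demotion behind _mm_cvtpd_ps:

    #include <wasm_simd128.h>

    // Sketch only: extract both double lanes, narrow them to float, and zero
    // the upper two lanes to match the SSE2 semantics of _mm_cvtpd_ps.
    static inline v128_t scalarized_cvtpd_ps(v128_t a) {
      float f0 = (float) wasm_f64x2_extract_lane(a, 0);
      float f1 = (float) wasm_f64x2_extract_lane(a, 1);
      return wasm_f32x4_make(f0, f1, 0.0f, 0.0f);
    }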

@jan-wassenberg

+1 for these being useful; the math library being built for Highway requires them.
