From 801fe1d6f1f56b8379d69c372f9f8515550963cc Mon Sep 17 00:00:00 2001
From: Thomas Lively
Date: Mon, 23 Sep 2019 14:01:09 -0700
Subject: [PATCH] Reorder extending loads to have signed variants first

This is consistent with the ordering for all other instructions that have
signed and unsigned variants. This does renumber these instructions, but no
engine or toolchain has documented support for these instructions yet, so
that should be ok.
---
 proposals/simd/BinarySIMD.md           | 12 ++++++------
 proposals/simd/ImplementationStatus.md |  6 +++---
 proposals/simd/SIMD.md                 |  6 +++---
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/proposals/simd/BinarySIMD.md b/proposals/simd/BinarySIMD.md
index 77cfa1820..bf973cf21 100644
--- a/proposals/simd/BinarySIMD.md
+++ b/proposals/simd/BinarySIMD.md
@@ -183,9 +183,9 @@ The `v8x16.shuffle` instruction has 16 bytes after `simdop`.
 | `i32x4.widen_high_i16x8_s` | `0xcf`| - |
 | `i32x4.widen_low_i16x8_u` | `0xd0`| - |
 | `i32x4.widen_high_i16x8_u` | `0xd1`| - |
-| `i16x8.load8x8_u` | `0xd2`| m:memarg |
-| `i16x8.load8x8_s` | `0xd3`| m:memarg |
-| `i32x4.load16x4_u` | `0xd4`| m:memarg |
-| `i32x4.load16x4_s` | `0xd5`| m:memarg |
-| `i64x2.load32x2_u` | `0xd6`| m:memarg |
-| `i64x2.load32x2_s` | `0xd7`| m:memarg |
+| `i16x8.load8x8_s` | `0xd2`| m:memarg |
+| `i16x8.load8x8_u` | `0xd3`| m:memarg |
+| `i32x4.load16x4_s` | `0xd4`| m:memarg |
+| `i32x4.load16x4_u` | `0xd5`| m:memarg |
+| `i64x2.load32x2_s` | `0xd6`| m:memarg |
+| `i64x2.load32x2_u` | `0xd7`| m:memarg |
diff --git a/proposals/simd/ImplementationStatus.md b/proposals/simd/ImplementationStatus.md
index 4409fc35a..12be744f5 100644
--- a/proposals/simd/ImplementationStatus.md
+++ b/proposals/simd/ImplementationStatus.md
@@ -144,12 +144,12 @@
 | `f64x2.convert_i64x2_u` | `-munimplemented-simd128` | | :heavy_check_mark: | :heavy_check_mark: |
 | `v8x16.swizzle` | | | :heavy_check_mark: | |
 | `v8x16.shuffle` | | | :heavy_check_mark: | :heavy_check_mark: |
-| `i16x8.load8x8_u` | | | | |
 | `i16x8.load8x8_s` | | | | |
-| `i32x4.load16x4_u` | | | | |
+| `i16x8.load8x8_u` | | | | |
 | `i32x4.load16x4_s` | | | | |
-| `i64x2.load32x2_u` | | | | |
+| `i32x4.load16x4_u` | | | | |
 | `i64x2.load32x2_s` | | | | |
+| `i64x2.load32x2_u` | | | | |
 | `i8x16.narrow_i16x8_s` | | :heavy_check_mark: | :heavy_check_mark: | |
 | `i8x16.narrow_i16x8_u` | | :heavy_check_mark: | :heavy_check_mark: | |
 | `i16x8.narrow_i32x4_s` | | :heavy_check_mark: | :heavy_check_mark: | |
diff --git a/proposals/simd/SIMD.md b/proposals/simd/SIMD.md
index c7239b7aa..35984004c 100644
--- a/proposals/simd/SIMD.md
+++ b/proposals/simd/SIMD.md
@@ -676,12 +676,12 @@ Load a single element and splat to all lanes of a `v128` vector.
 
 ### Load and Extend
 
-* `i16x8.load8x8_u(memarg) -> v128`: load eight 8-bit integers and zero extend each one to a 16-bit lane
 * `i16x8.load8x8_s(memarg) -> v128`: load eight 8-bit integers and sign extend each one to a 16-bit lane
-* `i32x4.load16x4_u(memarg) -> v128`: load four 16-bit integers and zero extend each one to a 32-bit lane
+* `i16x8.load8x8_u(memarg) -> v128`: load eight 8-bit integers and zero extend each one to a 16-bit lane
 * `i32x4.load16x4_s(memarg) -> v128`: load four 16-bit integers and sign extend each one to a 32-bit lane
-* `i64x2.load32x2_u(memarg) -> v128`: load two 32-bit integers and zero extend each one to a 64-bit lane
+* `i32x4.load16x4_u(memarg) -> v128`: load four 16-bit integers and zero extend each one to a 32-bit lane
 * `i64x2.load32x2_s(memarg) -> v128`: load two 32-bit integers and sign extend each one to a 64-bit lane
+* `i64x2.load32x2_u(memarg) -> v128`: load two 32-bit integers and zero extend each one to a 64-bit lane
 
 Fetch consequtive integers up to 32-bit wide and produce a vector with lanes up to 64 bits.
 
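For context on the semantics being reordered above: each of these instructions fetches a narrow block of memory and widens it into full lanes, with the `_s`/`_u` suffix selecting sign versus zero extension. The scalar C sketch below models `i16x8.load8x8_s` and `i16x8.load8x8_u` only to illustrate that difference; the helper names and layout are hypothetical, not part of the proposal or any engine API.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical scalar models of the load-and-extend semantics. Real engines
   implement these as single v128 loads; these helpers only show how each
   8-bit source value becomes a 16-bit lane. */

/* i16x8.load8x8_s: load eight 8-bit integers, sign extend each to 16 bits. */
static void load8x8_s(const uint8_t *mem, int16_t out[8]) {
    for (int i = 0; i < 8; i++) {
        int8_t byte;
        memcpy(&byte, mem + i, 1);  /* reinterpret the byte as signed */
        out[i] = (int16_t)byte;     /* sign extension */
    }
}

/* i16x8.load8x8_u: load eight 8-bit integers, zero extend each to 16 bits. */
static void load8x8_u(const uint8_t *mem, uint16_t out[8]) {
    for (int i = 0; i < 8; i++) {
        out[i] = (uint16_t)mem[i];  /* zero extension */
    }
}

int main(void) {
    const uint8_t mem[8] = {0x01, 0x7f, 0x80, 0xff, 0x10, 0x20, 0x30, 0x40};
    int16_t s[8];
    uint16_t u[8];
    load8x8_s(mem, s);
    load8x8_u(mem, u);
    /* 0x80 widens to -128 with sign extension but 128 with zero extension;
       0xff widens to -1 versus 255. */
    printf("signed:   %d %d\n", (int)s[2], (int)s[3]);
    printf("unsigned: %u %u\n", (unsigned)u[2], (unsigned)u[3]);
    return 0;
}
```

The `i32x4.load16x4_*` and `i64x2.load32x2_*` variants are analogous, widening four 16-bit and two 32-bit integers into 32-bit and 64-bit lanes respectively.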