improve target support #53

Merged 1 commit on Dec 30, 2023
6 changes: 3 additions & 3 deletions README.md
@@ -38,10 +38,10 @@ Check out the [paper](https://github.com/ogxd/gxhash-rust/blob/main/article/arti
 
 ### Architecture Compatibility
 GxHash is compatible with:
-- X86 processors with `AES-NI` intrinsics
-- ARM processors with `NEON` intrinsics
+- X86 processors with `AES-NI` & `SSE2` intrinsics
+- ARM processors with `AES` & `NEON` intrinsics
 > **Warning**
-> Other platforms are currently not supported (there is no fallback). The behavior on these platforms is undefined.
+> Other platforms are currently not supported (there is no fallback). GxHash will not build on these platforms.
 
 ### Hashes Stability
 All generated hashes for a given version of GxHash are stable, meaning that for a given input the output hash will be the same across all supported platforms.
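The reworded warning promises a build failure instead of undefined behavior on unsupported targets. As a hedged sketch of that guarantee (not the crate's actual mechanism, which simply defines no `platform` module for other targets), an explicit compile-time guard could look like:

```rust
// Illustrative guard only: gxhash itself fails to build on unsupported
// targets because no `platform` module matches, not via compile_error!.
#[cfg(not(any(
    target_arch = "x86",
    target_arch = "x86_64",
    target_arch = "arm",
    target_arch = "aarch64",
)))]
compile_error!("unsupported target: needs x86/x86_64 (AES-NI + SSE2) or arm/aarch64 (AES + NEON)");

// cfg! evaluates the same predicate to a bool at compile time.
const SUPPORTED_ARCH: bool = cfg!(any(
    target_arch = "x86",
    target_arch = "x86_64",
    target_arch = "arm",
    target_arch = "aarch64",
));

fn main() {
    println!("supported arch: {SUPPORTED_ARCH}");
}
```

The advantage of such a guard is a clear error message; the downside is one more place to keep in sync with the real `cfg` gates.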
14 changes: 9 additions & 5 deletions src/gxhash/platform/aarch64.rs → src/gxhash/platform/arm.rs
@@ -1,3 +1,6 @@
+#[cfg(target_arch = "arm")]
+use core::arch::arm::*;
+#[cfg(target_arch = "aarch64")]
 use core::arch::aarch64::*;
 
 use super::*;
@@ -21,6 +24,7 @@ pub unsafe fn load_unaligned(p: *const State) -> State {
 
 #[inline(always)]
 pub unsafe fn get_partial(p: *const State, len: usize) -> State {
+    // Safety check
     if check_same_page(p) {
         get_partial_unsafe(p, len)
     } else {
@@ -47,11 +51,6 @@ pub unsafe fn get_partial_unsafe(data: *const State, len: usize) -> State {
     vaddq_s8(partial_vector, vdupq_n_s8(len as i8))
 }
 
-#[inline(always)]
-pub unsafe fn ld(array: *const u32) -> State {
-    vreinterpretq_s8_u32(vld1q_u32(array))
-}
-
 #[inline(always)]
 // See https://blog.michaelbrase.com/2018/05/08/emulating-x86-aes-intrinsics-on-armv8-a
 pub unsafe fn aes_encrypt(data: State, keys: State) -> State {
@@ -72,6 +71,11 @@ pub unsafe fn aes_encrypt_last(data: State, keys: State) -> State {
     vreinterpretq_s8_u8(veorq_u8(encrypted, vreinterpretq_u8_s8(keys)))
 }
 
+#[inline(always)]
+pub unsafe fn ld(array: *const u32) -> State {
+    vreinterpretq_s8_u32(vld1q_u32(array))
+}
+
 #[inline(always)]
 pub unsafe fn finalize(hash: State) -> State {
     let mut hash = aes_encrypt(hash, ld(KEYS.as_ptr()));
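The `// Safety check` comment added in `get_partial` marks the page-boundary guard taken before the oversized unaligned read. A portable sketch of the idea (my reconstruction, with assumed constants; the crate's `check_same_page` may use different values or a different comparison):

```rust
// Reading VECTOR_SIZE bytes starting inside a short buffer is only safe if
// the read cannot cross into the next (possibly unmapped) memory page.
const VECTOR_SIZE: usize = 16;   // assumed size of one SIMD state vector
const PAGE_SIZE: usize = 0x1000; // assumed 4 KiB pages

fn read_would_stay_in_page(ptr: *const u8) -> bool {
    // Offset of the read's start within its page.
    let offset_within_page = (ptr as usize) & (PAGE_SIZE - 1);
    // A VECTOR_SIZE-byte read ends inside the same page iff it begins
    // at or before PAGE_SIZE - VECTOR_SIZE.
    offset_within_page <= PAGE_SIZE - VECTOR_SIZE
}

fn main() {
    assert!(read_would_stay_in_page(0x2000 as *const u8)); // page start: safe
    assert!(!read_would_stay_in_page(0x2FF8 as *const u8)); // 8 bytes left: unsafe
}
```

When the check fails, the diff's `get_partial` falls back to a slower path instead of the in-page over-read.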
8 changes: 4 additions & 4 deletions src/gxhash/platform/mod.rs
@@ -1,9 +1,9 @@
-#[cfg(target_arch = "aarch64")]
-#[path = "aarch64.rs"]
+#[cfg(all(any(target_arch = "arm", target_arch = "aarch64"), target_feature = "aes", target_feature = "neon"))]
+#[path = "arm.rs"]
 mod platform;
 
-#[cfg(target_arch = "x86_64")]
-#[path = "x86_64.rs"]
+#[cfg(all(any(target_arch = "x86", target_arch = "x86_64"), target_feature = "aes", target_feature = "sse2"))]
+#[path = "x86.rs"]
 mod platform;
 
 pub use platform::*;
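The `mod.rs` change above is the heart of the PR: one backend file is selected at compile time, keyed on both architecture family and required target features. A self-contained sketch of the same pattern (assumption: the crate also requires `target_feature = "aes"`, omitted here so the sketch selects a real backend under default `rustc` flags, where `aes` is off):

```rust
// Exactly one of these cfg predicates can match, so BACKEND is defined once.
#[cfg(all(any(target_arch = "x86", target_arch = "x86_64"), target_feature = "sse2"))]
const BACKEND: &str = "x86";

#[cfg(all(any(target_arch = "arm", target_arch = "aarch64"), target_feature = "neon"))]
const BACKEND: &str = "arm";

// Illustration-only fallback so this sketch compiles anywhere; the real
// crate deliberately has none, which is why unsupported targets fail to build.
#[cfg(not(any(
    all(any(target_arch = "x86", target_arch = "x86_64"), target_feature = "sse2"),
    all(any(target_arch = "arm", target_arch = "aarch64"), target_feature = "neon"),
)))]
const BACKEND: &str = "none";

fn main() {
    println!("selected backend: {BACKEND}");
}
```

Because `#[path = "..."]` renames the file per target, both branches can declare the same `mod platform;`, and `pub use platform::*;` re-exports whichever backend was compiled.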
3 changes: 3 additions & 0 deletions src/gxhash/platform/x86_64.rs → src/gxhash/platform/x86.rs
@@ -1,3 +1,6 @@
+#[cfg(target_arch = "x86")]
+use core::arch::x86::*;
+#[cfg(target_arch = "x86_64")]
 use core::arch::x86_64::*;
 
 use super::*;
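These cfg'd imports are what let one source file (`x86.rs`) serve both 32- and 64-bit x86: the SSE intrinsics share names across `core::arch::x86` and `core::arch::x86_64`. A minimal sketch of the pattern (the intrinsic calls are my example, not crate code; a fallback is added for other hosts, which the crate itself omits):

```rust
#[cfg(target_arch = "x86")]
use core::arch::x86::*;
#[cfg(target_arch = "x86_64")]
use core::arch::x86_64::*;

#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
fn all_ones_mask() -> i32 {
    // SSE2 is part of the x86_64 baseline, so these calls are available there.
    unsafe {
        let v = _mm_set1_epi8(-1); // 16 bytes of 0xFF
        _mm_movemask_epi8(v)       // one bit per byte
    }
}

// Illustration-only fallback so the sketch compiles on non-x86 hosts.
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
fn all_ones_mask() -> i32 {
    0xFFFF
}

fn main() {
    assert_eq!(all_ones_mask(), 0xFFFF);
}
```

The renamed `arm.rs` uses the same trick for `core::arch::arm` versus `core::arch::aarch64`, as shown earlier in this diff.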