
Using MKL vectorized functions for things like tanh, exp, log, ... #415

Open
usamec opened this issue Feb 6, 2018 · 4 comments
@usamec

usamec commented Feb 6, 2018

I found that using MKL library functions like vsTanh (https://software.intel.com/en-us/mkl-developer-reference-fortran-v-tanh) is considerably faster than doing vector.mapv(|x| x.tanh()).

Is it worth including this in the ndarray crate behind a feature gate?
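For reference, the scalar path being compared here is just an element-wise loop. A minimal std-only sketch, using a plain slice in place of an ndarray array (the function name is made up for illustration):

```rust
// A std-only sketch of the scalar path: `mapv` over tanh is an
// element-wise loop, with no chance to use MKL's vectorized kernels.
fn tanh_scalar(v: &[f32]) -> Vec<f32> {
    v.iter().map(|x| x.tanh()).collect()
}

fn main() {
    let out = tanh_scalar(&[0.0, 1.0, -1.0]);
    println!("{:?}", out);
}
```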

@bluss
Member

bluss commented Feb 6, 2018

What's the Rust crate that provides bindings to this? And as a first step, what general-level interface or functionality is needed in ndarray to implement this: access to ndarray's data that can be used for this, but that is still independent of, for example, MKL.

@usamec
Author

usamec commented Feb 6, 2018

The situation around this is a little bit messy.
There is the https://crates.io/crates/intel-mkl-src crate, but no direct bindings for MKL.

Right now, to get a working solution, one has to declare the MKL functions as extern:

use std::os::raw::{c_float, c_int};

extern "C" {
    fn vsTanh(n: c_int, a: *const c_float, y: *mut c_float);
}

And then, instead of the mapv call mentioned above, one has to do:

unsafe {
    let ptr = vector.as_mut_ptr();
    vsTanh(vector.len() as c_int, ptr, ptr);
}

What I was thinking about was adding .exp(), .log(), .tanh(), ... methods to ndarray, which would use the faster implementation when an MKL feature is enabled.

But if you say that this is a very low-priority enhancement and not worth the effort, I would understand.
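The feature-gated dispatch described above can be sketched in plain Rust on a mutable slice (the `mkl` feature name and `tanh_inplace` function are illustrative assumptions; without the feature the scalar fallback is compiled in, so this builds without MKL):

```rust
// Only pull in the FFI types and the MKL binding when the (hypothetical)
// `mkl` feature is enabled.
#[cfg(feature = "mkl")]
use std::os::raw::{c_float, c_int};

#[cfg(feature = "mkl")]
extern "C" {
    fn vsTanh(n: c_int, a: *const c_float, y: *mut c_float);
}

pub fn tanh_inplace(v: &mut [f32]) {
    // Fast path: MKL's vectorized tanh, operating in place.
    #[cfg(feature = "mkl")]
    unsafe {
        let ptr = v.as_mut_ptr();
        vsTanh(v.len() as c_int, ptr, ptr);
    }
    // Fallback: plain scalar loop.
    #[cfg(not(feature = "mkl"))]
    for x in v.iter_mut() {
        *x = x.tanh();
    }
}

fn main() {
    let mut v = vec![0.0_f32, 1.0];
    tanh_inplace(&mut v);
    println!("{:?}", v);
}
```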

@termoshtt
Member

It sounds good.
I think it is a good idea to define tanh and the other basic math functions in a way that allows vectorization.

pub trait VectorizedMath {
  fn tanh(self) -> Self;  // or vtanh?
  fn exp(self) -> Self;
  ...
}

impl VectorizedMath for ArrayBase<S, D> { /* as @usamec did above */ }

It is basically independent of MKL. We would also be able to implement them using SIMD extensions, e.g. vsinf in Accelerate (macOS). But the backend must depend on the platform the application runs on, and we must be able to switch it, e.g. intel-mkl-src for Intel CPUs, accelerate-src for macOS, and so on.
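The backend-independent part of this can be illustrated with a scalar fallback implementation of the trait. A sketch using Vec<f64> as a stand-in for ArrayBase, so it compiles without ndarray or any vector-math backend:

```rust
// Backend-independent fallback for the VectorizedMath sketch above.
// Vec<f64> stands in for ArrayBase<S, D>; a real backend (MKL, Accelerate)
// would replace these scalar loops with a single vectorized call.
pub trait VectorizedMath {
    fn tanh(self) -> Self;
    fn exp(self) -> Self;
}

impl VectorizedMath for Vec<f64> {
    fn tanh(mut self) -> Self {
        for x in self.iter_mut() {
            *x = x.tanh();
        }
        self
    }
    fn exp(mut self) -> Self {
        for x in self.iter_mut() {
            *x = x.exp();
        }
        self
    }
}

fn main() {
    let v: Vec<f64> = vec![0.0, 1.0];
    let e = v.exp();
    println!("{:?}", e);
}
```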

One way to implement it is a feature flag "intel-mkl-vectorized" on the ndarray crate, which enables the VectorizedMath trait. We could also separate this part into an "ndarray-vectorized" sub-crate, like ndarray-rand.

Another way is to create an "mkl-sys" binding for MKL, which would include the functions not covered by blas-sys, lapack-sys, and fftw-sys. That should be discussed at https://github.com/termoshtt/rust-intel-mkl

@burrbull

Tested:

use ndarray::{ArrayBase, DataMut, DataOwned, Dimension};
use std::os::raw::{c_double, c_int};

extern "C" {
    fn vdExp(n: c_int, a: *const c_double, y: *mut c_double);
}

pub trait VectorizedMath {
    fn exp(&self) -> Self;
}

impl<S, D> VectorizedMath for ArrayBase<S, D>
    where S: DataMut<Elem = f64> + DataOwned<Elem = f64>, D: Dimension
{
    fn exp(&self) -> Self {
        // Allocate an uninitialized output array and let MKL fill it.
        // Note: this assumes the array's data is contiguous.
        let mut new = unsafe { ArrayBase::uninitialized(self.dim()) };
        unsafe {
            let self_ptr = self.as_ptr();
            let new_ptr = new.as_mut_ptr();
            vdExp(self.len() as c_int, self_ptr, new_ptr);
        }
        new
    }
}

Works fine.
