
oneMKL full code examples #22

Closed
ducbueno opened this issue May 27, 2020 · 8 comments
Labels
question A request for more information or clarification

Comments

@ducbueno

Hello.

Does anyone know where I can find full oneMKL code examples?

Intel MKL (not oneMKL) comes with some SYCL examples; however, there are some discrepancies between their syntax and the oneMKL syntax given in the oneAPI spec (https://spec.oneapi.com/versions/latest/index.html). Just to give an example, in the Intel MKL SYCL examples the mkl::sparse::init_matrix_handle function is used to initialize sparse matrix handles, while in the oneAPI spec the same is done with the onemkl::sparse::matrixInit function.

I'm a little bit confused by this. What exactly is the difference between Intel MKL SYCL and oneMKL?

@spencerpatty

spencerpatty commented May 28, 2020

Hi, good questions!

What exactly is the difference between Intel MKL SYCL and oneMKL?

What you are calling Intel MKL SYCL is actually the Intel oneMKL product. oneMKL is the short name for the oneAPI Math Kernel Library and is essentially the new name for the MKL-like library within the oneAPI paradigm. There are in fact three separate products that fall under the name oneMKL. Each of them is distinct and has (or is) its own documentation, consistent with that product:

The oneMKL specification and the oneMKL open source project are closely related: the open source project is an implementation of the oneMKL spec, which defines the DPC++ APIs for the math kernel library. The APIs defined by the spec and implemented in the oneMKL open source project are designed so that any math library can implement them (we refer to the different math libraries integrated into the oneMKL open source project as oneMKL backends), and the open source project will expand to support any accelerator hardware that people decide to implement math libraries for. For instance, the oneMKL open source project currently supports, among other things, the Intel MKL BLAS APIs for x86 CPUs and Intel GPUs, as well as the cuBLAS library for Nvidia GPUs, all through the oneMKL BLAS Specification definitions.
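To make that concrete, here is a minimal sketch of what a DPC++ BLAS call through the open source project looked like around that time. This is illustrative only: it assumes the buffer-based onemkl::blas::gemm overload and the onemkl/onemkl.hpp header as they appeared in the spec-era project, and the exact header path, namespace nesting, and argument order should be checked against whichever release you build against.

```cpp
// Illustrative sketch only; header/namespace names follow the 2020-era
// open source project and have evolved since.
#include <CL/sycl.hpp>
#include <cstdint>
#include <vector>
#include "onemkl/onemkl.hpp"

int main() {
    cl::sycl::queue q;  // the selected device determines which backend runs the call

    const std::int64_t m = 4, n = 4, k = 4;
    std::vector<float> a(m * k, 1.0f), b(k * n, 1.0f), c(m * n, 0.0f);

    {
        // Buffers hand the host data to the SYCL runtime for the duration of the call.
        cl::sycl::buffer<float, 1> a_buf(a.data(), a.size());
        cl::sycl::buffer<float, 1> b_buf(b.data(), b.size());
        cl::sycl::buffer<float, 1> c_buf(c.data(), c.size());

        // C = alpha * A * B + beta * C (column-major), as defined by the oneMKL BLAS spec.
        onemkl::blas::gemm(q, onemkl::transpose::nontrans, onemkl::transpose::nontrans,
                           m, n, k, 1.0f, a_buf, m, b_buf, k, 0.0f, c_buf, m);
    }  // buffer destruction waits for completion and copies C back to the host

    return 0;
}
```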

The Intel oneMKL product is slightly different and can be thought of as a specific implementation targeted at a particular set of Intel hardware. It is our Intel MKL product: alongside the traditional, highly optimized C and Fortran APIs for CPU, it adds new DPC++ APIs for Intel CPU/GPU and, for Intel GPU only, C OpenMP offload and Fortran OpenMP offload APIs. This is our specific binary implementation and the natural successor to the Intel MKL product you are familiar with. The name oneMKL is one of a series of changes to different performance-oriented products under the oneAPI initiative; see Intel oneAPI for more details. It is not required to have the same API names as the oneMKL Spec or the oneMKL open source project, although it may at times. It implements the functionality for the hardware and serves as a backend for the oneMKL open source project, or it can be used on its own. It has its own documentation, separate from the other two products.

Does anyone know where I can find full oneMKL code examples?

As you pointed out, there are DPC++ code examples in the Intel oneMKL product in the examples/sycl folder (obtained by unzipping examples/examples_sycl.tgz). The oneMKL open source project implements unit tests for the currently supported APIs (see the link to unit_tests); however, there are currently no examples in the oneMKL open source project.

Intel MKL (not oneMKL) comes with some SYCL examples; however, there are some discrepancies between their syntax and the oneMKL syntax given in the oneAPI spec (https://spec.oneapi.com/versions/latest/index.html).

The current relationship between oneMKL Spec and the Intel oneMKL product:

  • Currently, in version 0.7 of the oneMKL Specification, only the BLAS domain portions have been fully updated and are consistent with the ever-evolving Intel oneMKL product.
  • In future releases of the oneMKL Specification, other domains will be updated, and at some future point the oneMKL Specification and the Intel oneMKL product may be completely in sync for the DPC++ APIs. It may also be that the API names are never fully in sync (not likely), but the provided DPC++ functionality will likely be in sync at some point. The oneMKL Specification will always run ahead of the oneMKL open source project. However, new things may also be added to the Intel oneMKL product which have nothing to do with the DPC++ oneMKL Spec or the open source project.

Just to give an example, in the Intel MKL SYCL examples the mkl::sparse::init_matrix_handle function is used to initialize sparse matrix handles, while in the oneAPI spec the same is done with the onemkl::sparse::matrixInit function.

In this specific case, the mkl::sparse::init_matrix_handle function (see the init_matrix_handle doc) is the more modern name; it was changed from the previous onemkl::sparse::matrixInit function in the Intel oneMKL 2021.1-beta06 release. As we have been evolving, we have identified inconsistencies in the naming conventions used in oneMKL and have started converging toward a more consistent look and feel for the APIs. This is one of those types of changes. At some future point these will be in sync and reflected in the oneMKL Spec as well, but the documentation for the beta06 release is correct for our Intel oneMKL product.
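For anyone hitting the same rename, here is a rough sketch of the handle lifecycle with the beta06-era naming. Only init_matrix_handle (and the older matrixInit it replaced) is taken directly from this thread and the linked doc; the header choice, the release call, and the data/compute steps are assumptions based on the beta-era documentation and may differ in your release.

```cpp
// Sketch of the sparse matrix handle lifecycle with the beta06-era mkl:: naming.
// Exact signatures of the data-setting and compute routines varied between
// releases, so they are only indicated in comments here.
#include <CL/sycl.hpp>
#include "mkl_spblas_sycl.hpp"  // sparse BLAS DPC++ header named earlier in this thread

void sparse_handle_lifecycle() {
    mkl::sparse::matrix_handle_t handle = nullptr;

    // beta06 and later name; earlier releases/specs used onemkl::sparse::matrixInit.
    mkl::sparse::init_matrix_handle(&handle);

    // ... attach CSR data (e.g. mkl::sparse::set_csr_data) and run an operation
    //     such as mkl::sparse::gemv against a cl::sycl::queue ...

    // Assumed counterpart that frees the handle; check the doc for your release.
    mkl::sparse::release_matrix_handle(&handle);
}
```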

@ducbueno
Author

ducbueno commented May 28, 2020

Hello @spencerpatty. Thank you for the very complete answer! It was very illuminating.

It took me a little while to respond because I needed to digest everything you said.

So, just to check if I got things right:

  1. Basically, the Intel oneMKL product and the oneMKL Open Source Project are both implementations of the oneMKL specification, with the difference that the former is targeted specifically at Intel hardware while the latter can target a broader spectrum of hardware platforms (including Intel).

  2. Essentially, assuming the syntaxes are in sync, the only difference between code based on the Intel oneMKL product and code based on the oneMKL Open Source Project is the headers included (mkl.hpp + mkl_spblas_sycl.hpp for the former, onemkl/onemkl.hpp for the latter) and the namespaces (mkl:: and onemkl::, respectively).

If you could tell me whether these statements are correct, it would help a lot.

Thanks again!

@ducbueno
Author

The oneMKL Open Source Project doesn't seem to support sparse BLAS yet :(. Would it be too hard to add it?

@spencerpatty

Hi @ducbueno.

That is essentially correct if you are talking about DPC++ functionality. The oneMKL open source project may use the Intel oneMKL product for x86 CPUs or Intel GPUs, or it could use other implementations for the same hardware, and it can also support other hardware and libraries. It is intended to be a community project that unifies the disparate approaches to developing on various hardware under a single specification. The functionality described in the oneMKL Spec will be implemented by both the oneMKL open source project and the Intel oneMKL product, but it is not strictly necessary that the Intel oneMKL product use exactly the same names, and it may have much more functionality, even in the DPC++ APIs.

For instance, the Intel oneMKL product will contain the following types of functionality:

  • C APIs optimized for Intel CPU (mkl.h)
  • Fortran APIs optimized for Intel CPU (mkl.fi)
  • C OpenMP offload APIs optimized for Intel GPU (mkl_omp_offload.h)
  • Fortran OpenMP offload APIs optimized for Intel GPU (will be added at some point)
  • DPC++ APIs optimized for Intel CPU/GPU (currently available through mkl_sycl.hpp or a more specific header like the one you pointed out, but this may also change in time)

The DPC++ namespaces are currently different as well (as you noted), but this may not always be the case.
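To illustrate that point, here is a hedged sketch of how the same spec-defined BLAS call differed between the two at the time. The scal signature (queue, n, alpha, x buffer, incx) follows the oneMKL spec; the headers, namespace names, and the USE_INTEL_ONEMKL_PRODUCT switch are illustrative assumptions and should be checked against the release you target.

```cpp
// Illustrative only: the 2020-era headers/namespaces discussed above; the call
// itself follows the oneMKL BLAS spec either way (scal computes x = alpha * x).
#include <CL/sycl.hpp>
#include <cstdint>

#if defined(USE_INTEL_ONEMKL_PRODUCT)
  #include "mkl_sycl.hpp"        // Intel oneMKL product DPC++ header
  namespace math = mkl;          // product namespace at the time: mkl::
#else
  #include "onemkl/onemkl.hpp"   // oneMKL open source project header
  namespace math = onemkl;       // open source namespace at the time: onemkl::
#endif

void scale_vector(cl::sycl::queue &q, cl::sycl::buffer<float, 1> &x, std::int64_t n) {
    math::blas::scal(q, n, 2.0f, x, 1);  // same spec-defined argument list in both cases
}
```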

You could use either one. The oneMKL open source project currently supports only BLAS functionality, but the oneMKL Spec gives an indication of what it will eventually encompass. These are all in some form of beta development right now, so there is a lot of evolution as we continue to grow.

@jasukhar jasukhar added the question A request for more information or clarification label May 28, 2020
@jasukhar
Contributor

jasukhar commented Jun 3, 2020

Hi @ducbueno, hopefully we addressed your questions. If something is not clear yet, let us know.

@jasukhar jasukhar closed this as completed Jun 3, 2020
@paravoid

paravoid commented Jun 5, 2020

Thanks for the extensive clarifications. The distinction between this project (the "oneMKL open source project") and the ex-Intel MKL, now the "oneMKL product", confused me quite a bit as well. I also wondered whether this is all temporary, with the intention being to eventually converge everything into this project?

I suppose it's too late to disambiguate all this with different branding, but perhaps it would be good to mention a distilled version of the above comments in the README or FAQ? Just a suggestion, and thanks for your efforts regardless!

@jasukhar
Contributor

jasukhar commented Jun 5, 2020

Hi @paravoid! Agreed, it may be a common point of confusion. We will update the documentation to explain the difference.

@vmalia vmalia mentioned this issue Jul 3, 2020
@jasukhar
Contributor

Hello @paravoid! The FAQ was updated in commit db6bc08 to address your question.
