Compute.NET: .NET bindings for native numerical computing

[bind program screenshot]

Get the latest release from the Compute.NET package feed.

About

Compute.NET provides auto-generated bindings for native numerical computing libraries like the Intel Math Kernel Library, AMD Core Math Library (and its successors), NVIDIA CUDA, AMD clBLAS, cl* and others. The bindings are auto-generated from each library's C headers using the excellent CppSharp library. The generator is a CLI program that can be used to generate individual modules of each library and to customize key aspects of the generated code, such as using .NET structs instead of classes for complex data types, and marshalling array parameters to native functions either as managed arrays or as pointers.

Status

  • CLI Bindings Generator: Works on Windows.

  • Bindings:

    • Compute.Bindings.IntelMKL package available on the MyGet feed. This library is not Windows-specific, but I haven't tested it on Linux or other platforms yet. The following modules are available:
      • BLAS, CBLAS, SpBLAS, and PBLAS
      • LAPACK and SCALAPACK
      • VML
      • VSL
    • Compute.Bindings.CUDA package available on NuGet and MyGet. This library is not Windows-specific, but I haven't tested it on Linux or other platforms yet. The entire runtime API is bound, together with the following modules:
      • cuBLAS
  • Native Library Packages:

    • Compute.Winx64.IntelMKL package available on MyGet feed.
    • Compute.Winx64.CUDA package available on MyGet and NuGet.

Usage

Intel MKL Bindings

  1. Add the Compute.NET package feed to your NuGet package sources: https://www.myget.org/F/computedotnet/api/v2
  2. Install the bindings package into your project: Install-Package Compute.Bindings.IntelMKL.
  3. (Optional) Install the native library package into your project: Install-Package Compute.Winx64.IntelMKL.
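One way to do step 1 declaratively is with a nuget.config file in your solution directory, so the feed travels with the repository. This is a sketch; the key name computedotnet is arbitrary:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Compute.NET package feed (from step 1 above) -->
    <add key="computedotnet" value="https://www.myget.org/F/computedotnet/api/v2" />
  </packageSources>
</configuration>
```

With this file in place, the Install-Package commands above (or dotnet/MSBuild restore) will resolve the Compute.NET packages from the MyGet feed alongside nuget.org.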

Without step 3 you will need to make sure the .NET runtime can locate the native MKL DLLs or shared library files. You can either add the directory containing them (typically %MKLROOT%\redist) to your PATH, or copy the needed files into your project's output directory with a build task.
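As a sketch of the copy-on-build approach, a post-build target in your project file might look like the following. This assumes the MKLROOT environment variable is set and that the DLLs live under its redist subdirectory; the exact layout varies by MKL version, so adjust the Include pattern to taste:

```xml
<Target Name="CopyMklNativeLibs" AfterTargets="Build">
  <ItemGroup>
    <!-- Native MKL shared libraries shipped under %MKLROOT%\redist -->
    <MklNativeLibs Include="$(MKLROOT)\redist\**\*.dll" />
  </ItemGroup>
  <!-- Copy them next to the managed assemblies so the runtime can find them -->
  <Copy SourceFiles="@(MklNativeLibs)"
        DestinationFolder="$(OutputPath)"
        SkipUnchangedFiles="true" />
</Target>
```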

With the packages installed you can use the MKL BLAS, vector math, or other routines in your code. For example, the following code is translated from the Intel MKL examples for CBLAS:

using IntelMKL.ILP64;
public class BlasExamples
{
	public const int GENERAL_MATRIX = 0;
	public const int UPPER_MATRIX = 1;
	public const int LOWER_MATRIX = -1;

	public void RunBlasExample1()
	{
	    int m = 3, n = 2, i, j;
	    int lda = 3, ldb = 3, ldc = 3;
	    int rmaxa, cmaxa, rmaxb, cmaxb, rmaxc, cmaxc;
	    float alpha = 0.5f, beta = 2.0f;
	    float[] a, b, c;
	    CBLAS_LAYOUT layout = CBLAS_LAYOUT.CblasRowMajor;
	    CBLAS_SIDE side = CBLAS_SIDE.CblasLeft;
	    CBLAS_UPLO uplo = CBLAS_UPLO.CblasUpper;
	    int ma, na, typeA;
	    if (side == CBLAS_SIDE.CblasLeft)
	    {
		rmaxa = m + 1;
		cmaxa = m;
		ma = m;
		na = m;
	    }
	    else
	    {
		rmaxa = n + 1;
		cmaxa = n;
		ma = n;
		na = n;
	    }
	    rmaxb = m + 1;
	    cmaxb = n;
	    rmaxc = m + 1;
	    cmaxc = n;
	    a = new float[rmaxa * cmaxa];
	    b = new float[rmaxb * cmaxb];
	    c = new float[rmaxc * cmaxc];
	    if (layout == CBLAS_LAYOUT.CblasRowMajor)
	    {
		lda = cmaxa;
		ldb = cmaxb;
		ldc = cmaxc;
	    }
	    else
	    {
		lda = rmaxa;
		ldb = rmaxb;
		ldc = rmaxc;
	    }
	    if (uplo == CBLAS_UPLO.CblasUpper)
		typeA = UPPER_MATRIX;
	    else
		typeA = LOWER_MATRIX;
	    // Initialize a, b, and c with row-major indexing (element (i,j) at i * ld + j),
	    // matching the CblasRowMajor layout selected above.
	    for (i = 0; i < m; i++)
	    {
		for (j = 0; j < m; j++)
		{
		    a[i * lda + j] = 1.0f;
		}
	    }
	    for (i = 0; i < m; i++)
	    {
		for (j = 0; j < n; j++)
		{
		    c[i * ldc + j] = 1.0f;
		    b[i * ldb + j] = 2.0f;
		}
	    }
	    CBlas.Ssymm(layout, side, uplo, m, n, alpha, ref a[0], lda, ref b[0], ldb, beta, ref c[0], ldc);
	}
}

Enums like CBLAS_UPLO are generated from the CBLAS header file. You pass double[] and float[] arrays to the BLAS functions using a ref alias to the first element of the array, which is converted to a pointer and passed to the native function. You can use either LP64 or ILP64 array indexing, depending on the namespace you import.

Bindings Generator

The basic syntax is bind LIBRARY MODULE [OPTIONS]. E.g. bind mkl --vml --ilp64 -n IntelMKL -o .\Compute.Bindings.IntelMKL -c Vml --file vml.ilp64.cs will create bindings for the Intel MKL VML routines, with ILP64 array indexing, in the .NET class Vml and namespace IntelMKL, in the file vml.ilp64.cs in the .\Compute.Bindings.IntelMKL output directory.
