Multi-device OpenCL kernel load balancer and pipeliner API for C#. Uses a shared-distributed memory model to keep GPUs updated fast while using the same kernel on all devices (for simplicity).


C# multi-device GPGPU (OpenCL) compute API for developers' custom OpenCL kernels, with iterative inter-device load balancing and multiple pipelining of read/write/compute operations. The main idea is to treat N devices as a single device where possible, easily taking advantage of the entire platform through a shared-distributed memory model under the hood.

64-bit only: set "Project Settings -> Build -> Platform Target -> x64". The configuration manager also needs to target x64.
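The same 64-bit requirement can also be pinned in the project file itself; a minimal sketch (only this fragment, the surrounding .csproj is assumed):

```xml
<!-- Force a 64-bit build; equivalent to Platform Target -> x64 in the IDE -->
<PropertyGroup>
  <PlatformTarget>x64</PlatformTarget>
</PropertyGroup>
```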

Requires an extra C++ DLL built as 64-bit (x86_64), which must be named KutuphaneCL.dll.

The other needed DLL is Microsoft's System.Threading.dll (and its XML helper) for .NET 2.0 - or - you can adjust the "using" directives and target .NET 3.5+ in your own project, in which case System.Threading.dll is not needed.

In total, Cekirdekler.dll, KutuphaneCL.dll and .NET 3.5 should be enough.

Usage: add only Cekirdekler.dll and System.Threading.dll as references to your C# projects. The other files need to exist in the same folder as Cekirdekler.dll or the executable of the main project.

This project is being enhanced using ZenHub.


  • Implicit multi-device control: from CPUs to any number of GPUs and accelerators. Explicit on the library side for compatibility and performance, implicit for the client coder so GPGPU work can concentrate on the OpenCL kernel code. Device selection can be implicit (ease of setup) or explicit (detailed device queries); the handling (computing) of devices is always implicit. Explicitly chosen devices can be added together with a simple + operator.
  • Iterative load balancing between devices: done separately for each distinct compute (explicitly controlled with a user-given compute-id). Devices receive fairer and fairer work loads until the work-distribution ratio converges. Partitioning the workload completes a kernel with less latency, which suits hot-spot loops and simple embarrassingly parallel algorithms, and works even better for streaming data with the pipelining option enabled.
  • Pipelining of reads, computes and writes (host - device link): either by the mercy of device drivers or with explicit event-based queue management. Hides the latency of the least time-consuming part (such as writes) behind the most time-consuming part (such as compute). GPUs can run buffer copies and OpenCL kernels concurrently.
  • Pipelining between devices (device - host - device): concurrently runs multiple stages so they overlap in the timeline, gaining the advantage of multiple GPUs (and FPGAs, CPUs) even for kernels of a time-consuming pipeline that are non-separable (because of atomics and low-level optimizations). Each device runs a different kernel, at the same time as the other devices, and uses double buffers to overlap even the data movements between pipeline stages.
  • Batch computing using task pools and device pools: uses every async pipeline of every GPU in the system for a pool of non-separable kernels (tasks to compute later), with a greedy scheduling algorithm to keep all GPUs busy.
  • Working with different numeric arrays: either C# arrays like float[], int[], byte[], ... or C++ array wrappers like ClFloatArray, ClArray<float>, ClByteArray, ClArray<byte>.
  • Automatic buffer copy optimizations per device: if a device shares RAM with the CPU, map/unmap commands are used instead of read/write to reduce the number of array copies. If that device is also given a C++ wrapper array (such as ClArray<float>), the CL_MEM_USE_HOST_PTR flag is used on the buffer for zero-copy access, aka "streaming". By default, all devices have their own buffers.
  • Two different usage types: the first lets the developer set all kernel parameters as arrays explicitly, for a more readable execution; the second expresses the same thing with a much shorter definition, taking fewer code lines and changing only the necessary flags instead of all of them.
  • Automatic resource disposal: when C++ array wrappers are finalized (out of scope, garbage collected), they release their resources. The dispose method can also be called explicitly by the developer.
  • Uses OpenCL 1.2 C++ bindings as its base. Developers are expected to know C99 and its OpenCL kernel constraints to write their own genuine GPGPU kernels. The CekirdeklerCPP project produces the OpenCL 1.2 backend DLL.
  • Uses OpenCL 2.0 C++ bindings as its base. The CekirdeklerCPP2 project produces the OpenCL 2.0 backend DLL (which needs to be renamed to KutuphaneCL.dll).
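A short sketch of explicit device selection with the + operator described above. Only ClNumberCruncher, ClArray and compute() appear verbatim in this README; the Cekirdekler.Hardware namespace and the ClPlatforms/ClDevices query chain are assumptions based on the project's wiki naming and should be checked against the version in use:

```csharp
using Cekirdekler;
using Cekirdekler.ClArrays;
using Cekirdekler.Hardware; // assumed namespace for explicit device queries

class DeviceSelectionSketch
{
    static void Main()
    {
        // Explicit selection (hypothetical query chain): pick all GPUs and the CPU,
        // then add them together with the + operator as described above.
        ClDevices gpus = ClPlatforms.all().gpus(); // assumed helper
        ClDevices cpus = ClPlatforms.all().cpus(); // assumed helper
        ClDevices all  = gpus + cpus;              // explicit devices combined

        ClNumberCruncher cr = new ClNumberCruncher(all, @"
            __kernel void scale(__global float * arr)
            {
                int i = get_global_id(0);
                arr[i] *= 2.0f;
            }
        ");

        ClArray<float> data = new ClArray<float>(1024);
        data.compute(cr, 1, "scale", 1024, 64); // compute-id 1, 1024 workitems, local size 64
    }
}
```

The load balancer then treats the combined ClDevices as one logical device, exactly as in the implicit AcceleratorType.GPU case.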


You can see details and a tutorial in the Cekirdekler wiki.

Known Issues

  • C++ array wrappers like ClArray<float> have no out-of-bounds check; don't cross boundaries when indexing into them.
  • Don't use C++ array wrappers after they are disposed. These checks were left out to speed up array indexing.
  • Don't use ClNumberCruncher or Core instances after they are disposed.
  • Pay attention to the "number of array elements used" per workitem in the kernel and how it is passed as a parameter to the API's compute() method.
  • Pay attention to the "partial read"/"read"/"write" array copy modifiers, depending on whether your kernel alters (or reads) the whole array or just a part of it.
  • No performance output on the first iteration: the load balancer needs at least several iterations to distribute fairly, and the performance report needs at least 2 iterations for console output.
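As a hedged illustration of the last two pitfalls (the property names numberOfElementsPerWorkItem, read, partialRead and write follow the project's wiki naming and are assumptions here, not verified against every release):

```csharp
using Cekirdekler;
using Cekirdekler.ClArrays;

class CopyModifierSketch
{
    static void Main()
    {
        ClNumberCruncher cr = new ClNumberCruncher(AcceleratorType.GPU, @"
            __kernel void add4(__global float * arr)
            {
                int i = get_global_id(0) * 4;   // each workitem touches 4 consecutive elements
                for (int k = 0; k < 4; k++)
                    arr[i + k] += 1.0f;
            }
        ");

        ClArray<float> arr = new ClArray<float>(4000);
        arr.numberOfElementsPerWorkItem = 4; // assumed property: must match the kernel's 4-element stride
        arr.read  = true;                    // upload the array before the kernel runs
        arr.partialRead = false;             // the whole array is read, not only each device's slice
        arr.write = true;                    // download results after the kernel finishes

        arr.compute(cr, 1, "add4", 1000, 100); // 1000 workitems cover all 4000 elements
    }
}
```

Getting the elements-per-workitem value wrong makes the load balancer move the wrong array slices between devices, so results can be silently incorrect rather than raising an error.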

Example that computes 1000 workitems across all GPUs in a PC: GPU 1 computes the global id range 0 to M, GPU 2 computes M+1 to K, and GPU N computes the range Y to Z:

        Cekirdekler.ClNumberCruncher cr = new Cekirdekler.ClNumberCruncher(
            Cekirdekler.AcceleratorType.GPU, @"
                __kernel void hello(__global char * arr)
                {
                    printf(""hello world"");
                }
        ");

        Cekirdekler.ClArrays.ClArray<byte> array = new Cekirdekler.ClArrays.ClArray<byte>(1000);
        // Cekirdekler.ClArrays.ClArray<byte> array = new byte[1000]; // host arrays are usable too!
        array.compute(cr, 1, "hello", 1000, 100);
        // The local id range is 100 here, so this example spawns 10 workgroups and all GPUs share them:
        // e.g. GPU 1 computes 2 groups, GPU 2 computes 5 groups and another GPU computes 3 groups.
        // Global id values are continuous across all global workitems, and local id values are also safe to use.
        // Faster GPUs get a bigger work share over iterations; the balancer is performance-aware across repetitions of a work.
        // No need to dispose anything at the end; resources release themselves when out of scope or garbage collected.