Implementing a Job Scheduler for GPUs in OpenCL
Graphics processors can now be used as a general computational platform. The parallel nature of modern Graphics Processing Units, together with the fact that almost every modern computer has a GPU, makes them very useful for solving data-parallel problems. GPUs nowadays have more transistors than CPUs; most of those transistors are spent on additional Single Instruction Multiple Data (SIMD) units, whereas on CPUs roughly two thirds of the transistors are used to implement cache memory and cache logic.
One of the challenges in most parallel implementations on GPUs is minimizing communication between the CPU (host) and the GPU (device), because the I/O overhead is large. Another common problem is minimizing the idle time of the GPU work-groups.
One idea to address these problems is to implement a job scheduler on the GPU. This way, as long as the scheduler's data structure is not empty, there are no idle work-groups, and the number of kernel calls is also lower, which may translate into better run times.
This project implements a queue in GPU memory; each GPU work-group dequeues jobs from this queue and processes them (see the sketch below). By implementing this job scheduler, the number of kernel calls and the number of idle work-groups should drop, yielding a greater speedup. After the implementation, this data structure will be used in a GPU ray tracing engine.
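As a rough illustration of how such a queue might be consumed on the device, the sketch below shows a persistent OpenCL kernel in which the first work-item of each work-group atomically claims the next job index and the whole group then processes it. The job_t layout, the worker kernel name, and the placeholder "doubling" work are illustrative assumptions, not the actual data structure used in this repository.

```c
/* Minimal sketch of a device-side dequeue loop, assuming an array-backed
 * queue with an atomically incremented head index. Names and the job
 * payload are hypothetical placeholders for illustration. */
typedef struct {
    int data;                               /* hypothetical job payload */
} job_t;

__kernel void worker(__global job_t *jobs,      /* array of queued jobs     */
                     __global int   *head,      /* index of next job to take */
                     const int       tail,      /* total number of queued jobs */
                     __global int   *results)   /* hypothetical output buffer  */
{
    __local int my_job;                     /* job index shared by the group */

    for (;;) {
        /* One work-item per work-group claims the next job index. */
        if (get_local_id(0) == 0)
            my_job = atomic_inc(head);
        barrier(CLK_LOCAL_MEM_FENCE);

        if (my_job >= tail)
            break;                          /* queue is empty: all work-items stop */

        /* All work-items in the group cooperate on the claimed job;
         * here the "work" is just a placeholder computation. */
        results[my_job] = jobs[my_job].data * 2;
        barrier(CLK_LOCAL_MEM_FENCE);       /* done reading my_job before it is overwritten */
    }
}
```

On the host side, the jobs buffer and a head counter initialized to zero would be written once, and the kernel launched with as many work-groups as the device can keep resident, so the queue is drained without further host/device round trips.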
