CUDA backend does not work when using dynamic switching on OS X #67

tmcdonell opened this Issue Sep 11, 2012 · 2 comments



tmcdonell commented Sep 11, 2012

This is probably an issue with the underlying CUDA bindings package. A cursory glance at a few of the sample SDK programs suggests that it should work.


mchakravarty commented Sep 19, 2012

Somewhere in the CUDA for OS X docs, NVIDIA actually notes that the current driver doesn't work properly with dynamic graphics switching.


tmcdonell commented Jun 8, 2013

Here is a relevant thread from haskell-cafe:

It turns out the same solution works here as well. Running with +RTS -V0 to disable the master tick interval makes everything work as expected. The RTS documentation says of that flag:

Using a value of zero disables the RTS clock completely, and has the effect of
disabling timers that depend on it: the context switch timer and the heap
profiling timer. Context switches will still happen, but deterministically and
at a rate much faster than normal. Disabling the interval timer is useful for
debugging, because it eliminates a source of non-determinism at runtime.

So it seems this context switch is messing with foreign calls. Are there any disadvantages to disabling the timer?
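For anyone hitting this, the workaround can be applied like so. This is a sketch; the module and executable names are hypothetical, and -rtsopts is needed at compile time so the binary accepts arbitrary +RTS flags:

```shell
# Compile with RTS options enabled; without -rtsopts, GHC-built
# binaries reject most +RTS flags at startup.
ghc -threaded -rtsopts -O2 Main.hs -o my-accelerate-app

# Run with the RTS tick interval disabled (-V0), so the timer signal
# no longer interrupts long-running foreign (CUDA) calls.
./my-accelerate-app +RTS -V0 -RTS
```

The -RTS terminator is optional here, but keeps any subsequent arguments from being interpreted as RTS flags.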

I'll at least mention this in the documentation somewhere.

tmcdonell added a commit to tmcdonell/accelerate-cuda that referenced this issue Jun 9, 2013

@tmcdonell tmcdonell modified the milestone: _|_ Apr 14, 2017

@tmcdonell tmcdonell closed this Jul 4, 2017
