
Enable building with CUDA support on Mac OS X #664

Merged
merged 13 commits into tensorflow:master on Apr 28, 2016

Conversation

ville-k
Contributor

@ville-k ville-k commented Jan 1, 2016

  • Building with CUDA support on OS X requires GNU coreutils, because the
    native OS X readlink behaves differently from the GNU version; you can
    install it with Homebrew: "brew install coreutils" (see the sketch after
    this list)
  • On OS X, CUDA builds use CUDA toolkit 7.5 to overcome a host compiler
    incompatibility; the toolkit versions (CUDA & cuDNN) are now controlled by
    variables set in the configure script
  • The cuda/platform.bzl file is generated dynamically by the configure script
    to work around bazel's limitation that "select" cannot be used to set the
    platform-specific names and paths of the CUDA libraries
  • SE_STATIC_THREAD_LOCAL_POD now uses __thread instead of thread_local, which
    is not yet supported by the version of clang shipped by Apple. __thread
    supports only primitive types, but is more performant
  • Updates Eigen to a newer version that fixes a (clang) compilation error
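
For readers hitting the coreutils requirement, here is a minimal sketch of the portability issue (an illustrative shim, not the PR's exact code):

# GNU readlink supports -f to fully canonicalize a path; the BSD readlink
# shipped with OS X does not, so the build scripts rely on coreutils' greadlink.
if [ "$(uname -s)" = "Darwin" ]; then
  READLINK="greadlink"   # installed via: brew install coreutils
else
  READLINK="readlink"
fi
"$READLINK" -f /usr/local/cuda   # prints the fully resolved toolkit path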

@Mistobaan
Contributor

Nice! I think this is clearly an improvement over the existing one. I tried it on my MacBook with CUDA and it works. I also published a small tutorial on how to use this patch. Hope it can be merged soon. @vrv @mrry, thoughts?

@ville-k
Contributor Author

ville-k commented Jan 4, 2016

Thanks for trying this out @Mistobaan and writing a tutorial! I'd add a step for installing GNU coreutils ("brew install coreutils") to the tutorial - most people probably don't have it installed.

@vrv

vrv commented Jan 4, 2016

This is very nice! Thanks for this contribution -- we'll have @zheng-xq take a look at this soon.

@Mistobaan
Contributor

@ville-k good point. Updated :)

@zheng-xq
Contributor

zheng-xq commented Jan 5, 2016

@leary-google, could you review the stream-executor portion of this change?

@fpmchu

fpmchu commented Jan 5, 2016

@Mistobaan I followed your instructions, but "brew cask install cuda" defaults to CUDA 7.0, and @ville-k's patch uses 7.5 by default. I tried both. With CUDA 7.0, I get a compile error like this:

INFO: From Compiling tensorflow/core/kernels/cwise_op_gpu_sin.cu.cc:
nvcc fatal   : The version ('70002') of the host compiler ('Apple clang') is not supported
ERROR: /Users/fpmc/git/tensorflow/tensorflow/core/BUILD:339:1: error while parsing .d file: /private/var/tmp/_bazel_fpmc/b41e6b4c9df9b99106d3673ec4f590dc/tensorflow/bazel-out/local_darwin-opt/bin/tensorflow/core/_objs/gpu_kernels/tensorflow/core/kernels/cwise_op_gpu_sin.cu.d (No such file or directory).

By manually downloading CUDA 7.5 and installing it, it compiles.
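
A quick sanity check before building (the expected output line is illustrative):

# Confirm the CUDA toolkit on PATH is 7.5 before configuring the build.
nvcc --version | grep release
# e.g.: Cuda compilation tools, release 7.5, V7.5.17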

@fpmchu

fpmchu commented Jan 5, 2016

@ville-k In testing your change, I find that the newly added ALT_PATH doesn't really work. If libcudnn.6.5.dylib is located inside /usr/local/cuda/ and not /usr/local/cuda/lib/, the symlink command at the end of your cuda_config.sh silently creates a bad symlink like this:

libcudnn.6.5.dylib@ -> /usr/local/cuda/lib/libcudnn.6.5.dylib/lib/libcudnn.6.5.dylib

I think having the ALT_PATH stuff is hard to get right.
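
For illustration, a guard like this would surface the problem instead of leaving a dangling link (paths are taken from the report above, not the actual cuda_config.sh code):

src="/usr/local/cuda/lib/libcudnn.6.5.dylib"
if [ -f "$src" ]; then
  ln -sf "$src" libcudnn.6.5.dylib
else
  # Without this check, `ln -s` succeeds anyway and leaves a dangling
  # symlink, which is exactly the silent failure described above.
  echo "error: cudnn library not found at $src" >&2
  exit 1
fi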

@ville-k
Contributor Author

ville-k commented Jan 5, 2016

@fpmchu Thanks for testing the ALT_PATH build scenario for cuDNN! The fix for the problem you discovered turned out to be pretty simple - I have it on a separate branch for now:
ville-k@1232b37

@vrv What's the project's policy for issues found and fixed during PR review? New commits on the existing PR or open a separate PR?

@vrv

vrv commented Jan 5, 2016

New commits on existing PR seems fine -- we'll just ask to squash the commits prior to validation and merging.

@fpmchu

fpmchu commented Jan 5, 2016

@ville-k Cool. While the fix looks OK, I still don't understand the purpose of adding ALT_PATH. Is it just to allow users to put the libraries in /usr/local/lib? What's wrong with using /usr/local/cuda/lib?

@ville-k
Contributor Author

ville-k commented Jan 5, 2016

@fpmchu ALT_PATH is there to preserve the existing library search behavior on both Linux and Mac. The original configure and cuda_config.sh scripts look for cudnn.so.6.5 under both "/usr/local/cuda" and "/usr/local/cuda/lib64". Depending on the platform, these locations will now be searched if the user enters "/usr/local/cuda" as the cuDNN install dir (a sketch follows the lists):
Linux

  • /usr/local/cuda/lib64/cudnn.so.6.5
  • /usr/local/cuda/cudnn.so.6.5 (ALT_PATH)

Mac

  • /usr/local/cuda/lib/cudnn.6.5.dylib
  • /usr/local/cuda/cudnn.6.5.dylib (ALT_PATH)
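
Roughly, the lookup works like this (a sketch; the real cuda_config.sh differs in detail):

CUDNN_INSTALL_PATH="/usr/local/cuda"
if [ "$(uname -s)" = "Darwin" ]; then
  LIB_SUBDIR="lib";   LIB_FILE="libcudnn.6.5.dylib"
else
  LIB_SUBDIR="lib64"; LIB_FILE="libcudnn.so.6.5"
fi
if [ -e "$CUDNN_INSTALL_PATH/$LIB_SUBDIR/$LIB_FILE" ]; then
  CUDNN_LIB="$CUDNN_INSTALL_PATH/$LIB_SUBDIR/$LIB_FILE"
elif [ -e "$CUDNN_INSTALL_PATH/$LIB_FILE" ]; then
  CUDNN_LIB="$CUDNN_INSTALL_PATH/$LIB_FILE"   # the ALT_PATH fallback
else
  echo "cuDNN not found under $CUDNN_INSTALL_PATH" >&2
fi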

@Mistobaan
Contributor

Thanks @fpmchu for trying the tutorial out. I think it installed 7.0 because you had an old cuda formula. I updated the tutorial to suggest updating Homebrew first and checking the CUDA version.

@fpmchu

fpmchu commented Jan 6, 2016

Thanks @Mistobaan. I actually think you mean brew update though. I did try "upgrade" before and that didn't work. I didn't know that "update" is the thing to do to "upgrade brew" :-)
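
For anyone else tripped up by the distinction (the cask command is the one quoted earlier in this thread):

brew update              # refreshes Homebrew itself and its formula definitions
brew upgrade             # upgrades installed packages; a different operation
brew cask install cuda   # with fresh definitions, this should pick up CUDA 7.5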

@zheng-xq zheng-xq assigned martinwicke and unassigned zheng-xq Jan 7, 2016
@martinwicke
Member

Because of peculiarities in our internal build process, we won't be able to merge this right now. I'll leave this open since it may be useful for people. When we find someone to resolve the internal problems, we may be able to absorb it at a later time.

I'm sorry about that -- I would love to have this in.

@tensorflow-jenkins
Collaborator

Can one of the admins verify this patch?

@ville-k
Contributor Author

ville-k commented Jan 8, 2016

Thanks for the update, @martinwicke! Is the main issue causing problems with your internal build process the automatic generation of the "platform.bzl" file?
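
(For context, the generation step is roughly the following; the variable names and file contents are illustrative assumptions, not the PR's exact output:)

# configure emits a platform-specific .bzl file, since bazel's select()
# cannot be used to vary these library names and paths at load time.
cat > third_party/gpus/cuda/platform.bzl <<EOF
PLATFORM = "$(uname -s)"
CUDA_VERSION = "${TF_CUDA_VERSION}"
CUDNN_VERSION = "${TF_CUDNN_VERSION}"
EOF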

@martinwicke
Member

That, and reuse of shared code elsewhere. The code elsewhere can be updated (internally), which is why we need someone here to fix that up. The problem with stream_executor is that it almost has to be treated like generated code. I'm sorry this bit you after you've done all this work; I'm still hoping we can absorb it somehow, and we'll make the restrictions on stream_executor clearer for the future.

@NathanHowell
Contributor

I get a segfault when the cuda libs are not set up properly, due to getenv("LD_LIBRARY_PATH") returning null:

(lldb) bt
* thread #1: tid = 0x30ae49, 0x00007fff8ed8e752 libsystem_c.dylib`strlen + 18, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
  * frame #0: 0x00007fff8ed8e752 libsystem_c.dylib`strlen + 18
    frame #1: 0x0000000105f0d7a4 _pywrap_tensorflow.so`std::__1::basic_ostream<char, std::__1::char_traits<char> >& std::__1::operator<<<std::__1::char_traits<char> >(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, char const*) [inlined] std::__1::char_traits<char>::length(__s=0x0000000000000000) + 52 at string:651
    frame #2: 0x0000000105f0d790 _pywrap_tensorflow.so`std::__1::basic_ostream<char, std::__1::char_traits<char> >& std::__1::operator<<<std::__1::char_traits<char> >(__os=0x00007fff5fbfbb10, __str=0x0000000000000000) + 32 at ostream:882
    frame #3: 0x000000010895c96a _pywrap_tensorflow.so`perftools::gputools::internal::DsoLoader::GetDsoHandle(tensorflow::StringPiece, void**, perftools::gputools::internal::DsoLoader::LoadKind) [inlined] std::__1::enable_if<(__os=0x00007fff5fbfbb10, __x=0x00007fff5fbfbb08)) && (is_base_of<std::__1::ios_base, tensorflow::internal::LogMessage>::value), tensorflow::internal::LogMessage&&>::type std::__1::operator<<<tensorflow::internal::LogMessage, char*>(tensorflow::internal::LogMessage&&, char* const&) + 19 at ostream:1057
<trimmed>
(lldb) frame select 4
frame #4: 0x000000010895c957 _pywrap_tensorflow.so`perftools::gputools::internal::DsoLoader::GetDsoHandle(path=(data_ = "libcuda.dylib", size_ = 13), dso_handle=0x00007fff5fbfc138, load_kind=kLocal) + 887 at dso_loader.cc:99
   96     string path_string = path.ToString();
   97     *dso_handle = dlopen(path_string.c_str(), dynload_flags);
   98     if (*dso_handle == nullptr) {
-> 99       LOG(INFO) << "Couldn't open CUDA library " << path
   100                << ". LD_LIBRARY_PATH: " << getenv("LD_LIBRARY_PATH");
   101      // TODO(b/22689637): Eliminate unnecessary ToString once StrCat has been
   102      // moved to the open-sourceable version.

@NathanHowell
Contributor

I eventually did get this working, but the version of Eigen referenced here is very broken. Eigen HEAD (fd9611fa2d9c) does work, aside from an nvcc build break in TensorIntDiv.h where DividerHelper<64, T>::computeMultiplier is missing a cast... but it does at least seem to work.

Previous failure looked like this:

libc++abi.dylib: terminating with uncaught exception of type std::__1::system_error: mutex lock failed: Invalid argument

After a bit of hunting around, it turns out that the mutex instances had already been destroyed after something called exit(1) 😯

(lldb) bt
* thread #2: tid = 0x4649e, 0x00007fff98e3f738 libsystem_c.dylib`exit, stop reason = breakpoint 1.1
  * frame #0: 0x00007fff98e3f738 libsystem_c.dylib`exit
    frame #1: 0x00000001066e9f0e _pywrap_tensorflow.so`void Eigen::internal::EigenMetaKernel_Vectorizable<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap<Eigen::Tensor<float, 1, 1, int>, 16>, Eigen::TensorCwiseUnaryOp<Eigen::internal::scalar_right<float, float, Eigen::internal::scalar_difference_op<float>, true>, Eigen::TensorMap<Eigen::Tensor<float const, 1, 1, int>, 16> const> const> const, Eigen::GpuDevice>, int>(Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap<Eigen::Tensor<float, 1, 1, int>, 16>, Eigen::TensorCwiseUnaryOp<Eigen::internal::scalar_right<float, float, Eigen::internal::scalar_difference_op<float>, true>, Eigen::TensorMap<Eigen::Tensor<float const, 1, 1, int>, 16> const> const> const, Eigen::GpuDevice>, int) + 14
    frame #2: 0x00000001066e6ec1 _pywrap_tensorflow.so`tensorflow::functor::BinaryFunctor<Eigen::GpuDevice, tensorflow::functor::sub<float>, 1>::Right(Eigen::GpuDevice const&, Eigen::TensorMap<Eigen::Tensor<float, 1, 1, long>, 16>, Eigen::TensorMap<Eigen::Tensor<float const, 1, 1, long>, 16>, Eigen::TensorMap<Eigen::TensorFixedSize<float const, Eigen::Sizes<>, 1, long>, 16>) + 257

And it turns out that the EigenMetaKernel_Vectorizable specializations don't work as intended; each compiles down to a stub that simply calls exit:

(lldb) disassemble
_pywrap_tensorflow.so`void Eigen::internal::EigenMetaKernel_Vectorizable<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap<Eigen::Tensor<float, 1, 1, int>, 16>, Eigen::TensorCwiseUnaryOp<Eigen::internal::scalar_right<float, float, Eigen::internal::scalar_difference_op<float>, true>, Eigen::TensorMap<Eigen::Tensor<float const, 1, 1, int>, 16> const> const> const, Eigen::GpuDevice>, int>:
    0x1066e9f00 <+0>:  pushq  %rbp
    0x1066e9f01 <+1>:  movq   %rsp, %rbp
    0x1066e9f04 <+4>:  movl   $0x1, %edi
    0x1066e9f09 <+9>:  callq  0x1069fe13e               ; symbol stub for: exit
    0x1066e9f0e <+14>: nop

@ville-k
Contributor Author

ville-k commented Jan 10, 2016

I'm not able to reproduce the LD_LIBRARY_PATH issue you mentioned, @NathanHowell. Which version of OS X and Python are you using?
I reported the missing cast issue to Eigen before this PR and they fixed it for the version that I'm using (their fix was to add a constructor, not cast the argument). Sounds like they might've introduced a regression if it's broken in their HEAD. I'll push an update to this PR today with the latest from master and see if I can find a rev of Eigen that is newer and does not have the regression.

@NathanHowell
Contributor

@ville-k the segfault might have been from an older version of Xcode; I upgraded to 7.2 trying to track down the other issue and I think it's been fixed... but it should use DYLD_LIBRARY_PATH on OS X rather than LD_LIBRARY_PATH, right?

@ville-k
Contributor Author

ville-k commented Jan 10, 2016

@NathanHowell I was surprised that LD_LIBRARY_PATH works too; I stumbled onto it by accident. Apple added this at some point for UNIX conformance, and dyld checks both environment variables nowadays:
http://www.opensource.apple.com/source/dyld/dyld-360.18/src/dyld.cpp
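
So either variable works. If you do need to set one manually, an illustrative example (the library path is a typical CUDA install location, not mandatory):

# dyld consults DYLD_LIBRARY_PATH natively and, for UNIX conformance,
# also honors LD_LIBRARY_PATH; either lets libcuda.dylib be found.
export DYLD_LIBRARY_PATH="/usr/local/cuda/lib:${DYLD_LIBRARY_PATH:-}"
python -c "import tensorflow"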

@elbamos

elbamos commented Jan 24, 2016

Worked for me as well with @Mistobaan's instructions. Note that coreutils has to be installed from brew even if you think it's already installed, because the installer calls greadlink, so the brew package is a hard dependency on OS X. In addition, there should be a better way of handling the LD_LIBRARY_PATH issue. Setting any of those environment variables manually can wreak all sorts of havoc on OS X; neither theano nor torch needs it set explicitly.

@elbamos

elbamos commented Jan 25, 2016

@ville-k would you mind terribly rebasing? On Linux, updates to bazel have created a number of installation issues, with fixes rolled into the repository over the last few weeks.

fi
if [ -e "$CUDNN_INSTALL_PATH/libcudnn.so${TF_CUDNN_EXT}" -o -e "$CUDNN_INSTALL_PATH/lib64/libcudnn.so${TF_CUDNN_EXT}" ]; then

So the error here is:

ERROR: /workspace/third_party/gpus/cuda/BUILD:94:12: in srcs attribute of cc_library rule //third_party/gpus/cuda:cudnn: file '//third_party/gpus/cuda:lib64/libcudnn.so.' is misplaced here (expected .cc, .cpp, .cxx, .c++, .C, .c, .h, .hh, .hpp, .hxx, .inc, .S, .s, .asm, .a, .pic.a, .lo, .pic.lo, .so, .dylib, .o or .pic.o).

Prior to your change, if the user did not set a version, it would default to libcudnn.so, which is a symlink to whatever the latest installed version is (and this is what our tests use). Now it's being set to libcudnn.so.${TF_CUDNN_VERSION} and that variable is empty.

Is it possible to switch back to the old behavior, where the default is the empty version and the user can specify a specific one (OS X users will manually type in 4 and 7.5 for the versions)? That will also make sure that existing users who depend on the current behavior aren't broken by this change.
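
A sketch of the requested default (illustrative shell; the real logic lives in configure and platform.bzl):

# With the version left empty, fall back to the unversioned symlink
# (libcudnn.so / libcudnn.dylib), preserving the pre-change default.
cudnn_library_file() {
  local version="$1"
  if [ "$(uname -s)" = "Darwin" ]; then
    echo "lib/libcudnn${version:+.$version}.dylib"
  else
    echo "lib64/libcudnn.so${version:+.$version}"
  fi
}
cudnn_library_file ""   # -> lib64/libcudnn.so    (unversioned default)
cudnn_library_file "4"  # -> lib64/libcudnn.so.4  (user-specified version)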

Contributor Author

I pushed a change that makes the *_library_path functions in platform.bzl handle an empty version string. I'll take a look tomorrow to see if I can make the configure script work the way it did before.

Contributor Author

@vrv My latest commit restores the original behavior of defaulting to the symlinked cuda libraries when version is left empty in configure.

@markb729

markb729 commented Apr 25, 2016

Good work on the update.

There may still be an issue with cudnn v5 when packaged/installed with pip.

For both cudnn v4 and v5, the compile completes successfully. Invoking an example trainer,
bazel-bin/tensorflow/cc/tutorials_example_trainer, runs successfully with BOTH libraries:

I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.7.5.dylib locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.4.dylib or libcudnn.5.dylib locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.7.5.dylib locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.dylib locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.7.5.dylib locally
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:883] OS X does not support NUMA - returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: GeForce GTX 980 Ti
major: 5 minor: 2 memoryClockRate (GHz) 1.076
...
000001/000009 lambda = 2.000000 x = [0.894427 -0.447214] y = [1.788854 -0.894427]

However, when packaged and installed with pip, a compile with cudnn 4 works as expected but cudnn 5 will fail on load of the cudnn.5.dylib library:

import tensorflow as tf
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.7.5.dylib locally
Segmentation fault: 11

This is the point at which libcudnn.5.dylib loads. Since cudnn 5 works with the example trainer, the packaging must be breaking something, perhaps a misplaced non-versioned symlink? Odd that cudnn 4 works, though.
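
One way to test the misplaced-symlink theory (illustrative; adjust the path to wherever the libraries were installed):

# Flag any cudnn symlink whose target is missing; `-e` follows
# symlinks, so a dangling link fails the test.
for f in /usr/local/cuda/lib/libcudnn*; do
  [ -e "$f" ] || echo "dangling symlink: $f"
done
ls -l /usr/local/cuda/lib/libcudnn*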

@vrv

vrv commented Apr 25, 2016

@markb729: right now we don't package one binary that works with both cudnn4 and 5, because the APIs are different.

At some point I think it would be conceivable to implement the stream executor in such a way that different cudnn versions are different implementations of the same interface, and we dispatch exactly once during initialization to the right one.

if PLATFORM == "Darwin":
    return "lib/lib{}.{}.dylib".format(name, version)
else:
    return "lib64/lib{}.so.{}".format(name, version)

I think the problem is that you are assuming the version is always set -- but we allow people to use the unversioned library (libfoo.so, not libfoo.so.version).

I'll kick off a test just to make sure, but I still think this may not work.

Contributor Author

Looks like you were viewing an outdated diff - this was fixed in a previous commit: 0933043


Sorry, I still am terrible at github.

@vrv

vrv commented Apr 27, 2016

@tensorflow-jenkins test this please

elif test -e ${CUDNN_INSTALL_PATH}/include/cudnn.h; then
CUDNN_HEADER_PATH=${CUDNN_INSTALL_PATH}/include
elif test -e /usr/include/cudnn.h; then
CUDNN_HEADER_PATH=/usr/include

Is this possibly the cause of the test failure?
ERROR: cannot find cudnn.h under: /usr/lib/x86_64-linux-gnu

Contributor Author

I bet! Looks like I rebased that final elif out of existence :)
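
For reference, the restored fallback chain would look roughly like this (a sketch; the exact branches in the final configure may differ):

if test -e "${CUDNN_INSTALL_PATH}/cudnn.h"; then
  CUDNN_HEADER_PATH="${CUDNN_INSTALL_PATH}"
elif test -e "${CUDNN_INSTALL_PATH}/include/cudnn.h"; then
  CUDNN_HEADER_PATH="${CUDNN_INSTALL_PATH}/include"
elif test -e /usr/include/cudnn.h; then
  CUDNN_HEADER_PATH=/usr/include
else
  echo "ERROR: cannot find cudnn.h under: ${CUDNN_INSTALL_PATH}" >&2
  exit 1
fi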

Contributor Author

@vrv should be fixed now.

@vrv

vrv commented Apr 27, 2016

One more try! test this please

@vrv vrv merged commit 59faa82 into tensorflow:master Apr 28, 2016
@vrv

vrv commented Apr 28, 2016

Woohoo!! Thank you so much for this contribution. We'll try our best to keep it working, though without OS X / GPU test machines we can't promise too much.

@ville-k
Contributor Author

ville-k commented Apr 28, 2016

@vrv Awesome! I really appreciate your thoughtful feedback and help in figuring out the build issues!

@martinwicke
Member

Finally! Thanks for all the hard work! I will get an external GPU so we can test this and make sure it continues to work.

@Mistobaan
Contributor

Nice! 👍

@chrhansen

Hi guys, I'm new here. I was wondering if there shouldn't be a "MacOS GPU Tests" entry in the list of tests run at http://ci.tensorflow.org to make sure the new CUDA/GPU functionality keeps working?

@martinwicke
Member

Yes. We're installing hardware for that over the weekend. A test should come some time next week.
@jstaker7

jstaker7 commented Jun 5, 2016

I'm trying to build with CUDA 8.0, CuDNN 5.0, and clang-703.0.31 (CUDA test projects seem to build just fine).

I get the following error:

INFO: Found 1 target...
INFO: From Executing genrule //third_party/gpus/cuda:cuda_config_check [for host]:
/bin/bash: greadlink: command not found
ERROR: /Projects/tensorflow/third_party/gpus/cuda/BUILD:204:1: declared output 'third_party/gpus/cuda/cuda.config' was not created by genrule. This is probably because the genrule actually didn't create this output, or because the output was a directory and the genrule was run remotely (note that only the contents of declared file outputs are copied from genrules run remotely).
ERROR: /Projects/tensorflow/third_party/gpus/cuda/BUILD:204:1: not all outputs were created.
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 120.460s, Critical Path: 0.23s

Any ideas?

Edit: Oops! Never mind, I forgot to install coreutils. I'm running into more trouble with CUDA 8.0, but it looks like those are known issues unrelated to this ticket.
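
A preflight check would have caught this (illustrative):

# The cuda_config genrule shells out to greadlink on OS X; fail early
# with a clear message if GNU coreutils isn't installed.
command -v greadlink >/dev/null 2>&1 || {
  echo "greadlink not found; run: brew install coreutils" >&2
  exit 1
}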

@martin-gorner

Since Apple does not sell any computers with Nvidia GPUs, could you tell us what hardware you are using this with? Is it some sort of Thunderbolt-attached GPU enclosure with an Nvidia card in it?

@jstaker7

Once upon a time, Apple did ship computers with NVIDIA chips. I have a 2012 MacBook Pro with a 650M.

@esd100

esd100 commented Jun 30, 2016

Yes. I have the same MacBook Pro with the 650M card.

@royalstream

royalstream commented Jul 4, 2016

I have a 2014 MacBook Pro with a 750M card, which is still acceptably recent.
I hope Apple will offer NVIDIA as an option in some future Macs; if not this year, maybe next year.
And of course there's also iBuildMacs.com and create.pro


@martin-gorner

Has anyone tried this with a GPU in a Thunderbolt enclosure? This video seems to imply that CUDA works well in that situation: https://youtu.be/Bsf9lHM8qLk

@martinwicke
Member

Our (new) Mac GPU tests run in just such a setup: a Mac Pro + Quadro M4000 in a Bizon2.

bquast added a commit to bquast/tensorflow that referenced this pull request Aug 18, 2016
@ghost

ghost commented Jul 23, 2019

Once upon a time, Apple did ship computers with NVIDIA chips. I have a 2012 MacBook Pro with a 650M.

I have the same MacBook Pro. Do you have it using the GPU? I've been trying for three days to find a good tutorial to make it work, but nothing.
