
How can I change the build config for the platform natives to mkl_aarch64? #409

Open
ashesfall opened this issue Jan 16, 2022 · 7 comments

@ashesfall
System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04 x86_64): AArch64 (Apple Silicon)
  • TensorFlow installed from (source or binary): Source
  • TensorFlow version: 2.7
  • Java version (i.e., the output of java -version): 17
  • Java command line flags (e.g., GC parameters):
  • Installed from Maven Central?: No
  • Bazel version (if compiling from source): 3.7.2
  • GCC/Compiler version (if compiling from source): Apple clang version 13.0.0
  • CUDA/cuDNN version: N/A
  • GPU model and memory: Apple M1

When building core-api, I can see from the logs that it is targeting x86_64. I wish to target AArch64.

I'm running ./build.sh in core-api.
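
For reference, this is roughly the kind of override I'm after (the property and config names below are guesses based on JavaCPP conventions and upstream TensorFlow's .bazelrc, not something I've verified against build.sh):

    # Guesses, not verified against this repo's build scripts:
    # JavaCPP-based builds usually select the native platform via a Maven property
    mvn install -Djavacpp.platform=macosx-arm64
    # and upstream TensorFlow's .bazelrc defines the config named in the title
    bazel build --config=mkl_aarch64 //tensorflow/...  # target path is illustrative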

@Craigacp
Collaborator

Craigacp commented Jan 17, 2022

Do you want to cross-compile from an x86 machine or compile natively on an M1?

I've got a branch where native compilation works, but you need to run the bazel build as the superuser due to a library discovery issue I've not figured out yet: #394 (comment)

I don't think we know how to cross-compile it from an x86 Mac.

@ashesfall
Author

The goal was compiling directly on the M1.

And yes, I eventually found your old discussion and have succeeded.

When will this become more official?

@Craigacp
Collaborator

When I or someone else figures out how to compile it without needing to run bazel as root, we'll merge it into master. Unless we manage to figure out cross-compiling, we won't be able to produce builds for it, due to a lack of appropriate build resources.

@ashesfall
Author

Okay. Please don't lose track of this ticket.

@ashesfall
Author

Oh, one last thing: is integration with tensorflow-metal on the roadmap as well?

@Craigacp
Collaborator

We don't currently expose TF_LoadPluggableDeviceLibrary, which is the entry point for the pluggable device infrastructure. If we did, then I think you should be able to download the tensorflow-metal .whl, unzip it, and then load it with that function. As far as I can tell, tensorflow-metal is closed source, so I don't think we'd be able to repackage it for Java.
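
Roughly, the manual steps would look like this; file and path names are illustrative, not verified:

    # Rough sketch; the wheel is just a zip archive, names are illustrative
    pip download tensorflow-metal
    unzip tensorflow_metal-*.whl -d tf-metal
    find tf-metal -name '*.dylib'   # locate the plugin library inside the wheel
    # that path is what would be passed to TF_LoadPluggableDeviceLibrary(path, status),
    # declared in tensorflow/c/c_api_experimental.h, if we exposed it in Java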

Apple's docs do say "V1 TensorFlow Networks" are unsupported, but I'm not sure what they mean by that.

@ashesfall
Author

Yeah, it can't be included, but it would be great if I could load the pluggable device. You can close the ticket.
