Platform interoperability #27
Related to #12. I think the issue you're encountering is that lleaves basically compiles for the host CPU's native instruction set. In the current lleaves version there's not much you can do except compile on the machine that you'll run the final binary on. The way to fix this would be to introduce a new flag in the compile() call. Issues like this are to some degree the consequence of targeting the host CPU.
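To illustrate the difference between host-specific and portable code generation, here is a minimal llvmlite sketch. This is not lleaves' actual code path, and the `"generic"` CPU / empty feature string are only assumptions about a reasonable portable baseline:

```python
import llvmlite.binding as llvm

llvm.initialize()
llvm.initialize_native_target()
llvm.initialize_native_asmprinter()

target = llvm.Target.from_default_triple()

# Host-specific: enables every feature of the build machine (AVX-512 etc. if
# present), so the emitted code may crash with SIGILL on an older CPU.
native_tm = target.create_target_machine(
    cpu=llvm.get_host_cpu_name(),
    features=llvm.get_host_cpu_features().flatten(),
)

# Portable baseline: a generic CPU with no extra features; slower, but runs on
# any machine matching the target triple.
generic_tm = target.create_target_machine(cpu="generic", features="")
```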
Okay, it totally makes sense now. Thanks again for your quick response. For me it would still be beneficial to use specific instruction targeting; however, I need to know which compiled version my machine requires. For now I will hash the output of `llvm.get_host_cpu_features()` to key compiled models by host compatibility. That should work, right 😄? Something like:

```python
import hashlib
import json

import llvmlite.binding as llvm

# Hash the host's CPU feature map, so that two machines with identical
# feature sets produce the same key.
h = hashlib.sha256()
h.update(json.dumps(dict(llvm.get_host_cpu_features()), sort_keys=True).encode())
key = h.hexdigest()
```
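Assuming that works, model loading could then look something like the sketch below. The per-key file naming and the `model.txt` path are placeholders; only `lleaves.Model(...).compile(cache=...)` is the actual lleaves API:

```python
import lleaves

# Hypothetical naming scheme: one compiled artifact per host-feature hash `key`
# (computed as in the snippet above).
cache_file = f"model-{key}.so"

llm = lleaves.Model(model_file="model.txt")
# compile() reuses the cached binary if it already exists for this key,
# otherwise it compiles for the current host and writes the cache file.
llm.compile(cache=cache_file)
# llm.predict(...) can then be used as usual.
```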
Without thinking about it for long, I'd probably use …
Is there a way to effectively check if compiled models are able to run on a machine?
I am running predictions on various platforms. When loading a compiled model, I load the one that was compiled on the same platform, using:

```python
import sys
import sysconfig

PLATFORM = sys.platform + '-' + sysconfig.get_platform().split('-')[-1].lower()
```

which results in either `darwin-arm64` or `linux-x86_64`. However, models compiled in one linux-x86_64 environment are sometimes not interoperable with other linux-x86_64 machines (I use AWS Fargate, which runs the container on whatever hardware is available). This results in exit code 132 (Illegal Instruction) in the model.predict() loop. The underlying reason is probably that the machines are not the same architecture (ARM based?). For example, when I compile a model inside a Docker container (with DOCKER_DEFAULT_PLATFORM=linux/amd64) on my M1 Mac, the platform is registered as linux-x86_64, but the model cannot be used on an AWS Linux machine running Docker.
What would be a solid way to go about this issue? Is there some LLVM version which I need to look at in order for models to be interoperable?
Thanks a lot.
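As a possible answer to the compatibility question above, here is a minimal sketch of an explicit check: store the CPU feature map when compiling, then verify at load time that every feature enabled at compile time is also available on the current host. The `features.json` side file is an assumption, not something lleaves produces:

```python
import json

import llvmlite.binding as llvm

def host_supports(compile_features_path: str) -> bool:
    """True if every CPU feature that was enabled at compile time is
    also enabled on the current host."""
    with open(compile_features_path) as f:
        compiled = json.load(f)  # e.g. {"avx2": true, "sse4.2": true, ...}
    host = dict(llvm.get_host_cpu_features())
    return all(host.get(name, False)
               for name, enabled in compiled.items() if enabled)

# At compile time, write the feature map next to the compiled model
# (hypothetical side file):
# with open("features.json", "w") as f:
#     json.dump(dict(llvm.get_host_cpu_features()), f)
```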