M1 Apple Silicon support #330
Hi @antranttu -- Not yet, but if you're willing to work with us and try a few experimental builds, we could probably get this working. We don't have an M1 Mac to test this on, but we were able to build an M1 version on an Intel Mac. If you're willing to try this out, then do the following and let us know how it goes:
-InterpretML team
Hi, yes, I'm willing to try the experimental builds to get M1 support going. However, I wasn't able to install using the mentioned command because I do not have access; the request requires authentication.
Ok then, let's try this. Go to: Download the bottom item labeled "wheel". Extract interpret_core-0.2.7-py3-none-any.whl from the wheel.zip onto your computer. Then, run:
Sorry, that's the wrong link. Will post the right one in a sec.
Here is the correct download. I will also update the message above.
It's working! Thank you very much for the help. One last question: is the performance optimized compared to other platforms? I ran the example code on the adult income dataset at https://interpret.ml/docs/ebm.html; it took about a second on my Intel machine to execute but consistently ~3 seconds on the M1 Mac. I'm not sure whether the computation would scale linearly on a larger dataset.
Great! Wow, when does stuff like that ever work on the first try? :) This build was a real hacky hack just to test things out, so I'll refine the solution to work on both M1 and Intel Macs. Once we have that working, hopefully you can help us again by testing the finished product. In the meantime, it should work fine for you as long as you aren't seeing crashes. On the speed question: I'd try it on bigger datasets before speculating on which chipset is faster. 1 sec vs 3 secs could easily be due to things like the time to load/initialize libraries. The build above runs on native ARM instructions and is not emulated. -InterpretML
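To make that kind of comparison fair, one rough approach is to time startup and the repeated fit separately, and take the median of several fits. The sketch below uses stand-in workloads (a stdlib import and an arithmetic loop); substitute the actual EBM example from https://interpret.ml/docs/ebm.html to get real numbers.

```python
# Sketch: separate one-time startup cost from steady-state per-fit cost
# before comparing chipsets. The workloads below are stand-ins.
import time

def timed(label, fn):
    """Run fn once, print the elapsed time, return (result, seconds)."""
    start = time.perf_counter()
    result = fn()
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.3f}s")
    return result, elapsed

# One-time cost: importing/initializing libraries (stand-in shown).
timed("startup", lambda: __import__("json"))

# Steady-state cost: run the fit several times and take the median so
# warm-up effects don't dominate the comparison.
fit_times = [timed(f"fit {i}", lambda: sum(x * x for x in range(100_000)))[1]
             for i in range(3)]
print(f"median fit: {sorted(fit_times)[1]:.3f}s")
```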
Thank you, I will test on bigger datasets to check its efficiency. But yes, it's working great so far. Please do not hesitate to reach out if you need me for any testing in the future. Closing this for now. Thanks again for the help.
Hi @antranttu -- The fully integrated M1 build is ready to be tested. If this works, it will be in our next PyPI release. Can you please try out the wheel here:
Hello, I got this error this time around:
Thanks @antranttu. I see the problem and will post a new build shortly.
Hi @antranttu -- This build should fix that issue:
Yup, working great this time! Do you mind me asking what the error was about?
Great to hear. Thanks for your help in testing this! It was an issue in our build pipeline; see dfd773e. There is a new ARM-specific shared library that Python calls when it's running on an M1. That shared library was being built properly, but on the last step it wasn't being copied into the final wheel. -InterpretML team
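Since a wheel is just a zip archive, a pipeline bug like this can be caught by inspecting the final artifact's file list directly. A minimal sketch (the helper name and the paths in the usage comment are illustrative, not part of interpret's build):

```python
# Sketch: verify a shared library actually landed inside a built wheel.
# A .whl file is a standard zip archive, so zipfile can list its contents.
import zipfile

def wheel_contains(wheel_path, suffix):
    """Return True if any file inside the wheel ends with `suffix`."""
    with zipfile.ZipFile(wheel_path) as wf:
        return any(name.endswith(suffix) for name in wf.namelist())

# Hypothetical usage:
# wheel_contains("interpret_core-0.2.7-py3-none-any.whl",
#                "lib_ebm_native_mac_arm.dylib")
```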
Hello @interpret-ml!
Hello, I am not sure if the artifacts for M1 support have been pushed to the official |
Hi @interpret-ml, any ETA for an official release that supports M1?
Hello @antranttu, thanks a lot for sharing the ZIP. I tried it on my M1, but got the same error as you did before: dlopen(/Users/antran/miniforge3/envs/boosting/lib/python3.8/site-packages/interpret/glassbox/ebm/../../lib/lib_ebm_native_mac_arm.dylib, 0x0006): I tried installing with -U and also --force-reinstall. Any hint on how to identify which version is installed? Thanks @interpret-ml: I would second the question of @markustoivonen, is there an ETA?
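One way to check which version is actually installed in the active environment is the stdlib importlib.metadata (available on Python 3.8+). This is a general sketch, not an interpret-specific command:

```python
# Sketch: report the installed version of the interpret packages, or a
# placeholder if a package isn't present in the current environment.
from importlib.metadata import PackageNotFoundError, version

def installed_version(pkg):
    """Return the installed version string for pkg, or None if absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

for pkg in ("interpret", "interpret-core"):
    print(pkg, installed_version(pkg) or "not installed")
```

Alternatively, `pip show interpret-core` from the shell reports the same information.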
Hello @antranttu, I installed it in a fresh venv and now it is working. Cool! Thanks for sharing the ZIP again. Erwin
Also leaving a comment here to express interest in having this more easily available for use on M1 Macs. @interpret-ml
Hello,
I was trying to install and use the EBM Classifier on my M1 computer but came across the following error:
I was wondering if interpret is supported on the M1 chip yet? Is there any workaround for the error? Thank you!