
Specify multiple architectures for Julia to precompile to #2044

Merged: 4 commits into jupyter:main, Nov 29, 2023

Conversation

@yuvipanda (Contributor)

Describe your changes

For amd64 (x86_64), we should specify which specific targets precompilation should be done for. If we don't specify it, it's only done for the target of the host doing the compilation. When the container runs on a host that's still x86_64, but a different generation of CPU than the build host, the precompilation is useless and Julia takes a very long time to start up. This specific multitarget comes from
https://docs.julialang.org/en/v1/devdocs/sysimg/#Specifying-multiple-system-image-targets, and is the same set of options that the official Julia x86_64 build is compiled with. If the architecture the container runs on is different, precompilation may still have to be redone on first startup, but this should catch most of the issues.

h/t to
https://discourse.julialang.org/t/is-it-possible-to-make-precompilation-portable-for-docker-images-built-with-a-different-cpu/95913 which helped point me towards JULIA_CPU_TARGET.
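
A minimal shell sketch of what this setting looks like (the target string is the one from the Julia sysimg docs linked above; where exactly it gets set during the image build is an assumption here):

```shell
# Multi-target string for x86_64 from the Julia sysimg docs; the official
# x86_64 binaries are built with this set of targets, so precompiled code
# stays usable across CPU generations.
JULIA_CPU_TARGET="generic;sandybridge,-xsaveopt,clone_all;haswell,-rdrnd,base(1)"
export JULIA_CPU_TARGET

# In a Dockerfile, the equivalent would be an ENV line (placement illustrative):
#   ENV JULIA_CPU_TARGET="generic;sandybridge,-xsaveopt,clone_all;haswell,-rdrnd,base(1)"

echo "JULIA_CPU_TARGET=$JULIA_CPU_TARGET"
```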

Issue ticket if applicable

Fixes #2015; see that issue for more information.

Checklist (especially for first-time contributors)

  • I have performed a self-review of my code
  • If it is a core feature, I have added thorough tests
  • I will try not to use force-push to make the review process easier for reviewers
  • I have updated the documentation for significant changes

@yuvipanda mentioned this pull request, Nov 28, 2023
@yuvipanda (Contributor, Author)

I'm not actually sure how we can test this, since the error only occurs when the Docker image is built on one CPU but then run on a different one.

@yuvipanda marked this pull request as ready for review, November 28, 2023 21:06
@yuvipanda (Contributor, Author)

I tested this locally and it seemed OK. I think the way forward here is to build this on GitHub Actions again, push, and then we can test on our local x86 machines :)

@mathbunnyru (Member) commented Nov 29, 2023

> I tested this locally and it seemed OK. I think the way forward here is to build this on GitHub Actions again, push, and then we can test on our local x86 machines :)

There is a better way.
All the images are available as GitHub artifacts on the summary page (scroll down and you will see all the artifacts).
https://github.com/jupyter/docker-stacks/actions/runs/7023919029?pr=2044

To load the image, after unzipping the file, use a command like this:
zstd --uncompress --stdout file.tar.zst | docker load

This is a benefit of our upload/download architecture: many things can be checked before merging to main.
The artifacts are only available for 3 days, but normally that should be enough time to download and test the image.

@mathbunnyru (Member)

@yuvipanda waiting for you to check the image in the PR on your x86_64 machine.
If everything works fine, then I'll merge.

@mathbunnyru (Member) commented Nov 29, 2023

Do you maybe know how to fix this when running aarch64 images on Apple Silicon Macs?
I mean, it would be great to have faster Julia startup in more cases.
If it's more difficult, then don't worry and let's leave it as is.

I think the important file to look at is this one: https://github.com/JuliaLang/julia/blob/master/base/cpuid.jl
I also noticed there are more Instruction Set Architectures in this file for x86_64, for example skylake.
Should we include them in this PR as well? (I assume the documentation might be outdated, but I also might be wrong)

Disclaimer: this is the first time I've looked at the Julia source code, so maybe we need to look at other file(s) too.

@mathbunnyru (Member)

Yep, the ISAs mentioned in the docs were added 5 years ago, so the documentation is outdated.
JuliaLang/julia@d5d01ef

I suggest we resolve this issue for modern CPUs as well, and also add a link to the cpuid file to make it easier to update our script in the future. How do you feel about this?

@mathbunnyru (Member)

I also have an idea in the opposite direction: stop trying to deal with this on the build machine at all.

We want Pkg.precompile() to be run on the user's machine.
So let's run it on the user's machine.

To do this, we can use (wait for it) our run-hooks capability: we add a file that runs this command in the background.

Pros:

  • we don't need to have a mess with CPU architectures
  • most likely to work for all the CPUs and unlikely to break
  • so we don't need to do anything for new Julia versions / new CPUs
  • also should work for aarch64 (and even Apple Silicon Macs)

Cons:

  • we should definitely run this in the background, since it takes some time and we can't wait for the process to finish; as a result, users won't know whether Julia is fully ready or not
  • we have to educate users about this
  • we need to be a bit careful in cases where a user is renamed (but maybe that will be easy)
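
For illustration, such a hook might look roughly like this (the file name, log path, and exact hook mechanism are assumptions, not an actual implementation):

```shell
#!/bin/bash
# Hypothetical startup-hook sketch: kick off Pkg.precompile() in the
# background so packages get compiled for the CPU the container actually
# runs on. All paths and names here are illustrative assumptions.
started=false
if command -v julia >/dev/null 2>&1; then
    nohup julia -e 'import Pkg; Pkg.precompile()' \
        >/tmp/julia-precompile.log 2>&1 &
    started=true
    echo "Julia precompilation started in the background (PID $!)"
else
    echo "julia not found; skipping background precompilation"
fi
```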

My opinion on this: if we can fix this issue in a sustainable way and for modern CPUs as well (I'm okay with manual updates of JULIA_CPU_TARGET, but we need a clear understanding of how and when to update it), then let's do it, since users will be happier. If we can't, then it's better to fix this at runtime.

@benz0li (Contributor) commented Nov 29, 2023

> I suggest we resolve this issue for modern CPUs as well, and also add a link to the cpuid file to make it easier to update our script in the future. How do you feel about this?

@mathbunnyru These are the target settings for all the architectures: https://github.com/JuliaCI/julia-buildkite/blob/70bde73f6cb17d4381b62236fc2d96b1c7acbba7/utilities/build_envs.sh#L20-L76

Original post: docker-library/julia#79 (comment)


File: https://github.com/JuliaCI/julia-buildkite/blob/main/utilities/build_envs.sh

@mathbunnyru (Member)

> > I suggest we resolve this issue for modern CPUs as well, and also add a link to the cpuid file to make it easier to update our script in the future. How do you feel about this?
>
> @mathbunnyru These are the target settings for all the architectures: https://github.com/JuliaCI/julia-buildkite/blob/70bde73f6cb17d4381b62236fc2d96b1c7acbba7/utilities/build_envs.sh#L20-L76
>
> Original post: docker-library/julia#79 (comment)
>
> File: https://github.com/JuliaCI/julia-buildkite/blob/main/utilities/build_envs.sh

Great, thanks!
Let's add a link to this file, set JULIA_CPU_TARGET for aarch64 as well, and this should be good to go.

- Don't need `export` as this is only used within this script
- Steal from upstream what should be set up for aarch64
@yuvipanda (Contributor, Author)

That's a great find, thanks @benz0li!

While newer architectures do exist in cpuid, I want to stick to whatever the official Julia build is built with. Primarily because we don't know what other options are needed - both sandybridge and haswell have options about what is included and what is missing, and we don't know what that would be for newer architectures. So if this passes the test of me downloading the built image and testing it locally, I'd like to stick to what's in the julia-buildkite repo (which is what produces the official binaries, I think?). How does that sound, @mathbunnyru?

@benz0li (Contributor) commented Nov 29, 2023

> That's a great find, thanks @benz0li!

@yuvipanda Likewise. The investigation in #2015 (comment) ff. is exemplary in finding the cause.

[...] I'd like to stick to what's in the julia-buildkite repo (which is what produces the official binaries, I think?).

Your assumption is correct.

@yuvipanda (Contributor, Author) commented Nov 29, 2023

Ok, testing on aarch64:

❯ docker run -it -p 8888:8888 quay.io/jupyter/julia-notebook:latest /bin/bash
(base) jovyan@0fd491481b74:~$ time julia -e 'import Pluto'

real    0m0.863s
user    0m0.749s
sys     0m0.192s
(base) jovyan@0fd491481b74:~$ ls /opt/julia/compiled/v1.9/Pluto/
OJqMt_j7gqT.ji  OJqMt_j7gqT.so

success!

And fine on x86_64 too!

➜ docker run -it -p 8888:8888 quay.io/jupyter/julia-notebook:latest /bin/bash
(base) jovyan@f092b817b671:~$ uname -m
x86_64
(base) jovyan@f092b817b671:~$ time julia -e 'import Pluto'

real    0m9.723s
user    0m8.765s
sys     0m1.203s
(base) jovyan@f092b817b671:~$ ls /opt/julia/compiled/v1.9/Pluto/
OJqMt_kStVT.ji  OJqMt_kStVT.so

yay!

I think this means this is ready to merge, @mathbunnyru.

@mathbunnyru (Member) commented Nov 29, 2023

> That's a great find, thanks @benz0li!
>
> While newer architectures do exist in cpuid, I want to stick to whatever the official Julia build is built with. Primarily because we don't know what other options are needed - both sandybridge and haswell have options about what is included and what is missing, and we don't know what that would be for newer architectures. So if this passes the test of me downloading the built image and testing it locally, I'd like to stick to what's in the julia-buildkite repo (which is what produces the official binaries, I think?). How does that sound, @mathbunnyru?

Sounds good to me.

@yuvipanda (Contributor, Author)

The compliment in #2044 (comment) made my day brighter, thanks @benz0li :)

@mathbunnyru (Member)

> Ok, testing on aarch64:
>
> ❯ docker run -it -p 8888:8888 quay.io/jupyter/julia-notebook:latest /bin/bash
> (base) jovyan@0fd491481b74:~$ time julia -e 'import Pluto'
>
> real    0m0.863s
> user    0m0.749s
> sys     0m0.192s
> (base) jovyan@0fd491481b74:~$ ls /opt/julia/compiled/v1.9/Pluto/
> OJqMt_j7gqT.ji  OJqMt_j7gqT.so
>
> success!
>
> And fine on x86_64 too!
>
> ➜ docker run -it -p 8888:8888 quay.io/jupyter/julia-notebook:latest /bin/bash
> (base) jovyan@f092b817b671:~$ uname -m
> x86_64
> (base) jovyan@f092b817b671:~$ time julia -e 'import Pluto'
>
> real    0m9.723s
> user    0m8.765s
> sys     0m1.203s
> (base) jovyan@f092b817b671:~$ ls /opt/julia/compiled/v1.9/Pluto/
> OJqMt_kStVT.ji  OJqMt_kStVT.so
>
> yay!
>
> I think this means this is ready to merge, @mathbunnyru.

I also tested on my M2 MacBook Air, and it only takes 0m1.359s (it takes around 40 seconds in the current images).

Amazing work both @yuvipanda and @benz0li, thank you!
Happy to merge this.

@mathbunnyru merged commit 99d3064 into jupyter:main, Nov 29, 2023
64 checks passed
@yuvipanda (Contributor, Author)

Thanks for the review and merge, @mathbunnyru. In addition, thanks for setting up the build process in such a way I could actually test the built images without having to merge these!

@yuvipanda (Contributor, Author)

I also wrote a quick blog post about this so this information is easier to find for the next soul that goes searching: https://words.yuvi.in/post/pre-compiling-julia-docker/

Merging this pull request closed the issue “Pluto Notebook problem” (#2015). 3 participants.