Provide binaries for ARM #111
Comments
Very cool, wow!
I think I built AppImageKit using the qemu-arm inside Docker trick (https://resin.io/blog/building-arm-containers-on-any-x86-machine-even-dockerhub/). Here is my Dockerfile (ericfont/armv7hf-debian-qemu-AppImageKit@1c2b696). Maybe someone can take a look at this log of output from running it and tell me if something looks suspicious:
There were 3 libs and unionfs-fuse missing in the AppImageKit/binary-dependencies dir... I manually added them using:
apt-get install fuse libglade2-0 libvte9 ruby-vte unionfs-fuse
cp /usr/lib/libglade-2.0.so.0 AppImageKit-5/binary-dependencies/armv7l/
cp /usr/lib/libvte.so.9 AppImageKit-5/binary-dependencies/armv7l/
cp /usr/lib/ruby/vendor_ruby/1.8/arm-linux-eabihf/vte.so AppImageKit-5/binary-dependencies/armv7l/
cp /usr/bin/unionfs-fuse AppImageKit-5/binary-dependencies/armv7l/
My question for @probonopd is whether there is anything specific about the libs that are included in binary-dependencies (for instance, were they built with a special compiler)? Is what I did OK? Why does your build script not grab these dependencies in addition to the ones it already grabs? Oh, and I think python2 should be added as a dependency.
Looks awesome @ericfont! Do you think it could be integrated with our Travis, just by referencing the Docker image in .travis like we do for 32-bit builds? Maybe by creating a simplified Dockerfile that sets up the dependencies but doesn't actually run the build? Regarding the binary dependencies, there's a WIP in the experimental branch to build these dependencies, but for now in master they should be included in the repo. If we create a PR on top of 32bit_builds_docker, we can integrate the binaries in the PR for now, and later on, in another PR, remove all the binary dependencies from all architectures and either build them or fetch them on the fly from the build machine.
Yes, I think that is wise and can easily be done. (As it is now, the base Docker image includes a bunch of MuseScore dependencies... I can create a fresh Dockerfile that just does apt-get for the minimum that is needed.)
Regarding this simplified Dockerfile, ideally it should use the same dependency list as the recipe so that you don't have to maintain the list in two places. Perhaps modify build.sh so that you can pass in an optional argument to fetch dependencies but not actually build.
Hmmm, ideally that argument shouldn't be needed; the dependencies should be the same on all architectures. It may happen that some dependency is already installed on some architecture, but the script should make sure all needed dependencies are installed. In this case, we're talking only about python. (Please note: right now I'm only talking about dependencies fetched from the repos, not about the stuff in the binary-dependencies directory.)
But the dependencies might change over time, and then it would be a pain to update them in both build.sh and the Dockerfile. That's why @probonopd's Dockerfiles simply run the recipe and then clean up the build afterwards. I'm suggesting you skip the actual build by adding a "--fetch-dependencies-only" argument to the recipe (or in this case build.sh). Edit: I was talking about repo dependencies too, but all of them, not just python.
Hmmm, I agree dependencies shouldn't be duplicated in the Dockerfile, and should reside only in build.sh.
The Dockerfile is there to create a Docker Image, and it is the image that gets used during the build on Travis. The Dockerfile itself is not used at all during the build on Travis. As you yourself said above:
The best way to set up those dependencies is to run the apt-get lines from build.sh. Then, when it's time to do the actual build on Travis, it will run quicker because the Docker image that it fetches will already contain the dependencies.
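As a concrete illustration of the suggestion above, here is a minimal sketch (not the actual AppImageKit recipe; the flag name comes from this thread and the package list is purely illustrative) of how build.sh could support a --fetch-dependencies-only mode that a Dockerfile could RUN once to bake the dependencies into the image:

```bash
#!/bin/bash
# Hypothetical excerpt of build.sh -- a sketch, not the real recipe.
set -e

FETCH_ONLY=0
for arg in "$@"; do
  case "$arg" in
    --fetch-dependencies-only) FETCH_ONLY=1 ;;
  esac
done

# Install build dependencies from the distro repos (package list is illustrative).
apt-get update
apt-get install -y build-essential cmake libfuse-dev python

# When baking a Docker image we only want the dependencies, not the build itself.
if [ "$FETCH_ONLY" -eq 1 ]; then
  echo "Dependencies installed; skipping the build."
  exit 0
fi

# ... the normal AppImageKit build would continue here ...
```

The simplified Dockerfile would then just contain something like `RUN ./build.sh --fetch-dependencies-only`, so the Travis job that later runs the real build inside that image no longer spends time on apt-get.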
Oh! OK. I thought the Docker image was being created on the fly based on the Dockerfile. Now everything makes sense. Thanks!
I thought the same the first time I saw it! ;)
No, nothing special, but use the oldest ones you can find that work for you (so that they run even on older distros).
OK, well, Debian Wheezy contains the oldest ARM ones I can find. (There is currently no such thing as CentOS 6 ARM binaries that I can find on the internet.) I've forked your repo and I'm adding a "--fetch-dependencies-only" argument to AppImageKit ./build.sh, which will simply exit build.sh after grabbing dependencies (oh, and I'm adding python to apt-get... maybe I should add python to yum and pacman as well?): feature/32bit_builds_docker...ericfont:origin/feature/arm_builds_docker Now I'm going to make a minimal Docker image that contains these fetched dependencies, based off Debian Wheezy using resin's qemu-arm.
Here is a minimal Wheezy armv7 (with qemu-arm) Docker image that contains the AppImageKit dependencies: https://hub.docker.com/r/ericfont/armv7hf-debian-qemu-appimagekit/builds/bzjsvmqzi8z9u3dro8qew9r/ Regarding:
Note that Docker images can be created "on the fly" if you link your GitHub account to Docker Hub and create a new "automated" Docker build via https://hub.docker.com/add/automated-build/your-docker-username/github/orgs/ and add a GitHub repository which contains just one file, "Dockerfile". Then whenever you push a commit of that repo to GitHub, the Docker build will run, and the "latest" tag will reflect the result of the latest build. And if you push to a specific branch, then the Docker image will be assigned a tag according to the name of the branch. In my case, my Dockerfile is https://github.com/ericfont/armv7hf-debian-qemu-AppImageKit/blob/AppImageKit-fetch-dependencies-only/Dockerfile and so when I push that, it will build, and I can later grab the already-built Docker image tagged with "AppImageKit-fetch-dependencies-only". (The reason I'm writing this is because it took me a little bit of time to figure that all out.)
How do I build AppImageKit for ARM (Raspberry Pi)?
Are you asking me? I had done it in a Docker automated build using https://github.com/ericfont/armv7hf-debian-qemu-AppImageKit/blob/master/Dockerfile (although being "debian armhf", those produced binaries are armv7 and won't work on Raspbian, which is armv6). You would also still need to extract the AppImageKit binary out of the produced Docker image. Would you like me to provide a Travis build script which basically runs the AppImageKit build inside the qemu Docker image, gets the produced binaries out of the Docker image, and uploads them to something like Bintray? For all Debian-supported architectures: armel, armhf, arm64?
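For reference, the "get the produced binaries out of the Docker image" step could look roughly like the sketch below; the image name and paths are placeholders taken from this thread, not a finished Travis script, and it assumes qemu/binfmt support for ARM images is already set up on the host.

```bash
#!/bin/bash
# Sketch: run the ARM build inside the qemu-enabled image, then copy the results out.
set -e

IMAGE=ericfont/armv7hf-debian-qemu-appimagekit:latest   # placeholder image name

# Run the (emulated) ARM build in a named container.
docker run --name appimagekit-arm-build "$IMAGE" /bin/bash -c "cd /AppImageKit && ./build.sh"

# Copy the build output from the stopped container back to the host.
mkdir -p out
docker cp appimagekit-arm-build:/AppImageKit/out/. out/
docker rm appimagekit-arm-build

# From here a CI job could upload ./out (e.g. to Bintray).
ls -l out/
```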
(I also want to note that instead of the Docker/qemu route, there is also the native cross-compiler-to-ARM route... I'm still trying to get MuseScore working with a native cross-compiler to ARM using something like this https://github.com/ericfont/MuseScore/blob/compile-armhf/build/Linux%2BBSD/jessie-crosscompile-armhf.Dockerfile and this https://github.com/ericfont/MuseScore/blob/compile-armhf/build/Linux%2BBSD/jessie-crosscompile-armhf.cmake)
@ericfont The word "awesome" would not even come close to describing how cool this would be 👍
OK, I'll try to do that in the next couple of days. I think I should use the /feature/build_3rd_party_dependencies branch to compile those libraries statically (I believe), instead of copying them from the Debian repo as I did in my previous Docker image. I'll just compile those 3rd_party_dependencies once for each ISA, and then stick them in a pre-built Docker image, which Travis will use, so those dependencies aren't built on every Travis run.
The Docker build just got merged into master, so I integrated the 3rd-party libraries build into it (#120). There's a catch: I need to use CMake >= 3.1 because some dependencies' source code is only available as .tar.xz, but at least the CentOS Docker images come with an older version, so I download the new official CMake binaries; however, the CMake website only provides x86 and x86_64 binaries, not ARM. If you rebase against …
Well... to be honest, CentOS doesn't really support ARM (the latest CentOS supports armv7, but that's it). Debian, on the other hand, supports tons of architectures. I suggest using Debian for dealing with cross-architecture stuff.
No, no, CentOS is only for 64-bit and 32-bit builds, nothing to do with ARM. I was just warning that for that particular distro I had to tweak CMake, and that may or may not be a problem for ARM; it's just something to look out for. ARM builds should be done in another container that sets up everything so that build.sh runs out of the box and generates ARM binaries, whichever container fits best, regardless of the containers used for x86. BTW, if we've already got the ARM builds running on QEMU, is cross-compiling really worth the hassle? What's the benefit? And as for building 3rd_party_dependencies only once: building them on every build should be simpler and more flexible, and the build time shouldn't be a problem, even if it's a 10-minute build per architecture (just saying a big number, I don't know how long the actual build takes).
Understood about CentOS. Regarding your second paragraph, it would not be "cross-compiling", but you still need to have a Docker image for that architecture. So when I used the resin.io trick, I was basically using a Debian Jessie armhf image that had a special statically compiled version of qemu. The qemu is slow, but for AppImageKit, being a relatively small code base, it is fine, and I was saying I can go ahead and set up qemu Docker images of each architecture for AppImageKit building. But for a larger project, for example MuseScore, qemu is way too slow.
(To clarify a bit further how the resin.io trick works: the statically compiled version of qemu is something that runs natively on x86-64 Docker Hub, and this special Docker image is set up so that qemu intercepts all calls to execute binary programs. All the binary programs in the Docker image are of your target architecture.)
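A rough illustration of the same mechanism on a plain x86_64 host (using the generic multiarch images as an example, not the resin.io ones from this thread): register qemu with the kernel's binfmt_misc once, and any container whose binaries are ARM gets executed transparently through the bundled qemu-arm-static.

```bash
# Sketch: run an ARM container on an x86_64 host via qemu-user-static + binfmt_misc.
# Register qemu interpreters for foreign architectures with the kernel (one-off, privileged).
docker run --rm --privileged multiarch/qemu-user-static:register --reset

# This image ships qemu-arm-static inside, so its ARM binaries run through qemu.
docker run --rm multiarch/debian-debootstrap:armhf-jessie uname -m   # prints armv7l
```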
I see! The problem with cross-compiling is usually the dependencies, which are usually easier to fetch on a native system or VM (that's why I aborted trying to cross-compile AppImageKit for 32-bit from x86_64; using a container was way simpler). Then there's other stuff; for instance, you may want to copy over all the .so files you depend on, which are usually easy to fetch with …
How long would it take to build AppImageKit with all its dependencies on QEMU/ARM?
Right, I understand the issue about dependencies. But do keep in mind that Debian has very good support for cross-compiling... you basically add the architecture (dpkg --add-architecture) and add the cross-toolchain repo to /etc/apt/sources.list, and then you can apt-get the individual libraries using their same Debian package names with ":armhf" appended, like "apt-get install qtbase5-dev:armhf", and it puts all those libraries for that architecture in a dedicated subfolder like /usr/lib/arm-linux-gnueabihf. Then in your CMake toolchain file (https://cmake.org/Wiki/CMake_Cross_Compiling#The_toolchain_file) you specify those directories. (Aside: the Raspberry Pi 1's architecture is not fully supported by the official Debian cross toolchain because Debian's armel doesn't use hardware floating point... but armel will run on a Raspberry Pi 1.) Regarding AppImageKit, I haven't looked closely at your dependencies. I count 11 dependencies, is that correct? (You commented out vte.so and libglade.so in your CMakeLists... is there a reason why?) Now, I usually expect a rough order-of-magnitude estimate of ~10x slowdown when emulating, but note that compiling is CPU-intensive, so that really stresses the emulation. But there is no need to worry about fitting in a time limit for building those dependencies, since I can use my home computer running overnight, for instance. And building AppImageKit itself, sans dependencies, wasn't too bad at all (I can't remember exactly, but something around 5 minutes on my computer).
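A condensed sketch of that Debian multiarch workflow (the package names are only examples and multiarch co-installability varies per package; the CMake variables are the standard ones from the toolchain-file documentation linked above):

```bash
#!/bin/bash
# Sketch: Debian multiarch cross-compiling setup for armhf. Package names are examples.
set -e

dpkg --add-architecture armhf
apt-get update
# Cross toolchain plus an example foreign-architecture dev package.
apt-get install -y crossbuild-essential-armhf libfuse-dev:armhf

# Minimal CMake toolchain file pointing at the armhf compilers and library directories.
cat > armhf-toolchain.cmake <<'EOF'
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)
set(CMAKE_FIND_ROOT_PATH /usr/arm-linux-gnueabihf /usr/lib/arm-linux-gnueabihf)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
EOF

cmake -DCMAKE_TOOLCHAIN_FILE="$PWD/armhf-toolchain.cmake" .
```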
It usually works fine, but broke for me in the particular case of libfuse.so.2, although I haven't given it much thought since #91 (comment)
Hard to say, especially for the Python application, which currently doesn't run properly on my Arch Linux, because building a portable pygtk app is a bitch!
Hmmm, you're right, it's not finished. Those dependencies are still fetched from …
AppImageAssistant has been replaced by appimagetool.
Do you think we can get ARM builds going for the …
Or at least a runtime.c?
To everyone reading this: if you are interested in ARM builds, check out the Open Build Service, which can build AppImages for 32-bit and 64-bit Intel and ARM architectures. Instructions: https://github.com/probonopd/AppImageKit/wiki/Using-Open-Build-Service
Thanks, this looks useful.
Here are some test AppImages for ARM:
And here are builds of appimagetool for ARM:
I'm testing that leafpad AppImage on my armv7l C201 Chromebook running Arch Linux... I get:
[e@alarm Downloads]$ ./leafpad-latest-armv7l.AppImage
[e@alarm Downloads]$ ./leafpad-1328305040.7e5e2e0.glibc2.4-armv7l.AppImage
Downgrading my system's harfbuzz from 1.5.0-1 to 1.4.8-1 allowed me to run your AppImage fine.
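On Arch that kind of downgrade is usually done from the local package cache; a minimal sketch (the exact cached filename is an assumption, check the cache directory first):

```bash
# Sketch: downgrade harfbuzz from the pacman cache on Arch Linux ARM (filename is assumed).
ls /var/cache/pacman/pkg/ | grep harfbuzz
sudo pacman -U /var/cache/pacman/pkg/harfbuzz-1.4.8-1-armv7h.pkg.tar.xz
```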
So @ericfont, you are saying that newer versions of libharfbuzz* are not able to run the AppImage, while older versions do? Unless someone tells me differently, I'd assume that must be a libharfbuzz bug then (since libraries are supposed to be backward compatible). If it is, we should report it to the HarfBuzz team.
Yes, that is what I'm saying.
I should note that the Arch distro's current leafpad v0.8.18.1 runs fine with the newer libharfbuzz v1.5.0-1. So for whatever reason the newer libharfbuzz is not being backwards compatible.
Can you please open a bug report on libharfbuzz and link it here? Maybe also add the specifics to https://github.com/AppImage/AppImageKit/wiki/Desktop-Linux-Platform-Issues. Thank you.
Reopening this issue since its original intention, namely to provide ARM builds, is still relevant. Here is a solution that apparently can even do without Docker:
There are two realistic options, basically: one is to cross-compile using, e.g., the Raspberry Pi cross-compiling toolchain; the other involves a QEMU-emulated debootstrap (not a VM but process-wise emulation). Cross-compiling would produce faster results, but I've had better experiences with the latter option. I'll have a look at that link you provided, @probonopd.
@ericfont found when cross-compiling @musescore that it would compile OK but would then fail at the linking stage. He got around this by using QEMU to run a Raspberry Pi image just for the linking step: compile on x86_64 and link on ARM (via QEMU).
Thanks for the pointer, @shoogle.
@shoogle check out this much easier solution:
You just need to install …
A Raspberry Pi (or rather, Raspbian, which most people think is the same thing) image is only required when building something specifically for the Raspberry Pi, i.e., when the additional libraries are required (e.g., you're working with the GPIOs etc.). Otherwise, a "normal" ARM image will suffice.
"@ericfont found when cross-compiling @musescore that it would compile ok but would then fail at the linking stage." Specifically what I remember happening is the RPI running out of RAM when linking. That could be solved by adding a swap memory (but that ends up being slow anyway). Also I would have to limit compilation to two threads (thereby not utilizing 2 of the 4 cores), since 4 threads would exceed RAM limit as well. Basically any large compile jobs you're better off using a more powerful machine than a RPi. |
@ericfont, I was talking about cross-compiling the MuseScore ARM AppImage on TravisCI x86_64, not compiling natively on the RPi. It seems you had trouble with packaging the AppImage rather than linking. @TheAssassin, I'm sure QEMU is much easier, but cross-compiling is faster, and this matters for MuseScore because it's a big program and TravisCI has a limit on build time. I was just pointing out that you can get the best of both worlds by cross-compiling on x86 and only launching QEMU for any subsequent steps that you can't do on x86.
I wouldn't worry at all about the time limit, to be honest; the time lost in comparison to cross-compiling won't mean we'll exceed any such limit.
Probably should target armv6l (Raspberry Pi B+).
As #91 but for ARM. These containers may come in handy:
Stuff to check out: