
Provide binaries for ARM #111

Closed · RazZziel opened this issue Mar 11, 2016 · 66 comments

Comments

@RazZziel

Probably should target armv6l (Raspberry Pi B+).

As #91 but for ARM. These containers may come in handy:

  - DOCKER_IMAGE=ryankurte/docker-arm-embedded
  - DOCKER_IMAGE=amardomingo/docker-build-arm
  - DOCKER_IMAGE=philipz/raspberry-pi-crosscompile

Stuff to check out:

@RazZziel RazZziel self-assigned this Mar 11, 2016
@probonopd
Member

Very cool, wow!

@ericfont
Contributor

I think I built AppImageKit using the qemu-arm-inside-Docker trick (https://resin.io/blog/building-arm-containers-on-any-x86-machine-even-dockerhub/). Here is my Dockerfile (ericfont/armv7hf-debian-qemu-AppImageKit@1c2b696). Maybe someone can take a look at this build log and tell me if anything looks suspicious:
sucesfful-build-armv7-in-docker.txt

There were 3 libs and unionfs-fuse missing from the AppImageKit/binary-dependencies dir; I added them manually using:

apt-get install fuse libglade2-0 libvte9 ruby-vte unionfs-fuse
cp /usr/lib/libglade-2.0.so.0 AppImageKit-5/binary-dependencies/armv7l/
cp /usr/lib/libvte.so.9 AppImageKit-5/binary-dependencies/armv7l/
cp /usr/lib/ruby/vendor_ruby/1.8/arm-linux-eabihf/vte.so AppImageKit-5/binary-dependencies/armv7l/
cp /usr/bin/unionfs-fuse AppImageKit-5/binary-dependencies/armv7l/

My question for @probonopd: is there anything specific about the libs included in binary-dependencies (for instance, were they built with a special compiler)? Is what I did OK? Why does your build script not grab these dependencies in addition to the ones it already grabs?

Oh, and I think python2 should be added as a dependency.

@RazZziel
Author

Looks awesome @ericfont! Do you think it could be integrated with our Travis setup, just by referencing the Docker image in .travis.yml like we do for the 32-bit builds? Maybe by creating a simplified Dockerfile that sets up the dependencies but doesn't actually run the build?

Regarding the binary dependencies, there's a WIP in the experimental branch to build these dependencies, but for now, in master, they should be included in the repo. If we create a PR on top of 32bit_builds_docker, we can include the binaries in the PR for now, and later on, in another PR, remove all the binary dependencies for all architectures and either build them or fetch them on the fly on the build machine.

@ericfont
Contributor

creating a simplified Dockerfile, that sets up the dependencies but doesn't actually run the build

Yes, I think that is wise and can easily be done. (As it is now, the base Docker image includes a bunch of MuseScore dependencies; I can create a fresh Dockerfile that just does apt-get for the minimum that is needed.)

@shoogle

shoogle commented Mar 13, 2016

Regarding this simplified Dockerfile, ideally it should use the same dependency list as the recipe so that you don't have to maintain the list in two places. Perhaps modify build.sh so that you can pass in an optional argument to fetch dependencies but not actually build.

@RazZziel
Author

Hmmm, ideally that argument shouldn't be needed; the dependencies should be the same on all architectures. It may happen that some dependency is already installed on some architecture, but the script should make sure all needed dependencies are installed.

In this case, we're talking only about apt-get install python, right?

(Please note: Right now I'm only talking about dependencies fetched from the repos, not about the stuff in the binary-dependencies directory, which is incomplete for both armv6l and armv7l)

@shoogle

shoogle commented Mar 13, 2016

But the dependencies might change over time, and then it would be a pain to update them in both build.sh and the Dockerfile. That's why @probonopd's Dockerfiles simply run the recipe and then clean up the build afterwards. I'm suggesting you skip the build by adding a "--fetch-dependencies-only" argument to the recipe (or, in this case, build.sh).

Edit: I was talking about repo dependencies too, but all of them, not just python.

@RazZziel
Author

Hmmm, I agree the dependencies shouldn't be duplicated in the Dockerfile and should reside only in build.sh (so they can be installed inside or outside Docker), but I think I'm not understanding completely: what's the use case for ./build.sh --fetch-dependencies-only, when the dependencies AFAIK are only needed in order to run ./build.sh? Especially given that AppImageKit is built right now in #112 with docker run -i -v "${PWD}:/AppImageKit" "$DOCKER_IMAGE" /bin/bash -c "/AppImageKit/build.sh"

@shoogle

shoogle commented Mar 13, 2016

The Dockerfile is there to create a Docker Image, and it is the image that gets used during the build on Travis. The Dockerfile itself is not used at all during the build on Travis. As you yourself said above:

Maybe by creating a simplified Dockerfile, that sets up the dependencies but doesn't actually run the build?

The best way to set up those dependencies is to run the apt-get lines from build.sh. Then, when it's time to do the actual build on Travis, it will run quicker because the Docker image that it fetches will already contain the dependencies.
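
For illustration, the split between the two steps might look roughly like this (the image name below is a placeholder, not the project's actual one):

# done once (or by an automated build on every push): turn the Dockerfile into an image
docker build -t someuser/appimagekit-armhf .
docker push someuser/appimagekit-armhf

# done on Travis: pull the prebuilt image and run the build inside it
docker run -i -v "${PWD}:/AppImageKit" someuser/appimagekit-armhf /bin/bash -c "/AppImageKit/build.sh"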

@RazZziel
Author

Oh! OK. I thought the Docker image was being created on the fly based on the Dockerfile. Now everything makes sense. Thanks!

@shoogle

shoogle commented Mar 13, 2016

I thought the same first time I saw it! ;)

@probonopd
Member

My question for @probonopd: is there anything specific about the libs included in binary-dependencies (for instance, were they built with a special compiler)? Is what I did OK? Why does your build script not grab these dependencies in addition to the ones it already grabs?

No, nothing special, but use the oldest ones you can find that work for you (so that they run even on older distros).

@ericfont
Contributor

OK, well, Debian Wheezy contains the oldest ARM ones I can find. (There is currently no such thing as CentOS 6 ARM binaries that I can find on the internet.) I've forked your repo and I'm adding a "--fetch-dependencies-only" argument to AppImageKit's ./build.sh, which will simply exit build.sh after grabbing the dependencies (oh, and I'm adding python to apt-get... maybe I should add python to yum and pacman as well?): feature/32bit_builds_docker...ericfont:origin/feature/arm_builds_docker
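
A rough sketch of what that flag looks like inside build.sh (the real argument handling and package list may differ slightly):

# parse the optional flag before doing any work
FETCH_ONLY=false
for arg in "$@"; do
    [ "$arg" = "--fetch-dependencies-only" ] && FETCH_ONLY=true
done

# ...the existing apt-get/yum/pacman dependency installation runs here...

# then bail out early instead of building
if [ "$FETCH_ONLY" = true ]; then
    echo "Dependencies installed; skipping the build."
    exit 0
fi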

Now I'm going to make a minimal Docker image that contains these fetched dependencies, based on Debian Wheezy using resin's qemu-arm.

@ericfont
Contributor

Here is a minimal Wheezy armv7 w/ qemu-arm Docker image that contains the AppImageKit dependencies: https://hub.docker.com/r/ericfont/armv7hf-debian-qemu-appimagekit/builds/bzjsvmqzi8z9u3dro8qew9r/

Regarding:

I thought the Docker image was being created on the fly based on the Dockerfile. Now everything makes sense.

Note that Docker images can be created "on the fly" if you link your GitHub account to Docker Hub and create a new "automated" Docker build via https://hub.docker.com/add/automated-build/your-docker-username/github/orgs/ and add a GitHub repository which contains just one file, "Dockerfile". Then whenever you push a commit of that repo to GitHub, the Docker build will run, and the "latest" tag will reflect the result of the latest build. And if you push to a specific branch, the Docker image will be assigned a tag according to the name of the branch. In my case, my Dockerfile is https://github.com/ericfont/armv7hf-debian-qemu-AppImageKit/blob/AppImageKit-fetch-dependencies-only/Dockerfile, so when I push it, it will build, and I can later grab the already-built Docker image tagged "AppImageKit-fetch-dependencies-only". (The reason I'm writing this is that it took me a little while to figure all of that out.)
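
So, for example, once the automated build for that branch has finished, the result can be pulled directly (the tag mirrors the branch name):

docker pull ericfont/armv7hf-debian-qemu-appimagekit:AppImageKit-fetch-dependencies-only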

@probonopd
Member

How do i build AppImageKit for ARM (Raspberry Pi)?

@ericfont
Contributor

Are you asking me? I had done it as a Docker automated build using https://github.com/ericfont/armv7hf-debian-qemu-AppImageKit/blob/master/Dockerfile (although, being "debian armhf", the produced binaries are armv7 and won't work on Raspbian, which is armv6). One would also still need to extract the AppImageKit binaries out of the produced Docker image.

Would you like me to provide a Travis build script which basically runs the AppImageKit build inside the qemu Docker image, gets the produced binaries out of the Docker image, and uploads them to something like Bintray? For all Debian-supported architectures: armel, armhf, arm64?
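
Roughly, such a script would boil down to something like this (the image name, the paths inside the image, and the upload step are placeholders):

# run the emulated ARM build (same pattern as the existing x86 setup)
docker run --name appimagekit-armhf "$DOCKER_IMAGE" /bin/bash -c "/AppImageKit/build.sh"

# copy the produced binaries out of the stopped container, then clean up
docker cp appimagekit-armhf:/AppImageKit/out ./out-armhf
docker rm appimagekit-armhf

# deploy step: upload ./out-armhf/* to Bintray (or similar) via its CLI/REST API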

@ericfont
Contributor

(I also want to note that instead of the Docker-qemu route, there is the native-cross-compiler-to-ARM route... I'm still trying to get MuseScore working with a native cross-compiler to ARM using something like this https://github.com/ericfont/MuseScore/blob/compile-armhf/build/Linux%2BBSD/jessie-crosscompile-armhf.Dockerfile and this https://github.com/ericfont/MuseScore/blob/compile-armhf/build/Linux%2BBSD/jessie-crosscompile-armhf.cmake)

@probonopd
Member

Would you like me to provide a Travis build script which basically runs the AppImageKit build inside the qemu Docker image, gets the produced binaries out of the Docker image, and uploads them to something like Bintray? For all Debian-supported architectures: armel, armhf, arm64?

@ericfont The word "awesome" would not even come close to describing how cool this would be 👍

@ericfont
Contributor

OK, I'll try to do that in the next couple of days. I think I should use the feature/build_3rd_party_dependencies branch to compile those libraries statically (I believe), instead of copying them from the Debian repo as I did in my previous Docker image. I'll just compile those 3rd party dependencies once for each ISA and then stick them in a pre-made Docker image which Travis will use, so those dependencies don't get built on every Travis run.

@RazZziel
Author

The Docker build just got merged into master, so I integrated the 3rd party libraries build on top of it (#120).

There's a catch: I need to use CMake >= 3.1 because some dependencies' source code is only available as .tar.xz, but at least the CentOS Docker images come with an older version, so I download the new official CMake binaries; however, the CMake website only provides x86 and x86_64 binaries, not ARM. If you rebase against feature/build_3rd_party_dependencies, you'll need to make sure the ARM qemu image comes with an up-to-date-enough CMake.
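
Roughly, the situation looks like this (the version and URL below are only illustrative):

# x86/x86_64: the distro's old CMake gets replaced with an upstream binary release
wget https://cmake.org/files/v3.5/cmake-3.5.2-Linux-x86_64.tar.gz
tar xf cmake-3.5.2-Linux-x86_64.tar.gz
export PATH="$PWD/cmake-3.5.2-Linux-x86_64/bin:$PATH"

# ARM: no such upstream binary exists, so the qemu image itself must already ship CMake >= 3.1
cmake --version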

@ericfont
Contributor

Well... to be honest, CentOS doesn't really support ARM (the latest CentOS supports armv7, but that's it). Debian, on the other hand, supports tons of architectures. I suggest using Debian for dealing with cross-architecture stuff.

@RazZziel
Author

No, no, CentOS is only for the 64-bit and 32-bit builds, nothing to do with ARM. I was just warning that for that particular distro I had to tweak CMake, and that may or may not be a problem for ARM; it's just something to look out for. ARM builds should be done in another container that sets up everything so that build.sh runs out of the box and generates ARM binaries, whichever container fits best, regardless of the containers used for x86.

BTW, if we've already got the ARM builds running on QEMU, is cross-compiling really worth the hassle? What's the benefit? And is building 3rd_party_dependencies only once worth it? Building them on every build should be simpler and more flexible, and the build time shouldn't be a problem, even if it's a 10-minute build per architecture (just picking a big number; I don't know how long the actual build takes).

@ericfont
Contributor

Understood about CentOS.

Regarding your second paragraph: it would not be "cross-compiling", but you still need a Docker image of that architecture. So when I used the resin.io trick, I was basically using a Debian Jessie armhf image that had a special statically compiled version of qemu. The qemu is slow, but since AppImageKit is a relatively small code base it is fine, and I was saying I can go ahead and set up qemu Docker images of each architecture for AppImageKit building. But for a larger project, for example MuseScore, qemu is way too slow.

@ericfont
Contributor

(To clarify a bit further how the resin.io trick works: the statically compiled version of qemu is something that runs natively on the x86-64 Docker Hub builders, and this special Docker image is set up so that qemu intercepts all calls to execute binary programs. All the binary programs in the Docker image are of your target architecture.)
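
(The same interception can be demonstrated outside Docker with Debian's qemu-user-static package, which ships the kind of statically compiled qemu described above; the binary name below is just an example:)

sudo apt-get install qemu-user-static     # provides /usr/bin/qemu-arm-static
qemu-arm-static ./some-armv7-binary       # runs an ARM executable on an x86_64 host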

@RazZziel
Author

I see!

The problem with cross-compiling is usually the dependencies, which are usually easier to fetch on a native system or VM (that's why I gave up trying to cross-compile AppImageKit for 32-bit from x86_64; using a container was way simpler). Then there's other stuff: for instance, you may want to copy over all the .so files you depend on, which are usually easy to fetch with ldd, but when cross-compiling, some toolchains have a different ldd, with different parameters and behavior, which makes it pretty difficult to find the actual dependencies, and the whole packaging process needs to be tweaked (it's something I had to deal with at work when porting x86 software to the Raspberry Pi; lots of nightmares...).
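
(For reference, the native-ldd approach I mean is basically this; the binary name and target directory are just illustrative:)

# copy every resolved shared-library dependency of a binary into one directory
ldd ./AppRun | awk '/=> \//{ print $3 }' | while read -r lib; do
    cp -v "$lib" binary-dependencies/armv7l/
done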

How long would it take to build AppImageKit with all its dependencies on Qemu/ARM?

@ericfont
Contributor

Right, I understand the issue about dependencies. But do keep in mind that Debian has very good support for cross-compiling: you basically add the architecture with dpkg, add the cross toolchain to /etc/apt/sources.list, and then you can apt-get the individual libraries using their usual Debian package names with ":armhf" appended, like "apt-get install qtbase5-dev:armhf", and it puts all those libraries for that architecture in a dedicated subfolder like /usr/lib/arm-linux-gnueabihf. Then in your CMake toolchain file (https://cmake.org/Wiki/CMake_Cross_Compiling#The_toolchain_file) you specify those directories. (Aside: the Raspberry Pi 1's architecture is not fully supported by the official Debian cross toolchain because Debian's armel doesn't use hardware floating point, but armel will run on a Raspberry Pi 1.)
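
In concrete terms, that setup looks roughly like this (package names are examples and may need adjusting):

sudo dpkg --add-architecture armhf
sudo apt-get update
sudo apt-get install crossbuild-essential-armhf          # the armhf cross toolchain
sudo apt-get install qtbase5-dev:armhf libfuse-dev:armhf # example target-architecture libraries
# the :armhf libraries land under /usr/lib/arm-linux-gnueabihf, which the CMake
# toolchain file then points at via CMAKE_FIND_ROOT_PATH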

Regarding AppImageKit, I haven't looked closely at your dependencies. I count 11 dependencies; is that correct? (You commented out vte.so and libglade.so in your CMakeLists... is there a reason why?) Now, I usually expect a rough order-of-magnitude slowdown of ~10x when emulating, but note that compiling is CPU-intensive, so that really stresses the emulation. But there's no need to worry about fitting into a time limit for building those dependencies, since I can use my home computer running overnight, for instance. Building AppImageKit itself sans dependencies wasn't too bad at all (I can't remember exactly, but something around 5 minutes on my computer).

@RazZziel
Author

But do keep in mind that Debian has very good support for cross-compiling

It usually works fine, but it broke for me in the particular case of libfuse.so.2, although I haven't given it much thought since #91 (comment)

I count 11 dependencies; is that correct?

Hard to say, especially for the Python application, which currently doesn't run properly on my Arch Linux, because building a portable PyGTK app is a bitch!

(You commented out vte.so and libglade.so in your CMakeLists... is there a reason why?)

Hmmm, you're right, it's not finished. Those dependencies are still fetched from ${binary-dependencies_PREFIX}/${ARCHITECTURE}/

@probonopd
Member

AppImageAssistant has been replaced by appimagetool. Use it instead.

@probonopd
Member

Do you think we can get ARM builds going for the appimagetool/master branch?

@probonopd
Member

Or at least a runtime.c?

@probonopd
Member

To everyone reading this, if you are interested in ARM builds, check out Open Build Service which can build AppImages for 32-bit and 64-bit Intel and ARM architectures. Instructions: https://github.com/probonopd/AppImageKit/wiki/Using-Open-Build-Service

@ericfont
Contributor

ericfont commented Jun 2, 2017

thanks, this looks useful.

@probonopd
Member

@ericfont
Contributor

ericfont commented Sep 6, 2017

I'm testing that leafpad AppImage on my armv7l C201 Chromebook running Arch Linux... I get:

[e@alarm Downloads]$ ./leafpad-latest-armv7l.AppImage
./leafpad-latest-armv7l.AppImage: symbol lookup error: /usr/lib/libharfbuzz.so.0: undefined symbol: FT_Get_Var_Blend_Coordinates

[e@alarm Downloads]$ ./leafpad-1328305040.7e5e2e0.glibc2.4-armv7l.AppImage
./leafpad-1328305040.7e5e2e0.glibc2.4-armv7l.AppImage: symbol lookup error: /usr/lib/libharfbuzz.so.0: undefined symbol: FT_Get_Var_Blend_Coordinates

@ericfont
Contributor

ericfont commented Sep 6, 2017

Downgrading my system's harfbuzz from 1.5.0-1 to 1.4.8-1 allowed me to run your AppImage fine.

@probonopd
Member

probonopd commented Sep 7, 2017

So @ericfont you are saying that newer versions of libharfbuzz* are not able to run the AppImage, while older versions do? Unless someone tells me differently, I'd assume that must be a libharfbuzz bug then (since libraries are supposed to be backward compatible)?

If it is, we should report it to the HarfBuzz team.

@ericfont
Contributor

ericfont commented Sep 7, 2017

Yes, that is what I'm saying.

@ericfont
Contributor

ericfont commented Sep 7, 2017

I should note that Arch's current leafpad v0.8.18.1 runs fine with the newer libharfbuzz v1.5.0-1. So for whatever reason the newer libharfbuzz is not being backwards compatible.

@probonopd
Member

So for whatever reason the newer libharfbuzz is not being backwards compatible.

Can you please open a bug report on libharfbuzz and link it here? Maybe also add the specifics to https://github.com/AppImage/AppImageKit/wiki/Desktop-Linux-Platform-Issues. Thank you.

@probonopd
Member

Reopening this issue since its original intention, namely to provide ARM builds, is still relevant.

Here is a solution that apparently even can do without Docker:
https://github.com/mmatyas/pegasus-frontend/blob/master/.travis.yml

@TheAssassin
Member

There are basically two realistic options: one is to cross-compile using, e.g., the Raspberry Pi cross-compiling toolchain; the other involves a QEMU-emulated debootstrap (not a VM, but process-wise emulation). Cross-compiling would produce faster results, but I've had better experiences with the latter option. I'll have a look at that link you provided, @probonopd.

@shoogle

shoogle commented Nov 11, 2017

@ericfont found when cross-compiling @musescore that it would compile ok but would then fail at the linking stage. He got around this by using QEMU to run a Raspberry Pi image just for the linking step. Compile on x86_64 and link on ARM (via QEMU).

@probonopd
Member

Thanks for the pointer @shoogle

@TheAssassin
Member

@shoogle check out this much easier solution:

> qemu-debootstrap 
W: Target architecture is the same as host architecture; disabling QEMU support
I: Running command: debootstrap --arch amd64
I: usage: [OPTION]... <suite> <target> [<mirror> [<script>]]
I: Try `debootstrap --help' for more information.
E: You must specify a suite and a target.
> qemu-debootstrap --arch armhf jessie armhf-root http://deb.debian.org/debian/
[ the usual debootstrap spam... ]
> sudo chroot armhf-root
[ now you're in the chroot, all commands are transparently emulated with qemu-arm-static ]

You just need to install qemu-user-static (and perhaps the Debian keyring), and you're good to go. The chroot command can launch anything other than a login shell as well; this way, it's pretty easy to call a script or anything else.
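
For example, running a build script non-interactively inside that chroot (assuming the sources were copied or bind-mounted into it first; the paths are illustrative):

sudo mount --bind "$PWD" armhf-root/mnt
sudo chroot armhf-root /bin/bash -c "cd /mnt && ./build.sh"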

@TheAssassin
Member

A Raspberry Pi (or rather, Raspbian, which most people think is the same thing) image is only required when building something specifically for the Raspberry Pi, i.e., when its additional libraries are required (e.g., you're working with the GPIOs etc.). Otherwise, a "normal" ARM image will suffice.

@ericfont
Contributor

"@ericfont found when cross-compiling @musescore that it would compile ok but would then fail at the linking stage." Specifically what I remember happening is the RPI running out of RAM when linking. That could be solved by adding a swap memory (but that ends up being slow anyway). Also I would have to limit compilation to two threads (thereby not utilizing 2 of the 4 cores), since 4 threads would exceed RAM limit as well. Basically any large compile jobs you're better off using a more powerful machine than a RPi.

@shoogle

shoogle commented Nov 11, 2017

@ericfont, I was talking about cross-compiling the MuseScore ARM AppImage on TravisCI x86_64, not compiling natively on rPi. It seems you had trouble with packaging the AppImage rather than linking.

@TheAssassin, I'm sure QEMU is much easier, but cross-compiling is faster, and this matters for MuseScore because it's a big program and TravisCI has a limit on build time. I was just pointing out that you can get the best of both worlds by cross-compiling on x86 and only launching QEMU for any subsequent steps that you can't do on x86.

@TheAssassin
Member

I wouldn't worry at all about the time limit, to be honest; the time lost compared to cross-compiling won't mean we exceed any such limit.
