This repository has been archived by the owner on Nov 4, 2021. It is now read-only.

add ARM cross builds? #46

Open

edbordin opened this issue Jul 27, 2020 · 16 comments

@edbordin
Collaborator

I've had Raspberry Pi support requested twice now. I wasn't anticipating huge demand for this, but given that some of the support is still there in the scripts, I thought I should at least add it to the backlog.

@umarcor
Contributor

umarcor commented Oct 6, 2020

You might find dbhi/qus and dbhi/docker inspiring for this task. They showcase how to build ARM/ARM64 artifacts on x86 CI services (GitHub Actions or Travis CI), using arm32v6/arm32v7/arm64v8 containers and QEMU (as an alternative to cross-compilation). GHDL and Gtkwave are included in dbhi/docker images already.

@vmedea
Contributor

vmedea commented Oct 6, 2020

I had a shot at this today, but unfortunately I didn't get anywhere. I started with an Ubuntu 20.04 image. Basic cross-compiling is easy: just install the g++-aarch64-linux-gnu package. So far, so good.

However, dependencies such as Boost are the problem. I added arm64 as a foreign architecture using dpkg --add-architecture arm64. This ran into a whole lot of errors in apt update, because the ARM ports are hosted on a different server. In theory that was just a matter of changing sources.list to point at different repositories per architecture:

deb [arch=amd64,i386] http://archive.ubuntu.com/ubuntu focal main restricted universe multiverse
deb [arch=amd64,i386] http://archive.ubuntu.com/ubuntu focal-updates main restricted universe multiverse
deb [arch=amd64,i386] http://security.ubuntu.com/ubuntu focal-security main restricted universe multiverse

deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports focal main restricted universe multiverse
deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports focal-updates main restricted universe multiverse
deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports focal-security main restricted universe multiverse

Now apt update passed and was finding the arm64 packages, and after this it was possible to install a few dependency packages such as libgmp-dev:arm64.
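The apt-level steps described above can be sketched as a script. The package names are the ones mentioned in this thread; the APPLY guard is my addition so the script only prints the commands by default, since really running them needs root, network access, and the architecture-pinned sources.list shown above:

```shell
# Sketch of the multi-arch setup described above (Ubuntu 20.04 host).
# By default this only prints the commands; set APPLY=1 to execute them.
run() {
  if [ "${APPLY:-0}" = "1" ]; then
    "$@"
  else
    echo "would run: $*"
  fi
}

run dpkg --add-architecture arm64   # register arm64 as a foreign architecture
run apt-get update                  # now resolves arm64 indexes from ports.ubuntu.com
run apt-get install -y g++-aarch64-linux-gnu libgmp-dev:arm64
```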

However… it seems that other packages are giving trouble. In particular, libboost-python-dev:arm64 depends on libpython3-dev:arm64, which in turn depends on python3:arm64. This package conflicts with the host's Python, so it cannot be installed without (likely) causing a lot of trouble.

That's where I gave up on this approach 😞 .

They showcase how to build ARM/ARM64 artifacts on x86 CI services (GitHub Actions or Travis CI), using arm32v6/arm32v7/arm64v8 containers and QEMU

Right! User-mode emulation might be a more straightforward approach, if it is fast enough. The reason I'm cross-compiling in the first place is that my ARM64 board is too slow and doesn't have enough memory to build some parts of trellis and nextpnr in particular; there are some huge generated files.

@edbordin
Collaborator Author

edbordin commented Oct 6, 2020

I mentioned this in chat but for completeness I will put it here too:

@sajattack started working on an approach using a gentoo cross build toolchain: https://github.com/sajattack/fpga-toolchain/tree/wip_arm

"I also started a docker container with the gcc toolchains, but didn't add all the deps yet https://hub.docker.com/layers/sajattack/rpi-crossdev/latest/images/sha256-52e4efefc63a683b792f8989921c70d28df4b272233a0b967da8808a1d53f6c2?context=repo"

@sajattack

sajattack commented Oct 7, 2020

I chose Gentoo because it is much easier to cross-compile dependencies, as everything is built from source, and there is an armv7a cross-compiling variant of the package manager, armv7a-unknown-linux-gnueabihf-emerge.
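For reference, the Gentoo flow described here looks roughly like this. crossdev is Gentoo's standard tool for building cross toolchains and it is what installs the per-target emerge wrapper, but the flags and the Boost package atom below are illustrative, not taken from @sajattack's scripts. The run() guard only prints the commands by default:

```shell
# Hedged sketch of the Gentoo cross-build flow (illustrative arguments).
# Set APPLY=1 to really execute (needs a Gentoo host with crossdev).
run() {
  if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

# Build an armv7a cross toolchain; this also installs the
# armv7a-unknown-linux-gnueabihf-emerge wrapper mentioned above.
run crossdev --stable --target armv7a-unknown-linux-gnueabihf

# Cross-compile a dependency from source into the target sysroot.
run armv7a-unknown-linux-gnueabihf-emerge dev-libs/boost
```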

@umarcor
Contributor

umarcor commented Oct 8, 2020

Right! User mode emulation might be a more straightforward approach. If it is fast enough.

User-mode emulation is 5x to 15x slower than "native" execution. However, the meaning of "native" is relative: for CPU-bound workloads, an ARM board might only be 0.25x to 0.5x as fast as a workstation/server. Hence the comparison between native execution on an ARM board and QEMU on a workstation is closer to 2.5x-7.5x. See e.g. https://github.com/dbhi/docker/actions/runs/285207100. Not ideal, but not unacceptable either.
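The 2.5x-7.5x figure is just the two ranges multiplied together; a quick sanity check, using the figures assumed in this comment:

```shell
# QEMU user-mode slowdown vs the x86 host: 5x-15x.
# A fast ARM board runs at ~0.5x of the same host's speed, so:
#   effective ratio (QEMU on host vs native on board)
#     = qemu_slowdown * board_relative_speed
awk 'BEGIN {
  printf "light emulation, fast board: %.1fx\n",  5 * 0.5;
  printf "heavy emulation, fast board: %.1fx\n", 15 * 0.5;
}'
```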

Another issue with QEMU is support for specific features and signals. Not all tools can be executed, although support for ARM has improved in recent years. If you hit this, it might be difficult to work around.

The reason I'm cross-compiling in the first place is that my ARM64 board is too slow and doesn't have enough memory to build some parts of trellis and nextpnr in particular, there are some huge generated files.

The point of using QEMU and Docker is to execute the containers on workstations and/or CI services, which are more powerful than the target device. In fact, I had issues trying to build some of the tools on a Raspberry Pi, and other devices, such as the Pynq, would crash straight away. However, with dbhi/qus I can build the tools on GitHub Actions, Travis, etc. Then, on the RPi, I use the containers directly (even on 32-bit Raspbian you can execute 64-bit containers). On the Pynq, I extract the artifacts from the container; since the container and the OS used on the Pynq match, the binaries are compatible. More precisely, GHDL from aptman/dbhi:bionic-arm works on Pynq v2.3, v2.4 or v2.5.
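The qus-based flow being described is roughly: register the QEMU binfmt handlers once, then run foreign-architecture containers as if they were native. The aptman/qus image name and its flags follow the dbhi/qus README as I understand it; treat this as a sketch (the run() guard prints instead of executing by default, since it needs Docker):

```shell
# Sketch of running ARM64 containers on an x86 host via dbhi/qus.
# Set APPLY=1 to really execute (requires Docker).
run() {
  if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

# One-time setup: register persistent QEMU binfmt handlers for aarch64.
run docker run --rm --privileged aptman/qus -s -- -p aarch64

# Afterwards, arm64 images run transparently on the x86 host.
run docker run --rm arm64v8/ubuntu:20.04 uname -m
```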

The catch is that using dbhi/qus on Travis might be tricky because of the 50 min limit. In GitHub Actions, however, each job can take up to 6 h. I'm not sure about Azure Pipelines.

I chose gentoo because it is much easier to cross-compile dependencies, as everything is source, and there is an armv7a cross-compiling variant of the package manager, armv7a-unknown-linux-gnueabihf-emerge.

For performance reasons, I'd say that supporting aarch64 is as important as armv7a. Latest RPis, Pine64, Xilinx's UltraScale, etc. are armv8.

Anyway, I agree that gentoo or archlinux might be the most sensible options for a cross-compilation approach.

@edbordin
Collaborator Author

edbordin commented Oct 8, 2020

Probably the main issue to avoid is making the build take more than 6 hours, as that is the limit of the free CI plan we're on; it pretty commonly takes about 1.5 hours natively at the moment. You could of course do more in parallel, but we're also limited to at most 10 parallel jobs. Though I was offered a bit of monetary support in the past if I needed it, I'd be hesitant to move to a paid CI plan just to accommodate a single long build. Other than that I'm happy to support any contributions.

Another possibility to consider is creating another progtools package in the interim to at least allow people to upload bitstreams from their raspberry-pi-like devices.

@umarcor
Contributor

umarcor commented Oct 8, 2020

@edbordin, does 1.5 hrs include running the test suites that each tool might have, or is it build time only? E.g., GHDL takes a few minutes to build, but testing on Windows takes over 30 min more than on Linux (talking about amd64 only for now).

Another possibility to consider is creating another progtools package in the interim to at least allow people to upload bitstreams from their raspberry-pi-like devices.

That sounds good. Indeed, it'd be an equivalent of ghdl/synth:prog: https://github.com/ghdl/docker#-cache-5-jobs--max-4--11-images-weekly. However, that Docker image only includes iceprog and openocd for now.

@edbordin
Collaborator Author

edbordin commented Oct 8, 2020

You can see how long everything generally takes in the latest build log: https://dev.azure.com/open-tool-forge/fpga-toolchain/_build/results?buildId=450&view=logs&j=ce85e32b-1860-5976-4f43-ed494d37cb9e&t=ce7e54df-c406-50b2-11bf-5d4d2006586c

Actually, looking at it now, the most recent build took just over an hour for the main part on Linux (and some of that is installing packages before the build starts). I have noticed it sometimes takes longer, though, so it would be better if we still had a good amount of time to spare.

I'm not currently running any unit tests; I have a really simple set of test scripts that just check the tools run OK in a clean environment. It probably would be good to run unit tests, but I can imagine it would add a lot of time.

edit: in fact, even if it isn't used for the build, I think using QEMU to run the test scripts would make a lot of sense.
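Running the existing smoke-test scripts against cross-built binaries with QEMU user-mode emulation could look something like this; the qemu-user package name is Ubuntu's, and the binary name and sysroot path are illustrative assumptions, not from this thread. The run() guard prints instead of executing by default:

```shell
# Sketch: smoke-test a cross-built aarch64 binary on an x86 CI host.
# Set APPLY=1 to really execute (needs root for the install step).
run() {
  if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

# Install the user-mode emulator (Ubuntu package name).
run apt-get install -y qemu-user

# -L points QEMU at the target sysroot for the dynamic loader and shared
# libraries; nextpnr-ecp5 and the sysroot path are illustrative.
run qemu-aarch64 -L /usr/aarch64-linux-gnu ./nextpnr-ecp5 --version
```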

@edbordin
Collaborator Author

I found some time to look at this. I've set up a cross toolchain using buildroot here: https://github.com/open-tool-forge/buildroot-arm
Since it's a lot of compiling, I don't think this needs to run nightly; a WIP CI build for the toolchain is running separately here: https://dev.azure.com/open-tool-forge/buildroot-arm/_build?definitionId=3

Locally, I've managed to build most of the tools already, using @sajattack's work on the scripts as a starting point. Probably the major thing left to check is whether GHDL builds OK with the LLVM backend (the default mcode backend only works on x86). Adding LLVM adds a couple of hours to the toolchain build, so I'm yet to test it.

@umarcor
Contributor

umarcor commented Nov 20, 2020

@edbordin, GHDL is known to work on ARM using the LLVM backend. In fact, that's the only backend I've used there, both on armv7 and on armv8. Note that you can build it with either GCC or clang.

@edbordin
Collaborator Author

@umarcor I don't suppose you can point me to an example of a cross build? I'm currently trying to work out how to pass --sysroot so that it links against the correct llvm libs.

@umarcor
Contributor

umarcor commented Nov 21, 2020

@edbordin, unfortunately, I cannot think of a reference off the top of my head. I use containers for that, so the build scripts I use are "native".

The closest I can think of is https://github.com/hackfin/ghdl-cross.mk, but that's for GHDL as a cross-compiler, e.g. running GHDL on aarch64 to generate binaries for mingw32.

@umarcor
Contributor

umarcor commented Nov 21, 2020

Both Debian and Fedora provide GHDL for ARM. I guess that those are cross-compiled somehow; however, those builds are far from easy to understand.

edbordin added commits that referenced this issue on Nov 21 and Nov 22, 2020
@controversial

darwin/arm64 cross-builds would also be super useful for people working on the newer ARM Macs.

@edbordin
Collaborator Author

edbordin commented Mar 8, 2021

@controversial I agree that would be good, but at the moment Azure Pipelines doesn't offer M1 cloud instances or a build agent that can run on privately managed M1 hardware. With those constraints, it would need to be done as a cross build or with a different CI service (if an appropriate one even exists yet), both of which are fairly major projects. I'm happy for someone to step up and take it on, but I probably won't be that person any time soon.

@umarcor
Contributor

umarcor commented Mar 9, 2021

FTR, ghdl/ghdl#1669.


5 participants