Automatically testing cross-compiled code with this qemu approach #35

Closed
MartinLoeper opened this issue Jun 4, 2021 · 8 comments

Comments

@MartinLoeper

I am wondering if it is feasible to run test suites for cross-compiled binaries and installers in the future.
I am currently developing ProxySuite, which has an installer for the raspbian/raspios armhf images.

I would like to add a GitHub workflow which spins up a dockerpi QEMU emulator, runs the installer inside it, and checks that everything is correctly linked and the binary is loadable. That way, I could make sure that the installer works correctly and that the cross-compiled binary will actually run on a real Pi once installed.

What are your thoughts on this approach?
Is something like executing scripts inside already-running QEMU machines and getting back the return code on the upstream roadmap?

@lukechilds
Owner

You should be able to run commands and check their return code inside the VM via SSH.
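
For example, something along these lines (a rough sketch; port 5022 and the default pi user are assumptions about the dockerpi setup, and key-based auth is assumed so nothing prompts for a password):

```sh
# Rough sketch: run a command inside the emulated Pi over SSH and fail on a
# non-zero exit code. Port 5022 and the "pi" user are assumptions about the
# dockerpi setup; key-based auth (or sshpass) is assumed so nothing prompts.
ssh -o StrictHostKeyChecking=no -p 5022 pi@localhost 'uname -m'
if [ $? -ne 0 ]; then
  echo "command failed inside the VM" >&2
  exit 1
fi
```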

@MartinLoeper
Author

Thanks for the hint @lukechilds!!
It worked quite well and I have some basic tests up and running using GitHub Actions with dockerpi as a sidecar container (i.e. a "service" container).

A few things I noticed in case somebody wants to do similar things:

  • I had to use rpi2 instead of rpi3 for newer raspios images. I do not know why that is the case. The QEMU binary just hangs when I use rpi3 with e.g. raspios_lite_armhf-2021-05-28.
  • I had to add an empty ssh file to the boot partition to enable the sshd service on boot (sketched below). I uploaded the resulting image and the script used to create it as a GitHub release.
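
In case it is useful, the image preparation looks roughly like this (a sketch, not the actual release script; the image name is a placeholder and the partition lookup assumes the usual raspios layout):

```sh
# Sketch of preparing a raspios image so sshd starts on first boot: mount the
# FAT32 boot partition and drop an empty "ssh" file into it.
IMG=raspios_lite_armhf.img                                  # placeholder image name
START=$(fdisk -l "$IMG" | awk '/FAT32/ {print $2; exit}')   # start sector of the boot partition
                                                            # (assumes no boot-flag column; double-check with fdisk -l)
sudo mkdir -p /mnt/pi-boot
sudo mount -o loop,offset=$((START * 512)) "$IMG" /mnt/pi-boot
sudo touch /mnt/pi-boot/ssh                                 # presence of this file enables sshd on boot
sudo umount /mnt/pi-boot
```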

Anyway, thanks for all the work you put into building this nice project @lukechilds!

@lukechilds
Owner

Thanks for the update @MartinLoeper!

@carlosperate

Thanks for sharing @MartinLoeper!

Could you expand a little bit more on how you did this? Looking at the workflow, I'm not too sure I 100% understand the process: https://github.com/nesto-software/ProxySuite/blob/dc48fd5e0e4b26982d5dc96129e85c79da468957/.github/workflows/proxy-suite-tests.yml

Are all the tests contained in the https://github.com/nesto-software/dockerpi repository? And are these basically jest spec files accessing the RPi Docker image (running as a service container) via SSH?

@MartinLoeper
Author

Hi @carlosperate!

Sure, let me elaborate on this:

  • The dockerpi image is started in a sidecar container using the services key in the GitHub Actions workflow.
  • The dockerpi image used for the tests is built in the nesto-software/dockerpi GH Actions workflow. The main difference from Luke's version is that I am using one of the latest raspios images, because I want the distro under test to align as closely as possible with what we ship on our IoT devices.
  • The tests are written using jest and reside in the dockerpi repository. I am considering moving them into the ProxySuite repo in the future.
  • The test suite (a) waits for the ssh service in the sidecar container to be up and running, (b) waits a random time interval (ugly workaround; see the sketch after this list), and (c) executes some commands over SSH.
  • I check whether the exit code of each command is zero.
  • There is one test suite for each component to be tested, because I do not want changes from multiple tests to cause side effects. Thus, each test suite runs in its own GitHub job (i.e. in its own container) with its own sidecar container. I literally boot up 4 separate qemulated pis each night using gh actions. This is amazing! :)
  • Things work pretty well, but there are sometimes race conditions with dpkg locks which I still have to figure out how to prevent (see failed ProxySuite workflow runs).
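
For reference, the wait step is conceptually something like the following (a shell sketch, not the actual jest code; the host name dockerpi, port 5022, the retry budget, and key-based auth are assumptions):

```sh
# Sketch: poll sshd in the sidecar instead of sleeping a random interval.
# "dockerpi" stands in for the service name from the workflow's services block;
# port 5022, the retry budget, and key-based auth are assumptions.
HOST=dockerpi
PORT=5022
tries=0
until ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 -p "$PORT" pi@"$HOST" true 2>/dev/null; do
  tries=$((tries + 1))
  if [ "$tries" -ge 120 ]; then
    echo "sshd never became reachable" >&2
    exit 1
  fi
  sleep 5
done

# Once sshd answers, each test command is run the same way and its exit code
# is expected to be zero.
ssh -p "$PORT" pi@"$HOST" 'dpkg -s openssh-server >/dev/null' || exit 1
```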

@carlosperate

Ahhh, I see now, thank you so much for the additional explanation!

I went a very similar route (also creating custom images and hosting containers on GH) but I ended up running the test steps in the workflow via an SSH Action: https://github.com/mu-editor/mu/blob/3e6632ce51ca929c02acf13d9348377f972ad517/.github/workflows/test.yml#L92-L168

Before, I didn't quite see how the tests were separated between the repos and run inside the container, but now I understand your separation between the different repos better, thanks!
It also makes sense to have something like the SSH utils (the wait utility and the error-code checker) in the dockerpi fork repo, as that is useful for all the different projects.

I literally boot up 4 separate qemulated pis each night using gh actions. This is amazing! :)

Yeah, it's pretty crazy the amount of value we get for free with GH Actions. And thanks to dockerpi we can easily emulate other architectures as well, it still blows my mind 🤯 Thanks @lukechilds and all contributors!

but there are sometimes race conditions with dpkg locks which I still have to figure out how to prevent (see failed ProxySuite workflow runs).

Is that when doing an apt update? Or at an install step?
I have not seen any dpkg locks in my CI environment yet, but we only install a single package via apt.

@MartinLoeper
Author

MartinLoeper commented Jun 15, 2021

I went a very similar route (also creating custom images and hosting containers on GH) but I end up running the test steps in the workflow via an SSH Action [...]

The ssh-action approach looks very nice! How did you manage to wait for the Pi OS image to boot up?

Is that when doing an apt update? Or at a installing step?

It looks like there is some apt command running during the first boot of the Pi image.
I have to look into this. I do not know why my jest tests, which install packages via apt-get, interleave with a service that runs during boot.
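
A possible mitigation, just as a sketch (it assumes fuser from psmisc is available in the image, and host/port are placeholders), would be to wait inside the VM until nothing holds the dpkg locks before the tests start installing packages:

```sh
# Sketch: before the tests install anything, wait until the first-boot apt/dpkg
# activity has released its locks. Assumes fuser (psmisc) is present in the
# image; the 5 minute budget and the host/port are placeholders.
ssh -p 5022 pi@dockerpi '
  for i in $(seq 1 60); do
    if ! sudo fuser /var/lib/dpkg/lock-frontend /var/lib/dpkg/lock >/dev/null 2>&1; then
      exit 0
    fi
    sleep 5
  done
  echo "dpkg lock still held after 5 minutes" >&2
  exit 1
'
```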

@carlosperate

The ssh-action approach looks very nice! How did you manage to wait for the Pi OS image to boot up?

Unfortunately it is a very hacky sleep for a given amount of time: https://github.com/mu-editor/mu/blob/3e6632ce51ca929c02acf13d9348377f972ad517/.github/workflows/test.yml#L107
Ideally I'd need to find a way to do something similar to your solution where it tries to connect and retries until it succeeds (or times out).
