
Automatic recipe testing #200

Open
jkbecker opened this issue Oct 3, 2020 · 11 comments

Comments

@jkbecker
Contributor

jkbecker commented Oct 3, 2020

Realizing that I've found some random issues here and there over the last few days... would it make sense to set up an automatic testing mechanism that could crunch through all recipes in the repo and report any failed builds? I could whip something up, and we could figure out whether it should run automatically on a regular schedule (probably too computationally heavy & brute-force) or be run periodically as a housekeeping script.

This could easily be built on top of gnuradio/pybombs-docker (in which case it would only test Ubuntu 20.04 as a reference) or on the gnuradio/pybombs testing containers, which would be more comprehensive but would also require some refactoring (those containers already auto-install stuff, so they are not a good clean slate for testing individual recipes as is). The latter would also multiply the runtime by the number of distro variants...
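Roughly, the brute-force version could look like the sketch below. This is only a sketch, not a working script: the gnuradio/pybombs-docker:ubuntu-20.04 image tag, the /recipes mount point and the logs/ directory are placeholders, and it assumes the container already ships a usable PyBOMBS prefix.

# Sketch only: install-test every recipe in a fresh container per run.
# Placeholder image tag / paths; assumes the container has a prefix set up.
mkdir -p logs
for recipe in *.lwr; do
    pkg="$(basename "$recipe" .lwr)"
    printf 'Testing installation of %s ... ' "$pkg"
    if docker run --rm -v "$(pwd)":/recipes gnuradio/pybombs-docker:ubuntu-20.04 \
        bash -c "pybombs recipes add gr-recipes /recipes && pybombs -v install $pkg" \
        > "logs/$pkg.log" 2>&1
    then
        echo "OK"
    else
        echo "FAILED (see logs/$pkg.log)"
    fi
done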

Thoughts?

@argilo
Member

argilo commented Oct 3, 2020

Agreed that it would be great to have some automated testing of recipes, even if it's only on Ubuntu 20.04 to start.

@jkbecker
Contributor Author

jkbecker commented Oct 3, 2020

Cool. I have a concrete idea of how to set it up, but I'm not sure when I'll have the time... If something fails, what would we typically want to know about the build environment & the failure?

  • exact OS version
  • python version
  • PyBOMBS version
  • PyBOMBS install transcript (run with -v)

...anything else? I'm thinking about a script that would compile all of that and automatically create issues (or at least prepare copy text with all the relevant info that can conveniently be turned into an issue manually)...
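The collection part could be as simple as the sketch below; the report filename and layout are arbitrary examples, and I'm assuming pybombs accepts a --version flag (otherwise pip show pybombs would do).

# Sketch only: gather the environment details above plus a -v transcript
# into one pasteable report. Filenames and formatting are example values.
pkg="$1"
report="report-$pkg.md"
{
    echo "### Environment"
    echo "- OS: $(. /etc/os-release && echo "$PRETTY_NAME")"
    echo "- Python: $(python3 --version 2>&1)"
    echo "- PyBOMBS: $(pybombs --version 2>&1)"
    echo
    echo "### Install transcript (pybombs -v install $pkg)"
} > "$report"
pybombs -v install "$pkg" >> "$report" 2>&1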

@jkbecker
Contributor Author

jkbecker commented Oct 3, 2020

Sidenote: implementing this properly depends on gnuradio/pybombs-docker#4.

@jkbecker
Contributor Author

Ok, sorry for the messy issue creation / removal above; I had to test the final pieces "live" because the GitHub CLI doesn't have a sandbox mode, to my knowledge.

Long story short, I made an "autotest" tool that runs against all recipes in the repo (or just a single one), based on the current pybombs-docker. If the installation succeeds it just moves on, but if the installation fails, it automatically creates an issue with all the (hopefully? please let me know if anything is missing) relevant details (install output, CMakeError.log, CMakeOutput.log, system details). If an issue already exists in the repo for that exact recipe/commit/pybombs/python configuration, it will not create a duplicate.

Using the tool looks something like this:

$ bash .autotest.sh
From github.com:gnuradio/gr-recipes
 * branch            master     -> FETCH_HEAD
Already up to date.
Running install tests on all files in /home/johannes/Code/gnuradio/gr-recipes...
    Testing installation of airspyhf (airspyhf.lwr @ 0775598) ... FAILED
            airspyhf@0775598 (PyBOMBS v2.3.4, Python vPython 3.8.5) installation error
            Issue is known:
              -> Issue 209	OPEN	Error installing airspyhf@0775598 (PyBOMBS v2.3.4, Python vPython 3.8.5) [autotest:9fc3c]		2020-10-20 02:10:28 +0000 UTC

    Testing installation of airspy (airspy.lwr @ 8b4b07d) ... OK

    Testing installation of alsa (alsa.lwr @ 034e104) ... OK

    Testing installation of apache-thrift (apache-thrift.lwr @ c65ba97) ... OK

    Testing installation of armadillo (armadillo.lwr @ f8eb85e) ... OK

    Testing installation of atk (atk.lwr @ 677d5e1) ... OK

    Testing installation of bison (bison.lwr @ 16b7672) ... OK

    Testing installation of bladeRF (bladeRF.lwr @ 03b560f) ... OK

    Testing installation of blas (blas.lwr @ 72f1ac5) ... OK

    Testing installation of bokeh (bokeh.lwr @ 3718467) ... FAILED
            bokeh@3718467 (PyBOMBS v2.3.4, Python vPython 3.8.5) installation error
              -> Issue created: https://github.com/gnuradio/gr-recipes/issues/210

# ... and so forth

The tool currently lives in the autotest branch and merges master into autotest every time it is run. We can either leave it there and keep autotest as a special-purpose testing branch, or we can move the testing tool into master. In order not to confuse anyone, the script is a hidden file (.autotest.sh), so it won't show up visibly in anyone's checkout, and I assume PyBOMBS should not pick it up when it parses the recipe inventory (?).
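For the curious, the duplicate check is essentially a search of open issues for the [autotest:...] fingerprint before creating anything new. Stripped down, the idea looks roughly like the sketch below (this is not the actual .autotest.sh; the tag, title and body file are example values taken from the transcript above).

# Sketch only -- not the actual .autotest.sh. "$tag" stands in for the
# recipe/commit/PyBOMBS/Python fingerprint; title and body.md are examples.
tag="autotest:9fc3c"
title="Error installing airspyhf@0775598 (PyBOMBS v2.3.4, Python 3.8.5) [$tag]"

existing="$(gh issue list --repo gnuradio/gr-recipes --state open --search "$tag")"
if [ -n "$existing" ]; then
    echo "Issue is known:"
    echo "$existing"
else
    gh issue create --repo gnuradio/gr-recipes --title "$title" --body-file body.md
fi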

Thoughts?

@jkbecker
Contributor Author

... Alright, this whole testing approach was a bit spammy, sorry...

@noc0lour approached me about integrating these tests into the GNU Radio Buildbot infrastructure, which would be a much cleaner way of doing things. I will stop creating further automated issues the way I did above (the remaining ones are all valid and should be taken care of, but they really are a bit spammy).

I will be reading up on how to create tasks for Buildbot, not just for gr-recipes but also for pybombs - then we can run pybombs install tests against pybombs PRs, recipe install tests against gr-recipes and gr-etcetera PRs, and so forth...

What I currently don't understand is why we have

  • the GNU Radio Buildbot infrastructure for GNU Radio generally
  • and a Travis CI setup for PyBOMBS

...who set that up? Is the PyBOMBS Travis CI a legacy system, or a valid parallel setup that exists for good reason? I kind of assumed it was a legacy system simply because it seems to run Ubuntu 16.04, but I haven't really inspected it thoroughly...

What's your take on this way forward, @argilo?

@noc0lour
Member

From a general point of view, Travis and similar services are more accessible to outside/new contributors, since everything is described in a dotfile within the repository.
For GNU Radio, unfortunately, the build-time limits and the small number of parallel builds on these public services make them impractical, so a custom setup with a large compile cache for GNU Radio and more dedicated VMs can cut build times and also allows a very customized workflow.
I think for this kind of large-scale testing, the public services would also not provide enough build time/resources.

@jkbecker
Contributor Author

Yeah, that makes sense. I'm just wondering whether the Travis setup is on some kind of paid plan that we should wind down when we switch over? It's unclear to me who is managing that.

@noc0lour
Member

noc0lour commented Oct 23, 2020

Travis CI, CircleCI and AppVeyor (edit: and GitHub Actions) have free plans for FOSS projects on GitHub, so having them around additionally for one feature or the other doesn't hurt.

@jkbecker
Contributor Author

Ah, that's good to know. 👍

@curtcorum

This is a great idea... it would have caught the stale git.osmocom.org links.

@jkbecker
Contributor Author

@curtcorum yeah I'm currently very tied up in some urgent responsibilities... I hope I'll get to setting this up properly soon™...
