Suggestion: Improve CI pipeline #1652
Hi Carlo - and thanks for reaching out! I think consolidating all the CI pipelines would be a great idea - we got where we are today because the various pieces were added piecemeal over the last few years. A well-meaning person would contribute some config for their favorite CI pipeline to solve a specific problem, then usually move on, leaving me to maintain something I had no understanding of ;) Keeping it all in one place sounds like a great idea, but I have limited experience on the CI side of things, and as you have seen, when things break it is a scramble to fix them. So, with the following points in mind, if it is a good fit I would be happy to build a single pipeline, and I am comfortable with the choice of GitHub if it makes the most sense. The ideal pipeline would:
Even consolidating the semaphore and azure pipelines would be a win in my opinion. However, there are some important non-technical considerations too:
In terms of CI process, I think what we have at the moment works well:
So, given the above, do you think GitHub CI would be a fit, and would you be willing to help? If so, what would you suggest for next steps? I know little about this subject, so I would really want you to help build the pipeline from scratch to agreed specs, then ideally be around afterwards to help out if anything goes wrong - if not, I will have just exchanged one set of arcane, incomprehensible systems for another :) One advantage of your proposal is, as you said, that it is all under one roof, so hopefully it will be simpler to understand and maintain, which would be a win in its own right. Let me know what you think - I do like this idea a lot!
I can understand your frustration; being an open source project maintainer, a changing set of hands with the coming and going of each volunteer can be problematic. I know this sounds a bit cliché, but a well-managed CI pipeline reduces the need for human intervention, since each step is documented as code in the pipeline definition, thereby reducing the need for manual steps in the development of an application. This could help with the maintenance aspects of this project, for instance by reducing the need to manually publish release artifacts and documentation.

Ideal candidate
I think Github Actions might be ideal: the fewer third-party solutions there are, the fewer things can break at the wrong time, and at the same time it is all consolidated under the same roof, as you said.
Next steps
I can help out with the implementation. I cannot guarantee 100% availability (aren't we all volunteers, after all? 😉), but I will do my best.

P.S.: Does this project have any unit testing?
OK, sounds good - I am looking forward to seeing how things progress. And of course I don't expect 100% availability - I just hope that you might respond to an email or on discord within a day or two if I find an issue I can't figure out :) Although as you say, if it's one consistent pipeline I stand a better chance of getting familiar with it and needing less help. As far as unit testing goes - no, we don't have any. It's another thing on my long list to become an expert at ... I did at one point make a start on writing my own framework for this but I got bogged down. If you can recommend a framework, and at least show me how to get started, I can add this into the mix as well, as it is a good idea and something I have been meaning to get to at some point.
I have no familiarity at all with the AppDaemon code, so proposing one framework vs another might be hazardous, since I don't know the complexity and intricacies of the code we are dealing with.
Ok, presuming that pytest can easily be integrated with the new pipeline I’ll take a look – thanks.
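For reference, getting started with pytest really is this small; the function below is purely illustrative and not actual AppDaemon code - real tests would import from the appdaemon package instead:

```python
# Minimal pytest sketch (hypothetical function, not actual AppDaemon code).

def format_entity(domain: str, name: str) -> str:
    """Join a domain and object name into a Home Assistant style entity ID."""
    return f"{domain}.{name}"

def test_format_entity():
    # pytest auto-discovers any function named test_* in files named test_*.py
    # and treats a passing assert as a passing test.
    assert format_entity("light", "kitchen") == "light.kitchen"
```

Saved as e.g. `test_utils.py`, a plain `pytest` run in the repository root picks it up automatically, so a CI step only needs `pip install pytest` followed by `pytest`.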
Looks like the build succeeded but the push to Docker failed. I did set up the secrets in GitHub, but maybe not correctly. Here is the end of the log:
Looks like we are close though :)
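For comparison, a typical Docker Hub login step references the secrets roughly like this - the secret names here are assumptions, and they must match exactly what was created in the repository settings, or the build will succeed but the push will be rejected:

```yaml
# Sketch of a Docker Hub login step; secret names are illustrative.
- name: Log in to Docker Hub
  uses: docker/login-action@v2
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}
```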
I fixed it! The action was using $github.repository as the tag name, which resolves to "appdaemon/appdaemon", however it needed to be "acockburn/appdaemon" for Docker Hub. I hard-coded that value in the ENV var, as I couldn't see an easy way to get it out of GitHub variables. Now for an encore I am going to try and add the arm images to the build ...
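If the hard-coding ever becomes a nuisance, one option is to keep only the repository name and prefix the Docker Hub owner with shell parameter expansion in a run step - a sketch using the values mentioned above:

```shell
# github.repository resolves to the GitHub slug, e.g. "appdaemon/appdaemon".
GITHUB_REPOSITORY="appdaemon/appdaemon"

# Strip everything up to the first "/" and prefix the Docker Hub owner.
IMAGE="acockburn/${GITHUB_REPOSITORY#*/}"
echo "$IMAGE"   # acockburn/appdaemon
```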
The 4-platform build fails identically to how it did on Semaphore, so it doesn't seem like a CI issue, but a problem with buildx. It would be nice to find a solution at some point, but that is outside the scope of this exercise. So, we have docker builds, pre-commit runs, and PyPI automation (still to be tested). I think you have covered most things apart from a few details to be discussed on the PyPI piece - once again, thanks for diving in and helping with this. I am really happy with the result, and more importantly I think I understand it to the point where I can do a better job of maintaining it moving forwards.
Once we have finalized the PyPI piece I'll do a mini-release to test it all - I have a minor addition to the Admin UI I'd like to get out there to gather info on proposed changes to the HASS plugin.
Glad that everything seems to be improving nicely! Looking at line 26 in 7f834a1:
Are all those packages needed? Especially cargo seems to cause issues when building the images; from my understanding it is a package manager for Rust, but we are working with Python here! 😉
They were added at some point, presumably to help with build problems ... I'll do some work on the docker build over the next few days. I have the new PR to merge and then I can look at your suggestions as well. I set up my IDE to build the docker image, which is something I hadn't been set up to do for a while after I moved platforms.
Locally I tested by removing all these packages; here is the result:

```
$ docker image ls
REPOSITORY   TAG           IMAGE ID       CREATED       SIZE
appdaemon    without-apk   9ed8d3b55d59   2 hours ago   118MB
appdaemon    with-apk      4cdacc10d194   2 hours ago   1.22GB
```
That may well be the cause of the build problems then – what even is APK?
It's the `apk` package manager used by Alpine Linux.
Ok, cool - which did you settle on as being necessary, and we'll start with that as a base.
OK, looking at the build pipeline, seems like it will build any branch:
Can I change it to:
To only build the dev branch when it changes and no others? That better suits the way we work - most branches outside of dev and master are experimental or temporary
My reasoning behind that change is that this way every time there is a change it triggers a Python build and a Docker image build, without actually publishing anything to PyPI or Docker Hub.
ok, got it - it's built but not published, sounds good :)
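Assuming the usual GitHub Actions layout, restricting triggers to the dev branch while keeping the publish steps gated would look roughly like this (step contents are placeholders, not a finished workflow):

```yaml
# Sketch: build on pushes to dev only; publishing stays tied to releases.
on:
  push:
    branches:
      - dev

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # build the Python package and the Docker image here,
      # but skip any push/upload steps on ordinary branch builds
```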
OK, docs are now building cleanly as well
OK, I removed support for Python 3.7 and upgraded flake8 to 6.0.0 via dependabot. I also need to keep the flake8 and black versions at the same level for the pre-commit hooks - is there a way to pull this info from pyproject.toml instead of manually keeping them in sync?
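For context, pre-commit pins its hooks independently of pyproject.toml in `.pre-commit-config.yaml`, and `pre-commit autoupdate` bumps the `rev` fields to the latest tags - a sketch (the versions shown are examples only):

```yaml
# .pre-commit-config.yaml sketch; revs are illustrative and kept current
# with `pre-commit autoupdate` rather than synced from pyproject.toml.
repos:
  - repo: https://github.com/pycqa/flake8
    rev: 6.0.0
    hooks:
      - id: flake8
  - repo: https://github.com/psf/black
    rev: 23.1.0
    hooks:
      - id: black
```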
What are your thoughts on including a step in the GitHub lint action to install and run pre-commit, as an alternative to explicitly running just flake8? That would pull black in too, which I think is desirable. In addition, I am looking at ruff as an alternative to flake8 and black, and if I make that change the CI pipeline should just follow along. Can you think of a reason to avoid doing that?
Wow, you have lots more patience than me! 😉
The
At the moment the pre-commit hooks are meant to run locally on a dev machine, before a commit is made, to reformat/lint the code.
For this I think we should check if pre-commit.ci lite works the way we expect.
Great - I think we are definitely on the same page. My concern with pre-commit is that someone who didn't install it could still create a PR, so I would want it checked when the PR is created. I like the idea of keeping the pre-commit tools separate and having a single source of truth.

I have found out that pre-commit has an update function you can run to grab the latest tool versions. This is manual, but not really onerous to do once in a while, plus you can run pre-commit manually against the entire repo (I did it last night and found a ton of stuff that needed fixing despite having this in place for a while - it could also have been that the new flake8 is more strict). Then the cool part is that the pipeline would just follow along, if that pre-commit.ci/lite works how we think it should.

At that point, I think we will have done everything I can think of, and I am confident I can maintain this new pipeline since you have documented it extremely well and I have been involved in the decision making. With everything set up this way, if I do decide to look at ruff the pipeline will update automatically - but I'll leave ruff for another day after some more research. I think it is interesting because it is super fast and also has plugins for multiple tools, some of which I am interested in, like pyupgrade and a few others. I got the tip from Frank - homeassistant is looking at using it.

I would like to compliment you on your thorough and considered approach, your clear communication, and the commenting/documenting of the code you put together - you are a true professional and I am deeply grateful for all the help! So, if you can help me figure out how to replace the current CI linting steps with the pre-commit stuff, I think we will be pretty much done :)
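Replacing the explicit flake8 step with pre-commit in CI can be as small as the following sketch, which runs exactly the same hooks developers run locally (job and step names are illustrative):

```yaml
# Sketch of a lint job driven entirely by the pre-commit config.
lint:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v3
    - uses: actions/setup-python@v4
    - run: pip install pre-commit
    - run: pre-commit run --all-files
```

This way the hook list in `.pre-commit-config.yaml` stays the single source of truth: swapping flake8/black for ruff there would change CI automatically, with no workflow edits.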
Introduce pre-commit Github action AppDaemon#1652
That last fix for the docker image worked! We are now back to full multi-arch. Very cool! I also fixed the docs, so all is looking good.
I suggest we keep an eye out when releasing the next version of AppDaemon with this Docker image, since previously the build dependencies (compilers and the like) were present in the image.

Now we have optimized the way we handle the Docker build, avoiding all the additional bloat coming from the compile-time dependencies. This, however, means that anyone who may have been using third-party Python packages that need to be compiled may suddenly see their appdaemon container fail to start, since the build tools are no longer present on startup.

This might sound like a breaking change, but truth be told it was the previous Docker build process that was not completely correct, since it introduced this unexpected side effect.
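One common way to keep compiled third-party packages working without re-bloating the runtime image is a multi-stage build: compile wheels in a throwaway stage that has the toolchain, then install only the prebuilt wheels in the final stage. A sketch - base images and package choices here are illustrative, not the project's actual Dockerfile:

```dockerfile
# Build stage: has the compilers, produces wheels.
FROM python:3.11-alpine AS builder
RUN apk add --no-cache build-base
COPY requirements.txt .
RUN pip wheel --wheel-dir /wheels -r requirements.txt

# Runtime stage: no compilers, only the prebuilt wheels get installed.
FROM python:3.11-alpine
COPY --from=builder /wheels /wheels
RUN pip install --no-index --find-links=/wheels /wheels/*.whl
```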
Ok, let me think on that, but I tend to agree with you - to be honest the docker image has just been at the mercy of whatever anyone who made a PR wanted, so perhaps it is time to get stricter.

On another note, GitHub went crazy for a while this evening and stopped building stuff. Maybe related is that we had run out of cache space, so I cleared it down a little. Also, pip-compile was behaving sketchily so I temporarily disabled it as a pre-commit hook. It was fighting with itself when committing vs running on the command line, and then again when committed to the repository, and it was also slowing my commits down significantly. I'll use it on the command line as required until I get used to it and figure it out, before maybe adding it back.
I saw that too when locally committing; I think it is because pip-compile tries to resolve all dependencies from scratch to be sure of their versions, but this may slow down the dev workflow significantly. Maybe Github had a hiccup for a few moments, or there is something on our end in the workflow config that did not trigger the pipeline. Let's see whether it happens again.
Yep, we will keep an eye on it, but it seemed to fix itself last night before I went to bed - I will also be interested in what dependabot makes of the new setup. I'll see if there is a way we can steer it towards pyproject.toml and away from requirements.txt
OK, now dependabot has weighed in - it seems to understand the multiple requirements files, but left pyproject.toml alone. I didn't see an obvious way to change that behavior. So what to do - merge the changes and update pyproject.toml, or close them and manually update pyproject.toml then run pip-compile?
Yeah, I saw the PR from the bot touched only the requirements files.
Sounds good
The issue pipeline seems to run OK - I'll take a few more passes through to identify bugs and enhancements, then I'll let it loose and see what happens :)
Still some issues with dependabot - well, kind of. It is highlighting version upgrades in derived dependencies. When I run pip-compile, nothing new is picked up, presumably because the packages that pulled in the derived dependencies are still using older versions of the package. I'll wait and see if dependabot is still looking out for pyproject.toml, and if so, I'll run pip-compile whenever a dependency in there gets upgraded and ignore the derived dependencies in the three requirements.txt files. Does that sound sensible?
Yeah, I think currently dependabot only cares about the requirements files.
Makes sense – I suspect it will try and keep pyproject.toml up to date – it has done that already, and I don't think it's smart enough to pick and choose; it just goes with anything it thinks has dependencies. From now on I'll accept the PRs that update pyproject.toml and run pip-compile, and just close the ones that try to update requirements.txt directly.
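For reference, dependabot's pip ecosystem is pointed at a directory rather than a specific file, and it scans whatever manifests it finds there - which may explain why it is hard to steer between pyproject.toml and the compiled requirements files. A minimal config looks like this:

```yaml
# .github/dependabot.yml sketch
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
```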
Ok, seems reasonable.
Introduction
Thanks for this amazing project!
I have been using it for quite some time to easily integrate an external device (a TP-Link router) in Home Assistant.
After seeing the reported issues in the Discord chat about the docker builds failing, I would like to chime in and lend a helping hand, having had some previous experience with CI systems (Gitlab CI and Github Actions) and Docker containers.
After a quick glance at the project, I have some questions regarding its development cycle:
Continuous integration platform
It appears that multiple CI platforms are used:
On the dev branch the configuration has been removed, as seen from this commit: 363f165

Question
Is there a reason why different CI platforms are needed? Wouldn't it be better to use a single one to consolidate the efforts and provide a single interface to see the latest build status?
Since the project is already using Github as its hosting platform, I am inclined to propose Github Actions as the CI provider, since it intuitively integrates with the rest of the ecosystem (github issues, pull requests, releases, docker registry, etc.).
Proposed changes
Implementation plan
General pipeline configuration:
The following is an outline of the ideal pipeline, summarized from the comments below:
(dependabot is used)
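The outline above could map onto a single workflow file along these lines - job names and step contents are illustrative placeholders, not a finished implementation:

```yaml
# Sketch of a consolidated pipeline: one file, one place to look.
name: CI
on:
  push:
    branches: [dev]
  release:
    types: [published]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: pip install pre-commit && pre-commit run --all-files
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # build (and, on releases only, push) the multi-arch image
  pypi:
    if: github.event_name == 'release'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # build the sdist/wheel and upload to PyPI
```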