
Suggestion: Improve CI pipeline #1652

Open · 8 of 10 tasks
mion00 opened this issue Feb 27, 2023 · 37 comments
Labels
work_in_progress Issue or PR that is evolving and shouldn't be aged out

Comments

@mion00
Contributor

mion00 commented Feb 27, 2023

Introduction

Thanks for this amazing project!
I have been using it for quite some time to easily integrate an external device (a TP-Link router) into Home Assistant.
After seeing the reported issues in the Discord chat about the Docker builds failing, I would like to chime in and lend a helping hand, having some previous experience with CI systems (GitLab CI and GitHub Actions) and Docker containers.
After a quick glance at the project, I have some questions regarding its development cycle:

Continuous integration platform

It appears that multiple CI platforms are currently in use (Azure Pipelines and Semaphore).

Question

Is there a reason why different CI platforms are needed? Wouldn't it be better to use a single one to consolidate the efforts and provide a single interface to see the latest build status?
Since the project is already using GitHub as its hosting platform, I am inclined to propose GitHub Actions as the CI provider, since it integrates intuitively with the rest of the ecosystem (GitHub issues, pull requests, releases, the container registry, etc.).

Proposed changes

  • Open a discussion to decide if multiple CI platforms are indeed needed and preferable. In case a single one is preferred:
    • Consolidate the workflow on a single platform to do automated testing and deployment
  • Define the automated steps that should happen in the CI pipeline (e.g.: should we run tests for every git push? create docker images on git tags?)
  • Integrate the changes proposed in State of the Project: Releases for the small things #1646 into the CI configuration (automatically generate release notes and release artifacts as part of the GitHub project; see the sketch below)
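
For the release-notes item, one low-effort option is GitHub's built-in auto-generated release notes, configured via a .github/release.yml file. A minimal sketch (the category labels are illustrative, not the project's actual labels):

changelog:
  categories:
    - title: Features
      labels: ["enhancement"]
    - title: Bug fixes
      labels: ["bug"]
    - title: Other changes
      labels: ["*"]

With this in place, drafting a release with "Generate release notes" (or the equivalent API call) groups merged PRs by label.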

Implementation plan

General pipeline configuration:

  • Build dev as commits are pushed
  • Build releases whenever a TAG is created

The following is an outline of the ideal pipeline, summarized from the comments below:

  • Build and deploy docker multi arch images
  • Documentation (built using readthedocs)
  • Update dependencies (currently dependabot is used)
  • Automatic release notes
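
Sketched as a skeletal GitHub Actions workflow, that policy could look roughly like the following (the job layout and action versions are illustrative, not the final configuration):

name: CI
on:
  push:
    branches: ["dev"]
    tags: ["*"]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Build dev as commits are pushed: always verify the package builds
      - run: pip install build && python -m build
  release:
    # Build releases whenever a tag is created
    if: startsWith(github.ref, 'refs/tags/')
    needs: build
    runs-on: ubuntu-latest
    steps:
      - run: echo "publish Docker images and PyPI artifacts here"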
@acockburn
Member

Hi Carlo - and thanks for reaching out!

I think consolidating all the CI pipeline would be a great idea - we got where we are today because the various pieces were added piecemeal over the last few years. A well-meaning person would contribute some config for their favorite CI pipeline to solve a specific problem, then usually move on, leaving me to maintain something I had no understanding of ;)

Keeping it all in one place sounds like a great idea, but I have limited experience in the CI side of things, and as you have seen, when things break, it is a scramble to fix them.

So, with the following points in mind, if it is a good fit, I would be happy to build a single pipeline, and I am comfortable with the choice of GitHub Actions if it makes the most sense. The ideal pipeline would:

  • Handle pre-commit hooks (black & flake8) (Azure is used for this now)
  • Build and deploy Docker multi-arch images (Semaphore currently)
  • Documentation (Read the Docs currently in use; I would expect to keep that, as I am not aware of any alternative)
  • Update dependencies (currently Dependabot; not sure if there is an alternative)
  • Some automation of release notes etc. (we have none at the moment)

Even consolidating the Semaphore and Azure pipelines would be a win in my opinion. However, there are some important non-technical considerations too:

  • AppDaemon has zero funding of any sort so would need to use whatever free tiers of tools are available
  • Whatever the chosen pipeline is, it would be very helpful if the original designer was available to help out if things went wrong, ideally interactively over discord

In terms of CI process, I think what we have at the moment works well:

  • Build dev as commits are pushed
  • Build releases whenever a TAG is created

So, given the above, do you think GitHub Actions would be a fit, and would you be willing to help? If so, what would you suggest for next steps? I know little about this subject so would really want you to help build the pipeline from scratch to agreed specs, then ideally be around afterwards to help out if anything goes wrong; if not, I will have just exchanged one set of arcane, incomprehensible systems for another :) One advantage of your proposal is, as you said, that it is all under one roof, so hopefully it will be simpler to understand and maintain, which would be a win in its own right.

Let me know what you think - I do like this idea a lot!

@mion00
Contributor Author

mion00 commented Feb 27, 2023

I can understand your frustration: as an open-source project maintainer, dealing with a changing set of hands as each volunteer comes and goes can be problematic.
For this reason I think a shared design and (a bit of) documentation on the chosen CI solution is fundamental, since then anyone willing to work on the implemented solution can understand what has been built and contribute, especially when things go wrong.

I know this sounds a bit cliche, but a well-managed CI pipeline reduces the need for human intervention, since each step is documented as code in the pipeline definition, thereby reducing the need for manual steps in the development of an application. This could be helpful in the maintenance aspects of this project, for instance reducing the need for manually publishing release artifacts and documentation.
I think the Docker container errors encountered the other day could be avoided by simply building the Docker image automatically as part of the pipeline, thereby easily catching any change that may introduce errors in the build process!

Ideal candidate

I think GitHub Actions might be ideal: the fewer third-party solutions there are, the fewer things can break at the wrong time, and at the same time it is all consolidated under the same roof, as you said.
It ticks the following boxes:

  • free for open-source projects like this
  • tightly integrated with github issues and pull requests, so the results of the pipeline are easily integrated as part of the development process
  • can be easily customized without the need for an external service (for instance, Semaphore and Azure Pipelines are currently configured via their external dashboards, with their own user access and secret management controls).
    With GitHub Actions every option lives inside the GitHub repository.

Next steps

I can help out in the implementation. I cannot guarantee my 100% availability (aren't we all volunteers after all? 😉 ), but I will do my best.
To start, I can work on automating the setup of the dependencies when checking out the project, as part of a basic CI step.
Starting from there the sky is the limit! :)
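
As a concrete starting point, such a basic CI step might look like this (the Python version is an assumption):

steps:
  - uses: actions/checkout@v3
  - uses: actions/setup-python@v4
    with:
      python-version: "3.11"
  # Install the project and its declared dependencies in editable mode
  - run: pip install -e .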

P.S.: Does this project have any unit testing? (pytest or the like?)

@acockburn
Member

OK, sounds good - I am looking forward to seeing how things progress. And of course I don't expect 100% availability - I just hope that you might respond to an email or on Discord within a day or two if I find an issue I can't figure out :) Although as you say, if it's one consistent pipeline I stand a better chance of getting familiar with it and needing less help.

As far as unit testing goes - no we don't have any. It's another thing on my long list to become an expert at ... I did at one point make a start writing my own framework for this but I got bogged down. If you can recommend a framework, and at least show me how to get started I can add this into the mix as well, as it is a good idea, and something I have been meaning to get to at some point.

@mion00
Contributor Author

mion00 commented Feb 27, 2023

As far as unit testing goes - no we don't have any. It's another thing on my long list to become an expert at ... I did at one point make a start writing my own framework for this but I got bogged down. If you can recommend a framework, and at least show me how to get started I can add this into the mix as well, as it is a good idea, and something I have been meaning to get to at some point.

I have no familiarity at all with the AppDaemon code, so proposing one framework over another might be hazardous, since I don't know the complexity and intricacies of the code we are dealing with.
In other projects I have used pytest with great success. It seems to be very popular in the Python world, with plugins and extensions for various use cases.

@acockburn
Member

acockburn commented Feb 27, 2023 via email

@acockburn
Member

Looks like the build succeeded but the push to Docker Hub failed. I did set up the secrets in GitHub, but maybe not correctly. Here is the end of the log:

#22 exporting to image
#22 exporting layers
#22 exporting layers 58.4s done
#22 exporting manifest sha256:dac1fc9cce3a670f6c42fba94c14d17dbe5d2e8c6e750d83ed12590ec563c47b 0.0s done
#22 exporting config sha256:d7a3ff999ba4adeabb683e1f5a9aab33861c61414b8736a7016e8ee882da1ef0 done
#22 exporting attestation manifest sha256:08cfe0dcabe58fdcb74572564f723eb89ed78e7076a2301c7d20a7f49f846b24 done
#22 exporting manifest sha256:9d6495fdd15b604e385b2f06bda08d13789a8fcfc9cfeadc956931c3f390724e done
#22 exporting config sha256:eb838c1fa96d919b608664bcf941574fa9711e0945b8e5957b781d386c7a4000 done
#22 exporting attestation manifest sha256:246d03399e2ea4d7ad4c65d26f08366771ffc72771ecde826732cd0fda33d3f2 done
#22 exporting manifest list sha256:9af23b690cf5b584a4090e63f92a356f604b393453afafa3b8685aef9b64b8a2 done
#22 pushing layers
#22 ...

#23 [auth] appdaemon/appdaemon:pull,push token for registry-1.docker.io
#23 DONE 0.0s

#22 exporting to image
#22 ...

#24 [auth] appdaemon/appdaemon:pull,push token for registry-1.docker.io
#24 DONE 0.0s

#25 [auth] appdaemon/appdaemon:pull,push token for registry-1.docker.io
#25 DONE 0.0s

#22 exporting to image
#22 ...

#26 [auth] appdaemon/appdaemon:pull,push token for registry-1.docker.io
#26 DONE 0.0s

#22 exporting to image
#22 pushing layers 1.0s done
#22 ERROR: failed to push appdaemon/appdaemon:dev: server message: insufficient_scope: authorization failed

#27 [auth] appdaemon/appdaemon:pull,push token for registry-1.docker.io
#27 DONE 0.0s

#28 [auth] appdaemon/appdaemon:pull,push token for registry-1.docker.io
#28 DONE 0.0s
------
 > exporting to image:
------
ERROR: failed to solve: failed to push appdaemon/appdaemon:dev: server message: insufficient_scope: authorization failed
Error: buildx failed with: ERROR: failed to solve: failed to push appdaemon/appdaemon:dev: server message: insufficient_scope: authorization failed

@acockburn
Member

Looks like we are close though :)

@acockburn
Member

I fixed it!

The action was using ${{ github.repository }} as the tag name, which resolves to "appdaemon/appdaemon"; however, it needed to be "acockburn/appdaemon" for Docker Hub. I hard-coded that value in the ENV var, as I couldn't see an easy way to get it out of GitHub variables.

Now for an encore I am going to try and add the ARM images to the build ...
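
For the record, a hedged sketch of what these two pieces can look like together — the hard-coded image name plus the QEMU/buildx setup that multi-arch builds require (the secret names, action versions, and platform list are assumptions, not the repo's actual workflow):

env:
  # Hard-coded: ${{ github.repository }} resolves to "appdaemon/appdaemon",
  # but the Docker Hub repository lives under "acockburn/appdaemon"
  DOCKER_IMAGE: acockburn/appdaemon

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # QEMU provides emulation so buildx can target the ARM platforms
      - uses: docker/setup-qemu-action@v2
      - uses: docker/setup-buildx-action@v2
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v4
        with:
          platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6
          push: true
          tags: ${{ env.DOCKER_IMAGE }}:dev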

@acockburn
Member

The 4 platform build fails identically to how it did on Semaphore, so it doesn't seem like a CI issue, but a problem with buildx. Would be nice to find a solution at some point but that is outside the scope of this exercise.

So, we have Docker builds, pre-commit runs, and PyPI automation (still to be tested). I think you have covered most things apart from a few details to be discussed on the PyPI piece - once again, thanks for diving in and helping with this. I am really happy with the result, and more importantly I think I understand it to the point where I can do a better job of maintaining it going forwards.

@acockburn
Member

Once we have finalized the PyPI piece I'll do a mini-release to test it all - I have a minor addition to the Admin UI I'd like to get out there to gather info on proposed changes to the HASS plugin.

@mion00
Contributor Author

mion00 commented Mar 1, 2023

Glad that everything seems to be improving nicely!
I saw the Dockerfile has this install step:

RUN apk add --no-cache build-base gcc libffi-dev openssl-dev musl-dev cargo rust curl \

Are all those packages needed? Especially cargo seems to cause issues when building the images; from my understanding it is a package manager for Rust, but we are working with Python here! 😉

@acockburn
Member

They were added at some point, presumably to help with build problems ... I'll do some work on the Docker build over the next few days. I have the new PR to merge and then I can look at your suggestions as well. I set up my IDE to build the Docker image, which is something I hadn't been set up to do for a while after I moved platforms.

@mion00
Contributor Author

mion00 commented Mar 1, 2023

Locally I tested removing all those apk add packages and was able to build a Docker image successfully.
Including the apk dependencies produces a Docker image a whopping 1.22 GB in size, versus 118 MB:

$ docker image ls
appdaemon             without-apk       9ed8d3b55d59   2 hours ago   118MB
appdaemon             with-apk          4cdacc10d194   2 hours ago   1.22GB

@acockburn
Member

acockburn commented Mar 1, 2023 via email

@mion00
Contributor Author

mion00 commented Mar 1, 2023

It's the apt-get equivalent in the Alpine Linux distribution.

@acockburn
Member

acockburn commented Mar 1, 2023 via email

@acockburn
Member

OK, looking at the build pipeline, seems like it will build any branch:

on:
  push:
    branches: ["**"]
    tags: ["*"]
  pull_request:
    branches: ["dev"]

Can I change it to:

on:
  push:
    branches: ["dev"]
    tags: ["*"]
  pull_request:
    branches: ["dev"]

To only build the dev branch when it changes and no others? That better suits the way we work - most branches outside of dev and master are experimental or temporary.

@mion00
Contributor Author

mion00 commented Mar 1, 2023

My reasoning behind that change is that every change then triggers a Python build and a Docker image build, without actually publishing anything to PyPI or Docker Hub.
This allows us to always verify that the project builds correctly: any change that introduces a problem surfaces it immediately, so it can be fixed quickly (see the sketch below).
Otherwise we may end up like the other day, when you were releasing the new version and had to scramble to fix the Docker builds. Now if something breaks we are notified immediately, without having to wait for the day of the release. 😉
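
In workflow terms, one way to express "always build, only publish from dev and tags" is to gate the push input of the Docker step (the condition and action version are illustrative):

      - uses: docker/build-push-action@v4
        with:
          # The image is built on every run; it is only pushed for commits
          # to dev and for release tags
          push: ${{ github.event_name == 'push' && (github.ref == 'refs/heads/dev' || startsWith(github.ref, 'refs/tags/')) }}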

@acockburn
Member

ok, got it - it's built but not published, sounds good :)

@acockburn
Member

OK, docs are now building cleanly as well

@acockburn
Member

acockburn commented Mar 1, 2023

OK, I removed support for Python 3.7 and upgraded flake8 to 6.0.0 via Dependabot. I also need to keep the flake8 and black versions at the same level for the pre-commit hooks - is there a way to pull this info from pyproject.toml instead of manually keeping them in sync?

@acockburn
Member

What are your thoughts on including a step in the GitHub lint action to install and run pre-commit, as an alternative to explicitly running just flake8? That would pull black in too which I think is desirable. In addition, I am looking at ruff as an alternative to flake8 and black, and if I make that change the CI pipeline should just follow along. Can you think of a reason to avoid doing that?

@mion00
Contributor Author

mion00 commented Mar 2, 2023

OK, docs are now building cleanly as well

Wow, you have lots more patience than me! 😉

OK, I removed support for Python 3.7 and upgraded flake8 to 6.0.0 via Dependabot. I also need to keep the flake8 and black versions at the same level for the pre-commit hooks - is there a way to pull this info from pyproject.toml instead of manually keeping them in sync?

The pre-commit hooks run in their own environment, separate from the project itself, as you can see from the CLI output the first time you initialize it. I added flake8 and black to pyproject.toml just for convenience, so they are readily available in the local dev environment by invoking them directly, without the pre-commit middle-man.
Since, however, both tools are invoked primarily via pre-commit, which has its own configuration file, we could remove them from pyproject.toml to avoid this confusion and have a single source of truth for versioning them.
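
For reference, a minimal .pre-commit-config.yaml along those lines might look like this (the revs are illustrative; flake8 6.0.0 matches the version mentioned above):

repos:
  - repo: https://github.com/psf/black
    rev: 23.1.0
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/flake8
    rev: 6.0.0
    hooks:
      - id: flake8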

What are your thoughts on including a step in the GitHub lint action to install and run pre-commit, as an alternative to explicitly running just flake8?

At the moment the pre-commit hooks are meant to run locally on a dev machine, before a commit is made, to reformat/lint the code.
Since the CI pipeline uses its own copy of the Git repo to do its work (via the actions/checkout action), if we run pre-commit as part of the GitHub Action, any changes it makes to the files are lost, since they are not pushed back to the main repo on GitHub.
Well well, hear me out. I just found this: https://pre-commit.ci/lite.html, a GitHub Action created by the author of pre-commit that automatically runs pre-commit in the pipeline, suggesting fixes directly inside the PR! Maybe we should give it a try.
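
For comparison, the plain "run pre-commit in CI" variant (without the auto-fixing service) needs only a couple of steps after the usual actions/checkout and actions/setup-python steps — a sketch, not the final workflow:

      - run: pip install pre-commit
      # Check every file; fail the job and show the diff if a hook would rewrite anything
      - run: pre-commit run --all-files --show-diff-on-failure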

In addition, I am looking at ruff as an alternative to flake8 and black, and if I make that change the CI pipeline should just follow along. Can you think of a reason to avoid doing that?

For this I think we should check if ruff nicely integrates with pre-commit, as black and flake8 do.
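
For what it's worth, ruff does ship a pre-commit hook, so it would slot into the same config; a sketch (the rev is illustrative):

  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: v0.0.254
    hooks:
      - id: ruff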

@acockburn
Member

Great - I think we are definitely on the same page. My concern with pre-commit is that someone who didn't install it could still create a PR, so I would want it checked when the PR is created.

I like the idea of keeping pre-commit tools separate and having a single source of truth. I have found out that pre-commit has an update function you can run to grab the latest tool versions. This is manual but not really onerous to do once in a while, plus you can run pre-commit manually against the entire repo (I did it last night and found a ton of stuff that needed fixing, despite having this in place for a while - it also could have been that the new flake8 is stricter). Then the cool part is that the pipeline would just follow along, if that pre-commit.ci/lite works how we think it should.

At that point, I think we will have done everything I can think of, and I am confident I can maintain this new pipeline since you have documented it extremely well, and I have been involved in the decision making. With everything set up this way, if I do decide to look at ruff the pipeline will update automatically - but I'll leave ruff for another day after some more research. I think it is interesting because it is super fast and also has plugins for multiple tools, some of which I am interested in, like pyupgrade and a few others. I got the tip from Frank - Home Assistant is looking at using it.

I would like to compliment you on your thorough and considered approach, your clear communication, and commenting/documenting of the code you put together - you are a true professional and I am deeply grateful for all the help!

So, if you can help me figure out how to replace the current CI linting steps with the pre-commit stuff, I think we will be pretty much done :)

mion00 added a commit to mion00/appdaemon that referenced this issue Mar 2, 2023
@acockburn
Member

That last fix for the docker image worked! We are now back to full multi-arch. Very cool!

I also fixed the docs, so all is looking good.

@mion00
Contributor Author

mion00 commented Mar 2, 2023

I suggest we keep an eye out when releasing the next version of AppDaemon with this Docker image: previously, the build dependencies (gcc, rust, cargo & company) used to compile the Python wheels on the old ARM architectures were not removed at the end of the build process and were included in the final image. This meant that the size of the final AppDaemon image skyrocketed. You can see here under compressed size: it was 400 MB, versus the now much more friendly 40 MB!
As a side effect of the previous build, these native tools were installed in the AppDaemon container when an end-user fired up the final container with docker run acockburn/appdaemon.
I suppose they may have come in handy for anyone using additional Python dependencies for their AppDaemon scripts that needed to compile C extensions (as orjson does when we install it ourselves), since most of the necessary build tools (for instance gcc and most of the library headers useful for compilation) were already baked into the AppDaemon container.

Now we have optimized the way we handle the Docker build, avoiding all the additional bloat coming from the compile-time dependencies. This, however, means that anyone who has been using third-party Python packages that need to be compiled may suddenly see their appdaemon container fail to start, since the build tools are no longer present on startup.
I think we should document this change appropriately in the docs, guiding this particular set of users to install whatever additional system packages they may need (using system_packages.txt, for instance), if they previously required building C extensions.
An additional solution, as is tradition in the Docker world, is for end-users to derive their own custom Docker image with all the dependencies they need, using a locally written Dockerfile starting with FROM acockburn/appdaemon:<appdaemon_version>, in which they do all the required installation and build steps for their particular use case (see the sketch below).
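
A minimal sketch of such a derived image (the package and dependency names are purely illustrative, echoing the build packages discussed above):

# <appdaemon_version> is a placeholder for the pinned release the user targets
FROM acockburn/appdaemon:<appdaemon_version>
# Reinstall the build toolchain needed to compile C extensions
RUN apk add --no-cache build-base gcc musl-dev libffi-dev
# Install the user's own Python dependencies that require compilation
RUN pip install --no-cache-dir orjson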

This might sound like a breaking change, but truth be told it was the previous Docker build process that was not completely correct, since it introduced this unexpected side effect.
I am in favor of keeping our official Docker image small and fast, free of these build-time dependencies: we don't know what kind of build tools our end-users might need, and covering all the different sets of requirements that might pop up would be a nightmare. Better to leave the option to customize the Docker image (and/or the system packages) to the end-user, while we provide a clear and stable base image ourselves.

@acockburn
Member

OK, let me think on that, but I tend to agree with you - to be honest, the Docker image has just been at the mercy of whatever anyone who made a PR wanted, so perhaps it is time to get stricter.

On another note, GitHub went crazy for a while this evening and stopped building stuff. Maybe related is that we had run out of cache space, so I cleared it down a little.

Also, pip-compile was behaving sketchily so I temporarily disabled it as a pre-commit hook. It was fighting with itself when committing vs running on the command line, and then again when committed to the repository, and it was also slowing my commits down significantly. I'll use it on the command line for a while as required until I get used to it and figure it out, before maybe adding it back.

@mion00
Contributor Author

mion00 commented Mar 3, 2023

I saw that too when committing locally; I think it is because pip-compile tries to resolve all dependencies from scratch to be sure of their versions, but this can significantly slow down the dev workflow.
We can just run it manually whenever there is a change in pyproject.toml, with the three commands outlined in each requirements file (sketched below).
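
For illustration, the invocations would be along these lines (the output file names and extras are assumptions; the authoritative commands are the ones in the header of each generated requirements file):

$ pip-compile -o requirements.txt pyproject.toml
$ pip-compile --extra doc -o doc/requirements.txt pyproject.toml
$ pip-compile --extra dev -o dev-requirements.txt pyproject.toml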

Maybe GitHub had a hiccup for a few moments, or there is something on our end in the workflow config that did not trigger the pipeline. Let's see whether it happens again.

@acockburn
Member

acockburn commented Mar 3, 2023

Yep, we will keep an eye on it, but it seemed to fix itself last night before I went to bed - I will also be interested in what Dependabot makes of the new setup. I'll see if there is a way we can steer it towards pyproject.toml and away from requirements.txt.

@acockburn
Member

OK, now Dependabot weighed in - it seems to understand the multiple requirements files, but left pyproject.toml alone. I didn't see an obvious way to change that behavior. So what to do - merge the changes and update pyproject.toml, or close them and manually update pyproject.toml then run pip-compile?

@acockburn acockburn added the work_in_progress label Mar 3, 2023
@mion00
Contributor Author

mion00 commented Mar 3, 2023

OK, now Dependabot weighed in - it seems to understand the multiple requirements files, but left pyproject.toml alone. I didn't see an obvious way to change that behavior. So what to do - merge the changes and update pyproject.toml, or close them and manually update pyproject.toml then run pip-compile?

Yeah, I saw the PR from the bot touched only the requirements.txt files. I dug a bit into the Dependabot issue tracker and found a use case similar to ours: dependabot/dependabot-core#1475. The bot does not seem to understand that those requirements files are generated from pyproject.toml; instead it treats them individually.
For the moment I would suggest that whenever a dependency needs updating, we always treat pyproject.toml as our source of truth and run pip-compile manually to "pin" the updated version. The bot for now is trying to update the doc dependencies and failing, since it causes conflicts between interdependent dependencies: #1677 #1678.

@acockburn
Member

acockburn commented Mar 3, 2023 via email

@acockburn
Member

The issue pipeline seems to run OK - I'll take a few more passes through to identify bugs and enhancements then I'll let it loose and see what happens :)

@acockburn
Member

acockburn commented Mar 4, 2023

Still some issues with Dependabot - well, kind of. It is highlighting version upgrades in derived dependencies. When I run pip-compile, nothing new is picked up, presumably because the packages that pulled in the derived dependencies are still using older versions of the package. I'll wait and see if Dependabot is still looking at pyproject.toml, and if so, I'll run pip-compile whenever a dependency in there gets upgraded and ignore the derived dependencies in the 3 requirements.txt files. Does that sound sensible?

@acockburn acockburn reopened this Mar 4, 2023
@mion00
Contributor Author

mion00 commented Mar 4, 2023

Still some issues with Dependabot - well, kind of. It is highlighting version upgrades in derived dependencies. When I run pip-compile, nothing new is picked up, presumably because the packages that pulled in the derived dependencies are still using older versions of the package. I'll wait and see if Dependabot is still looking at pyproject.toml, and if so, I'll run pip-compile whenever a dependency in there gets upgraded and ignore the derived dependencies in the 3 requirements.txt files. Does that sound sensible?

Yeah, I think currently Dependabot only cares about the requirements.txt files, but since they are auto-generated, I would avoid touching them manually and instead rely on pip-compile.

@acockburn
Member

acockburn commented Mar 4, 2023 via email

@mion00
Contributor Author

mion00 commented Mar 4, 2023

Ok, seems reasonable.
