
Set up CI/CD pipeline #43

Closed
deitch opened this issue Dec 19, 2019 · 10 comments
Comments

deitch (Contributor) commented Dec 19, 2019

As discussed here with @jasmingacic and @jacobsmith928, we should have a proper CI/CD pipeline.

deitch (Contributor, Author) commented Dec 19, 2019

@jasmingacic wrote:

Hey @deitch, should we get together and go over what @jacobsmith928 suggested?

Sure. Ping me on Packet Slack and we can coordinate.

deitch (Contributor, Author) commented Dec 19, 2019

We have a few options. I generally steer people away from Travis since the private-equity takeover, after which a lot of the core engineers left. GitHub Actions is great, but it is unavailable to the packethost organization for various reasons. I moved ccm and csi to the public Drone Cloud; it is free for OSS and has full support for both amd64 and arm64 architectures, although since this is all just compiled Go code, that doesn't matter much.

Can you outline here what steps a proper CI pipeline would include? I would be happy to set up a drone.yml for it. If it is all just make targets, that is fine; just list them out.
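
For illustration, a minimal .drone.yml sketch of roughly what that would look like (the image and make targets here are assumptions until the actual targets are listed):

    # Hypothetical .drone.yml sketch; image and make targets are assumptions.
    kind: pipeline
    type: docker
    name: default

    steps:
      - name: build
        image: golang:1.13
        commands:
          - make build   # assumed build target

      - name: test
        image: golang:1.13
        commands:
          - make test    # assumed test target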

jasmingacic (Contributor) commented

One of the questions is whether we should run integration tests before we do the build.
Also, I would recommend that each time a git tag is created, the CI triggers a build. After a successful build, the binaries should be uploaded to the release, like this: https://github.com/packethost/packet-cli/releases/tag/0.0.5.

What do you think?

deitch (Contributor, Author) commented Dec 19, 2019

run integration tests before we do the build

The other way around, I think. The build itself can highlight issues and save time on the integration tests, which usually take longer.

After successful build the binaries should be uploaded to the release like this

I assume you are planning on using git tags to cut releases? Sure, you can have actions that happen on tagging. On the other hand, you cannot create "pull requests" for tags (something I have never liked, and it is true for all of GitHub/GitLab/Bitbucket), so you are just tagging an existing commit and pushing. We can create actions, triggered by that tag push, that publish releases.

jasmingacic (Contributor) commented

I agree that the build will highlight the issues. Maybe integration tests should be left to the Go SDK.

As for the flow: we simply merge into master, manually tag the release commit, and let CI take over. We'd only need a rule in place for who can create tags.

deitch (Contributor, Author) commented Dec 20, 2019

we simply merge into master, and manually tag the release (commit) and let CI take over

Yes, agreed. I do wish tags were immutable, and that you could have a "Tag Request" like a "Pull Request" (just last week I was blocked in a large enterprise SDLC because this doesn't exist), but this is certainly good enough here.

displague (Member) commented

I took a shot at setting up a GitHub Workflow for the CLI, but that was detoured when make test did not pass locally. The existing tests are geared toward integration patterns, which are difficult to run (time-consuming and dependent on external factors).

#33 stands out as a reason why testing is hard in the current state: global variables that define arguments and the client connection are reused between commands, and the commands cannot be isolated (each uses init() to configure itself).

#66 offers a way to split up the commands, avoid the global state, and introduce unit tests (so far, only volume has been refactored).

displague (Member) commented Aug 12, 2020

I have a little update for this issue. We now have GitHub Workflows running on each PR and triggering builds/releases when tags are pushed.

  • The tags must still be manually pushed.
  • We are not running E2E or high-coverage tests on PRs, yet.
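
For reference, a tag-triggered release workflow of the kind now in place looks roughly like this sketch (the action versions, Go version, make target, and output path are assumptions; the repository's actual workflow may differ):

    # Sketch of a tag-triggered release workflow; details are assumptions.
    name: release
    on:
      push:
        tags:
          - 'v*'
    jobs:
      release:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: actions/setup-go@v2
            with:
              go-version: '1.14'
          - run: make build                        # assumed build target
          - uses: softprops/action-gh-release@v1   # attaches binaries to the GitHub release
            with:
              files: bin/*                         # assumed output path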

On the topic of E2E tests, I would prefer high-coverage tests that use mocks rather than the real API for pull requests. In this project, our goal is to have a working CLI, not a working API.

I believe what should be tested here is narrowly:

  • given some CLI parameters,
  • is the correct API request generated

and:

  • given an assumed API response (based on documentation or recorded interactions),
  • was the response handled correctly

The output renderers would be no-op'd when testing API responses. We would test the output implementations separately, given non-API data to render.

If API E2E tests are desired, they could be run in a separate scheduled workflow. I would only feel comfortable with that once we have an all-purpose Packet deletion tool that sweeps all resources in a project. Still, some tests and resources are not project-bound, and E2E test cleanup would need to handle those resources too.
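
If we did go that route, the shape would be something like this sketch of a scheduled workflow (the cron schedule, make targets, and secret name are all assumptions, and the cleanup tooling does not exist yet):

    # Sketch of a separate scheduled E2E workflow; names and steps are assumptions.
    name: e2e
    on:
      schedule:
        - cron: '0 6 * * *'    # e.g. nightly, not on every PR
    jobs:
      e2e:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: actions/setup-go@v2
          - run: make e2e                    # assumed target that exercises the real API
            env:
              PACKET_TOKEN: ${{ secrets.PACKET_TOKEN }}   # assumed secret name
          - run: make e2e-cleanup            # hypothetical sweep of leftover test resources
            if: always()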

displague (Member) commented

Arguably, this issue could be closed and we can open new issues focused on test coverage and scheduled e2e tests. Thoughts?

displague (Member) commented May 25, 2021

The tags must still be manually pushed.

Tags can now be introduced from the GitHub Releases tab.

If API E2E tests are desired, that could be something run in a separate scheduled workflow.

I don't think this offers the most value. Mocked responses are faster to create and test, and they ensure that the client works properly against the presumed API. It is not the job of the CLI to ensure that the API has not introduced breaking changes; I would defer those concerns to #106. #22 previously addressed E2E concerns, but those tests were not well maintained, in part because of the added complexity. IMHO, contributors to the project shouldn't be expected to spend money to test the software.

I only feel comfortable with these when we have an all-purpose Packet deleting tool that sweeps all resources in a project.

https://github.com/displague/metal-sweeper-action

Arguably, this issue could be closed and we can open new issues focused on test coverage and scheduled e2e tests. Thoughts?

Closing this in favor of a new issue to improve test coverage (using a CI check that ensures coverage improves in each PR): #123
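
As a rough illustration of that kind of coverage gate (the commands are standard Go tooling, but the fixed threshold is an assumption; a real gate would compare against the base branch's coverage, which needs additional tooling):

    # Sketch of a PR coverage check; the threshold is an assumption.
    name: coverage
    on: [pull_request]
    jobs:
      coverage:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: actions/setup-go@v2
          - run: go test -coverprofile=coverage.out ./...
          - run: |
              total=$(go tool cover -func=coverage.out | awk '/^total:/ {sub("%","",$3); print $3}')
              echo "total coverage: ${total}%"
              # Fail below an assumed floor; comparing to the base branch would be better.
              awk -v t="$total" 'BEGIN { exit (t < 50.0) ? 1 : 0 }'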

This issue was closed.