atc: behaviour: add ArchivePipeline endpoint #5346
Conversation
617853e to 585fbdf
@jomsie and I did acceptance on this and it does what it says it does!
We ran `docker-compose up -d --build` and set a pipeline. We then connected to the db:
$ psql -h localhost -p 6543 -U dev -W -d concourse
and checked the pipelines table immediately after setting the pipeline:
$ select name,archived from pipelines;
name | archived
-------+----------
test | f
(1 row)
Then we archived the pipeline by running `fly -t local curl api/v1/teams/main/pipelines/test2/archive "" -- -X PUT`
and checked the db again:
$ select name,archived from pipelines;
name | archived
-------+----------
test | t
(1 row)
It did what it said it would do! yay!
One little nit about one of the tests, otherwise this seems good to go!
atc/integration/archiving_test.go (Outdated)

})

It("can archive pipelines", func() {
	atcURL := fmt.Sprintf("http://localhost:%v", cmd.BindPort)
Maybe I'm missing something, but it looks like this is set in the BeforeEach already:

atcURL := fmt.Sprintf("http://localhost:%v", cmd.BindPort)
hahaha @aoldershaw he caught us. we noticed this in a later PR
@taylorsilva @jomsie nice thorough acceptance. Since this depends on #5329 I think it would be enough to check the results of …

EDIT: I can't request changes on my own PR.
#5315 - as per #5346 (comment), the ability to archive pipelines is now behind an api flag `enable-archive-pipeline`.

Signed-off-by: Aidan Oldershaw <aoldershaw@pivotal.io>
Co-authored-by: James Thomson <jthomson@pivotal.io>
#5315

Driving this change with tests had some pain points. We decided to work outside-in, starting with an ATC integration test. It was difficult to move from the outer TDD loop to the inner - I was surprised by the number of seemingly-far-flung changes that were required to move from one failure to the next. These included:

* adding an entry to the auditor to overcome a panic
* modifying the `atc/api/present` package to link the DB entity to the API entity

Both of these things feel so easy to forget, and it might be nice if they could be described by a change in the same part of the codebase.

Signed-off-by: Jamie Klassen <cklassen@pivotal.io>
Co-authored-by: Bishoy Youssef <byoussef@pivotal.io>
Co-authored-by: Aidan Oldershaw <aoldershaw@pivotal.io>
Not sure how that got in there!

Signed-off-by: Aidan Oldershaw <aoldershaw@pivotal.io>
loooking goood, added some comments. great job!
response, err = client.Do(request)
Expect(err).NotTo(HaveOccurred())
})
In the same suite there are quite a few tests that also validate what happens when an unauthorized user tries to reach the corresponding endpoint:
~/workspace/concourse/atc/api $ ag "401" ./pipelines_test.go
415: It("returns 401", func() {
509: It("returns 401", func() {
589: It("returns 401", func() {
867: It("returns 401 Unauthorized", func() {
946: It("returns 401", func() {
1066: It("returns 401 Unauthorized", func() {
1146: It("returns 401 Unauthorized", func() {
1224: It("returns 401 Unauthorized", func() {
1335: It("returns 401 Unauthorized", func() {
1538: It("returns 401 Unauthorized", func() {
1616: It("returns 401 Unauthorized", func() {
1644: It("returns 401", func() {
1830: It("returns 401", func() {
e.g.
concourse/atc/api/pipelines_test.go
Lines 409 to 417 in 004e924
Context("when not authenticated", func() {
	BeforeEach(func() {
		fakeaccess.IsAuthenticatedReturns(false)
	})

	It("returns 401", func() {
		Expect(response.StatusCode).To(Equal(http.StatusUnauthorized))
	})
})
wdyt of adding tests for this one too? it seems to me that the big purpose of this more "end-to-end" approach of testing at the api level would be to catch these things 🤔 please correct me if i'm wrong! 😁
hmm, we tried to drive out this feature by pretty rigorously practicing TDD. While pairing with @aoldershaw and @YoussB on different occasions, I recall weighing the decision to add these kinds of tests, and they felt redundant since we were already adding the table test entries for accessor itself. Indeed, I think if we followed the boilerplate of the other tests, the newly-added tests would never actually go red.

However, leaving them out is really only safe if we trust the other components we are integrating with and have a solid contract with them. For example, I think there is a component with a name like 'api auth wrappa' that is conceptually 'responsible' for the behaviours you are describing, but we don't have a unit test anywhere saying "this code delegates to the api auth wrappa", and maybe we should? After all, if we ended up switching away from using the api auth wrappa, there would be nothing ensuring this "sensible default" behaviour.
	http.Error(w, "endpoint is not enabled", http.StatusForbidden)
	return
}
s.logger.Debug("archive-pipeline")
oh was this intentionally left here? if one wants to get a sense of "this endpoint was hit", one could leverage the audit logs
Given that I find myself reading other people's logs (more on that below), I feel motivated to make the system more verbose by default, just in case. This line is actually enforced by a unit test, because I don't want to rely on other people to turn on auditing if they're then going to be needing my advice on the state of their cluster.
yeah, I do get the feeling :/

> I don't want to rely on other people to turn on auditing if they're then going to be needing my advice on the state of their cluster.

although this does rely on the fact that they turned debug on, right? (which, sure, can be turned on at runtime, which is quite useful, but is also super expensive to have on AFAIR).

if the same was true for audit (assuming the pain-point is quickly turning the system into a more verbose mode), would you think this log message in this particular endpoint would not be needed anymore?
. "github.com/onsi/gomega"
)

//go:generate counterfeiter code.cloudfoundry.org/lager.Logger
this caught my attention, it's something we seem to only be doing here
~/workspace/concourse/atc $ ag -Q "lager.Logger" | grep counter
api/pipelineserver/archive_test.go:16://go:generate counterfeiter code.cloudfoundry.org/lager.Logger
do you think this is something we should adopt more? I'm curious about the rationale for testing the log statements here, and would challenge it a bit: if we were adding a `someMetric.Add(1)` (i.e., if we were incrementing a counter for something we want to measure), and even more, we had a trace that we wanted to capture, should we test those three things? log, metric, and trace?
(please don't take the above as criticism 😅 curious to know more about it)
thanks!
I've spent a lot of time inspecting logs from Concourse with no access to the cluster. Based on this, I'm shocked by how often I really wish Concourse was logging something and I find that it's not. In general I haven't yet felt like it's necessary to unit-test every log message, but there are times and places where logging feels like a really desirable feature with real business value and I want a test to verify that it will definitely be happening - in such cases I'd be pretty upset if the log stopped happening haha.
Specifically for the `pipelines/:pipeline_name/archive` endpoint

Signed-off-by: Bishoy Youssef <byoussef@pivotal.io>
Co-authored-by: Taylor Silva <tsilva@pivotal.io>
Closing this PR as its commits are covered in #5387.
Existing Issue

Based on #5329, which should be merged first.

Fixes #5315.

Changes proposed in this pull request

* Adds an `archived` column to the `pipelines` table
* The `ArchivePipeline` endpoint strictly updates this field to `true`

Contributor Checklist

Reviewer Checklist

(… `--help` text.) Note: we will have to add the flags to bosh and helm, but that won't be part of this PR.