feat(core): add support for batch updating pipelines #2651
Conversation
...ont50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/BatchUpdatePipelinesTask.java
pipelineModelMutators.stream().filter(m -> m.supports(pipeline)).forEach(m -> m.mutate(pipeline));
}

Response response = front50Service.batchUpdatePipelines(pipelines);
batchUpdatePipelines could just return List<Map> ... no need to explicitly run it through the object mapper.
So I copied most of this logic from SavePipelineTask (https://github.com/spinnaker/orca/blob/master/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/SavePipelineTask.java#L100), and I think the reason we need the entire Response returned from batchUpdatePipelines is that we read the status off it below to determine the returned TaskResult's status.
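The status-to-result mapping described in the reply above can be sketched in isolation. This is a hedged illustration, not Orca's actual code: ExecutionStatus and the 2xx cutoff are illustrative stand-ins for Orca's real types.

```java
// Illustrative sketch: why the full Response is kept around. The HTTP status
// is read off it to decide the task's terminal state.
enum ExecutionStatus { SUCCEEDED, TERMINAL }

final class StatusMapping {
  static ExecutionStatus fromHttpStatus(int status) {
    // any 2xx counts as success; everything else fails the task
    return (status >= 200 && status < 300)
        ? ExecutionStatus.SUCCEEDED
        : ExecutionStatus.TERMINAL;
  }
}
```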
Generally, we've been using Response incorrectly in most spots. Retrofit is going to throw an exception on 4xx/5xx errors, which should be sufficient (unless we somehow need to differentiate between a 200 and a 204, etc.).
List<Map> updatedPipelines = (List<Map>) objectMapper.readValue(response.getBody().in(), List.class);
outputs.put("updatedPipelines", updatedPipelines);
Having an execution that contains all pipeline JSON for an application is probably excessive?
Agreed - this will get super verbose - maybe just a map of application names and execution names?
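A minimal sketch of the compact output suggested here. The "application" and "name" keys are assumed from the front50 pipeline model; this is illustrative, not the task's actual code.

```java
import java.util.*;
import java.util.stream.Collectors;

// Build the compact "application -> pipeline names" map suggested above,
// instead of echoing every pipeline's full JSON into the execution context.
final class CompactOutputSketch {
  static Map<String, List<String>> summarize(List<Map<String, Object>> pipelines) {
    return pipelines.stream().collect(Collectors.groupingBy(
        p -> (String) p.get("application"),
        Collectors.mapping(p -> (String) p.get("name"), Collectors.toList())));
  }
}
```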
Aaaaaactually, it doesn't look like Front50 returns anything:
https://github.com/spinnaker/front50/blob/master/front50-web/src/main/groovy/com/netflix/spinnaker/front50/controllers/PipelineController.groovy#L116
Also, I think we will need to change this in Front50:
@PreAuthorize("@fiatPermissionEvaluator.isAdmin()")
or only admins will be able to reorder pipeline configs
Ah good catch -- I'm assuming we'll need to add a @PreAuthorize expression similar to that of save, but checking each pipeline (https://github.com/spinnaker/front50/blob/master/front50-web/src/main/groovy/com/netflix/spinnaker/front50/controllers/PipelineController.groovy#L94)?
I think that looks correct. There's only one application in scope for these bulk ops right?
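The per-pipeline check being discussed could look roughly like the following. canBatchUpdate is an invented helper name, not part of Fiat or front50; it only illustrates the idea of allowing the batch when the caller can write to every application it touches.

```java
import java.util.*;

// Hypothetical sketch of the per-pipeline authorization idea: instead of
// requiring isAdmin(), allow the batch only if the caller has write access
// to every application referenced by a pipeline in the batch.
final class BatchAuthSketch {
  static boolean canBatchUpdate(List<Map<String, Object>> pipelines,
                                Set<String> appsCallerCanWrite) {
    return pipelines.stream()
        .map(p -> (String) p.get("application"))
        .allMatch(appsCallerCanWrite::contains);
  }
}
```

With only one application in scope per bulk operation, this degenerates to a single membership check.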
}

@Override
public long getBackoffPeriod() {
This stage probably doesn't need to be retryable?
@ajordens will calls to Front50 retry automatically via something in retrofit if they fail?
I copied SavePipelineTask as a starting point for BatchUpdatePipelinesTask and assumed since it was retryable this should be too?
Retrofit will not automatically retry ... the general pattern is not to retry on writes unless we're confident that the calls are idempotent (which in hindsight we probably are here).
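The retry reasoning above can be sketched as follows. callWithRetry and attemptLimit are illustrative names, not Orca's retry machinery; the point is that retrying a write is acceptable only because the batch update is idempotent.

```java
import java.util.function.Supplier;

// Sketch: retry a call up to attemptLimit times, rethrowing the last failure.
// Only safe for idempotent operations, as argued above for this batch update.
final class RetrySketch {
  static <T> T callWithRetry(Supplier<T> call, int attemptLimit) {
    RuntimeException last = null;
    for (int attempt = 0; attempt < attemptLimit; attempt++) {
      try {
        return call.get();
      } catch (RuntimeException e) {
        last = e; // transient failure; retry because the write is idempotent
      }
    }
    throw last;
  }
}
```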
...ont50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/BatchUpdatePipelinesTask.java
...ont50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/BatchUpdatePipelinesTask.java
bc76d5d to 04fe360
I'm still not sold on exposing the batch endpoints via orca ... what would the feelings be on an alternative reordering-specific API in front50?
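What such a reordering-specific API might compute server-side can be sketched like this. Purely illustrative: front50 has no such endpoint in this PR, and indexByOrder is an invented name.

```java
import java.util.*;

// Given the desired order of pipeline ids, assign each id its new
// zero-based index; the server would then persist only the index updates.
final class ReorderSketch {
  static Map<String, Integer> indexByOrder(List<String> orderedIds) {
    Map<String, Integer> indexById = new LinkedHashMap<>();
    for (int i = 0; i < orderedIds.size(); i++) {
      indexById.put(orderedIds.get(i), i);
    }
    return indexById;
  }
}
```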
@ajordens a reordering-specific API in Front50 sounds good to me!
Previously, Deck made one request per pipeline to be re-indexed after re-ordering or deleting a pipeline, which could flood the Tasks panel when an application contained many pipelines. This change enables support for batch updating pipelines.
Deck PR: spinnaker/deck#6519
Gate PR: spinnaker/gate#723