
feat(export): export artifacts from clusters #1071

Merged
merged 3 commits into from
Apr 29, 2020

Conversation

emjburns
Contributor

This PR adds an endpoint to export artifacts given a cluster. The existing cluster export will now return a reference provider with a reference to the name of the artifact, instead of the old-style full artifact (which is no longer supported).

These endpoints should be used together to produce the artifact and cluster that are both needed to put a cluster correctly in a delivery config.

A GET /export/artifact/{provider}/{account}/{cluster} will return something like:

{
  "name": "deb-sample-app-server",
  "reference": "deb-sample-app-server",
  "vmOptions": {
    "baseLabel": "RELEASE",
    "baseOs": "xenial",
    "regions": [
      "us-east-1",
      "us-west-2"
    ],
    "storeType": "EBS"
  },
  "statuses": [],
  "type": "deb"
}

Notes:

  • Name and reference will be the same, and reference can be excluded or changed.
  • (deb) I'm guessing the baseOs from netflix naming conventions. Awesome Hackery. Turns out the image details from clouddriver do not tell us what OS an image is.
  • (deb) No idea how to calculate the store type. This is defaulted.
  • (deb) No idea how to calculate the statuses. This might need to be a user prompt.
  • (docker) I'm trying to determine the type of tag versioning strategy to use based on the running tag. This won't work if the cluster is a one-off. I throw an exception in the case I can't determine the strategy, because I don't know how else to signal that the output wouldn't be correct.
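The docker heuristic in the last note could be sketched roughly like this (enum values, names, and regex patterns here are illustrative assumptions, not keel's actual implementation): infer a tag versioning strategy from the shape of the running tag, and throw when no known pattern matches.

```kotlin
// Hypothetical sketch only -- TagStrategy and the patterns are illustrative,
// not keel's real API.
enum class TagStrategy { SEMVER_TAG, BRANCH_JOB_COMMIT, INCREASING_TAG }

private val semverRegex = Regex("""^v?\d+\.\d+\.\d+$""")
private val branchJobCommitRegex = Regex("""^.+-h\d+\.[a-f0-9]+$""")
private val increasingRegex = Regex("""^\d+$""")

// Guess a versioning strategy from the running tag; a one-off tag that
// matches no known pattern signals failure by throwing, as described above.
fun guessStrategy(tag: String): TagStrategy = when {
    semverRegex.matches(tag) -> TagStrategy.SEMVER_TAG
    branchJobCommitRegex.matches(tag) -> TagStrategy.BRANCH_JOB_COMMIT
    increasingRegex.matches(tag) -> TagStrategy.INCREASING_TAG
    else -> throw IllegalArgumentException(
        "Unable to derive a tag versioning strategy from tag '$tag'"
    )
}
```

Throwing here is a deliberate choice: an exported artifact with a silently wrong versioning strategy would look valid but behave incorrectly.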

@@ -413,7 +413,10 @@ object ActiveServerGroupTest : ModelParsingTestSupport<CloudDriverService, Activ
image = ActiveServerGroupImage(
imageId = ami,
appVersion = "$app-3.16.0-h205.121d4ac",
Contributor Author

Here's an example of what the data looks like for debs. To find the baseOs, I look for either "xenial" or "bionic" as strings in the name, description, and imageLocation fields. It's excellent garbage hackery. For Netflix, these fields are autogenerated by the bakery, so it's reliable.
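As a rough illustration of this heuristic (the type and function names here are hypothetical, not keel's actual code), the lookup amounts to scanning a few image fields for a known OS string:

```kotlin
// Hypothetical sketch of the baseOs guess described above.
data class ImageDetails(
    val name: String?,
    val description: String?,
    val imageLocation: String?
)

// The known base OS names; new OS releases require extending this list,
// which is the maintenance concern raised in this thread.
private val knownBaseOses = listOf("xenial", "bionic")

// Return the first known OS name found in any of the fields, or null.
fun guessBaseOsFrom(image: ImageDetails): String? =
    listOfNotNull(image.name, image.description, image.imageLocation)
        .firstNotNullOfOrNull { field ->
            knownBaseOses.firstOrNull { os -> field.contains(os, ignoreCase = true) }
        }
```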

Contributor

Won't that mean we need to make code updates to keel to support new base OS names? Doesn't seem great.

Contributor Author

Yes we definitely will. I agree, not great.

Do you know whether the operating system is present in any field of any clouddriver response? I checked the server group and image responses, and I don't see the data surfaced in either of those, in the Spinnaker UI, or in the internal Edda link for the image. So I'm thinking we just don't have the data.

If we don't have the data, do you have any ideas on how to do this better?

@@ -491,7 +509,7 @@ class TitusClusterHandler(
capacity = capacity,
capacityGroup = capacityGroup,
constraints = constraints,
container = generateContainer(container, location.account),
container = ReferenceProvider(container.repository()),
Contributor Author

Now when we export clusters we return them with a reference provider.

@@ -45,7 +46,7 @@ class ExportController(
"securitygroup" to "security-group",
"securitygroups" to "security-group",
"cluster" to "cluster",
Contributor Author

Pretty sure this was a typo.

@robfletcher
Contributor

> These endpoints should be used together to produce the artifact and cluster that are both needed to put a cluster correctly in a delivery config.

This is something we expect Newt to do internally?

@robfletcher
Contributor

> I'm guessing the baseOs from netflix naming conventions. Awesome Hackery. Turns out the image details from clouddriver do not tell us what OS an image is.

You should be able to reverse engineer it from the base_ami_version tag

@robfletcher
Contributor

> No idea how to calculate the store type. This is defaulted.

I guess the presence of ebs in the image name indicates a store type of EBS. I'm not sure what other options would look like. Doesn't seem super-reliable though.

@emjburns
Contributor Author

@robfletcher
(1) Yes, I'm building this endpoint so that Corey can do this from newt.
(2) The base_ami_version tag only tells us what version of the Netflix base package it is, not the OS type.
(3) Hmm, interesting. How do you feel about defaulting to EBS now, and fixing it when we find a good example of someone doing something else?

@luispollo
Contributor

Have you considered looking into pipeline bake stages instead of clouddriver? The info you're missing is available there, but I wasn't sure whether the use case is only for new migrations or whether Cory wants export to work with already-managed clusters.

If you think it’s worth considering, there’s code you can reuse from that PoC I did:

https://github.com/luispollo/keel/blob/579bb14f46debb86333174808424271acb32e33b/keel-web/src/main/kotlin/com/netflix/spinnaker/keel/services/ExportService.kt#L145-L169

Also, #1055 introduced some code to find pipelines based on entity tags from clouddriver.

Just a thought.

@emjburns
Contributor Author

@luispollo thanks for the suggestion. Right now I don't think it's worth trying to find the pipeline bake stage to discover the OS - users typically understand what OS their package needs to run on, and I think we can ask them if we don't have the information.

If it turns out that this is a major migration pain point I think we can add more logic to how we find these things.

@emjburns emjburns merged commit 9cb5c4b into spinnaker:master Apr 29, 2020
return DebianArtifact(
name = artifactName,
vmOptions = VirtualMachineOptions(regions = serverGroups.keys, baseOs = guessBaseOsFrom(base.image)))
}
Contributor

lol that method name :)

@gal-yardeni
Contributor

@emjburns sorry for coming late to the party... I was wondering a few things:

  1. What is the flow we expect newt to follow? Do we expect it to call this endpoint for each resource the user wants to export and then merge everything into one delivery config?
  2. What was the behavior prior to this change? Did we infer the artifact info in a different way? You wrote it's not supported anymore, but I'm curious to learn what we used to do. We can talk about it offline if you prefer!

@gal-yardeni
Contributor

answered in private, thank you!
