
rebuild filesystem map on config change #95

Merged

Conversation

@gabemontero (Contributor)

No description provided.

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 8, 2019
@openshift-ci-robot openshift-ci-robot added the size/S Denotes a PR that changes 10-29 lines, ignoring generated files. label Feb 8, 2019
@gabemontero (Contributor Author)

The latest e2e-aws-operator run uncovered some more shortcomings in the e2e test ... it minimally needs to wait for the skip list change to be fully processed by the operator ... in the last run it became apparent that it was not doing so.

will be adding fixes to this PR
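
For illustration, a minimal sketch of the kind of wait the e2e test needs, assuming the operator publishes the processed skip list in its Config Status (the Status.Skipped* naming comes from this PR's commits); getSkipped, the function name, and the timeouts are hypothetical, not this repo's actual test helpers:

    package e2e // purely illustrative sketch, not this PR's actual test code

    import (
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForSkippedImagestream polls until the operator reports the given
    // imagestream as skipped. getSkipped is a hypothetical stand-in for fetching
    // the operator Config's Status skip list (Status.Skipped* per this PR's commits).
    func waitForSkippedImagestream(getSkipped func() ([]string, error), name string) error {
        return wait.Poll(5*time.Second, 2*time.Minute, func() (bool, error) {
            skipped, err := getSkipped()
            if err != nil {
                return false, nil // tolerate transient errors; keep polling until timeout
            }
            for _, s := range skipped {
                if s == name {
                    return true, nil // the operator has processed the skip-list change
                }
            }
            return false, nil
        })
    }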

@openshift-ci-robot openshift-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Feb 8, 2019
@gabemontero (Contributor Author)

image-eco: another incredibly slow dancer-mysql build, due to downloading dancer-mysql dependencies from various GitHub and other internet locations

@gabemontero (Contributor Author)

and that was the first clean e2e-aws-operator run in a while

@gabemontero (Contributor Author)

/test e2e-aws-image-ecosystem

@gabemontero (Contributor Author)

running operator e2e again

/test e2e-aws-operator

@gabemontero (Contributor Author)

aws-operator passes again

image-eco flakes again with the same dancer slowness ....

@bparees ptal at the changes

I'll initiate more test runs, but if these slowness flakes persist, we may need to dive into them and possibly disable the affected tests or increase their timeouts ... I think the delays are upstream internet issues, but a second or third set of eyes may not hurt if it comes to that.

@gabemontero (Contributor Author)

/test e2e-aws-image-ecosystem

@gabemontero (Contributor Author)

/test e2e-aws-operator

    imagestream = tagInPayload("v4.0", "IMAGE_AGENT_NODEJS", imagestream)
    if g.h != nil {
        // if we are not skipping jenkins imagestream, tag in images from payload
        if _, ok := g.h.imagestreamFile["jenkins"]; ok {
Contributor:

Don't we need to handle the two agent imagestream names separately?

Contributor Author:

I was inclined to think that customers would not be looking to manage the agent streams, since they don't come from the known samples content in openshift/library.

But sure, perhaps that is an overreaching assumption?

Contributor:

If we allow people to skip an imagestream by name, I think we have to assume they might use that to skip the jenkins agent imagestreams in addition to the master imagestream (and conversely, just because they skip jenkins doesn't mean they want to skip the agents).

Contributor Author:

this will go away in the next push
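
For illustration, a minimal sketch of the per-name handling discussed above; purely a sketch, not this PR's code. Only jenkins and jenkins-agent-nodejs appear in the diff context, so jenkins-agent-maven is an assumed second agent stream, and the skip map mirrors h.skippedImagestreams:

    // Purely illustrative: decide skipping per imagestream name, so the agent
    // streams can be skipped (or not) independently of the "jenkins" entry.
    // The skip map corresponds to h.skippedImagestreams in this PR's diff;
    // "jenkins-agent-maven" is an assumed name, not taken from the diff.
    func shouldTagInPayload(skippedImagestreams map[string]bool, streamName string) bool {
        switch streamName {
        case "jenkins", "jenkins-agent-nodejs", "jenkins-agent-maven":
            return !skippedImagestreams[streamName]
        default:
            return false
        }
    }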

    case imagestream.Name == "jenkins-agent-nodejs":
        imagestream = tagInPayload("v4.0", "IMAGE_AGENT_NODEJS", imagestream)
        if g.h != nil {
            // if we are not skipping jenkins imagestream, tag in images from payload
Contributor:

Why does this need to be special-cased?

Contributor Author:

File processing can occur regardless of whether the imagestream is actually upserted.

Contributor Author:

I'll add a comment

Contributor:

but why not just always do this?

Contributor:

(i.e. always update the imagestream with the payload pullspec, even if the user is "skipping" the imagestream).

If anything, that seems safer than returning an unmutated imagestream that contains content we know is wrong.

Contributor Author:

I think there will be cases where users want to import their own jenkins images ... at least at the beginning of 4.0 dev, @jupierce relayed to me that his team would need this.

For the agent streams, yeah, maybe it just makes sense to always mutate.

Contributor Author:

though per the other comment thread, maybe we do allow them to be managed separately, and allow them to be skipped :-)

Contributor:

What I was trying to get at is that I see calling this function as orthogonal to whether the imagestream is being skipped or not.

This function's job (as I understand it) is to return an imagestream for a given filepath (and, in the case of jenkins, handle mutating the imagestream as needed, though maybe this isn't the right place to do the mutation, since presumably the other imagestream mutations, such as changing the registry hostname, are done elsewhere).

It shouldn't care whether a given imagestream is in the skipped list or not; that should be up to the caller to filter out.

So making this function aware of the skip list, specifically to handle the jenkins mutation, seems awkward.

What would happen if this function always mutated the jenkins imagestream before returning it, regardless of the skip list?

Alternatively, can we put the mutation in the same place the other imagestream mutation is done?

Contributor Author:

Admittedly, the path of least resistance was to handle it here. I'll revisit isolating the mutation and placing it elsewhere, and we can compare the awkwardness, etc.

Contributor Author:

yeah, it will be cleaner ... part of the next push
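
For illustration, a minimal sketch of the shape being agreed on here: the per-file function always returns the imagestream with the payload mutation applied, and the skip-list filtering lives in the caller. imagestreamFromFile and h.upsert are hypothetical stand-ins, not this repo's actual helpers:

    // Purely illustrative sketch, not this PR's final code.
    func (h *Handler) processImagestreamFile(path string) error {
        // imagestreamFromFile (hypothetical) always applies the payload pullspec
        // mutation for the jenkins streams, regardless of the skip list.
        imagestream, err := imagestreamFromFile(path)
        if err != nil {
            return err
        }
        if h.skippedImagestreams[imagestream.Name] {
            return nil // skip-list filtering happens here, in the caller
        }
        return h.upsert(imagestream) // h.upsert is a hypothetical stand-in
    }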

        newStreamMap[si] = true
    }
    h.skippedImagestreams = newStreamMap
    h.skippedTemplates = newTempMap
}

-func (h *Handler) buildFileMaps(cfg *v1.Config) error {
+func (h *Handler) buildFileMaps(cfg *v1.Config, cfgChange bool) error {
Contributor:

maybe rename this to forceRebuild? That seems to be its intent.

Contributor Author:

Yeah, I went with the "why" (cfgChange) rather than the "intent" (forceRebuild).

But there may be new reasons to force a rebuild in the future ... I'll change it.

Contributor Author:

part of the next push
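
For illustration, a sketch of the rename being agreed on; only the signature change itself is in the diff, the body is a placeholder, and h.templateFile is assumed by analogy with h.imagestreamFile:

    // Sketch only: forceRebuild replaces cfgChange, so future callers can force
    // a rebuild of the file maps for reasons other than a config change.
    func (h *Handler) buildFileMaps(cfg *v1.Config, forceRebuild bool) error {
        if len(h.imagestreamFile) == 0 || len(h.templateFile) == 0 || forceRebuild {
            // walk the samples content on disk and rebuild the file maps,
            // honoring the skip lists derived from cfg
        }
        return nil
    }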

bld skip filters off of Status.Skipped* (see the sketch after this commit list)
don't tag payload for jenkins* when jenkins in skip list
make sure Status.Skipped* updated in TestSkipped
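
For illustration, a sketch of the first commit above (building the skip filters off of Status.Skipped*); the exact field names SkippedImagestreams and SkippedTemplates are assumed from the "Status.Skipped*" shorthand, while the map assignments match the diff context shown earlier:

    // Illustrative sketch, not the PR's exact code: build the skip maps from
    // the Config's Status fields.
    func (h *Handler) buildSkipFilters(cfg *v1.Config) {
        newStreamMap := map[string]bool{}
        newTempMap := map[string]bool{}
        for _, is := range cfg.Status.SkippedImagestreams {
            newStreamMap[is] = true
        }
        for _, t := range cfg.Status.SkippedTemplates {
            newTempMap[t] = true
        }
        h.skippedImagestreams = newStreamMap
        h.skippedTemplates = newTempMap
    }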
@gabemontero (Contributor Author)

updates pushed @bparees ... ptal

@bparees (Contributor)

bparees commented Feb 11, 2019

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Feb 11, 2019
@openshift-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bparees, gabemontero

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [bparees,gabemontero]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


@openshift-merge-robot openshift-merge-robot merged commit 68fdae1 into openshift:master Feb 11, 2019
@gabemontero gabemontero deleted the clear-map-cfg-change branch February 11, 2019 18:31
@gabemontero (Contributor Author)

cool, thx @bparees

and the "ben 1" commit will live on for posterity :-)

@bparees (Contributor)

bparees commented Feb 11, 2019

Doh. I swear GitHub was only showing me one commit. Oh well.

Labels: approved, lgtm, size/M

4 participants