
Increase timeout for console route #1143

Closed

Conversation

@sallyom (Contributor) commented Jan 29, 2019:

I'm seeing this issue when installing with AWS locally, and it's also been causing CI failures.
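For context, here is a minimal, self-contained sketch of the kind of poll-until-timeout loop this PR tunes. It is not the installer's actual code: the `waitFor` helper, the `consoleRouteExists` check, and the 10-minute duration are all illustrative stand-ins for the real console-route lookup and timeout value.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// waitFor polls cond every interval until it reports done, an error occurs,
// or the timeout expires.
func waitFor(ctx context.Context, interval, timeout time.Duration, cond func(context.Context) (bool, error)) error {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		done, err := cond(ctx)
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for condition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// consoleRouteExists stands in for the real API lookup of the console route.
	// Here it simply pretends the route appears after about three seconds.
	start := time.Now()
	consoleRouteExists := func(ctx context.Context) (bool, error) {
		return time.Since(start) > 3*time.Second, nil
	}

	// Increasing the second duration below is, in essence, what this PR does.
	if err := waitFor(context.Background(), time.Second, 10*time.Minute, consoleRouteExists); err != nil {
		fmt.Println("gave up waiting for console route:", err)
		return
	}
	fmt.Println("console route is available")
}
```

Note that a larger timeout only changes how long the loop waits before giving up; if the route is never created at all, it just delays the failure, which is the concern raised later in this thread.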

@openshift-ci-robot added the size/XS label (denotes a PR that changes 0-9 lines, ignoring generated files) on Jan 29, 2019.
@openshift-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: sallyom
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: smarterclayton

If they are not already assigned, you can assign the PR to them by writing /assign @smarterclayton in a comment when ready.

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@abhinavdahiya (Contributor) commented:

#1132 is adding a wait for an available cluster version. That would mean all operators are installed and ready.

After that, I don't think this timeout change should be required, since the situation it is trying to address (the installer needing to wait a bit longer) should no longer exist once #1132 lands.

@sallyom (Contributor, Author) commented Jan 29, 2019:

@abhinavdahiya OK, sounds good. I'll leave this open for now while we see whether that change resolves the timeout failures. Thanks!

@ironcladlou (Contributor) commented:

Is there any reason not to merge this now while #1132 is in progress? We're still losing time to this.

@crawford (Contributor) commented:

Will increasing the timeout actually improve our failure rate or will it just take longer to fail? I'm fine with this change if it will help with CI, but of the failures I've personally seen, none of them would be helped by this PR.

@ironcladlou (Contributor) commented:

Quoting @crawford:

> Will increasing the timeout actually improve our failure rate or will it just take longer to fail? I'm fine with this change if it will help with CI, but of the failures I've personally seen, none of them would be helped by this PR.

You might be right. Looking back over prior discussion it's not clear to me whether the failure mode is an intermittent delayed creation or a total lack of creation. If the latter, this PR wouldn't help.

@ironcladlou (Contributor) commented:

On the other hand, when's the last time you saw a PR pass tests so easily? :)

@wking (Member) commented Jan 31, 2019:

#1132 should have addressed this, so...

/close

But please comment if you still see timeouts :).

@openshift-ci-robot (Contributor) commented:

@wking: Closed this PR.

In response to this:

> #1132 should have addressed this, so...
>
> /close
>
> But please comment if you still see timeouts :).

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
