This repository has been archived by the owner on Feb 6, 2024. It is now read-only.

Use gcloud builds, and :latest-$USER #217

Merged: 3 commits into bazelbuild:master, Nov 10, 2018

Conversation

@fejta (Contributor) commented Nov 8, 2018

The UX for gcloud changed
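For context, gcloud promoted Cloud Build from the "container" command group to a top-level "builds" group around this time, and a :latest-$USER tag keeps concurrent users from clobbering each other's :latest image. A rough before/after sketch; the image path and PROJECT variable are illustrative, not taken from this PR's diff:

    # Before: the old, since-deprecated command group.
    gcloud container builds submit --tag "gcr.io/${PROJECT}/app:latest" .

    # After: the top-level `builds` group, with a per-user tag so two
    # developers pushing :latest do not overwrite each other's image.
    gcloud builds submit --tag "gcr.io/${PROJECT}/app:latest-${USER}" .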

@chrislovecnm (Contributor)

/lgtm
/approve

@chrislovecnm (Contributor)

@fejta well I tried to approve 🙄

@chrislovecnm (Contributor) commented Nov 8, 2018

@fejta WTH is

Error from server: context canceled

That is what the e2e test failed on. I have seen this error multiple times.
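FWIW, "context canceled" is the API server relaying Go's context.Canceled error, meaning the request was aborted mid-flight, which is why it behaves like a transient flake rather than a reproducible failure. One blunt mitigation is to retry the failing call; a minimal sketch assuming a bash test harness (the retry count and example command are made up):

    # Retry a flaky command a few times with a short linear backoff.
    # Usage: retry 5 kubectl get service guestbook
    retry() {
      local attempts="$1"; shift
      local i
      for (( i = 1; i <= attempts; i++ )); do
        "$@" && return 0
        echo "attempt ${i}/${attempts} failed: $*" >&2
        sleep $(( i * 2 ))
      done
      return 1
    }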

@fejta (Contributor, Author) commented Nov 8, 2018

/retest
It's the error @smukherj1 has been trying to resolve. No clue what it means...

@fejta (Contributor, Author) commented Nov 8, 2018

/approve

@k8s-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: chrislovecnm, fejta

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@smukherj1 (Contributor)

From the quota logs before attempting the service delete, we see the address usage isn't even close to the limit. So IP address quota doesn't seem to be the issue.
Log link: https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/bazelbuild_rules_k8s/217/pull-rules-k8s-e2e/384

IP quota before delete:

    - limit: 8.0
      metric: STATIC_ADDRESSES
      usage: 0.0
    - limit: 16.0
      metric: IN_USE_ADDRESSES
      usage: 2.0
    - limit: 5.0
      metric: GLOBAL_INTERNAL_ADDRESSES
      usage: 0.0
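(Quota listings shaped like the above typically come from the gcloud CLI; a sketch of how to pull them, where the region name is an assumption:)

    # Regional quotas: STATIC_ADDRESSES and IN_USE_ADDRESSES live here.
    gcloud compute regions describe us-central1 --format="yaml(quotas)"

    # Project-wide quotas: GLOBAL_INTERNAL_ADDRESSES lives here.
    gcloud compute project-info describe --format="yaml(quotas)"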

@smukherj1 (Contributor)

/retest

@fejta (Contributor, Author) commented Nov 8, 2018

Well, I guess that answers that...

@fejta (Contributor, Author) commented Nov 8, 2018

Searching the kubernetes repo for "context canceled" suggests this has something to do with flakes pulling images.

@nlopezgi (Contributor) commented Nov 8, 2018

/retest

@chrislovecnm (Contributor)

/test pull-rules-k8s-e2e

@chrislovecnm (Contributor)

@fejta the command to run a single test is not working either.

/retest

@smukherj1 (Contributor)

/retest

@smukherj1 (Contributor) commented Nov 8, 2018

Last failure was a unit test flake:

Test output for //k8s:resolver_test:

    Traceback (most recent call last):
      File "/root/.cache/bazel/_bazel_prow/0996f4cdf646bf91686374917a22c2ea/sandbox/processwrapper-sandbox/2/execroot/io_bazel_rules_k8s/bazel-out/k8-fastbuild/bin/k8s/resolver_test.runfiles/io_bazel_rules_k8s/k8s/resolver_test.py", line 20, in <module>
        import unittest
      File "/usr/lib/python2.7/unittest/__init__.py", line 64, in <module>
        from .main import TestProgram, main
      File "/usr/lib/python2.7/unittest/main.py", line 7, in <module>
        from . import loader, runner
    ValueError: bad marshal data (tuple size out of range)
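A "bad marshal data" error from deep inside the stdlib usually points at a stale or corrupted .pyc bytecode cache (for example, one written by a different Python build), not at the test itself. A minimal cleanup sketch, assuming shell access to the affected environment; the path is illustrative:

    # Delete stale CPython bytecode caches so the interpreter
    # recompiles the affected modules from source on the next run.
    find /usr/lib/python2.7/unittest -name '*.pyc' -delete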

@chrislovecnm (Contributor)

Seen this flake before as well:

Error from server: grpc: the client connection is closing

    echo FAILED, cleaning up...
    FAILED, cleaning up...

/retest

@chrislovecnm (Contributor) commented Nov 8, 2018

The last test seems to be our Achilles' heel. Should we remove it? What value does it add?

@fejta (Contributor, Author) commented Nov 8, 2018

/retest

@fejta (Contributor, Author) commented Nov 8, 2018

I'm cursed!

@smukherj1 (Contributor)

@fejta Worry not. Your savior has arrived with #219. Approve it and thy curse shall be gone.

@k8s-ci-robot removed the lgtm label Nov 9, 2018
@chrislovecnm (Contributor)

/lgtm

@chrislovecnm merged commit 02bef97 into bazelbuild:master on Nov 10, 2018