Enable petsets in origin #9972

Merged
merged 8 commits into openshift:master on Aug 4, 2016

6 participants
@smarterclayton
Member

smarterclayton commented Jul 21, 2016

[test]

@smarterclayton
Member

smarterclayton commented Jul 21, 2016

Has a mostly green run, would like to get this in @liggitt

@deads2k
Contributor

deads2k commented Jul 21, 2016

That integration flake is killing us.

@deads2k deads2k self-assigned this Jul 21, 2016

app: mysql
annotations:
pod.alpha.kubernetes.io/initialized: "true"
pod.alpha.kubernetes.io/init-containers: '[

@deads2k
Contributor

deads2k Jul 21, 2016

SCC checks these, right?

name: ist
args:
- --defaults-file=/etc/mysql/my-galera.cnf
- --user=root

@deads2k
Contributor

deads2k Jul 21, 2016

yeah, this doesn't look promising. Does the example work with unmodified openshift?

@smarterclayton
Member

smarterclayton Jul 21, 2016

As someone with access to anyuid, yes.

@deads2k
Contributor

deads2k commented Jul 21, 2016

Update our hack script for tests to pull in this example each time so we don't lose it

Name: PetSetControllerRoleName,
},
Rules: []authorizationapi.PolicyRule{
// PetSetController.podCache.ListWatch

@deads2k
Contributor

deads2k Jul 21, 2016

This means it's not using a shared informer. Either you wired it wrong or we need an upstream fix.

@smarterclayton
Member

smarterclayton Jul 21, 2016

It didn't require this to work; I'm again just following the pattern in this file, which is agnostic of the shared informer.

Verbs: sets.NewString("list", "watch"),
Resources: sets.NewString("petsets"),
},
// PetSetController.petClient

@deads2k
Contributor

deads2k Jul 21, 2016

I'm not convinced that we should split out rules like this. We never come back to them and they're hard to read. How about one rule per resource?

@smarterclayton
Member

smarterclayton Jul 21, 2016

I was just following the pattern, happy to collapse them.
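A minimal sketch of the collapsed form, assuming the surrounding ClusterRole literal and the file's existing imports; the verb sets below are taken from the rules discussed elsewhere in this review, but the exact merged policy may differ:

    // One PolicyRule per resource, instead of one per controller client.
    Rules: []authorizationapi.PolicyRule{
        {Verbs: sets.NewString("list", "watch"), Resources: sets.NewString("petsets")},
        {Verbs: sets.NewString("get", "create", "delete", "update"), Resources: sets.NewString("pods")},
        {Verbs: sets.NewString("get", "create"), Resources: sets.NewString("persistentvolumeclaims")},
        {Verbs: sets.NewString("create", "update", "patch"), Resources: sets.NewString("events")},
    },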

},
// PetSetController.eventRecorder
{
Verbs: sets.NewString("create", "update", "patch"),

@deads2k
Contributor

deads2k Jul 21, 2016

update usually implies get

@deads2k
Contributor

deads2k Aug 4, 2016

apigroup

// This is an escalating client and we must admission check the petset
{
Verbs: sets.NewString("get", "create"), // future "delete"
Resources: sets.NewString("persistentvolumeclaims"),

@deads2k
Contributor

deads2k Jul 21, 2016

@eparis since petsets can create PVCs and you're trying to find a way to secure the binding, you need to make sure that this isn't an escalation path.

@smarterclayton
Member

smarterclayton commented Jul 21, 2016

> Update our hack script for tests to pull in this example each time so we don't lose it

I think I'm going to curate the examples list and focus on one or two that work, and point people to upstream.

@smarterclayton
Member

smarterclayton commented Jul 21, 2016

I'm going to enable these for read/write for admin/edit now, rather than later, because there may not be a later and I don't want to hold QE up. Admins can always deny them if they have a problem.

@deads2k
Contributor

deads2k commented Jul 21, 2016

> I'm going to enable these for read/write for admin/edit now, rather than later, because there may not be a later and I don't want to hold QE up. Admins can always deny them if they have a problem.

You know we're only getting to do this because he's on vacation. I think you should now merge something crazy of mine :)

@liggitt
Contributor

liggitt commented Jul 21, 2016

I hate you both

@smarterclayton
Member

smarterclayton commented Jul 21, 2016

I don't anticipate pet set problems, and if we get the "insecure role" I'll move them there.

@smarterclayton
Member

smarterclayton commented Jul 22, 2016

Trying the examples out is mostly a DNS problem at this point (the upstream DNS code is opaquely tested, so I'm not even sure what format they are expecting, or whether they don't even work on CentOS for some reason).

@smarterclayton
Member

smarterclayton commented Jul 22, 2016

DNS is broken in Kubernetes (ish), sorting out in 29420

@smarterclayton
Member

smarterclayton commented Jul 26, 2016

Good news! DNS was broken on our side. Fixing. And adding more tests.

@deads2k
Contributor

deads2k commented Jul 26, 2016

@smarterclayton What are the precise rules on hostname annotations? If you enable them here, it may have an impact on the service serving cert signer.

@liggitt There were issues with the first upstream implementations of this hostname thing. Is it good now?

@smarterclayton
Member

smarterclayton commented Jul 26, 2016

The rules are that they have to be set on the pod. So create/update on pods allows you to add public DNS names under the controlling service.
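A hedged illustration of "set on the pod": in the 1.3-era alpha/beta API these were pod annotations. The keys below are, to the best of my knowledge, the upstream annotation names of that period; the values are hypothetical:

    // Hypothetical pet pod, resolvable as mysql-0.galera.<namespace>.svc...
    pod.Annotations = map[string]string{
        "pod.beta.kubernetes.io/hostname":  "mysql-0",
        "pod.beta.kubernetes.io/subdomain": "galera",
    }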

@smarterclayton
Member

smarterclayton commented Jul 26, 2016

I'm not aware of any security issues w.r.t. hostname annotation upstream in the current design.

@smarterclayton
Member

smarterclayton commented Jul 27, 2016

Hells yeah! going to make status work real quick.

> continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/6957/)

@smarterclayton
Member

smarterclayton commented Jul 27, 2016

Ok, petset is now no longer in an embarrassing state in Origin. DNS is now conformant with Kube (Tim's tests do not test what he thinks they test). The galera example worked for me with no modification except adding host path volumes to my system, and oc status is not terrible:

oc status
In project default on server https://10.1.2.2:8443

svc/galera (headless):3306
  petset/mysql manages erkules/galera:basic created 5 days ago - 3 pods

svc/kubernetes - 172.30.0.1 ports 443, 53->8053, 53->8053

I made a structural change to the graph to prepare for more than one controller (replaced ManagedByRC with ManagedByControllers, cleaned up some abstractions slightly) although I did not add petset vs RC fighting. I'll open an issue to replace our complicated "this manages this" with "this is covered by this controller" as the idle PR covers.

@smarterclayton
Member

smarterclayton commented Jul 30, 2016

[test]

@smarterclayton
Member

smarterclayton commented Aug 1, 2016

Registry blip [test]

--- FAIL: TestTriggers_imageChange_nonAutomatic (38.30s)
    deploy_trigger_test.go:262: Waiting for image stream mapping to be reflected in the image stream status...
    deploy_trigger_test.go:275: Still waiting for latest tag status update on imagestream "test-image-stream"
    deploy_trigger_test.go:297: Waiting for the initial deploymentconfig update in response to the imagestream update
    deploy_trigger_test.go:320: timed out waiting for the image update to happen
FAIL

@smarterclayton
Member

smarterclayton commented Aug 1, 2016

[test] secrets flake

@smarterclayton
Member

smarterclayton commented Aug 2, 2016

@liggitt or @ncdc or @pmorie please review.

Most significant changes are bringing our DNS in line with Kube - basically, during 1.3 Kube "fixed" their DNS to be consistent, which was different from how our DNS worked. For us to be compatible with Kube we need to "fix" our DNS to be equivalent. This may be impactful to clients using DNS, but we really have no options.

Changes to DNS:

  1. dig SRV <headless|clustered> no longer returns a list of all available ports (as DNS records like _http._tcp.NAME) as records like _<portname>._<protocol>.<headless|clustered>, but returns:
    1. clustered returns 1 <hashedip>.NAME pointing to the clusterIP
    2. headless returns 1-N <endpoints1-N>.NAME pointing to each backend. The endpoint name is either hostname (on the endpoint record) or loaded from the annotation
  2. dig SRV _<portname>._<protocol>.<headless|clustered> returns either 0 records (if no port with port name or protocol is defined), or a record like 1 above but with the port prefix and port info
  3. dig <endpoint>.<headless|clustered> should return 0 records if that endpoint name doesn't exist, or the IP of the endpoint (cluster IP or individual endpoint IP if headless).
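A hedged illustration of the new lookup shapes using Go's resolver; the service, namespace, and port names are hypothetical, and this assumes it runs in a pod whose resolver points at the cluster DNS:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Change 2 above: SRV subquery for a named port/protocol on a
        // headless service ("mysql"/"tcp" on service "galera" is made up).
        _, srvs, err := net.LookupSRV("mysql", "tcp", "galera.default.svc.cluster.local")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        // Change 1.2 above: for a headless service, expect one record per
        // endpoint, each target named <endpoint>.galera.default.svc...
        for _, srv := range srvs {
            fmt.Printf("%s:%d\n", srv.Target, srv.Port)
        }
    }
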
// AddAllManagedByRCPodEdges calls AddManagedByRCPodEdges for every ServiceNode in the graph
func AddAllManagedByRCPodEdges(g osgraph.MutableUniqueGraph) {
// AddAllManagedByControllerPodEdges calls AddManagedByControllerPodEdges for every node in the graph
// TODO: should do this through an interface (selects pods)

@deads2k
Contributor

deads2k Aug 4, 2016

This won't do long term. Things outside the legacy API group manage pods. In fact, if we kept the API clean, this would be broken now since petsets aren't in the legacy API group.

@smarterclayton
Member

smarterclayton Aug 4, 2016

By interface I meant that graph nodes should expose an "I match these pods" interface.

ManagedByRCEdgeKind = "ManagedByRC"
// ManagedByControllerEdgeKind goes from Pod to ReplicationController when the Pod satisfies the ReplicationController's label selector
// TODO: rename to ManagedByController, make generic to controller ref
ManagedByControllerEdgeKind = "ManagedByRC"

@deads2k
Contributor

deads2k Aug 4, 2016

Let's change the constant now.
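A minimal sketch of the rename; the new string value is an assumption taken from the TODO in the diff above, not necessarily the merged change:

    const (
        // ManagedByControllerEdgeKind goes from a Pod to the controller
        // (RC, PetSet, ...) whose label selector the Pod satisfies.
        ManagedByControllerEdgeKind = "ManagedByController"
    )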

type PetSetSpecNode struct {
osgraph.Node
*kapps.PetSetSpec

@deads2k
Contributor

deads2k Aug 4, 2016

non-anonymous please.
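A sketch of the non-anonymous form being asked for; the field name Spec is a hypothetical choice, not the merged code:

    type PetSetSpecNode struct {
        osgraph.Node
        // Named field rather than an anonymous *kapps.PetSetSpec embed.
        Spec *kapps.PetSetSpec
    }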

@smarterclayton
Member

smarterclayton Aug 4, 2016

Just following bad patterns set by the person before me. Will fix most of them.

@deads2k
Contributor

deads2k commented Aug 4, 2016

This pull got a lot bigger while my back was turned.

},
// PetSetController.podClient
{
Verbs: sets.NewString("get", "create", "delete", "update"),

@deads2k
Contributor

deads2k Aug 4, 2016

api group
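A hedged sketch of what "api group" asks for, assuming origin's PolicyRule of this era carries an APIGroups field and that kapi aliases the upstream api package; the exact value in the merged fix is not shown in this thread:

    {
        // Pods live in the legacy/core API group, whose name is "".
        APIGroups: []string{kapi.GroupName},
        Verbs:     sets.NewString("get", "create", "delete", "update"),
        Resources: sets.NewString("pods"),
    },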

// PetSetController.petClient (PVC)
// This is an escalating client and we must admission check the petset
{
Verbs: sets.NewString("get", "create"), // future "delete"

@deads2k
Contributor

deads2k Aug 4, 2016

apigroup

kubernetes.io/created-by: |
{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"PetSet","namespace":"default","name":"mysql","uid":"3900c985-4f5b-11e6-b8a1-080027242396","apiVersion":"apps","resourceVersion":"6790"}}
openshift.io/scc: anyuid
pod.alpha.kubernetes.io/init-container-statuses: '[{"name":"install","state":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":"2016-07-27T02:41:42Z","finishedAt":"2016-07-27T02:41:42Z","containerID":"docker://2538c65f65557955c02745ef4021181cf322c8dc0db62144dd1e1f8ea9f7fa54"}},"lastState":{},"ready":true,"restartCount":0,"image":"gcr.io/google_containers/galera-install:0.1","imageID":"docker://sha256:56ef857005d0ce479f2db0e4ee0ece05e0766ebfa7e79e27e1513915262a18ec","containerID":"docker://2538c65f65557955c02745ef4021181cf322c8dc0db62144dd1e1f8ea9f7fa54"},{"name":"bootstrap","state":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":"2016-07-27T02:41:44Z","finishedAt":"2016-07-27T02:41:45Z","containerID":"docker://4df7188d37033c182e675d45179941766bd1e6a013469038f43fa3fecc2cc06d"}},"lastState":{},"ready":true,"restartCount":0,"image":"debian:jessie","imageID":"docker://sha256:1b088884749bd93867ddb48ff404d4bbff09a17af8d95bc863efa5d133f87b78","containerID":"docker://4df7188d37033c182e675d45179941766bd1e6a013469038f43fa3fecc2cc06d"}]'

@deads2k
Contributor

deads2k Aug 4, 2016

seems weird that our test data would have statuses.

@smarterclayton
Member

smarterclayton Aug 4, 2016

In this case the describer uses it and since there is no kubelet running we don't get the "official" output without it.

@deads2k
Contributor

deads2k commented Aug 4, 2016

minor comments, didn't review the dns bits, lgtm otherwise.

smarterclayton added some commits Jun 17, 2016

Show PetSets in oc status
Make a minor change to prepare for generic controller references (since
PetSets and RCs could conflict over pods).

@openshift-bot

openshift-bot commented Aug 4, 2016

Evaluated for origin test up to cc58509

@openshift-bot

openshift-bot commented Aug 4, 2016

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/7527/)

}
}
matchHostname := len(segments) > 3 && !hasAllPrefixedSegments(segments[3:4], "_")

@DirectXMan12
Contributor

DirectXMan12 Aug 4, 2016

The godoc at the top of this function should be updated to clarify that we now either return a hash of the IP or a hostname value. Additionally, it would be very useful to people looking at/reviewing this code in the future if there were comments added before each "branch point"/conditional indicating which subquery they were matching (for instance, AFAICT, this one might read "matchHostname indicates we are matching ...endpoints, and not _endpoints...svc")

@smarterclayton
Member

smarterclayton Aug 4, 2016

Will do a follow-up PR with comments - saw this after I hit the big red. Keep commenting please.

@DirectXMan12
Contributor

DirectXMan12 Aug 5, 2016

Also: nit: I think this would be clearer as !strings.HasPrefix(segments[3], "_") (the use of the single-element slice here seems a bit odd to me...)
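A hedged sketch applying both suggestions, the clarifying comment and the HasPrefix form; the comment wording is illustrative, not the follow-up PR that actually landed (assumes the file imports "strings"):

    // matchHostname: the fourth segment names a specific endpoint,
    // <hostname>.<service>.<namespace>.svc..., rather than a port/protocol
    // subquery such as _<portname>._<protocol>.<service>.<namespace>.svc...
    matchHostname := len(segments) > 3 && !strings.HasPrefix(segments[3], "_")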

@smarterclayton
Member

smarterclayton commented Aug 4, 2016

[merge] talked through DNS stuff with Jordan and will get more review later.

@openshift-bot

openshift-bot commented Aug 4, 2016

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/7527/) (Image: devenv-rhel7_4758)

@openshift-bot

openshift-bot commented Aug 4, 2016

Evaluated for origin merge up to cc58509

@openshift-bot openshift-bot merged commit e52a190 into openshift:master Aug 4, 2016

3 checks passed

continuous-integration/openshift-jenkins/merge Passed
continuous-integration/openshift-jenkins/test Passed
continuous-integration/travis-ci/pr The Travis CI build passed
@@ -187,20 +187,29 @@ func TestDNS(t *testing.T) {
expect: []*net.IP{&headlessIP},
},
{ // specific port of a headless service
dnsQuestionName: "unknown-port-2345.e1.headless.default.svc.cluster.local.",
dnsQuestionName: "_http._tcp.headless.default.svc.cluster.local.",

@DirectXMan12
Contributor

DirectXMan12 Aug 5, 2016

It looks like none of these exercise the _<protocol>.<svc>.<ns>.svc case (wildcard port match), so there should probably be such a test here. Furthermore, it looks like none of these test the .endpoints.cluster.local queries, and that seems like it would be valuable to test here as well.
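A hedged sketch of the extra table entries being suggested, following the shape of the existing cases; the query names and expected IPs are assumptions, not tests that exist in this PR:

    { // wildcard port match: protocol-only SRV subquery
        dnsQuestionName: "_tcp.headless.default.svc.cluster.local.",
        expect:          []*net.IP{&headlessIP},
    },
    { // endpoints-zone query for a headless service
        dnsQuestionName: "headless.default.endpoints.cluster.local.",
        expect:          []*net.IP{&headlessIP},
    },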

@smarterclayton
Member

smarterclayton Aug 5, 2016

Those are tested in the extended DNS test and the test-cmd DNS test.

@DirectXMan12
Contributor

DirectXMan12 commented Aug 5, 2016

Overall, I think the DNS stuff in general looks ok, but it seems like there are a couple of paths missing from the integration tests, and the DNS code really needs better commenting. The flow of the DNS code is somewhat convoluted (not particularly because of this PR, just in general), so I think comments to the effect of // now, we can only have x.y.z and a.b.z queries, so try those would be fairly useful in making this easier to read.
