Conversation

@bryk (Contributor) commented Feb 22, 2016

I tested it on my in-cluster installation and it worked.

Fixes #395

@bryk (Contributor, Author) commented Feb 22, 2016

@floreks @maciaszczykm Can you review?

@codecov-io commented

Current coverage is 81.62%

Merging #409 into master will not affect coverage as of 347a7b9

@@            master    #409   diff @@
======================================
  Files           75      75       
  Stmts          615     615       
  Branches         0       0       
  Methods          0       0       
======================================
  Hit            502     502       
  Partial          0       0       
  Missed         113     113       

Review entire Coverage Diff as of 347a7b9

Powered by Codecov. Updated on successful CI builds.

@maciaszczykm (Member) commented

To test it, I've deployed kubernetes-dashboard.yaml on serve:prod and tried to access it. Unfortunately, I get the following error:

[screenshot of the error]

The interesting thing is the path /usr/local/google/home/bryk/src/...:35; as you can see, it's probably coming from your setup. However, I'm not sure if this test was accurate in this case... @floreks has a similar problem with it.

Besides that test, the code looks good.

@bryk (Contributor, Author) commented Feb 22, 2016

Yeah, kubernetes-dashboard.yaml is wrong. Add the -amd64 suffix to the image name and test again.
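
For reference, a quick way to double-check which image the manifest references (a sketch; the manifest path is an assumption, adjust it to your checkout):

# Hypothetical path to the deploy manifest in the dashboard repo.
grep 'image:' kubernetes-dashboard.yaml
# Expected output, per this thread:
#   image: gcr.io/google_containers/kubernetes-dashboard-amd64:canary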

@floreks (Member) commented Feb 22, 2016

Code LGTM. Let's wait for @maciaszczykm to finish testing.


Reviewed 6 of 6 files at r1.
Review status: all files reviewed at latest revision, all discussions resolved.



@maciaszczykm (Member) commented

Strange... It tries to connect to the wrong API:

[screenshots]

I've modified the image to gcr.io/google_containers/kubernetes-dashboard-amd64:canary.

@bryk (Contributor, Author) commented Feb 22, 2016

OK, I'll investigate this tomorrow.

@bryk (Contributor, Author) commented Feb 23, 2016

Can you try once more? Delete the old RC and recreate it with image: gcr.io/google_containers/kubernetes-dashboard-amd64:canary
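
A minimal sketch of those steps, assuming the canary RC name and namespace used later in this thread (the manifest file name is hypothetical):

# Delete the old replication controller and its pods.
kubectl delete rc kubernetes-dashboard-canary --namespace=kube-system
# Recreate it from the manifest that now points at the -amd64 image.
kubectl create -f kubernetes-dashboard-canary.yaml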

@maciaszczykm (Member) commented

Yes, but unfortunately the result is still the same:

[screenshots]

apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    app: kubernetes-dashboard-canary
    version: canary
  name: kubernetes-dashboard-canary
  namespace: kube-system
spec:
  replicas: 1
  selector:
    app: kubernetes-dashboard-canary
    version: canary
  template:
    metadata:
      labels:
        app: kubernetes-dashboard-canary
        version: canary
    spec:
      containers:
      - name: kubernetes-dashboard-canary
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:canary
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kubernetes-dashboard-canary
  name: dashboard-canary
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard-canary

@bryk (Contributor, Author) commented Feb 23, 2016

Strange, but on my cluster it works.

When the UI starts, I see logs:

[bryk@bryk dashboard (yaml-propagate)]$ kubectl logs kubernetes-dashboard-698oa --namespace=kube-system
2016/02/23 08:41:21 Starting HTTP server on port 9090
2016/02/23 08:41:21 Creating API server client for https://10.167.240.1:443
2016/02/23 08:41:21 Creating in-cluster Heapster client

What is your k8s master version? Can you give me the logs?

@maciaszczykm (Member) commented

@bryk More details:

✔ ~/workspace/dashboard [yaml-propagate|…53⚑ 1] 
10:20 $ kubectl logs kubernetes-dashboard-canary-pu9if --namespace kube-system
2016/02/23 09:00:34 Starting HTTP server on port 9090
2016/02/23 09:00:34 Creating API server client for http://localhost:8080
2016/02/23 09:00:34 Creating in-cluster Heapster client
2016/02/23 09:06:49 Incoming HTTP/1.1 GET /api/v1/replicationcontrollers request from 172.17.0.1:38771
2016/02/23 09:06:49 Getting list of all replication controllers in the cluster
2016/02/23 09:06:49 Get http://localhost:8080/api/v1/replicationcontrollers: dial tcp [::1]:8080: getsockopt: connection refused
2016/02/23 09:06:49 Outcoming response to 172.17.0.1:38771 with 500 status code
2016/02/23 09:09:07 Incoming HTTP/1.1 GET /api/v1/replicationcontrollers request from 172.17.0.1:51523
2016/02/23 09:09:07 Getting list of all replication controllers in the cluster
2016/02/23 09:09:07 Get http://localhost:8080/api/v1/replicationcontrollers: dial tcp [::1]:8080: getsockopt: connection refused
2016/02/23 09:09:07 Outcoming response to 172.17.0.1:51523 with 500 status code
2016/02/23 09:22:47 Incoming HTTP/1.1 GET /api/v1/replicationcontrollers request from 172.17.0.1:42779
2016/02/23 09:22:47 Getting list of all replication controllers in the cluster
2016/02/23 09:22:47 Get http://localhost:8080/api/v1/replicationcontrollers: dial tcp [::1]:8080: getsockopt: connection refused
2016/02/23 09:22:47 Outcoming response to 172.17.0.1:42779 with 500 status code
✔ ~/workspace/dashboard [yaml-propagate|…53⚑ 1] 
10:23 $ kubectl version
Client Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.6.915+3e04a45a95220e", GitCommit:"3e04a45a95220e3d3e23b41582433fa8a6475033", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"clean"}

@floreks Please also test it to confirm, that it isn't caused by my setup.

@floreks (Member) commented Feb 23, 2016

For me it works just fine. I have the same results as @bryk:

2016/02/23 09:51:36 Starting HTTP server on port 9090
2016/02/23 09:51:36 Creating API server client for https://10.0.0.1:443
2016/02/23 09:51:36 Creating in-cluster Heapster client

My master version is 1.2.0 (./hack/local-up-cluster.sh) from a k8s fork:

Client Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.5.398+e108a32b082f2c", GitCommit:"e108a32b082f2c91d853c898f51a02729c911b1c", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.5.398+e108a32b082f2c", GitCommit:"e108a32b082f2c91d853c898f51a02729c911b1c", GitTreeState:"clean"}

I've used the new YAML file from #412.

@maciaszczykm (Member) commented

An important detail from my side is that I'm using gulp local-up-cluster to create the cluster and gulp serve:prod to serve the dashboard.

@bryk (Contributor, Author) commented Feb 23, 2016

> An important detail from my side is that I'm using gulp local-up-cluster to create the cluster and gulp serve:prod to serve the dashboard.

Ah, yeah. The local cluster is broken, as it does not have service accounts. That's why the client falls back to defaults. Can you test this on a "real" cluster?

local-up-cluster will be fixed once kubernetes/kubernetes#21486 is released.
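
One way to confirm this failure mode is to check whether a service account token is mounted in the dashboard pod (a sketch; the pod name is taken from the logs above and will differ per deployment):

# If service accounts work, this directory holds ca.crt and token and the
# client can reach the real API server; if it is missing or empty, the
# client falls back to the default http://localhost:8080, as in the logs above.
kubectl exec kubernetes-dashboard-canary-pu9if --namespace=kube-system -- \
  ls /var/run/secrets/kubernetes.io/serviceaccount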

@maciaszczykm (Member) commented

@bryk Okay. @floreks tested it on a local cluster. LGTM.

@bryk (Contributor, Author) commented Feb 23, 2016

Awesome, thank you.

bryk added a commit that referenced this pull request Feb 23, 2016
Propagate apiclient to YAML deploy pipeline
bryk merged commit a1db480 into kubernetes:master on Feb 23, 2016
bryk deleted the yaml-propagate branch on February 23, 2016 at 10:05
anvithks pushed a commit to anvithks/k8s-dashboard that referenced this pull request Sep 27, 2021
Updated the register alert source feature to support API changes. Added Delete Alert source.