
Service not automatically registered with consul #12

Closed

ervikrant06 opened this issue Oct 5, 2018 · 12 comments

ervikrant06 commented Oct 5, 2018

Deployed Consul on Kubernetes using the Helm chart; after that, added the sync section to the values.yaml file and upgraded the Helm deployment:

+syncCatalog:
+  enabled: true
+
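
(The upgrade itself was presumably run along these lines — the release name is inferred from the pod names below, and the chart path is an assumption:)

$ helm upgrade vehement-zebu ./consul-helm -f values.yaml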
  • The following Consul members are reported after the upgrade. I believe vehement-zebu-consul-6rzsj appeared after adding the syncCatalog section, because it was not present earlier.
$ kubectl exec -it vehement-zebu-consul-server-0 consul members
Node                           Address           Status  Type    Build  Protocol  DC   Segment
vehement-zebu-consul-server-0  172.17.0.6:8301   alive   server  1.2.2  2         dc1  <all>
vehement-zebu-consul-server-1  172.17.0.9:8301   alive   server  1.2.2  2         dc1  <all>
vehement-zebu-consul-server-2  172.17.0.3:8301   alive   server  1.2.2  2         dc1  <all>
vehement-zebu-consul-6rzsj     172.17.0.14:8301  alive   client  1.2.2  2         dc1  <default>
  • Created the deployment and service using the following manifest.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
  annotations:
    "vehement-zebu-consul-server.default.svc.cluster.local/service-name": nginxservice    
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

I changed the service type to NodePort after the deployment.

  • I don't see this service getting registered in the Consul UI. Nothing is appearing in the Consul client logs.

  • However, I can register this service manually:

/ # cat agent1payload.json
{
  "Name": "nginx",
  "Address": "my-nginx.default.svc.cluster.local",
  "Port": 80,
  "Check": {
     "HTTP": "http://my-nginx.default.svc.cluster.local",
     "Interval": "5s"
  }
}
/ # curl --request PUT --data @agent1payload.json http://172.17.0.12:8500/v1/agent/service/register
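
(For what it's worth, the registration can be double-checked against the same agent — a standard Consul HTTP API call, using the agent IP from the request above:)

/ # curl http://172.17.0.12:8500/v1/agent/services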

As per my understanding, the purpose of catalog sync is to automatically register a service in Consul as soon as it's started in Kubernetes. I'm not sure why it's not getting registered automatically.

ervikrant06 (Author) commented

@mitchellh I tore down the Minikube setup and created it again, enabling CoreDNS instead of kube-dns.

/ # dig @consul-server.default.svc.cluster.local -p 8600 consul.service.consul

; <<>> DiG 9.11.2-P1 <<>> @consul-server.default.svc.cluster.local -p 8600 consul.service.consul
; (3 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19190
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 4
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;consul.service.consul.		IN	A

;; ANSWER SECTION:
consul.service.consul.	0	IN	A	172.17.0.13
consul.service.consul.	0	IN	A	172.17.0.12
consul.service.consul.	0	IN	A	172.17.0.11

;; ADDITIONAL SECTION:
consul.service.consul.	0	IN	TXT	"consul-network-segment="
consul.service.consul.	0	IN	TXT	"consul-network-segment="
consul.service.consul.	0	IN	TXT	"consul-network-segment="

;; Query time: 7 msec
;; SERVER: 172.17.0.11#8600(172.17.0.11)
;; WHEN: Fri Oct 05 17:16:50 UTC 2018
;; MSG SIZE  rcvd: 206

Created the following deployment and NodePort service.

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
  annotations:
    "consul-server.default.svc.cluster.local/service-name": nginxservice
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
service "my-nginx" created
deployment "my-nginx" created

I don't see this service getting registered in Consul. Am I missing anything here? There is no proper documentation on how to make this work, hence I feel that I am doing something silly.

$ kubectl exec -it consul-server-0 consul catalog services
consul

For syncing services from Kubernetes to Consul, is a stub zone entry required? I took the following ConfigMap from the official HashiCorp blog.

kubectl describe cm kube-dns -n kube-system
Name:         kube-dns
Namespace:    kube-system
Labels:       addonmanager.kubernetes.io/mode=EnsureExists
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","data":{"stubDomains":"{\"consul\": [\"10.103.165.156\"]}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"addonmanage...

Data
====
stubDomains:
----
{"consul": ["10.103.165.156"]}

Events:  <none>

But I don't think the stub domain is working for me. If it were configured properly, then a plain dig consul.service.consul (without pointing dig at the Consul DNS server) should have returned an answer.

/ # dig @consul-dns.default.svc.cluster.local consul.service.consul

; <<>> DiG 9.11.2-P1 <<>> @consul-dns.default.svc.cluster.local consul.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28412
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 4
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;consul.service.consul.		IN	A

;; ANSWER SECTION:
consul.service.consul.	0	IN	A	172.17.0.13
consul.service.consul.	0	IN	A	172.17.0.11
consul.service.consul.	0	IN	A	172.17.0.12

;; ADDITIONAL SECTION:
consul.service.consul.	0	IN	TXT	"consul-network-segment="
consul.service.consul.	0	IN	TXT	"consul-network-segment="
consul.service.consul.	0	IN	TXT	"consul-network-segment="

;; Query time: 5 msec
;; SERVER: 10.103.165.156#53(10.103.165.156)
;; WHEN: Fri Oct 05 17:35:21 UTC 2018
;; MSG SIZE  rcvd: 206

/ # dig consul.service.consul

; <<>> DiG 9.11.2-P1 <<>> consul.service.consul
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 21301
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;consul.service.consul.		IN	A

;; AUTHORITY SECTION:
.			30	IN	SOA	a.root-servers.net. nstld.verisign-grs.com. 2018100500 1800 900 604800 86400

;; Query time: 159 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Fri Oct 05 17:35:45 UTC 2018
;; MSG SIZE  rcvd: 125

/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ # curl consul.service.consul
curl: (6) Could not resolve host: consul.service.consul


ervikrant06 commented Oct 5, 2018

I managed to fix the stub zone configuration issue by following the official Kubernetes documentation. Now dig is working:

/ # dig consul.service.consul

; <<>> DiG 9.11.2-P1 <<>> consul.service.consul
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46198
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 4
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;consul.service.consul.		IN	A

;; ANSWER SECTION:
consul.service.consul.	0	IN	A	172.17.0.12
consul.service.consul.	0	IN	A	172.17.0.13
consul.service.consul.	0	IN	A	172.17.0.11

;; ADDITIONAL SECTION:
consul.service.consul.	0	IN	TXT	"consul-network-segment="
consul.service.consul.	0	IN	TXT	"consul-network-segment="
consul.service.consul.	0	IN	TXT	"consul-network-segment="

;; Query time: 13 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Fri Oct 05 17:56:42 UTC 2018
;; MSG SIZE  rcvd: 332
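
(For reference, the fix amounts to a kube-dns ConfigMap along these lines — a sketch based on the Kubernetes stub-domain documentation, using the consul-dns ClusterIP from the describe output above:)

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"consul": ["10.103.165.156"]}
EOF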

I recreated the service after modifying the annotation as follows:

  annotations:
    "consul.hashicorp.com/service-name": nginxservice

It's still not registered in Consul.


s3than commented Oct 9, 2018

Same issue: the DNS provider is working as expected, but the sync isn't showing up.

No error messages were received; upserts appear as expected in the catalog logs, but nothing is logged in the Consul cluster for any attempt to register the service.


adilyse commented Oct 29, 2018

@ervikrant06 Is there a catalog sync pod being created? It would have consul-sync-catalog in its name and its logs might shed some light on the service sync status. Because this is specific to the Kubernetes integration, it won't be listed when using the consul members command.
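
(Something along these lines should locate that pod and its logs — generic kubectl; the pod name is a placeholder:)

$ kubectl get pods --all-namespaces | grep sync-catalog
$ kubectl logs <consul-sync-catalog-pod-name>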

ervikrant06 (Author) commented

@adilyse Nope, this pod is not created. I tried multiple times but never managed to see that container. I believe the following is the only change we need to make in the values.yaml file:

+syncCatalog:
+  enabled: true
+


adilyse commented Oct 30, 2018

One thing that I've run into is that if a cluster doesn't have enough resources (usually CPU), some of the pods defined by the Helm chart won't get scheduled. This could explain why the catalog sync pod isn't being created for you, and without this pod, the catalog sync functionality won't work.

I'll look into updating the documentation to include information about resource requirements.
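
(A quick way to check for this — generic kubectl, nothing chart-specific — is to look for Pending pods and FailedScheduling events:)

$ kubectl get pods --all-namespaces | grep Pending
$ kubectl get events --all-namespaces | grep -i FailedScheduling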

ervikrant06 (Author) commented

Thanks. I don't think it's a resource-related issue; otherwise the pod would be stuck in the Pending state, but I don't see any trace of it. Also, the Helm chart deployment completed successfully; I believe it would have reported a failure otherwise.

Note: I am using Minikube, which is a single-node setup.


hsmade commented Nov 13, 2018

I do have the sync pod, and it logs:

2018-11-13T15:25:35.375Z [INFO ] to-k8s/sink: upsert: key=kube-system/consul-ui
2018-11-13T15:25:35.375Z [INFO ] to-k8s/sink: upsert: key=kube-system/traefik-ingress-service
2018-11-13T15:25:35.375Z [INFO ] to-k8s/sink: upsert: key=kube-system/traefik-web-ui
2018-11-13T15:25:35.375Z [INFO ] to-k8s/sink: upsert: key=kube-system/kube-dns

But still, no services show up in Consul (except the consul service itself).


adilyse commented Nov 16, 2018

@hsmade Could you provide some additional information about your setup? Specifically, what are the sync settings in the values.yaml and what are the kubernetes service types of the services you are trying to sync?

I'm curious about the Helm chart sync settings because the logs you've included start with to-k8s, which are about syncing Consul services into Kubernetes as external services. To figure out why Kubernetes services aren't showing up in Consul, the to-consul-prefaced logs would be more helpful.

As for the service types, currently we are only able to sync NodePort or LoadBalancer type services (docs), so if your services are defined as ClusterIP services, they won't show up in Consul.
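
(For reference, the relevant values.yaml settings look roughly like this — a sketch; toConsul governs the Kubernetes-to-Consul direction and toK8S the reverse, per the consul-helm chart of that era:)

syncCatalog:
  enabled: true
  # Kubernetes services -> Consul catalog
  toConsul: true
  # Consul services -> Kubernetes external services
  toK8S: true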


adilyse commented Nov 16, 2018

@ervikrant06 I just realized that you said you are experiencing this problem after doing an "upgrade of [a] helm deployment". I've run into some issues trying to upgrade a helm chart in place. If you delete the current helm chart and install it fresh with the sync enabled, are you able to get a catalog sync pod?
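
(With the Helm 2 tooling of that era, that would be roughly the following — the release name is a placeholder:)

$ helm delete --purge <release-name>
$ helm install -f values.yaml ./consul-helm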


adilyse commented Nov 16, 2018

For folks running on Minikube, we've added a new guide for running Consul on Minikube that might have some helpful information.


adilyse commented May 13, 2019

We've released several versions since this issue was last commented on, the latest being v0.8.1 last week. I'm going to go ahead and close this since it's likely out of date now. If folks run into similar issues with the latest release, please feel free to file a new issue.

adilyse closed this as completed May 13, 2019