Watching for more than 3 pods doesn't work #454

Closed
ghost opened this issue Oct 2, 2020 · 1 comment

Comments

ghost commented Oct 2, 2020

Hello.
I found a correlation between the pod count and the ksync watch status.

Here is my config:

apikey: ksync
context: "mycontext"
daemonset-namespace: kube-system
docker-root: /var/lib/docker
docker-socket: /var/run/docker.sock
log-level: info
namespace: pr-22572
output: pretty
port: 40322
syncthing-port: 8384

spec:
- name: monolith
  containername: ""
  pod: ""
  selector:
  - app=pr-22572-monolith-monolith
  - ksync-role=web-shared
  namespace: pr-22572
  localpath: mydir/monolith
  remotepath: /project
  reload: true
  localreadonly: false
  remotereadonly: false
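
As a side note, I assume the two selector entries get combined into a single label selector. Here is a small client-go sketch, just for illustration (not part of ksync), that lists the pods this spec should be matching; it assumes my local kubeconfig's current context points at the same cluster as "mycontext":

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (assumes the current context
	// points at the same cluster as the ksync config above).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Both selector entries from the spec, combined into one label selector.
	pods, err := clientset.CoreV1().Pods("pr-22572").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "app=pr-22572-monolith-monolith,ksync-role=web-shared",
	})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Println(pod.Name, pod.Status.Phase)
	}
}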

When I have 4 pods in the deployment, 3 of them are in sync, but one is not:

### Ksync status: ###
    NAME      LOCAL      REMOTE     STATUS                            POD                           CONTAINER
-----------+----------+----------+----------+-----------------------------------------------------+------------
  monolith   monolith   /project
                                   watching   pr-22572-monolith-monolith-web-7df599c968-jllbc
                                   watching   pr-22572-monolith-monolith-web-69bd84785d-lz6zl
                                   watching   pr-22572-monolith-monolith-web-7df599c968-xjgpc
                                   starting   pr-22572-monolith-monolith-web-69bd84785d-bxfmz

### Kubernetes pods status: ###
NAME                                                  READY   STATUS    RESTARTS   AGE
pr-22572-monolith-monolith-web-7df599c968-jllbc       1/1     Running   0          31m
pr-22572-monolith-monolith-web-7df599c968-xjgpc       1/1     Running   0          5m51s
pr-22572-monolith-monolith-web-69bd84785d-bxfmz       1/1     Running   0          31m

If I set the deployment to 3 pods, synchronization works fine and all 3 pods are in the watching state.
I tried to reproduce this a few times: whenever the deployment has 4 or more pods, some of them never reach the watching state.

In the ksync watch output I see a lot of:

time="2020-10-02T18:08:32+03:00" level=warning msg="Get \"http://localhost:8384/rest/events?since=11585\": dial tcp [::1]:8384: connect: connection refused"

I tried to find a setting to increase the "parallelization", but couldn't find one. Maybe this is an issue with gRPC?
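
To double-check that warning, here is a tiny standalone probe of the same endpoint (just a sketch, not part of ksync; it assumes the apikey "ksync" and syncthing-port 8384 from my config above, and that syncthing reads the key from the X-API-Key header):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	for i := 0; i < 10; i++ {
		// Same endpoint the warning complains about (syncthing-port: 8384).
		req, err := http.NewRequest("GET", "http://localhost:8384/rest/events?since=0", nil)
		if err != nil {
			panic(err)
		}
		// "ksync" is the apikey value from the config above.
		req.Header.Set("X-API-Key", "ksync")
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("attempt", i, "failed:", err) // e.g. connection refused
		} else {
			fmt.Println("attempt", i, "status:", resp.Status)
			resp.Body.Close()
		}
		time.Sleep(2 * time.Second)
	}
}

If it never gets past "connection refused" while ksync watch is running, the local syncthing instance for that spec presumably never came up.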

Thanks.

@timfallmk
Collaborator

It shouldn't be a limitation. Thoughts, @grampelberg?

ghost closed this as completed Feb 12, 2021