
Error creating Flexvolume plugin from directory #7592

Closed
zetaab opened this issue Sep 13, 2019 · 9 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@zetaab
Member

zetaab commented Sep 13, 2019

1. What kops version are you running? The command kops version will display
this information.

kops 1.15.0-alpha1

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

1.15.3

3. What cloud provider are you using?
openstack

4. What commands did you run? What is the simplest way to reproduce this issue?
created new cluster using kops create cluster

5. What happened after the commands executed?
I see a lot of spam (and it really spams a LOT) in my kube-controller-manager logs:

E0913 21:02:51.079994       1 plugins.go:746] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: invalid character '/' after top-level value
E0913 21:02:51.082568       1 driver-call.go:267] Failed to unmarshal output for command: init, output: "2019/09/13 21:02:51 Unix syslog delivery error\n", error: invalid character '/' after top-level value
W0913 21:02:51.082584       1 driver-call.go:150] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: exit status 1, output: "2019/09/13 21:02:51 Unix syslog delivery error\n"
E0913 21:02:51.082689       1 plugins.go:746] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: invalid character '/' after top-level value
E0913 21:02:51.085434       1 driver-call.go:267] Failed to unmarshal output for command: init, output: "2019/09/13 21:02:51 Unix syslog delivery error\n", error: invalid character '/' after top-level value
W0913 21:02:51.085449       1 driver-call.go:150] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: exit status 1, output: "2019/09/13 21:02:51 Unix syslog delivery error\n"
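The "invalid character '/' after top-level value" errors above come from Go's JSON decoder: the FlexVolume prober expects the driver's init output to be a JSON status object, but the uds binary prints a plain timestamped log line instead. A minimal sketch of the failure, assuming the prober unmarshals driver stdout as JSON (the helper name `parseDriverOutput` here is hypothetical, not the real driver-call.go API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseDriverOutput mimics, in simplified hypothetical form, how the
// FlexVolume prober treats a driver's "init" stdout: it must parse as JSON.
func parseDriverOutput(output string) error {
	var status interface{}
	return json.Unmarshal([]byte(output), &status)
}

func main() {
	// The uds driver emits a log line rather than JSON, so the decoder
	// reads "2019" as a complete number and then rejects the "/":
	output := "2019/09/13 21:02:51 Unix syslog delivery error\n"
	fmt.Println(parseDriverOutput(output))
	// prints: invalid character '/' after top-level value
}
```

This matches the unmarshal error in the logs exactly, which is why the fix discussed below targets the driver's logging (pod2daemon) rather than the prober.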

6. What did you expect to happen?
I expect that we do not add FlexVolume things to all distros (is this needed only in CoreOS?). I am running Debian Buster.

I think this issue is coming from PR #6874. @kellanburket do you have an idea?

/kind bug

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Sep 13, 2019
@zetaab
Member Author

zetaab commented Sep 13, 2019

cc @KashifSaadat

@mazzy89
Contributor

mazzy89 commented Sep 14, 2019

I'm trying to tackle this issue here #7545

@mazzy89
Contributor

mazzy89 commented Sep 14, 2019

At the moment, support for container-friendly distros is broken due to this directory being loaded in Calico.

justinsb added a commit to justinsb/kops that referenced this issue Sep 27, 2019
Per docs/development/instancesizes.md we don't have much cpu on a 1
core machine.  Note that this is only requests, not limits, so calico
can still burst.

At least related to issue kubernetes#7592
@justinsb
Member

The upstream issue seems to be this one: https://github.com/projectcalico/pod2daemon/issues/20

Looks like it wasn't backported to calico 3.8 (yet). We may have to live with it for 1.14.

justinsb added a commit to justinsb/kops that referenced this issue Sep 27, 2019
We want to pick up projectcalico/pod2daemon#28 , to address kubernetes#7592 .

This is not ideal, but looking at the commit changes, the only
potentially problematic change in the diff is
projectcalico/pod2daemon#21, which seems like
it shouldn't cause any skew issues.
@justinsb
Member

I think we might get away with just updating the pod2daemon image, as the only significant change in between is projectcalico/pod2daemon#21 . Proposing that in #7689, though not sure whether we should do it for 1.14.0. I'm leaning yes.

mikesplain pushed a commit to mikesplain/kops that referenced this issue Sep 27, 2019
Per docs/development/instancesizes.md we don't have much cpu on a 1
core machine.  Note that this is only requests, not limits, so calico
can still burst.

At least related to issue kubernetes#7592
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 26, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 25, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
