
kubectl / kubernetes not 100% compatible with YAML 1.2 #34146

Open
chrishiestand opened this issue Oct 5, 2016 · 35 comments

@chrishiestand chrishiestand commented Oct 5, 2016

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): No

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): is:issue yaml in:title version


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report

Kubernetes version (use kubectl version): v1.3.7 (seems to be the same on master)

What happened: I tried to apply a YAML file with kubectl --context=minikube apply -f ./etc/deployment.yaml and got a validation error. However, the YAML is valid 1.2.

The error is:
error validating "./etc/deployment.yaml": error validating data: expected type string, for field spec.template.spec.containers[2].command[6], got bool; if you choose to ignore these errors, turn validation off with --validate=false

What you expected to happen: kubectl should apply the deployment file

How to reproduce it (as minimally and precisely as possible):

Try to apply this object file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      name: redis
  template:
    metadata:
      labels:
        name: redis
    spec:
      containers:
        -
          name: redis
          image: 'redis:3.2'
          command:
            - redis-server
            - '--appendonly'
            - no

Anything else we need to know:

The "no" is the source of confusion for the program. In previous YAML versions this should be translated to be a boolean value, but in the current, 1.2, it should be a string "no". kubectl is using the older standard and translating this as a bool.

It is not as simple for me as quoting the "no" value like 'no': this object file is the output of another library, which uses the 1.2 standard. We should expect that people will use various libraries to produce and consume YAML.

I believe the problem can be traced from this https://github.com/ghodss/yaml to this source library here: https://github.com/cloudfoundry-incubator/candiedyaml/blob/master/resolver.go#L42

Should kubernetes only support YAML 1.2? It seems to me that we'd want to deprecate support for previous YAML versions, since YAML is not backwards compatible. YAML spec versions are at http://yaml.org/. I have noticed that several online YAML validators also seem to get this "wrong".

@jingxu97 jingxu97 commented Oct 8, 2016

@saad-ali, @caesarxuchao is this related to something we discovered in the kubectl version issue?

@caesarxuchao caesarxuchao commented Oct 10, 2016

@kubernetes/kubectl

@ghodss ghodss commented Oct 10, 2016

As you've pointed out, Kubernetes uses ghodss/yaml which is just a wrapper for candiedyaml. candiedyaml's README states that it's a "YAML 1.1 parser with support for YAML 1.2 features" ... I would recommend filing an issue with candiedyaml if you think that the behavior should be different.

Alternatively, ghodss/yaml used to be a wrapper around go-yaml, but we switched it to candiedyaml due to licensing concerns. It seems, however, that those concerns were recently addressed and go-yaml changed its license, so if you find that go-yaml has better behavior we could consider switching back.

@fraenkel fraenkel commented Oct 11, 2016

go-yaml has the same issues, but I would still recommend switching back for compatibility reasons. Kube is using an older version of your code that relies on go-yaml.

@pwittrock pwittrock commented Oct 18, 2016

@fabianofranz Would you be able to own this or find it an owner?

@fabianofranz fabianofranz commented Oct 19, 2016

@pwittrock on it. ;)

Before anything else I'll submit a PR to ghodss/yaml to switch back to go-yaml since it seems more reliable and it's now licensed as Apache 2.0 (go-yaml/yaml#160 (comment) and go-yaml/yaml@e4d366f).

@pwittrock pwittrock commented Oct 19, 2016

SGTM.

@fabianofranz fabianofranz commented Oct 24, 2016

One of the biggest changes in the YAML 1.2 spec is that it became more restrictive about what is accepted as a boolean: essentially only true and false are recognized (on/off and yes/no are no longer booleans, just regular strings).

I opened a go-yaml issue about this, but I'm not sure a fix is going to happen, since go-yaml aims to "support most of YAML 1.1 and 1.2"; they will most likely want to keep supporting the union of both specs rather than become more restrictive and leave 1.1 out.

If we really want to fix this, I see two options: we either a) fully move to 1.2, everywhere (and find a library that implements it according to the spec); or b) propose extending go-yaml with more flexibility to choose how the differences between 1.1 and 1.2 are handled. In case of b), for example, an UnmarshallingStrategy structure could be passed to Unmarshal, carrying flags that affect the unmarshalling behavior case by case; StrictYAML12Booleans, say, would make boolean handling strictly follow the 1.2 spec. In kube that would be set globally, meaning booleans would always be strict 1.2, while keeping the combined 1.1/1.2 support for everything else for now.

Thoughts?

@chrishiestand chrishiestand commented Oct 24, 2016

In my limited experience it's usually better to fully support standards for maximum interoperability and fewer user surprises. From a distance, I'd guess that supporting a disjointed set of standards adds confusion and technical debt to a community. And piling onto standards confusion just because other projects do it doesn't seem right. But this may not reflect the actual reasons for the current disjointed support.

There might be another option because YAML 1.2 and 1.1 allow version directives like so:

%YAML 1.2
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      name: redis
  template:
    metadata:
      labels:
        name: redis
    spec:
      containers:
        -
          name: redis
          image: 'redis:3.2'
          command:
            - redis-server
            - '--appendonly'
            - no

Currently, kubectl apply -f redis-above.yaml returns: yaml: found incompatible YAML document

The downside of requiring directives for boolean handling is that we'd still need to find a compatible library, and I generally don't see YAML libraries including directives in their output, so we'd probably need to add version-directive printing support to libraries/tools that are missing the option. The upside is that it is 100% standards compliant.
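A hypothetical post-processing helper for adding the directive to already-serialized output (this is not an existing kubectl or library feature, just a sketch of what "version directive printing support" could mean) might look like:

```python
def add_version_directive(document: str, version: str = "1.2") -> str:
    """Prepend a %YAML directive, plus the required '---' document-start
    marker, to a serialized YAML document that lacks one."""
    if document.startswith("%YAML"):
        return document  # already carries a directive, leave it alone
    # A directive must be followed by an explicit document-start marker,
    # so drop an existing leading '---' to avoid emitting it twice.
    body = document[4:] if document.startswith("---\n") else document
    return f"%YAML {version}\n---\n{body}"
```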

@fraenkel fraenkel commented Oct 24, 2016

If go-yaml exposes an Encoder/Decoder, one could set an option to enable "strict" YAML 1.2 parsing. Booleans are just one change in YAML 1.2, there are others.

There is also a separate open issue about allowing longer lines to be emitted, which could be addressed as well.

@kargakis kargakis commented Jun 24, 2017

/sig cli
/sig api-machinery

@fejta-bot fejta-bot commented Dec 30, 2017

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@fejta-bot fejta-bot commented Jan 29, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@fejta-bot fejta-bot commented Feb 28, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot fejta-bot commented Dec 20, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@fejta-bot fejta-bot commented Jan 19, 2019

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot k8s-ci-robot commented Jan 19, 2019

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@wyardley wyardley commented Aug 29, 2019

/remove-lifecycle rotten

thomas-riccardi added a commit to Deepomatic/dmake that referenced this issue Sep 4, 2019
`ruamel.yaml` uses version 1.2 by default, and emits version directive
when forced in 1.1 (`%YAML 1.1` header).

`kubectl` doesn't support explicit `%YAML 1.1` header, nor explicit or
even implicit 1.2, at least regarding the boolean literals issue, see
kubernetes/kubernetes#34146

We now force 1.1 on `ruamel.yaml` and disable emitting the `%YAML 1.1`
version directive.

This fixes `'no'` as container env value in the kubernetes manifests.
@thomas-riccardi thomas-riccardi commented Sep 4, 2019

/reopen

We got bit by this issue with kubectl 1.15.3 and a YAML dumper that defaults to YAML 1.2 (ruamel): forcing it to YAML 1.1 has the side effect of also emitting the %YAML 1.1 version directive, which breaks kubectl:

$ cat test.yaml
%YAML 1.1
---
$ kubectl apply --dry-run -f test.yaml
error: error parsing test.yaml: error converting YAML to JSON: yaml: line 1: did not find expected <document start>

(and with %YAML 1.2:

error: error parsing test.yaml: error converting YAML to JSON: yaml: found incompatible YAML document

)

The only way is to generate a YAML 1.1 file with no version directive, which should not be necessary:
https://yaml.org/spec/1.1/#id895631

A version 1.1 YAML processor should accept documents with an explicit “%YAML 1.1” directive
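As a stopgap, a small pre-processing helper (hypothetical, not part of any library) can strip the directive before the document reaches kubectl:

```python
def strip_yaml_directive(document: str) -> str:
    """Remove a leading %YAML directive line, keeping the '---'
    document-start marker that follows it."""
    lines = document.splitlines(keepends=True)
    if lines and lines[0].startswith("%YAML"):
        return "".join(lines[1:])
    return document
```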

@k8s-ci-robot k8s-ci-robot commented Sep 4, 2019

@thomas-riccardi: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

We got bit by this issue with kubectl 1.15.3 and a YAML dumper that defaults to YAML 1.2 (ruamel): forcing it on YAML 1.1 has a side effect of also emitting the %YAML 1.1 version directive, which breaks kubectl:

$ cat test.yaml
%YAML 1.1
---
$ kubectl apply --dry-run -f test.yaml
error: error parsing test.yaml: error converting YAML to JSON: yaml: line 1: did not find expected <document start>

(and with %YAML 1.2:

error: error parsing test.yaml: error converting YAML to JSON: yaml: found incompatible YAML document

)

The only way is to generate a YAML 1.1 file with no version directive, which should not be necessary:
https://yaml.org/spec/1.1/#id895631

A version 1.1 YAML processor should accept documents with an explicit “%YAML 1.1” directive


@chrishiestand chrishiestand commented Sep 4, 2019

/reopen

@k8s-ci-robot k8s-ci-robot reopened this Sep 4, 2019
@k8s-ci-robot k8s-ci-robot commented Sep 4, 2019

@chrishiestand: Reopened this issue.

In response to this:

/reopen


@fejta-bot fejta-bot commented Dec 3, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@wyardley wyardley commented Dec 3, 2019

/remove-lifecycle stale

@Moser-ss Moser-ss commented Dec 30, 2019

Hey, I also have issues with these differences between YAML parsers. I am using a parser that follows the 1.2 convention, where a value like 0250097554_37 is a string, but when I try to apply the YAML file I receive the following error message:
cannot convert int64 to string

because kubectl converts the value 0250097554_37 to 25009755437:
{\"name\":\"TAG\",\"value\":25009755437}

And the workaround of adding a %YAML 1.2 version directive doesn't work.
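The underlying difference can be sketched with simplified resolution rules (a rough approximation of the lenient 1.1-style behavior, not go-yaml's exact regexes): 1.1-era resolvers accept "_" separators inside numbers, while the 1.2 core schema does not.

```python
import re

# Simplified, illustrative patterns only. A lenient YAML 1.1-style resolver
# treats digits with "_" separators as an integer; the 1.2 core schema has
# no underscore rule, so such a scalar stays a string.
YAML11_STYLE_INT = re.compile(r"^[-+]?[0-9][0-9_]*$")
YAML12_CORE_INT = re.compile(r"^[-+]?[0-9]+$")

def resolve_scalar(scalar: str, spec: str = "1.2"):
    """Return an int when the scalar matches the spec's integer rule,
    otherwise leave it as the original string."""
    pattern = YAML11_STYLE_INT if spec == "1.1" else YAML12_CORE_INT
    if pattern.match(scalar):
        # Underscores are separators, so drop them before converting.
        return int(scalar.replace("_", ""))
    return scalar

# resolve_scalar("0250097554_37", "1.1") -> 25009755437 (the reported behavior)
# resolve_scalar("0250097554_37", "1.2") -> "0250097554_37"
```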
