
No kind Job is registered for version batch/v1 #6894

Closed
strainovic opened this issue Nov 6, 2019 · 33 comments

@strainovic

After upgrading Helm to 2.16.0, I'm unable to install/upgrade charts that contain a Job kind.

Error: no kind "Job" is registered for version "batch/v1" in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"

Output of helm version:

Client: &version.Version{SemVer:"v2.16.0", GitCommit:"e13bc94621d4ef666270cfbe734aaabf342a49bb", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.0", GitCommit:"e13bc94621d4ef666270cfbe734aaabf342a49bb", GitTreeState:"clean"}

Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.7", GitCommit:"8fca2ec50a6133511b771a11559e24191b1aa2b4", GitTreeState:"clean", BuildDate:"2019-09-18T14:47:22Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.8", GitCommit:"211047e9a1922595eaa3a1127ed365e9299a6c23", GitTreeState:"clean", BuildDate:"2019-10-17T17:18:09Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.):

On-prem vmware CDK

@den-is

den-is commented Nov 6, 2019

same here with helm 2.16.0

helm upgrade --install mon --namespace mon  -f values.yaml  stable/prometheus-operator
Release "mon" does not exist. Installing it now.
Error: no kind "Job" is registered for version "batch/v1" in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"

clean bare-metal setup
kubernetes v1.16.2

@thomastaylor312
Contributor

Thanks for reporting this. Starting to look into it.

As a note, this may affect helm 3 as well

@thomastaylor312 thomastaylor312 added the bug Categorizes issue or PR as related to a bug. label Nov 6, 2019
@bacongobbler
Member

A community member on the #helm-users slack channel mentioned downgrading to Helm 2.15.2 appears to work around this issue for the time being. This may be due to a breaking change in Kubernetes 1.16's client libraries that affected how we perform schema lookups.

@thomastaylor312
Contributor

OK, I can't reproduce this in Helm 3, which is a good thing. Please let me know if anyone sees it in the RC.

@thomastaylor312 thomastaylor312 self-assigned this Nov 6, 2019
@thomastaylor312
Contributor

So, as far as I can tell, we were using the legacy scheme (i.e. k8s.io/kubernetes/pkg/api/legacyscheme) in a few places rather than the normal scheme. Based on what I see in that code, legacyscheme.Scheme is just an empty scheme with nothing registered in it. So I switched it out for the normal scheme and am testing now.
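
For anyone curious about the difference, here's a minimal sketch (this is not Helm's actual code path; it uses an empty runtime.Scheme to stand in for an unpopulated legacyscheme.Scheme, and client-go's pre-registered scheme as the "normal" one) showing why the lookup fails on one and succeeds on the other:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/serializer"
	clientscheme "k8s.io/client-go/kubernetes/scheme"
)

func main() {
	manifest := []byte(`apiVersion: batch/v1
kind: Job
metadata:
  name: example`)

	// Empty scheme, standing in for an unpopulated legacyscheme.Scheme:
	// no types are registered, so decoding a batch/v1 Job fails with the
	// same class of error as reported above:
	// no kind "Job" is registered for version "batch/v1" in scheme ...
	empty := runtime.NewScheme()
	_, _, err := serializer.NewCodecFactory(empty).UniversalDeserializer().Decode(manifest, nil, nil)
	fmt.Println("empty scheme:", err)

	// client-go's scheme registers every built-in API group at init time,
	// so the same manifest decodes cleanly.
	obj, gvk, err := clientscheme.Codecs.UniversalDeserializer().Decode(manifest, nil, nil)
	if err != nil {
		fmt.Println("client-go scheme:", err)
		return
	}
	fmt.Printf("client-go scheme: decoded %v as %T\n", gvk, obj)
}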

@thomastaylor312
Contributor

Is anyone who is having this issue willing to try out #6897? I especially want to make sure that changing the scheme we were referencing doesn't end up causing other issues.

@thomastaylor312 thomastaylor312 added this to the 2.16.1 milestone Nov 6, 2019
@sagikazarmark
Contributor

sagikazarmark commented Nov 6, 2019

@thomastaylor312 Is there an easy way to build a Tiller docker image for the proposed fix? I'd be happy to try it.

@thomastaylor312
Contributor

thomastaylor312 commented Nov 6, 2019

Yep, if you have Go set up, you can check out my branch and run make bootstrap build. You can then run ./bin/tiller, and it will connect to whatever cluster your kubeconfig points at. Then export HELM_HOST=localhost:44134 in another terminal and use ./bin/helm to run your install/upgrade. That way you can bypass building a Docker image.

@sagikazarmark
Contributor

Actually, I kinda wanted to use the docker image to test an entire installation process (which accepts a docker image). But I will just test it with a single chart we had issues with.

@thomastaylor312
Contributor

Totally willing to push a binary and Docker image if needed, because you'll need both (as it will have a version incompatibility with your local helm binary).

@sagikazarmark
Contributor

Oh, forgot about the version incompatibility thing. Yeah, that stuff won't work then, because I actually wanted to test it using the Terraform provider.

@sagikazarmark
Contributor

It seems to be working with banzaicloud-stable/cadence.

@thomastaylor312
Contributor

@sagikazarmark Thank you for testing! I am going to leave the issue and PR open until my morning so people across time zones can give the PR a whirl as well. Really trying to avoid any other regressions from hidden scheme stuff.

@thomastaylor312
Contributor

Closed by #6897. We will be cutting a patch release soon.

@wtx626

wtx626 commented Nov 8, 2019

Error: no kind "Job" is registered for version "batch/v1" in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"
helm version:

Client: &version.Version{SemVer:"v2.16.0",
GitCommit:"e13bc94621d4ef666270cfbe734aaabf342a49bb", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.0", GitCommit:"e13bc94621d4ef666270cfbe734aaabf342a49bb", GitTreeState:"clean"}

kubectl version

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

@thomastaylor312
Contributor

@wtx626 This issue has been resolved. It will be released with 2.16.1

@bekiaris

bekiaris commented Nov 8, 2019

Is there any estimate of when it will be released?

@bacongobbler
Member

bacongobbler commented Nov 8, 2019

Any time from now until end of next week. For the time being, please continue to use Helm 2.14.3 or 2.15.2 to work around this issue.

@bekiaris

bekiaris commented Nov 8, 2019

Thanks for your quick answer!!!

@jdfalk

jdfalk commented Nov 8, 2019

For those of us for whom using those versions of Helm isn't an option (because it's embedded in other applications), can you give us a better time frame for when this will be released?

@grebois

grebois commented Nov 9, 2019

@thomastaylor312 All Istio charts are failing because of this issue. Would you be so kind as to release a patch soon? Many thanks!

@wtx626

wtx626 commented Nov 11, 2019

@thomastaylor312 where is the new release 2.16.1?

@thomastaylor312
Contributor

2.16.1 came out yesterday @wtx626!

@masterkain

masterkain commented Nov 14, 2019

2.16.1 on both client and server does not seem to fix the issue. I still cannot install https://github.com/masterkain/errbit-helm properly (it leaves a dangling running Job).
Then, even after a delete --purge of the release, I get this on operator reinstall:

ts=2019-11-14T07:38:30.601226529Z caller=release.go:216 component=release error="Chart release failed: errbit: &status.statusError{Code:2, Message:\"jobs.batch \\\"errbit-errbit-helm-bootstrap\\\" already exists\", Details:[]*any.Any(nil), XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}"
ts=2019-11-14T07:38:30.628302488Z caller=release.go:220 component=release info="Deleting failed release: [errbit]"

It looks like it can't remove the Job.

And for some reason the operator says this about my Mattermost installation:

component=operator warning="release has been rolled back, skipping" resource=prod:helmrelease/mattermost

Nobody has touched that deploy in quite some time; it only started saying this after the upgrade. helm list shows:

mattermost 33 Wed Nov 13 17:37:19 2019 DEPLOYED mattermost-team-edition-3.8.0 5.13.2 prod

kinda tired of random breakages :(

@bacongobbler
Member

bacongobbler commented Nov 14, 2019

That looks like a separate issue from what's described above. The errors above say the Job kind was not registered with the API scheme; your error indicates the Job was found but already exists.

Can you open a new ticket? Thanks!

@miramar-labs

miramar-labs commented Nov 16, 2019

I upgraded to 2.16.1 and still get the same error:
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.0", GitCommit:"e13bc94621d4ef666270cfbe734aaabf342a49bb", GitTreeState:"clean"}

fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["helm", "install", "jupyterhub/jupyterhub", "--version=0.9-1743b85", "--name=jupyterhub", "--namespace=cluster", "--timeout=1200", "-f", "deploy/jupyterhub-config.yaml"], "delta": "0:00:10.924091", "end": "2019-11-16 14:09:01.773255", "failed": true, "msg": "non-zero return code", "rc": 1, "start": "2019-11-16 14:08:50.849164", "stderr": "Error: no kind "Job" is registered for version "batch/v1" in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"", "stderr_lines": ["Error: no kind "Job" is registered for version "batch/v1" in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30""], "stdout": "", "stdout_lines": []}

UPDATE: my Helm upgrade wasn't complete (the server side was still on 2.16.0, as shown above).
Running helm init --upgrade, which upgrades Tiller in the cluster, fixed that, and now things are OK again.

@ekristen

This is still broken.

@grebois

grebois commented Dec 11, 2019

But why?

[screenshot]

Sorry, yes, I'm getting the same errors. This is still broken and should be reopened, @thomastaylor312.

@bacongobbler
Member

Can you please demonstrate the issue you are experiencing more clearly? Perhaps opening a new ticket would be helpful. For example, your issue may result in the same error message, but it may be caused by something other than what #6897 resolved (which was due to a breaking change in the Kubernetes client).

A clearer explanation would help us diagnose your issue. Thanks.

@ekristen

I ran into this on a 1.16 cluster using 2.16.2 and the latest Minio chart. I can try on a fresh cluster just to verify.

@fradeve

fradeve commented Dec 31, 2019

I have been hit by this. I am not sure if this helps, but in my case I can confirm that upgrading to 2.16.1 fixed the issue.

We are on Kubernetes 1.12.10-gke.17, and to get around this bug I had to upgrade both Helm and Tiller to 2.16.1 (Arch Linux's AUR doesn't have a package for 2.16.1, so I had to compile it by hand).

@hebersonaguiar

@wtx626 This issue has been resolved. It will be released with 2.16.1

@thomastaylor312 This is the best solution for this issue, thanks.

@jvanwygerden

Can anybody confirm this patch has been released in 2.16.1?

Also, is there any known workaround for 2.16.0, specifically for upgrading charts that contain K8s Job manifests?
