Controller fails to pick up ImageCache resource on fresh install if webhook server is down #70
Comments
Thanks for posting this issue. Will look into this for v0.8.0.
senthilrch added a commit that referenced this issue on Jun 7, 2021
For deploy-using-yaml, I have fixed this by introducing `kubectl rollout status deployment kubefledged-webhook-server --watch` before applying the controller manifest. For the helm chart, I'll look for a different solution (init container, etc.)
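The ordering described above can be sketched as a deploy script. The manifest file names below are assumptions for illustration and may not match the repository's actual layout:

```shell
# Deploy the webhook server first and block until its rollout completes,
# so the controller never starts before the webhook can serve requests.
kubectl apply -f deploy/kubefledged-webhook-server.yaml

# --watch blocks until the deployment reports a successful rollout
kubectl rollout status deployment kubefledged-webhook-server --watch

# Only now apply the controller manifest
kubectl apply -f deploy/kubefledged-controller.yaml
```

`kubectl rollout status` exits non-zero on failure or timeout, so in a script it also acts as a natural gate before the next `apply`.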
senthilrch added a commit that referenced this issue on Jun 9, 2021 (Merged)
AnchorArray added a commit to noteable-io/kube-fledged that referenced this issue on Sep 16, 2021
* Move custom resource definitions to crds directory
* modified travis build conditions
* Initial commit for v0.8.0
* Bumped up versions of go, alpine, operator sdk, docker, cri-tool
* upgrade go in travis ci
* update dependencies
* code changes for upgraded dependencies
* fix unit tests
* make release-amd64 to build all 4 amd64 images
* v1alpha1 -> v1alpha2, kubefledged.k8s.io -> kubefledged.io
* ignore build binaries
* add labels to manifests
* updates to imagecache manifest
* update apigroup apiversion in validatingwebhook
* issue senthilrch#70 deploy-using-yaml: wait for webhook-server running
* issue senthilrch#66: upgrade crd api version to v1
* updates to helm chart & operator
* update clusterrole to list and watch
* update signer name in csr
* remove v1alpha1 cr
* issue senthilrch#70 add init container to wait for webhook-server
* helm chart fix webhook service name
* helm chart update
* fix chart apiVersion
* updated readme for helm chart installation
* updated name of helm operator cr
* fix issue senthilrch#75: workaround
* Ensure validating webhook configuration client config service name for the webhook server matches the correct webhook service name
* add init option to webhook server
* update manifests
* updated helm chart
* updated makefile and manifests
* updated helm operator
* makefile wait for operator ready
* get imagecache before updating status
* pre-install hook for validatingwebhookconfiguration
* fix golint errors
* modify refresh/purge annotation key to kubefledged.io/xxx
* cri-client-image name as env instead of cmd flag
* read busybox image from env
* use busybox image from gcr.io to overcome dockerhub ratelimiting
* fix issue senthilrch#89 change hostpath filetype to socket
* delete pre-install hook in "make remove-operator-and-kubefledged"
* add annotations to validatingwebhookconfiguration
* add "helm repo update" to readme
* continue processing job deletion when not found
* check if refresh-cache annotation exists
* update helm chart to use release namespace
* update design proposal document
* expose helm parameters in operator CR
* document helm parameters
* update release version to v0.8.2
* set status to unknown when unable to fetch pod
* add check for image pull/delete status unknown
* update log messages
* deploy controller and operator to same namespace
* fix unit test errors
* restore Kubefledged CR during "remove-operator-and-kubefledged"
* Update README.md
* Update design-proposal.md

Co-authored-by: Diego Rodriguez <diego@noteable.io>
Co-authored-by: Senthil Raja Chermapandian <senthilrch@gmail.com>
When installing kube-fledged directly from the helm chart, the imagecache resource, the webhook server, and the controller are all deployed at the same time. If the controller comes up before the webhook server, the controller runs into the following error:
The controller then does not pick up the imagecache resource that was deployed. The current workaround is to manually delete the controller pod, after which it is able to pick up the imagecache resource.
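The manual workaround can be expressed as a single command. The namespace and label selector below are assumptions for illustration; check the actual values for your release with `kubectl get pods -n <namespace> --show-labels`:

```shell
# Deleting the controller pod makes its Deployment recreate it; by the time
# the new pod starts, the webhook server is up, so the controller picks up
# the imagecache resource on its initial list/watch.
kubectl delete pod -n kube-fledged -l app=kubefledged-controller
```

This only papers over the startup race; the proper fixes discussed above (waiting on the webhook rollout, or an init container in the controller pod) remove the race itself.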