
upstream deployment: olm pod keeps crashing #714

Closed
smanpathak opened this issue Feb 15, 2019 · 6 comments

Comments

@smanpathak (Contributor)

I followed the install guide to deploy the upstream version of OLM.

kubectl create -f deploy/upstream/manifests/latest/

Upon inspecting the olm namespace, I noticed that one of the pods (the registry?) keeps crashing:

k -n olm get po  -w
NAME                                READY   STATUS             RESTARTS   AGE
catalog-operator-77ffcc7f88-w8ctc   1/1     Running            0          12s
olm-operator-c77b64c95-57lb9        0/1     CrashLoopBackOff   1          12s
olm-operators-bh6pv                 1/1     Running            0          10s
olm-operator-c77b64c95-57lb9   0/1   Error   2     17s
olm-operator-c77b64c95-57lb9   0/1   CrashLoopBackOff   2     26s
olm-operator-c77b64c95-57lb9   0/1   Error   3     40s
olm-operator-c77b64c95-57lb9   0/1   CrashLoopBackOff   3     41s
olm-operator-c77b64c95-57lb9   0/1   Error   4     89s
olm-operator-c77b64c95-57lb9   0/1   CrashLoopBackOff   4     96s

Looks like a bug

k -n olm logs po/olm-operator-c77b64c95-57lb9
flag provided but not defined: -writeStatusName
Usage of /bin/olm:
  -alsologtostderr
        log to standard error as well as files
  -debug
        use debug log level
  -interval duration
        wake up interval (default 5m0s)
  -kubeconfig string
        absolute path to the kubeconfig file
  -log_backtrace_at value
        when logging hits line file:N, emit a stack trace
  -log_dir string
        If non-empty, write log files in this directory
  -logtostderr
        log to standard error instead of files
  -stderrthreshold value
        logs at or above this threshold go to stderr
  -v value
        log level for V logs
  -version
        displays olm version
  -vmodule value
        comma-separated list of pattern=N settings for file-filtered logging
  -watchedNamespaces -watchedNamespaces=""
        comma separated list of namespaces for olm operator to watch. If not set, or set to the empty string (e.g. -watchedNamespaces=""), olm operator will watch all namespaces in the cluster.
@njhale (Member) commented Feb 16, 2019

@smanpathak That's actually the olm-operator itself. Looks like there's a mismatch between the deployment manifest and the image it references. We'll get to this soon. Thanks for pointing it out.
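
One quick way to confirm such a mismatch yourself (a sketch; it assumes the olm namespace and olm-operator deployment name from the output above, and that the image exposes the olm binary on its PATH as the logs suggest):

# Show the image the deployment references and the args it passes
kubectl -n olm get deployment olm-operator \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}{.spec.template.spec.containers[0].args}{"\n"}'
# Then list the flags that image actually accepts (replace the placeholder with the image printed above)
docker run --rm <image-from-previous-command> olm -help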

@gfenn-newbury

Can confirm, this is happening for me, too

@ghost commented Mar 2, 2019

Same here on fresh minikube running k8s v1.13.5

@jarifibrahim

On a side note, the quickstart/olm.yaml has the correct image.
Use kubectl create -f deploy/upstream/quickstart/olm.yaml to install OLM.
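
If you want to double-check which image a manifest will deploy before applying it, a simple grep over the file is enough (a sketch; the path assumes a checkout of this repository):

# Print every image reference in the quickstart manifest
grep -n 'image:' deploy/upstream/quickstart/olm.yaml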

@miguelsorianod commented Mar 19, 2019

Hi,

I'm experiencing the same problem. I'm on an OpenShift 3.11 installation and I also see the same error.
In my case I deployed the OKD manifests (and not the upstream ones) located in: deploy/okd/manifests/0.8.1

The container spec for the OLM operator in 0000_50_olm_06-olm-operator.deployment.yaml is:

      containers:
      - name: olm-operator
        command:
        - /bin/olm
        args:
        - -writeStatusName
        - operator-lifecycle-manager
        image: quay.io/coreos/olm@sha256:995a181839f301585a0e115c083619b6d73812c58a8444d7b13b8e407010325f

The SHA256 of quay.io/coreos/olm@sha256:995a181839f301585a0e115c083619b6d73812c58a8444d7b13b8e407010325f corresponds to the 0.8.1 tag in quay.
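
(One way to reproduce that check: pull the tag and compare its repo digest; a sketch using plain docker:)

docker pull quay.io/coreos/olm:0.8.1
# Print the digests the local tag is known by and compare with the digest in the manifest
docker inspect --format '{{.RepoDigests}}' quay.io/coreos/olm:0.8.1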

To rule out any problem with Kubernetes/OpenShift, I manually pulled the quay.io/coreos/olm:0.8.1 image and ran the same command and arguments:

msoriano@localhost:~$ docker run -it quay.io/coreos/olm:0.8.1 olm -writeStatusName operator-lifecycle-manager
flag provided but not defined: -writeStatusName
Usage of olm:
  -alsologtostderr
    	log to standard error as well as files
  -debug
    	use debug log level
  -interval duration
    	wake up interval (default 5m0s)
  -kubeconfig string
    	absolute path to the kubeconfig file
  -log_backtrace_at value
    	when logging hits line file:N, emit a stack trace
  -log_dir string
    	If non-empty, write log files in this directory
  -logtostderr
    	log to standard error instead of files
  -stderrthreshold value
    	logs at or above this threshold go to stderr
  -v value
    	log level for V logs
  -version
    	displays olm version
  -vmodule value
    	comma-separated list of pattern=N settings for file-filtered logging
  -watchedNamespaces -watchedNamespaces=""
    	comma separated list of namespaces for olm operator to watch. If not set, or set to the empty string (e.g. -watchedNamespaces=""), olm operator will watch all namespaces in the cluster.

The -writeStatusName flag does not appear to be available in that image version.
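
As a temporary workaround until the manifests and the image agree, one option is to drop the unsupported argument from the deployment and let the pod restart (a sketch; it assumes the olm namespace used by the upstream manifests and that the operator can run without the flag; the namespace may differ for the OKD manifests):

kubectl -n olm edit deployment olm-operator
# then delete the two args entries:
#   - -writeStatusName
#   - operator-lifecycle-manager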

@ecordell (Member) commented May 9, 2019

We've published a new upstream release which should fix these issues for you.
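
(If you installed from this repo, re-applying the latest upstream manifests should pick up the matching image; a sketch, assuming the same paths used earlier in this thread:)

kubectl delete -f deploy/upstream/manifests/latest/
kubectl create -f deploy/upstream/manifests/latest/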

@ecordell closed this as completed May 9, 2019