
[BUG] Restarting CodeReady Container does leave pods in unusable state #2474

Closed
steinrr opened this issue Jun 24, 2021 · 12 comments
Labels
kind/bug Something isn't working os/macos

Comments

@steinrr

steinrr commented Jun 24, 2021

General information

  • OS: macOS
  • Hypervisor: Default macOS
  • Did you run crc setup before starting it (Yes/No)? No
  • Running CRC on: Laptop

CRC version

CodeReady Containers version: 1.27.0+3d6bc39d
OpenShift version: 4.7.11 (not embedded in executable)

CRC status

DEBU OpenShift version: 4.7.11 (not embedded in executable) 
DEBU Running 'crc status'                         
DEBU Checking file: /Users/steinrr/.crc/machines/crc/.crc-exist 
DEBU Checking file: /Users/steinrr/.crc/machines/crc/.crc-exist 
DEBU Found binary path at /Applications/CodeReady Containers.app/Contents/Resources/crc-driver-hyperkit 
DEBU Launching plugin server for driver hyperkit  
DEBU Plugin server listening at address 127.0.0.1:52269 
DEBU () Calling .GetVersion                       
DEBU Using API Version 1                          
DEBU () Calling .SetConfigRaw                     
DEBU () Calling .GetMachineName                   
DEBU (crc) Calling .GetState                      
DEBU (crc) Calling .GetBundleName                 
DEBU Making call to close driver server           
DEBU (crc) Calling .Close                         
DEBU Successfully made call to close driver server 
DEBU Making call to close connection to plugin binary 
CRC VM:          Stopped
OpenShift:       Stopped (v4.7.11)
Disk Usage:      0B of 0B (Inside the CRC VM)
Cache Usage:     24.86GB
Cache Directory: /Users/steinrr/.crc/cache

CRC config

Host Operating System

ProductName:	macOS
ProductVersion:	11.4
BuildVersion:	20F71

Steps to reproduce

Description of problem:

Running the latest CodeReady Containers version of OpenShift on my Mac. I am able to push applications to the cluster, and they work fine.

When I stop CodeReady Containers (crc stop) and start it again (crc start), the pods start and seem to run fine. But when I open the URL of any of them, I get "Application is not available". I get the same on all applications, even "Hello World"-style applications.

So somehow shutting down CRC and starting it again leaves the pods in some "strange state". I have tried deleting the pods, and also scaling to 0 instances and then back up, but they still do not work. So even when OpenShift removes the pods and starts new instances, they fail. The logs give no information. The only way to "fix" the application is to run "odo push -f", which forces a rebuild and redeploy of the application in the OpenShift cluster.

Steps to Reproduce:

  1. Start CodeReady Container version of OpenShift (crc start)
  2. Use "odo push" to push application (e.g. nodejs example)
  3. Stop CRC (crc stop)
  4. Start CRC (crc start)
  5. Observe that OpenShift is starting and pods are starting
  6. Click URL of deployed application

Additional info:

  • "odo push -f" fixes issue.
  • Deleting pod does not fix issue.
  • Scaling pod count to 0 and then back to 1 does not fix problem.

Expected

The web application's page is shown

Actual

Page showing "Application is not available" showing

Logs

N/A

@steinrr steinrr added kind/bug Something isn't working status/need triage labels Jun 24, 2021
@praveenkumar praveenkumar changed the title from [BUG] to [BUG] Restarting CodeReady Container does leave pods in unusable state Jun 24, 2021
@praveenkumar
Member

@steinrr I tried to reproduce it but was not able to on my Mac machine. The following is what I did; let me know if I am missing some steps.

  1. Downloaded the 1.27.0 release of crc (.pkg file)
  2. Installed it using the installer
  3. Started crc using the tray icon, which appears in the top bar after setup
  4. Deployed a sample app
$ oc new-app httpd-example
$ oc get routes
NAME            HOST/PORT                                PATH   SERVICES        PORT    TERMINATION   WILDCARD
httpd-example   httpd-example-default.apps-crc.testing          httpd-example   <all>                 None
$ curl -Ik httpd-example-default.apps-crc.testing
HTTP/1.1 200 OK
Date: Thu, 24 Jun 2021 07:52:24 GMT
Server: Apache/2.4.37 (Red Hat Enterprise Linux) OpenSSL/1.1.1g
Last-Modified: Thu, 24 Jun 2021 07:39:14 GMT
ETag: "924b-5c57e1ea16c80"
Accept-Ranges: bytes
Content-Length: 37451
Content-Type: text/html; charset=UTF-8
Set-Cookie: e602f1d7f1f150765b6e4697b44ad13d=8a3d55b1fa9e15dbb090faa45450ee7c; path=/; HttpOnly
Cache-control: private

Stopped the instance using the tray and started it again. I can still access my application without deleting the pods or anything.

$ crc stop
$ crc start
$ curl -Ik httpd-example-default.apps-crc.testing
HTTP/1.1 200 OK
Date: Thu, 24 Jun 2021 07:52:24 GMT
Server: Apache/2.4.37 (Red Hat Enterprise Linux) OpenSSL/1.1.1g
Last-Modified: Thu, 24 Jun 2021 07:39:14 GMT
ETag: "924b-5c57e1ea16c80"
Accept-Ranges: bytes
Content-Length: 37451
Content-Type: text/html; charset=UTF-8
Set-Cookie: e602f1d7f1f150765b6e4697b44ad13d=8a3d55b1fa9e15dbb090faa45450ee7c; path=/; HttpOnly
Cache-control: private

Can you provide the following output when you hit this issue?

$ oc get pods -n <namespace_for_which_pods_acting_weird>
$ oc get events -n <namespace_for_which_pods_acting_weird>
$ oc describe pod <pod_name> -n <namespace_for_which_pods_acting_weird>

@praveenkumar
Member

I can see the related bug redhat-developer/odo#4822, so it might be something with odo?

@guillaumerose
Contributor

For the nodejs example, odo starts a dev container based on a vanilla node image plus a volume with the code.
The volume is declared as follows:

  volumes:
    - name: odo-projects
      emptyDir: {}
    - name: odo-supervisord-shared-data
      emptyDir: {}

I believe this directory is filled by odo during the push. Directories of this kind (emptyDir) are temporary and do not survive a reboot.
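For comparison only, here is what the same mounts would look like backed by PersistentVolumeClaims instead of emptyDir, which would let the data survive a VM restart. This is a hypothetical sketch of the emptyDir vs. PVC distinction, not a supported odo configuration (odo generates this pod spec itself), and the claim names are made up:

```yaml
# Hypothetical illustration: same volume names, PVC-backed instead of emptyDir.
# The claim names (odo-projects-pvc, odo-supervisord-pvc) are invented and
# would each need a matching PersistentVolumeClaim object in the namespace.
  volumes:
    - name: odo-projects
      persistentVolumeClaim:
        claimName: odo-projects-pvc
    - name: odo-supervisord-shared-data
      persistentVolumeClaim:
        claimName: odo-supervisord-pvc
```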

@steinrr
Author

steinrr commented Jun 24, 2021

I can see respective bug openshift/odo#4822 so it might be something with odo?

This is the same case. It was registered by me, as you can see.

@steinrr
Author

steinrr commented Jun 24, 2021

@steinrr I tried to reproduce it but was not able to on my Mac machine. […]

Can you provide the following output when you hit this issue? […]

Try using odo and the node example. That is the one I have issues with. I see that @guillaumerose has a theory here as well.

@praveenkumar
Member

Try using odo and the node example. That is the one I have issues with. I see that @guillaumerose has a theory here as well.

@steinrr I thought you said that even a hello-world program does not work, so I used a sample app without odo. So this might be an issue with odo, not with CRC, because CRC is behaving as expected.

@steinrr
Author

steinrr commented Jun 24, 2021

@steinrr I thought you said that even a hello-world program does not work, so I used a sample app without odo. So this might be an issue with odo, not with CRC, because CRC is behaving as expected.

Ok - sorry - I wrote:

  1. Use "odo push" to push application (e.g. nodejs example)

I am not sure where the fault is. I use odo to push the application to OpenShift, and when I restart CRC I observe that it no longer works. Whether it is odo, OpenShift, or CRC, I do not know.

@praveenkumar
Member

Ok - sorry - I wrote:

No problem, we want to understand where the issue is.

1. Use "odo push" to push application (e.g. nodejs example)

I am not sure where the fault is. I use odo to push the application to OpenShift, and when I restart CRC I observe that it no longer works. Whether it is odo, OpenShift, or CRC, I do not know.

I would suggest keeping a close eye on the issue created on the odo side; we will also try to communicate with the odo team to debug it further.

@steinrr
Author

steinrr commented Jun 24, 2021

I took some status output while the pod is NOT WORKING:

% oc get pods                                           
NAME                                     READY   STATUS    RESTARTS   AGE
nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf   1/1     Running   0          7d3h

% oc get events |grep nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf
7d3h        Normal    Scheduled           pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Successfully assigned srrtest/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf to crc-m89r2-master-0
7d3h        Normal    AddedInterface      pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Add eth0 [10.217.0.54/23]
7d3h        Normal    Pulled              pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Container image "registry.access.redhat.com/ocp-tools-4/odo-init-container-rhel8:1.1.10" already present on machine
7d3h        Normal    Created             pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Created container copy-supervisord
7d3h        Normal    Started             pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Started container copy-supervisord
7d3h        Normal    Pulling             pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Pulling image "registry.access.redhat.com/ubi8/nodejs-14:latest"
7d3h        Normal    Pulled              pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Successfully pulled image "registry.access.redhat.com/ubi8/nodejs-14:latest" in 1.691193153s
7d3h        Normal    Created             pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Created container runtime
7d3h        Normal    Started             pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Started container runtime
7d3h        Normal    AddedInterface      pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Add eth0 [10.217.0.27/23]
7d3h        Normal    Pulled              pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Container image "registry.access.redhat.com/ocp-tools-4/odo-init-container-rhel8:1.1.10" already present on machine
7d3h        Normal    Created             pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Created container copy-supervisord
7d3h        Normal    Started             pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Started container copy-supervisord
7d3h        Normal    Pulling             pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Pulling image "registry.access.redhat.com/ubi8/nodejs-14:latest"
7d3h        Normal    Pulled              pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Successfully pulled image "registry.access.redhat.com/ubi8/nodejs-14:latest" in 2.785994928s
7d3h        Normal    Created             pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Created container runtime
7d3h        Normal    Started             pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Started container runtime
13m         Normal    AddedInterface      pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Add eth0 [10.217.0.24/23]
12m         Normal    Pulled              pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Container image "registry.access.redhat.com/ocp-tools-4/odo-init-container-rhel8:1.1.10" already present on machine
12m         Normal    Created             pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Created container copy-supervisord
12m         Normal    Started             pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Started container copy-supervisord
12m         Normal    Pulling             pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Pulling image "registry.access.redhat.com/ubi8/nodejs-14:latest"
12m         Normal    Pulled              pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Successfully pulled image "registry.access.redhat.com/ubi8/nodejs-14:latest" in 8.84197754s
12m         Normal    Created             pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Created container runtime
12m         Normal    Started             pod/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf    Started container runtime
7d3h        Normal    SuccessfulCreate    replicaset/nodejs-nodejs-ex-gxpb-7b9fddcf66   Created pod: nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf

% oc describe pod nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf
Name:         nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf
Namespace:    srrtest
Priority:     0
Node:         crc-m89r2-master-0/192.168.126.11
Start Time:   Thu, 17 Jun 2021 07:09:16 +0200
Labels:       app=app
              app.kubernetes.io/instance=nodejs-nodejs-ex-gxpb
              app.kubernetes.io/managed-by=odo
              app.kubernetes.io/managed-by-version=v2.2.1
              app.kubernetes.io/name=nodejs
              app.kubernetes.io/part-of=app
              component=nodejs-nodejs-ex-gxpb
              pod-template-hash=7b9fddcf66
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": [
                        "10.217.0.24"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": [
                        "10.217.0.24"
                    ],
                    "default": true,
                    "dns": {}
                }]
              openshift.io/scc: restricted
Status:       Running
IP:           10.217.0.24
IPs:
  IP:           10.217.0.24
Controlled By:  ReplicaSet/nodejs-nodejs-ex-gxpb-7b9fddcf66
Init Containers:
  copy-supervisord:
    Container ID:  cri-o://c18b3ee2c0cc0ec72281426ab3a06f74605239b14d3f6cbd2aab4827fd050a10
    Image:         registry.access.redhat.com/ocp-tools-4/odo-init-container-rhel8:1.1.10
    Image ID:      registry.access.redhat.com/ocp-tools-4/odo-init-container-rhel8@sha256:0b25f37779ac09c197ef18e003cc6e49ac590cc79b058783f6f0a21eeb81581b
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/bin/cp
    Args:
      -r
      /opt/odo-init/.
      /opt/odo/
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 24 Jun 2021 10:15:38 +0200
      Finished:     Thu, 24 Jun 2021 10:15:47 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /opt/odo/ from odo-supervisord-shared-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r7bgz (ro)
Containers:
  runtime:
    Container ID:  cri-o://2a97702390a7d23c3b493905a43159852007ad984788863c6bbc60b5c7ab2b52
    Image:         registry.access.redhat.com/ubi8/nodejs-14:latest
    Image ID:      registry.access.redhat.com/ubi8/nodejs-14@sha256:f03d58a9adaf56900dc96bce927bf33cd8720991fd090e3a8ae5037f3a0c2539
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      /opt/odo/bin/supervisord
    Args:
      -c
      /opt/odo/conf/devfile-supervisor.conf
    State:          Running
      Started:      Thu, 24 Jun 2021 10:16:06 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  1Gi
    Requests:
      memory:  1Gi
    Environment:
      PROJECTS_ROOT:                  /project
      PROJECT_SOURCE:                 /project
      ODO_COMMAND_RUN:                npm start
      ODO_COMMAND_RUN_WORKING_DIR:    /project
      ODO_COMMAND_DEBUG:              npm run debug
      ODO_COMMAND_DEBUG_WORKING_DIR:  /project
      DEBUG_PORT:                     5858
    Mounts:
      /opt/odo/ from odo-supervisord-shared-data (rw)
      /project from odo-projects (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r7bgz (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  odo-projects:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  odo-supervisord-shared-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  default-token-r7bgz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-r7bgz
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age   From               Message
  ----    ------          ----  ----               -------
  Normal  Scheduled       7d3h  default-scheduler  Successfully assigned srrtest/nodejs-nodejs-ex-gxpb-7b9fddcf66-ggpvf to crc-m89r2-master-0
  Normal  AddedInterface  7d3h  multus             Add eth0 [10.217.0.54/23]
  Normal  Pulled          7d3h  kubelet            Container image "registry.access.redhat.com/ocp-tools-4/odo-init-container-rhel8:1.1.10" already present on machine
  Normal  Created         7d3h  kubelet            Created container copy-supervisord
  Normal  Started         7d3h  kubelet            Started container copy-supervisord
  Normal  Pulling         7d3h  kubelet            Pulling image "registry.access.redhat.com/ubi8/nodejs-14:latest"
  Normal  Pulled          7d3h  kubelet            Successfully pulled image "registry.access.redhat.com/ubi8/nodejs-14:latest" in 1.691193153s
  Normal  Created         7d3h  kubelet            Created container runtime
  Normal  Started         7d3h  kubelet            Started container runtime
  Normal  AddedInterface  7d3h  multus             Add eth0 [10.217.0.27/23]
  Normal  Pulled          7d3h  kubelet            Container image "registry.access.redhat.com/ocp-tools-4/odo-init-container-rhel8:1.1.10" already present on machine
  Normal  Created         7d3h  kubelet            Created container copy-supervisord
  Normal  Started         7d3h  kubelet            Started container copy-supervisord
  Normal  Pulling         7d3h  kubelet            Pulling image "registry.access.redhat.com/ubi8/nodejs-14:latest"
  Normal  Pulled          7d3h  kubelet            Successfully pulled image "registry.access.redhat.com/ubi8/nodejs-14:latest" in 2.785994928s
  Normal  Created         7d3h  kubelet            Created container runtime
  Normal  Started         7d3h  kubelet            Started container runtime
  Normal  AddedInterface  14m   multus             Add eth0 [10.217.0.24/23]
  Normal  Pulled          13m   kubelet            Container image "registry.access.redhat.com/ocp-tools-4/odo-init-container-rhel8:1.1.10" already present on machine
  Normal  Created         13m   kubelet            Created container copy-supervisord
  Normal  Started         13m   kubelet            Started container copy-supervisord
  Normal  Pulling         13m   kubelet            Pulling image "registry.access.redhat.com/ubi8/nodejs-14:latest"
  Normal  Pulled          13m   kubelet            Successfully pulled image "registry.access.redhat.com/ubi8/nodejs-14:latest" in 8.84197754s
  Normal  Created         13m   kubelet            Created container runtime
  Normal  Started         13m   kubelet            Started container runtime

@steinrr
Author

steinrr commented Jun 24, 2021

The same debug output for a new, WORKING pod:

% oc get pods
NAME                                     READY   STATUS    RESTARTS   AGE
nodejs-nodejs-ex-dabd-7b8fbcb966-4jbjf   1/1     Running   0          70s

% oc get events |grep nodejs-nodejs-ex-dabd-7b8fbcb966-4jbjf
115s        Normal    Scheduled           pod/nodejs-nodejs-ex-dabd-7b8fbcb966-4jbjf    Successfully assigned srrtest/nodejs-nodejs-ex-dabd-7b8fbcb966-4jbjf to crc-m89r2-master-0
113s        Normal    AddedInterface      pod/nodejs-nodejs-ex-dabd-7b8fbcb966-4jbjf    Add eth0 [10.217.0.62/23]
113s        Normal    Pulled              pod/nodejs-nodejs-ex-dabd-7b8fbcb966-4jbjf    Container image "registry.access.redhat.com/ocp-tools-4/odo-init-container-rhel8:1.1.10" already present on machine
113s        Normal    Created             pod/nodejs-nodejs-ex-dabd-7b8fbcb966-4jbjf    Created container copy-supervisord
112s        Normal    Started             pod/nodejs-nodejs-ex-dabd-7b8fbcb966-4jbjf    Started container copy-supervisord
112s        Normal    Pulling             pod/nodejs-nodejs-ex-dabd-7b8fbcb966-4jbjf    Pulling image "registry.access.redhat.com/ubi8/nodejs-14:latest"
111s        Normal    Pulled              pod/nodejs-nodejs-ex-dabd-7b8fbcb966-4jbjf    Successfully pulled image "registry.access.redhat.com/ubi8/nodejs-14:latest" in 1.198257802s
111s        Normal    Created             pod/nodejs-nodejs-ex-dabd-7b8fbcb966-4jbjf    Created container runtime
111s        Normal    Started             pod/nodejs-nodejs-ex-dabd-7b8fbcb966-4jbjf    Started container runtime
115s        Normal    SuccessfulCreate    replicaset/nodejs-nodejs-ex-dabd-7b8fbcb966   Created pod: nodejs-nodejs-ex-dabd-7b8fbcb966-4jbjf

% oc describe pod nodejs-nodejs-ex-dabd-7b8fbcb966-4jbjf
Name:         nodejs-nodejs-ex-dabd-7b8fbcb966-4jbjf
Namespace:    srrtest
Priority:     0
Node:         crc-m89r2-master-0/192.168.126.11
Start Time:   Thu, 24 Jun 2021 10:47:05 +0200
Labels:       app=app
              app.kubernetes.io/instance=nodejs-nodejs-ex-dabd
              app.kubernetes.io/managed-by=odo
              app.kubernetes.io/managed-by-version=v2.2.1
              app.kubernetes.io/name=nodejs
              app.kubernetes.io/part-of=app
              component=nodejs-nodejs-ex-dabd
              pod-template-hash=7b8fbcb966
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": [
                        "10.217.0.62"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": [
                        "10.217.0.62"
                    ],
                    "default": true,
                    "dns": {}
                }]
              openshift.io/scc: restricted
Status:       Running
IP:           10.217.0.62
IPs:
  IP:           10.217.0.62
Controlled By:  ReplicaSet/nodejs-nodejs-ex-dabd-7b8fbcb966
Init Containers:
  copy-supervisord:
    Container ID:  cri-o://929adc63a07aa4503d5edb30cb8412b69a54ac46648ef4017594b8f0bc6da7c1
    Image:         registry.access.redhat.com/ocp-tools-4/odo-init-container-rhel8:1.1.10
    Image ID:      registry.access.redhat.com/ocp-tools-4/odo-init-container-rhel8@sha256:0b25f37779ac09c197ef18e003cc6e49ac590cc79b058783f6f0a21eeb81581b
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/bin/cp
    Args:
      -r
      /opt/odo-init/.
      /opt/odo/
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 24 Jun 2021 10:47:08 +0200
      Finished:     Thu, 24 Jun 2021 10:47:08 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /opt/odo/ from odo-supervisord-shared-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r7bgz (ro)
Containers:
  runtime:
    Container ID:  cri-o://2cddf2b9c5f0c8f4125bda3097f7ce6d261c9470b12626e256a1003595bd6592
    Image:         registry.access.redhat.com/ubi8/nodejs-14:latest
    Image ID:      registry.access.redhat.com/ubi8/nodejs-14@sha256:f03d58a9adaf56900dc96bce927bf33cd8720991fd090e3a8ae5037f3a0c2539
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      /opt/odo/bin/supervisord
    Args:
      -c
      /opt/odo/conf/devfile-supervisor.conf
    State:          Running
      Started:      Thu, 24 Jun 2021 10:47:09 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  1Gi
    Requests:
      memory:  1Gi
    Environment:
      PROJECTS_ROOT:                  /project
      PROJECT_SOURCE:                 /project
      ODO_COMMAND_RUN:                npm start
      ODO_COMMAND_RUN_WORKING_DIR:    /project
      ODO_COMMAND_DEBUG:              npm run debug
      ODO_COMMAND_DEBUG_WORKING_DIR:  /project
      DEBUG_PORT:                     5858
    Mounts:
      /opt/odo/ from odo-supervisord-shared-data (rw)
      /project from odo-projects (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r7bgz (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  odo-projects:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  odo-supervisord-shared-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  default-token-r7bgz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-r7bgz
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age    From               Message
  ----    ------          ----   ----               -------
  Normal  Scheduled       2m53s  default-scheduler  Successfully assigned srrtest/nodejs-nodejs-ex-dabd-7b8fbcb966-4jbjf to crc-m89r2-master-0
  Normal  AddedInterface  2m51s  multus             Add eth0 [10.217.0.62/23]
  Normal  Pulled          2m51s  kubelet            Container image "registry.access.redhat.com/ocp-tools-4/odo-init-container-rhel8:1.1.10" already present on machine
  Normal  Created         2m51s  kubelet            Created container copy-supervisord
  Normal  Started         2m50s  kubelet            Started container copy-supervisord
  Normal  Pulling         2m50s  kubelet            Pulling image "registry.access.redhat.com/ubi8/nodejs-14:latest"
  Normal  Pulled          2m49s  kubelet            Successfully pulled image "registry.access.redhat.com/ubi8/nodejs-14:latest" in 1.198257802s
  Normal  Created         2m49s  kubelet            Created container runtime
  Normal  Started         2m49s  kubelet            Started container runtime

@praveenkumar
Member

@steinrr redhat-developer/odo#4822 (comment) — it looks like this is not an issue with CRC but with how odo currently handles mounts. I am closing this issue here; let's track it on the odo side only.

@steinrr
Author

steinrr commented Jun 24, 2021

@steinrr openshift/odo#4822 (comment) looks like this is not issue with CRC but how odo is working right now with mounts. I am closing this issue from here and let's track it on odo side only.

It still does not work after following the advice from the odo GitHub issue. I updated the post there with more information.
