CRC networking degrades #2597

Closed
owais opened this issue Jul 21, 2021 · 8 comments
Labels
kind/bug Something isn't working os/macos status/need triage status/stale Issue went stale; did not receive attention or no reply from the OP

Comments


owais commented Jul 21, 2021

Description

The CRC cluster on my Macbook laptop runs into networking issues after a few minutes of uptime. Usually everything works perfectly for anywhere between 10-20 minutes, and then all of a sudden HTTP requests to the internet start failing with timeout errors. When this happens, I can still make the same requests from the host (laptop) and they work. I do not run any proxy or VPN. The issue seems to affect most things, such as custom services I run in the cluster and the cluster's ability to pull new images from quay.

Sometimes restarting a cluster helps but most of the time I have to delete the cluster, run the cleanup command and start a new one to fix the issue. As a result, I'm never able to test/develop/debug services and operators on OpenShift for more than ~15 minutes.

General information

  • OS: macOS

  • Hypervisor: hyperkit

  • Did you run crc setup before starting it (Yes/No)?
    Yes.

  • Running CRC on: Laptop / Baremetal-Server / VM
    Laptop (Intel Macbook Pro)

CRC version

CodeReady Containers version: 1.29.1+bc5f4409
OpenShift version: 4.7.18 (not embedded in executable)

CRC status

DEBU CodeReady Containers version: 1.29.1+bc5f4409
DEBU OpenShift version: 4.7.18 (not embedded in executable)
DEBU Running 'crc status'
DEBU Checking file: /Users/olone/.crc/machines/crc/.crc-exist
DEBU Checking file: /Users/olone/.crc/machines/crc/.crc-exist
DEBU Found binary path at /Applications/CodeReady Containers.app/Contents/Resources/crc-driver-hyperkit
DEBU Launching plugin server for driver hyperkit
DEBU Plugin server listening at address 127.0.0.1:51634
DEBU () Calling .GetVersion
DEBU Using API Version 1
DEBU () Calling .SetConfigRaw
DEBU () Calling .GetMachineName
DEBU (crc) Calling .GetState
DEBU (crc) Calling .GetBundleName
DEBU Running SSH command: df -B1 --output=size,used,target /sysroot | tail -1
DEBU Using ssh private keys: [/Users/olone/.crc/machines/crc/id_ecdsa /Users/olone/.crc/cache/crc_hyperkit_4.7.18/id_ecdsa_crc]
DEBU SSH command results: err: <nil>, output: 32737570816 13603016704 /sysroot
DEBU Making call to close driver server
DEBU (crc) Calling .Close
DEBU Successfully made call to close driver server
DEBU Making call to close connection to plugin binary
CRC VM:          Running
OpenShift:       Running (v4.7.18)
Disk Usage:      13.6GB of 32.74GB (Inside the CRC VM)
Cache Usage:     22.04GB
Cache Directory: /Users/olone/.crc/cache

CRC config

❯ crc config view
- autostart-tray                        : false
- consent-telemetry                     : no
- cpus                                  : 6
- memory                                : 14296
- nameserver                            : 8.8.8.8
- pull-secret-file                      : /Users/olone/Downloads/pull-secret

Host Operating System

ProductName:	macOS
ProductVersion:	11.4
BuildVersion:	20F71

Steps to reproduce

  1. Start a new CRC cluster
  2. Run the cluster for 15-20 mins
  3. Run some HTTP requests to remote servers (not something on the cluster or host/laptop)
  4. At some point, most requests start failing with i/o timeout errors.
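
Step 3 can be scripted so the moment of degradation gets a timestamp. A minimal sketch (the `classify` helper and the quay.io target are illustrative, not from the report; curl exit codes 6 and 28 are curl's documented codes for resolution failure and timeout):

```shell
#!/bin/sh
# classify: map a curl exit code to a short label so the log is easy to scan.
classify() {
  case "$1" in
    0)  echo ok ;;
    6)  echo dns-failure ;;   # curl: could not resolve host
    28) echo timeout ;;       # curl: operation timed out
    *)  echo "error($1)" ;;
  esac
}

# Network-dependent part; run manually inside a pod or the CRC VM:
#   while true; do
#     curl -sS -m 5 -o /dev/null https://quay.io
#     echo "$(date -u +%T) $(classify $?)"
#     sleep 30
#   done
```

Grepping the resulting log for the first non-`ok` line then gives the time-to-failure directly.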

Expected

Networking to continue to work under all circumstances.

Actual

After 15-20 mins, most HTTP requests start failing: calls to remote endpoints time out, and the cluster also fails to pull images with timeout errors. Sometimes restarting the cluster and the tray icon fixes it, but most of the time I have to delete the cluster, run crc cleanup, and start a new one, only to run into the same issue again in another 15-20 minutes.

Logs


DEBU CodeReady Containers version: 1.29.1+bc5f4409
DEBU OpenShift version: 4.7.18 (not embedded in executable)
DEBU Running 'crc start'
DEBU Total memory of system is 34359738368 bytes
DEBU No new version available. The latest version is 1.29.1
DEBU Found binary path at /Applications/CodeReady Containers.app/Contents/Resources/crc-driver-hyperkit
DEBU Launching plugin server for driver hyperkit
DEBU Plugin server listening at address 127.0.0.1:53231
DEBU () Calling .GetVersion
DEBU Using API Version 1
DEBU () Calling .SetConfigRaw
DEBU () Calling .GetMachineName
DEBU (crc) Calling .GetState
DEBU Making call to close driver server
DEBU (crc) Calling .Close
DEBU Successfully made call to close driver server
DEBU Making call to close connection to plugin binary
INFO Checking if running as non-root
INFO Checking if crc-admin-helper executable is cached
INFO Checking for obsolete admin-helper executable
DEBU Checking if an older admin-helper executable is installed
DEBU No older admin-helper executable found
INFO Checking if running on a supported CPU architecture
INFO Checking minimum RAM requirements
DEBU Total memory of system is 34359738368 bytes
INFO Checking if running emulated on a M1 CPU
INFO Checking if HyperKit is installed
INFO Checking if qcow-tool is installed
INFO Checking if crc-driver-hyperkit is installed
DEBU Checking file: /Users/olone/.crc/machines/crc/.crc-exist
DEBU Found binary path at /Applications/CodeReady Containers.app/Contents/Resources/crc-driver-hyperkit
DEBU Launching plugin server for driver hyperkit
DEBU Plugin server listening at address 127.0.0.1:53235
DEBU () Calling .GetVersion
DEBU Using API Version 1
DEBU () Calling .SetConfigRaw
DEBU () Calling .GetMachineName
DEBU (crc) Calling .GetBundleName
DEBU (crc) Calling .GetState
INFO Starting CodeReady Containers VM for OpenShift 4.7.18...
DEBU Updating CRC VM configuration
DEBU (crc) Calling .GetConfigRaw
DEBU (crc) Calling .Start
DEBU (crc) DBG | time="2021-07-21T20:46:38+05:30" level=debug msg="Using hyperkit binary from /Applications/CodeReady Containers.app/Contents/Resources/hyperkit"
DEBU (crc) DBG | time="2021-07-21T20:46:38+05:30" level=debug msg="Starting with cmdline: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-cd55796a47c651faaa67fc150433992288c11664eaf89b85d74c9f3f9bfadb8c/vmlinuz-4.18.0-240.22.1.el8_3.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=qemu ostree=/ostree/boot.1/rhcos/cd55796a47c651faaa67fc150433992288c11664eaf89b85d74c9f3f9bfadb8c/0 root=UUID=4bf8aa65-e34a-4858-b706-c8f348c15321 rw rootflags=prjquota"
DEBU (crc) DBG | time="2021-07-21T20:46:38+05:30" level=debug msg="Trying to execute /Applications/CodeReady Containers.app/Contents/Resources/hyperkit -A -u -F /Users/olone/.crc/machines/crc/hyperkit.pid -c 6 -m 14296M -s 0:0,hostbridge -s 31,lpc -U c3d68012-0208-11ea-9fd7-f2189899ab08 -s 1:0,virtio-blk,file:///Users/olone/.crc/machines/crc/crc.qcow2,format=qcow -s 2,virtio-sock,guest_cid=3,path=/Users/olone/.crc/machines/crc -s 3,virtio-rnd -l com1,autopty=/Users/olone/.crc/machines/crc/tty,log=/Users/olone/.crc/machines/crc/console-ring -f kexec,/Users/olone/.crc/cache/crc_hyperkit_4.7.18/vmlinuz-4.18.0-240.22.1.el8_3.x86_64,/Users/olone/.crc/cache/crc_hyperkit_4.7.18/initramfs-4.18.0-240.22.1.el8_3.x86_64.img,earlyprintk=serial BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-cd55796a47c651faaa67fc150433992288c11664eaf89b85d74c9f3f9bfadb8c/vmlinuz-4.18.0-240.22.1.el8_3.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=qemu ostree=/ostree/boot.1/rhcos/cd55796a47c651faaa67fc150433992288c11664eaf89b85d74c9f3f9bfadb8c/0 root=UUID=4bf8aa65-e34a-4858-b706-c8f348c15321 rw rootflags=prjquota"
DEBU (crc) DBG | time="2021-07-21T20:46:38+05:30" level=debug msg="error: Temporary Error: hyperkit not running yet - sleeping 1s"
DEBU (crc) DBG | time="2021-07-21T20:46:39+05:30" level=debug msg="retry loop 1"
DEBU (crc) Calling .GetConfigRaw
DEBU Waiting for machine to be running, this may take a few minutes...
DEBU retry loop: attempt 0
DEBU (crc) Calling .GetState
DEBU Machine is up and running!
DEBU (crc) Calling .GetState
INFO CodeReady Containers instance is running with IP 127.0.0.1
DEBU Waiting until ssh is available
DEBU retry loop: attempt 0
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/olone/.crc/cache/crc_hyperkit_4.7.18/id_ecdsa_crc /Users/olone/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:53242->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : ssh: handshake failed: read tcp 127.0.0.1:53242->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 1
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/olone/.crc/cache/crc_hyperkit_4.7.18/id_ecdsa_crc /Users/olone/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: <nil>, output:
INFO CodeReady Containers VM is running
DEBU Running SSH command: cat /home/core/.ssh/authorized_keys
DEBU SSH command results: err: <nil>, output: ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBAGui7nyae/3SvApJrThoJ0Xcs7K6Q6z13gUXmzMXv0qr4So9zVuQSAnTKS1e0UE3F5cuRJomgEVGHf82JebrKN9iQGrR/b7fq7JHDwt4ibrWy3RaP37G2X1j6Sy9yeqD8nsA+3JbO+1OOiGycSdcsOeqszDFkFrmAcjffn+3Vzl0ftdnQ==
DEBU Running SSH command: realpath /dev/disk/by-label/root
DEBU SSH command results: err: <nil>, output: /dev/vda4
DEBU Using root access: Growing /dev/vda4 partition
DEBU Running SSH command: sudo /usr/bin/growpart /dev/vda 4
DEBU SSH command results: err: Process exited with status 1, output: NOCHANGE: partition 4 is size 63961055. it cannot be grown
DEBU No free space after /dev/vda4, nothing to do
DEBU Running SSH command: cat /etc/resolv.conf
DEBU SSH command results: err: <nil>, output: # Generated by CRC
search crc.testing
nameserver 192.168.127.1

INFO Adding 8.8.8.8 as nameserver to the instance...
DEBU Running SSH command: NS=8.8.8.8; cat /etc/resolv.conf |grep -i "^nameserver $NS" || echo "nameserver $NS" | sudo tee -a /etc/resolv.conf
DEBU SSH command results: err: <nil>, output: nameserver 8.8.8.8
DEBU Using root access: make root Podman socket accessible
DEBU Running SSH command: sudo chmod 777 /run/podman/ /run/podman/podman.sock
DEBU SSH command results: err: <nil>, output:
DEBU Running '/Applications/CodeReady Containers.app/Contents/Resources/crc-admin-helper-darwin rm api.crc.testing oauth-openshift.apps-crc.testing console-openshift-console.apps-crc.testing downloads-openshift-console.apps-crc.testing canary-openshift-ingress-canary.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing'
DEBU Running '/Applications/CodeReady Containers.app/Contents/Resources/crc-admin-helper-darwin add 127.0.0.1 api.crc.testing oauth-openshift.apps-crc.testing console-openshift-console.apps-crc.testing downloads-openshift-console.apps-crc.testing canary-openshift-ingress-canary.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing'
DEBU Creating /etc/resolv.conf with permissions 0644 in the CRC VM
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
DEBU retry loop: attempt 0
DEBU Running SSH command: host -R 3 foo.apps-crc.testing
DEBU SSH command results: err: <nil>, output: foo.apps-crc.testing has address 192.168.127.2
INFO Check internal and public DNS query...
DEBU Running SSH command: host -R 3 quay.io
DEBU SSH command results: err: <nil>, output: quay.io has address 3.233.133.41
quay.io has address 3.216.152.103
quay.io has address 34.197.63.98
quay.io has address 44.193.101.5
quay.io has address 54.197.99.84
quay.io has address 50.16.140.223
quay.io has address 54.156.10.58
quay.io has address 3.213.173.170
INFO Check DNS query from host...
DEBU api.crc.testing resolved to [::1 127.0.0.1]
DEBU Running SSH command: test -e /var/lib/kubelet/config.json
DEBU SSH command results: err: <nil>, output:
INFO Verifying validity of the kubelet certificates...
DEBU Running SSH command: date --date="$(sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -enddate | cut -d= -f 2)" --iso-8601=seconds
DEBU SSH command results: err: <nil>, output: 2021-08-01T03:53:32+00:00
DEBU Running SSH command: date --date="$(sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-server-current.pem -noout -enddate | cut -d= -f 2)" --iso-8601=seconds
DEBU SSH command results: err: <nil>, output: 2021-08-01T03:54:44+00:00
DEBU Running SSH command: date --date="$(sudo openssl x509 -in /etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt -noout -enddate | cut -d= -f 2)" --iso-8601=seconds
DEBU SSH command results: err: <nil>, output: 2021-08-20T00:56:16+00:00
INFO Starting OpenShift kubelet service
DEBU Using root access: Executing systemctl daemon-reload command
DEBU Running SSH command: sudo systemctl daemon-reload
DEBU SSH command results: err: <nil>, output:
DEBU Using root access: Executing systemctl start kubelet
DEBU Running SSH command: sudo systemctl start kubelet
DEBU SSH command results: err: <nil>, output:
INFO Waiting for kube-apiserver availability... [takes around 2min]
DEBU retry loop: attempt 0
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 1
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 2
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 3
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 4
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 5
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: NAME                 STATUS   ROLES           AGE   VERSION
crc-4727w-master-0   Ready    master,worker   20d   v1.20.0+87cc9a4
DEBU NAME                 STATUS   ROLES           AGE   VERSION
crc-4727w-master-0   Ready    master,worker   20d   v1.20.0+87cc9a4
DEBU Waiting for availability of resource type 'secret'
DEBU retry loop: attempt 0
DEBU Running SSH command: timeout 5s oc get secret --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: NAME                       TYPE                                  DATA   AGE
builder-dockercfg-65hmj    kubernetes.io/dockercfg               1      20d
builder-token-pqzmg        kubernetes.io/service-account-token   4      20d
builder-token-x4xpk        kubernetes.io/service-account-token   4      20d
default-dockercfg-jfdwm    kubernetes.io/dockercfg               1      20d
default-token-jggxq        kubernetes.io/service-account-token   4      20d
default-token-mhbsr        kubernetes.io/service-account-token   4      20d
deployer-dockercfg-q9k6z   kubernetes.io/dockercfg               1      20d
deployer-token-2zcv7       kubernetes.io/service-account-token   4      20d
deployer-token-2zxxt       kubernetes.io/service-account-token   4      20d
DEBU NAME                       TYPE                                  DATA   AGE
builder-dockercfg-65hmj    kubernetes.io/dockercfg               1      20d
builder-token-pqzmg        kubernetes.io/service-account-token   4      20d
builder-token-x4xpk        kubernetes.io/service-account-token   4      20d
default-dockercfg-jfdwm    kubernetes.io/dockercfg               1      20d
default-token-jggxq        kubernetes.io/service-account-token   4      20d
default-token-mhbsr        kubernetes.io/service-account-token   4      20d
deployer-dockercfg-q9k6z   kubernetes.io/dockercfg               1      20d
deployer-token-2zcv7       kubernetes.io/service-account-token   4      20d
deployer-token-2zxxt       kubernetes.io/service-account-token   4      20d
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
DEBU Waiting for availability of resource type 'clusterversion'
DEBU retry loop: attempt 0
DEBU Running SSH command: timeout 5s oc get clusterversion --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.18    True        False         20d     Cluster version is 4.7.18
DEBU NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.18    True        False         20d     Cluster version is 4.7.18
DEBU Running SSH command: timeout 30s oc get clusterversion version -o jsonpath="{['spec']['clusterID']}" --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: 5effe9f8-02ae-4b8c-8e87-5f8d5fd2306b
DEBU Creating /tmp/routes-controller.json with permissions 0444 in the CRC VM
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
DEBU Running SSH command: timeout 30s oc apply -f /tmp/routes-controller.json --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: pod/routes-controller configured
DEBU Waiting for availability of resource type 'configmaps'
DEBU retry loop: attempt 0
DEBU Running SSH command: timeout 5s oc get configmaps --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: NAME               DATA   AGE
kube-root-ca.crt   1      20d
DEBU NAME               DATA   AGE
kube-root-ca.crt   1      20d
DEBU Running SSH command: timeout 30s oc get configmaps admin-kubeconfig-client-ca -n openshift-config -o jsonpath="{.data.ca-bundle\.crt}" --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: -----BEGIN CERTIFICATE-----
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-----END CERTIFICATE-----
INFO Starting OpenShift cluster... [waiting for the cluster to stabilize]
INFO All operators are available. Ensuring stability...
DEBU kube-scheduler operator is still progressing, Reason: NodeInstaller
DEBU marketplace operator is still progressing, Reason: OperatorStarting
DEBU marketplace operator not available, Reason: OperatorStarting
DEBU network operator is still progressing, Reason: Deploying
INFO 3 operators are progressing: kube-scheduler, marketplace, network
DEBU kube-scheduler operator is still progressing, Reason: NodeInstaller
INFO Operator kube-scheduler is progressing
DEBU kube-scheduler operator is still progressing, Reason: NodeInstaller
INFO Operator kube-scheduler is progressing

If I make the same requests from my laptop instead of from within the CRC cluster, they work.
At the time of reporting this issue, I had changed the nameserver to 8.8.8.8, but I had experienced the same issue for weeks before setting the nameserver.

@owais owais added kind/bug Something isn't working status/need triage labels Jul 21, 2021
@guillaumerose
Contributor

Interesting! Thanks for the bug report. I will try tomorrow.

When it happens, can you still reach the console or run oc commands?


owais commented Jul 21, 2021

Yes, I can load the console just fine and run all oc commands as well. Timeouts only happen when some service talks to the internet from within the cluster. To my naive mind, it seems something goes wrong with crc-http.sock or tap.sock. Happy to help debug; let me know what more info I can provide.

@guillaumerose
Contributor

I have been running the VM for 1.5h on my mac right now and it works.

Can you look at the output of curl --unix-socket ~/.crc/crc-http.sock http:/unix/network/stats | jq .? Maybe we will see a high number of drops somewhere.

Can you try to avoid DNS resolution when you test? Maybe do a curl directly to an IP address.
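
To spot "a high number of drops" quickly, the stats output can be filtered down to error-related counters with nonzero values. A sketch (the `filter_stats` helper is illustrative; the field names match the stats JSON pasted elsewhere in this thread, and `python3 -m json.tool` is only used to pretty-print in case the endpoint returns compact JSON):

```shell
#!/bin/sh
# filter_stats: read pretty-printed stats JSON on stdin and print only
# counters whose name suggests a problem and whose value is nonzero.
filter_stats() {
  grep -E '"[A-Za-z]*(Dropped|Drop|Errors|Timeouts|Timedout|Unreachable)[A-Za-z]*":' \
    | grep -vE ': 0,?$'
}

# Usage against a live cluster (network-dependent, run manually):
#   curl -s --unix-socket ~/.crc/crc-http.sock http:/unix/network/stats \
#     | python3 -m json.tool | filter_stats
```

Running this before and after the degradation kicks in should show which counters jump.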

@guillaumerose
Contributor

I tried for 5 hours and didn't face the issue. I also periodically ran Apache Bench against a static website to try to trigger it.


owais commented Jul 23, 2021

Sorry, I haven't worked with CRC in the last couple of days so haven't had a chance to gather the requested info. I'll likely be able to do it early next week.

I tried for 5 hours and didn't face the issue. I also periodically ran Apache Bench against a static website to try to trigger it.

Do you mean you were able to reproduce the issue with Apache Bench? FWIW, I'm running an OpenTelemetry Collector that exports a bunch of metrics to a remote endpoint (SignalFx).


stale bot commented Sep 21, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the status/stale Issue went stale; did not receive attention or no reply from the OP label Sep 21, 2021
@stale stale bot closed this as completed Oct 6, 2021

IBMRob commented Oct 6, 2022

I am also seeing something similar.

Inside the VM I was able to curl both quay.io and registry.access.redhat.com, so the VM itself appears to have network access, but when attempting to use podman we see the error:

curl

curl -s -o /dev/null -w "%{http_code}" https://quay.io 
200
curl -s -o /dev/null -w "%{http_code}" https://registry.access.redhat.com
301

podman

[core@crc-j2d48-master-0 ~]$ podman pull registry.redhat.io/redhat/certified-operator-index:v4.11
Trying to pull registry.redhat.io/redhat/certified-operator-index:v4.11...
Error: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.11: pinging container registry registry.redhat.io: Get "https://registry.redhat.io/v2/": dial tcp: lookup registry.redhat.io on 192.168.127.1:53: read udp 192.168.127.2:46800->192.168.127.1:53: i/o timeout

From outside the VM

convery@Robs-M1 20221002 % curl --unix-socket ~/.crc/crc-http.sock http:/unix/network/stats | jq .
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3953    0  3953    0     0   479k      0 --:--:-- --:--:-- --:--:--  551k
{
  "ARP": {
    "DisabledPacketsReceived": 0,
    "MalformedPacketsReceived": 0,
    "OutgoingRepliesDropped": 0,
    "OutgoingRepliesSent": 1666,
    "OutgoingRequestBadLocalAddressErrors": 0,
    "OutgoingRequestInterfaceHasNoLocalAddressErrors": 0,
    "OutgoingRequestsDropped": 0,
    "OutgoingRequestsSent": 2765,
    "PacketsReceived": 3196,
    "RepliesReceived": 1530,
    "RequestsReceived": 1666,
    "RequestsReceivedUnknownTargetAddress": 0
  },
  "BytesReceived": 181329755,
  "BytesSent": 324363453,
  "DroppedPackets": 0,
  "ICMP": {
    "V4": {
      "PacketsReceived": {
        "ICMPv4PacketStats": {
          "DstUnreachable": 35891,
          "EchoReply": 0,
          "EchoRequest": 0,
          "InfoReply": 0,
          "InfoRequest": 0,
          "ParamProblem": 0,
          "Redirect": 0,
          "SrcQuench": 0,
          "TimeExceeded": 0,
          "Timestamp": 0,
          "TimestampReply": 0
        },
        "Invalid": 0
      },
      "PacketsSent": {
        "Dropped": 0,
        "ICMPv4PacketStats": {
          "DstUnreachable": 0,
          "EchoReply": 0,
          "EchoRequest": 0,
          "InfoReply": 0,
          "InfoRequest": 0,
          "ParamProblem": 0,
          "Redirect": 0,
          "SrcQuench": 0,
          "TimeExceeded": 0,
          "Timestamp": 0,
          "TimestampReply": 0
        },
        "RateLimited": 0
      }
    },
    "V6": {
      "PacketsReceived": {
        "ICMPv6PacketStats": {
          "DstUnreachable": 0,
          "EchoReply": 0,
          "EchoRequest": 0,
          "MulticastListenerDone": 0,
          "MulticastListenerQuery": 0,
          "MulticastListenerReport": 0,
          "NeighborAdvert": 0,
          "NeighborSolicit": 0,
          "PacketTooBig": 0,
          "ParamProblem": 0,
          "RedirectMsg": 0,
          "RouterAdvert": 0,
          "RouterSolicit": 0,
          "TimeExceeded": 0
        },
        "Invalid": 0,
        "RouterOnlyPacketsDroppedByHost": 0,
        "Unrecognized": 0
      },
      "PacketsSent": {
        "Dropped": 0,
        "ICMPv6PacketStats": {
          "DstUnreachable": 0,
          "EchoReply": 0,
          "EchoRequest": 0,
          "MulticastListenerDone": 0,
          "MulticastListenerQuery": 0,
          "MulticastListenerReport": 0,
          "NeighborAdvert": 0,
          "NeighborSolicit": 0,
          "PacketTooBig": 0,
          "ParamProblem": 0,
          "RedirectMsg": 0,
          "RouterAdvert": 0,
          "RouterSolicit": 0,
          "TimeExceeded": 0
        },
        "RateLimited": 0
      }
    }
  },
  "IGMP": {
    "PacketsReceived": {
      "ChecksumErrors": 0,
      "IGMPPacketStats": {
        "LeaveGroup": 0,
        "MembershipQuery": 0,
        "V1MembershipReport": 0,
        "V2MembershipReport": 0
      },
      "Invalid": 0,
      "Unrecognized": 0
    },
    "PacketsSent": {
      "Dropped": 0,
      "IGMPPacketStats": {
        "LeaveGroup": 0,
        "MembershipQuery": 0,
        "V1MembershipReport": 0,
        "V2MembershipReport": 0
      }
    }
  },
  "IP": {
    "DisabledPacketsReceived": 0,
    "Forwarding": {
      "Errors": 0,
      "ExhaustedTTL": 0,
      "ExtensionHeaderProblem": 0,
      "HostUnreachable": 0,
      "LinkLocalDestination": 0,
      "LinkLocalSource": 0,
      "NoMulticastPendingQueueBufferSpace": 0,
      "PacketTooBig": 0,
      "UnexpectedMulticastInputInterface": 0,
      "UnknownOutputEndpoint": 0,
      "Unrouteable": 0
    },
    "IPTablesForwardDropped": 0,
    "IPTablesInputDropped": 0,
    "IPTablesOutputDropped": 0,
    "IPTablesPostroutingDropped": 0,
    "IPTablesPreroutingDropped": 0,
    "InvalidDestinationAddressesReceived": 0,
    "InvalidSourceAddressesReceived": 0,
    "MalformedFragmentsReceived": 0,
    "MalformedPacketsReceived": 0,
    "OptionRecordRouteReceived": 0,
    "OptionRouterAlertReceived": 0,
    "OptionTimestampReceived": 0,
    "OptionUnknownReceived": 0,
    "OutgoingPacketErrors": 2927,
    "PacketsDelivered": 417550,
    "PacketsReceived": 417550,
    "PacketsSent": 404473,
    "ValidPacketsReceived": 417550
  },
  "NICs": {
    "DisabledRx": {
      "Bytes": 0,
      "Packets": 0
    },
    "MalformedL4RcvdPackets": 0,
    "Neighbor": {
      "UnreachableEntryLookups": 408
    },
    "Rx": {
      "Bytes": 175436259,
      "Packets": 420738
    },
    "Tx": {
      "Bytes": 324430114,
      "Packets": 405977
    },
    "TxPacketsDroppedNoBufferSpace": 0
  },
  "TCP": {
    "ActiveConnectionOpenings": 2698,
    "ChecksumErrors": 0,
    "CurrentConnected": 1,
    "CurrentEstablished": 0,
    "EstablishedClosed": 2305,
    "EstablishedResets": 137,
    "EstablishedTimedout": 2,
    "FailedConnectionAttempts": 1425,
    "FailedPortReservations": 0,
    "FastRecovery": 0,
    "FastRetransmit": 0,
    "InvalidSegmentsReceived": 0,
    "ListenOverflowAckDrop": 0,
    "ListenOverflowInvalidSynCookieRcvd": 0,
    "ListenOverflowSynCookieRcvd": 0,
    "ListenOverflowSynCookieSent": 0,
    "ListenOverflowSynDrop": 0,
    "PassiveConnectionOpenings": 0,
    "ResetsReceived": 1590,
    "ResetsSent": 9715,
    "Retransmits": 30,
    "SACKRecovery": 0,
    "SegmentSendErrors": 0,
    "SegmentsAckedWithDSACK": 0,
    "SegmentsSent": 238639,
    "SlowStartRetransmits": 30,
    "SpuriousRTORecovery": 0,
    "SpuriousRecovery": 0,
    "TLPRecovery": 0,
    "Timeouts": 32,
    "ValidSegmentsReceived": 202851
  },
  "UDP": {
    "ChecksumErrors": 0,
    "MalformedPacketsReceived": 0,
    "PacketSendErrors": 0,
    "PacketsReceived": 165958,
    "PacketsSent": 165834,
    "ReceiveBufferErrors": 0,
    "UnknownPortErrors": 2951
  }
}

Originally raised in #3373
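
The `i/o timeout` above points at the gateway DNS forwarder (192.168.127.1:53) rather than at outbound connectivity in general, which matches the successful curls. A hedged way to confirm: pull the resolver address out of the error message and query it directly, then compare against a public resolver (the `host` invocations are illustrative and must be run inside the VM):

```shell
#!/bin/sh
# Extract the resolver address from podman's error message (abridged here).
err='lookup registry.redhat.io on 192.168.127.1:53: read udp 192.168.127.2:46800->192.168.127.1:53: i/o timeout'
resolver=$(printf '%s' "$err" \
  | grep -oE 'on [0-9.]+:53' | head -1 \
  | sed -E 's/^on ([0-9.]+):53$/\1/')
echo "$resolver"

# Then, inside the CRC VM (network-dependent, run manually):
#   host -W 5 registry.redhat.io "$resolver"   # via the VM's DNS forwarder
#   host -W 5 registry.redhat.io 8.8.8.8       # bypassing the forwarder
# If only the first query times out, the forwarder is the culprit rather
# than the VM's outbound network path.
```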

@mhcastro

Hi @IBMRob - are you running on Apple M1 Max silicon? I didn't have issues with CRC on Intel-based macOS; I just got this on the new Apple M1 Max.
