
Canary Integration test is failing intermittently #14131

Open · parth-gr opened this issue Apr 25, 2024 · 0 comments

Is this a bug report or feature request?

  • Bug Report

```
Run rgw_endpoint=$(kubectl get service -n rook-ceph -l rgw=store-a |  awk '/rgw/ {print $3":80"}')
++ kubectl get service -n rook-ceph -l rgw=store-a
++ awk '/rgw/ {print $3":80"}'
+ rgw_endpoint=10.105.8.171:80
++ kubectl get pod -l app=rook-ceph-tools -n rook-ceph -o 'jsonpath={.items[*].metadata.name}'
+ toolbox=rook-ceph-tools-98bfc477d-zh8f6
+ timeout 15 sh -c 'until kubectl -n rook-ceph exec rook-ceph-tools-98bfc477d-zh8f6 -- python3 /etc/ceph/create-external-cluster-resources.py --rbd-data-pool-name replicapool --rgw-endpoint 10.105.8.171:80 2> output.txt; do sleep 1 && echo '\''waiting for the rgw endpoint to be validated'\''; done'
[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "a=10.99.85.196:6789", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "c7b45b68-3866-4738-b681-2b71408a3fd6", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "client.healthchecker", "userKey": "AQC3LSFmTnS2IBAAWAEfe8cqD4Ua8Z7zsGJwhA=="}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "10.244.191.154", "MonitoringPort": "9283"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "AQAFLSFmSuaICBAACPXkEvP+OA7GabfxOvbW2w=="}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "AQAELSFmfnT8OBAApu5hYV+RBHoJ4Dj6gBoVfg=="}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "AQAFLSFmmluUExAA8qPTHHlDuJ19cCuRW00vdQ=="}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "AQAFLSFm1PSpHhAALRMsyrZ6sa6Grd9uL4zseA=="}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "http://10.244.191.154:7000/"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "replicapool", "csi.storage.k8s.io/provisioner-secret-name": "rook-csi-rbd-provisioner", "csi.storage.k8s.io/controller-expand-secret-name": "rook-csi-rbd-provisioner", "csi.storage.k8s.io/node-stage-secret-name": "rook-csi-rbd-node"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "myfs", "pool": "myfs-replicated", "csi.storage.k8s.io/provisioner-secret-name": "rook-csi-cephfs-provisioner", "csi.storage.k8s.io/controller-expand-secret-name": "rook-csi-cephfs-provisioner", "csi.storage.k8s.io/node-stage-secret-name": "rook-csi-cephfs-node"}}]
```

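For context, the failing step derives the RGW endpoint from the ClusterIP of the store-a service and then retries create-external-cluster-resources.py for up to 15 seconds until the endpoint validates. A rough reconstruction from the trace above (not the exact CI script; the layout and line breaks are mine):

```bash
# Sketch of the failing step, reconstructed from the xtrace output above.
# Build "ClusterIP:80" for the store-a RGW service, then retry the
# external-cluster script until the endpoint validates or 15s elapse.
rgw_endpoint=$(kubectl get service -n rook-ceph -l rgw=store-a \
  | awk '/rgw/ {print $3":80"}')
toolbox=$(kubectl get pod -l app=rook-ceph-tools -n rook-ceph \
  -o jsonpath='{.items[*].metadata.name}')

timeout 15 sh -c "until kubectl -n rook-ceph exec ${toolbox} -- \
    python3 /etc/ceph/create-external-cluster-resources.py \
    --rbd-data-pool-name replicapool \
    --rgw-endpoint ${rgw_endpoint} 2> output.txt; do
  sleep 1 && echo 'waiting for the rgw endpoint to be validated'
done"
```

Because the endpoint is only a ClusterIP and port, the intermittent "invalid endpoint" error most likely means the RGW service is not yet serving on port 80 within that 15-second window. Extending the timeout, or waiting for the RGW pod to become Ready first (for example with kubectl wait --for=condition=Ready, assuming the usual app=rook-ceph-rgw label), would be one way to de-flake this; that wait is a suggestion, not something the current script does.
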
```
+ tests/scripts/github-action-helper.sh check_empty_file output.txt
++ find_extra_block_dev
+++ sudo lsblk
++ echo 'NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda       8:0    0   75G  0 disk 
├─sda1    8:1    0 74.9G  0 part /
├─sda14   8:14   0    4M  0 part 
└─sda15   8:15   0  106M  0 part /boot/efi
sdb       8:16   0   75G  0 disk 
├─sdb1    8:17   0    6G  0 part 
└─sdb2    8:18   0    6G  0 part '
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda       8:0    0   75G  0 disk 
├─sda1    8:1    0 74.9G  0 part /
├─sda14   8:14   0    4M  0 part 
└─sda15   8:15   0  106M  0 part /boot/efi
sdb       8:16   0   75G  0 disk 
├─sdb1    8:17   0    6G  0 part 
└─sdb2    8:18   0    6G  0 part 
+++ awk '{print $2}'
+++ grep boot
+++ sudo lsblk --noheading --list --output MOUNTPOINT,PKNAME
++ boot_dev=sda
++ echo '  == find_extra_block_dev(): boot_dev='\''sda'\'''
  == find_extra_block_dev(): boot_dev='sda'
+++ grep -v sda
+++ head -1
+++ grep -v loop
+++ sudo lsblk --noheading --list --nodeps --output KNAME
++ extra_dev=sdb
++ echo '  == find_extra_block_dev(): extra_dev='\''sdb'\'''
  == find_extra_block_dev(): extra_dev='sdb'
++ echo sdb
+ : sdb
++ basename sdb
+ BLOCK=sdb
+ NETWORK_ERROR='connection reset by peer'
+ SERVICE_UNAVAILABLE_ERROR='Service Unavailable'
+ INTERNAL_ERROR=INTERNAL_ERROR
+ INTERNAL_SERVER_ERROR='500 Internal Server Error'
+ FUNCTION=check_empty_file
+ shift
+ check_empty_file output.txt
+ output_file=output.txt
+ '[' -s output.txt ']'
+ echo 'script failed with stderr error'
script failed with stderr error
+ cat output.txt
+ rm -f output.txt
The provided rgw Endpoint, '10.105.8.171:80', is invalid.
+ exit 1
Error: Process completed with exit code 1.
```
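
For reference, the device-selection helper traced just before the check only picks a spare disk for the test cluster. A minimal sketch of what find_extra_block_dev appears to do, pieced together from the xtrace output (the real implementation lives in tests/scripts/github-action-helper.sh and may differ in detail):

```bash
# Approximation of find_extra_block_dev as seen in the trace: find the disk
# backing /boot, then return the first top-level, non-loop device that is
# not that disk.
find_extra_block_dev() {
  sudo lsblk >&2  # debug listing, as printed in the trace
  # Disk whose partition is mounted under /boot (e.g. "sda").
  local boot_dev
  boot_dev=$(sudo lsblk --noheading --list --output MOUNTPOINT,PKNAME \
    | grep boot | awk '{print $2}')
  echo "  == find_extra_block_dev(): boot_dev='${boot_dev}'" >&2
  # First device that is neither the boot disk nor a loop device.
  local extra_dev
  extra_dev=$(sudo lsblk --noheading --list --nodeps --output KNAME \
    | grep -v loop | grep -v "${boot_dev}" | head -1)
  echo "  == find_extra_block_dev(): extra_dev='${extra_dev}'" >&2
  echo "${extra_dev}"
}

BLOCK=$(basename "$(find_extra_block_dev)")   # "sdb" in this run
```

In this run it resolves to sdb, so the device lookup itself is fine; it appears in the trace only because the same helper script is sourced for check_empty_file, and is unrelated to the failure.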

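check_empty_file itself is straightforward: the step fails whenever the external-cluster script wrote anything to stderr. A sketch of the inferred behavior (again an approximation, not the exact helper):

```bash
# Inferred behavior of check_empty_file: succeed only if the captured stderr
# file is empty; otherwise print it and fail the job.
check_empty_file() {
  local output_file="$1"
  if [ -s "${output_file}" ]; then      # -s: file exists and is non-empty
    echo "script failed with stderr error"
    cat "${output_file}"   # here: "The provided rgw Endpoint, '10.105.8.171:80', is invalid."
    rm -f "${output_file}"
    exit 1
  fi
}

check_empty_file output.txt
```

So the check is only the messenger: output.txt is non-empty because create-external-cluster-resources.py rejected the endpoint, which points back at the RGW service not being reachable in time.
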
Deviation from expected behavior:

The canary integration test fails intermittently at the external-cluster step: create-external-cluster-resources.py rejects the RGW endpoint ("The provided rgw Endpoint, '10.105.8.171:80', is invalid.") within the 15-second retry window, leaving output.txt non-empty, and the job exits with code 1.

Expected behavior:

CI should run successfully.

How to reproduce it (minimal and precise):

File(s) to submit:

  • Cluster CR (custom resource), typically called cluster.yaml, if necessary

Logs to submit:

  • Operator's logs, if necessary

  • Crashing pod(s) logs, if necessary

    To get logs, use kubectl -n <namespace> logs <pod name>
When pasting logs, always surround them with backticks or use the insert code button from the GitHub UI.
    Read GitHub documentation if you need help.

Cluster Status to submit:

  • Output of kubectl commands, if necessary

    To get the health of the cluster, use kubectl rook-ceph health
    To get the status of the cluster, use kubectl rook-ceph ceph status
    For more details, see the Rook kubectl Plugin

Environment:

  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Cloud provider or hardware configuration:
  • Rook version (use rook version inside of a Rook Pod):
  • Storage backend version (e.g. for ceph do ceph -v):
  • Kubernetes version (use kubectl version):
  • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
  • Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):
parth-gr added the bug label Apr 25, 2024