Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 4.4.0-17763-Microsoft x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Thu Dec 21 00:26:10 DST 2023

  System load:    0.52      Processes:             7
  Usage of /home: unknown   Users logged in:       0
  Memory usage:   65%       IPv4 address for eth0: 10.100.176.132
  Swap usage:     5%

189 updates can be applied immediately.
139 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable

The list of available updates is more than a week old.
To check for new updates run: sudo apt update

This message is shown once a day. To disable it please create the /home/maffadmin/.hushlogin file.

maffadmin@comdevjpevim01:~$ whoami ; date ; uname -n
maffadmin
Thu Dec 21 00:26:25 DST 2023
comdevjpevim01
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$ kubectl config use-context gisdevjpeaks01
Switched to context "gisdevjpeaks01".
maffadmin@comdevjpevim01:~$ kubectl get pod -A
NAMESPACE   NAME   READY   STATUS   RESTARTS   AGE
falco   falco-lq494   0/1   CrashLoopBackOff   155 (3m5s ago)   12h
falco   falco-lws9v   0/1   CrashLoopBackOff   155 (78s ago)   12h
falco   falco-nc7fx   0/1   CrashLoopBackOff   155 (112s ago)   12h
falco   falco-wrsmz   0/1   CrashLoopBackOff   155 (2m ago)   12h
gatekeeper-system   gatekeeper-audit-689f4bc4fd-2xt84   1/1   Running   0   21d
gatekeeper-system   gatekeeper-controller-7cb9856964-gcklz   1/1   Running   0   21d
gatekeeper-system   gatekeeper-controller-7cb9856964-t8svd   1/1   Running   0   21d
ingress-basic   api-accounts-69447575d-kljl5   1/1   Running   0   21d
ingress-basic   api-assets-7d569cd64b-rv4pg   1/1   Running   0   21d
ingress-basic   api-fonts-59456444cb-mjq5q   1/1   Running   0   21d
ingress-basic   api-geocoder-5d77b8f94f-nhzfb   1/1   Running   0   21d
ingress-basic   api-gl-5dcf5d64dc-srmwp   1/1   Running   0   21d
ingress-basic   api-rastertiles-7489fbd85c-pp5js   1/1   Running   0   21d
ingress-basic   api-styles-7c6f95c99c-k5d8g   1/1   Running   0   21d
ingress-basic   api-tilesets-f9bbc46db-cblb8   1/1   Running   0   21d
ingress-basic   api-vectortiles-8bf89849b-z4z7n   1/1   Running   0   21d
ingress-basic   atlas-backend-6b66c56fd8-qmppp   2/2   Running   0   21d
ingress-basic   atlas-core-6b68bd888f-cf6js   1/1   Running   0   21d
ingress-basic   atlas-memcached-67fcfb6b7f-p6wjd   1/1   Running   0   21d
ingress-basic   atlas-minio-bb9788795-n2xhj   1/1   Running   0   21d
ingress-basic   atlas-navigation-74b7ff9cb7-mbkzg   1/2   CrashLoopBackOff   10119 (2m4s ago)   21d
ingress-basic   atlas-redis-b967c8f75-xskvx   1/1   Running   0   21d
ingress-basic   atlas-router-697cb8c596-j874h   1/1   Running   0   21d
ingress-basic   nginx-ingress-ingress-nginx-controller-7b7f6f5488-qknh6   1/1   Running   0   21d
ingress-basic   nginx-ingress-ingress-nginx-controller-7b7f6f5488-thcb2   1/1   Running   0   21d
kube-system   ama-logs-nhn59   2/2   Running   0   21d
kube-system   ama-logs-r7c2b   2/2   Running   0   21d
kube-system   ama-logs-rs-db7769fb-xzd5q   1/1   Running   0   21d
kube-system   ama-logs-sb9c4   2/2   Running   0   21d
kube-system   ama-logs-vncln   2/2   Running   0   21d
kube-system   azure-ip-masq-agent-5smf5   1/1   Running   0   21d
kube-system   azure-ip-masq-agent-m22w2   1/1   Running   0   21d
kube-system   azure-ip-masq-agent-xssxp   1/1   Running   0   21d
kube-system   azure-ip-masq-agent-xvp6h   1/1   Running   0   21d
kube-system   azure-policy-5487cfcd96-tcpzx   1/1   Running   0   21d
kube-system   azure-policy-webhook-b8f4bc669-dwshc   1/1   Running   0   21d
kube-system   cloud-node-manager-cpzl5   1/1   Running   0   21d
kube-system   cloud-node-manager-g9gqd   1/1   Running   0   21d
kube-system   cloud-node-manager-j48st   1/1   Running   0   21d
kube-system   cloud-node-manager-ln9xz   1/1   Running   0   21d
kube-system   coredns-785fcf7bdd-cs65h   1/1   Running   0   21d
kube-system   coredns-785fcf7bdd-f6rmt   1/1   Running   0   21d
kube-system   coredns-autoscaler-686794c454-62x2d   1/1   Running   0   21d
kube-system   csi-azuredisk-node-5tbrk   3/3   Running   0   21d
kube-system   csi-azuredisk-node-fp2r2   3/3   Running   0   21d
kube-system   csi-azuredisk-node-psrtr   3/3   Running   0   21d
kube-system   csi-azuredisk-node-qzzqc   3/3   Running   0   21d
kube-system   csi-azurefile-node-446nl   3/3   Running   0   21d
kube-system   csi-azurefile-node-kdpr2   3/3   Running   0   21d
kube-system   csi-azurefile-node-pz7fc   3/3   Running   0   21d
kube-system   csi-azurefile-node-z6slx   3/3   Running   0   21d
kube-system   konnectivity-agent-7b77d967d7-48r9t   1/1   Running   0   21d
kube-system   konnectivity-agent-7b77d967d7-bggrh   1/1   Running   0   21d
kube-system   kube-proxy-62pj4   1/1   Running   0   21d
kube-system   kube-proxy-dp4qb   1/1   Running   0   21d
kube-system   kube-proxy-s5mjn   1/1   Running   0   21d
kube-system   kube-proxy-sc7qn   1/1   Running   0   21d
kube-system   metrics-server-85756fd984-kxjtf   2/2   Running   1 (19d ago)   21d
kube-system   metrics-server-85756fd984-zh6wv   2/2   Running   0   21d
velero   velero-66d9b97945-zznq7   1/1   Running   0   21d
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$ kubectl exec -it falco-lq494 -n falco
error: you must specify at least one command for the container
maffadmin@comdevjpevim01:~$ kubectl exec -it falco-lq494
error: you must specify at least one command for the container
maffadmin@comdevjpevim01:~$ kubectl exec --stdin --tty falco-lq494 -- /bin/bash
Error from server (NotFound): pods "falco-lq494" not found
maffadmin@comdevjpevim01:~$ kubectl exec --stdin --tty falco-wrsmz -- /bin/bash
Error from server (NotFound): pods "falco-wrsmz" not found
maffadmin@comdevjpevim01:~$ kubectl exec --stdin --tty falco -- /bin/bash
Error from server (NotFound): pods "falco" not found
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
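Note: the three "not found" errors above come from omitting the namespace; without -n falco, kubectl exec looks for the pod in the default namespace. Even with the namespace set, attaching to a container that is stuck in CrashLoopBackOff usually fails because the container is not running at that moment. A minimal sketch of the corrected form, assuming /bin/bash exists in the Falco image:

    # correct namespace; only succeeds while the container is actually running
    kubectl exec -it falco-lq494 -n falco -- /bin/bash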
maffadmin@comdevjpevim01:~$ history | grep nods
 2008  history | grep nods
maffadmin@comdevjpevim01:~$ history | grep noods
 2009  history | grep noods
maffadmin@comdevjpevim01:~$ history | grep kubectl
 1011  kubectl describe node aks-nodepool1-16552246-vmss000002
 1020  kubectl get nodes
 1021  kubectl debug node/aks-nodepool1-16552246-vmss000000 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
 1026  kubectl delete -n velero backup.velero.io/velero-atlas-backup01-20220204013052
 1028  kubectl delete -n velero backup.velero.io/velero-atlas-backup01-20211127170010
 1029  kubectl delete -n velero backup.velero.io/velero-atlas-backup01-20211116170052
 1030  kubectl delete -n velero backup.velero.io/velero-atlas-backup01-20211231170031
 1031  kubectl delete -n velero backup.velero.io/velero-atlas-backup01-20211114170049
 1033  kubectl get nodes
 1034  kubectl debug node/aks-nodepool1-16552246-vmss000000 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
 1039  kubectl delete -n velero backup.velero.io/velero-atlas-backup01-20220218181147
 1048  kubectl config use-context gisdevjpeaks01
 1073  kubectl config use-context gisdevjpeaks01
 1074  kubectl get pvc pvc-azuredisk
 1091  kubectl version
 1092  kubectl get pods -n ingress-basic
 1093  kubectl describe pod nginx-ingress-ingress-nginx-controller-788557bd7f-nqgps
 1094  kubectl describe pod nginx-ingress-ingress-nginx-controller-788557bd7f-nqgps -n ingress-basic
 1095  kubectl get deployment -n ingress-basic
 1096  kubectl descirbe deployment nginx-ingress-ingress-nginx-controller -n ingress-basci
 1097  kubectl describee deployment nginx-ingress-ingress-nginx-controller -n ingress-basic
 1098  kubectl describe deployment nginx-ingress-ingress-nginx-controller -n ingress-basic
 1116  kubectl get pod
 1117  kubectl get pod -n ingress-basic
 1118  kubectl get pdb -A
 1120  kubectl get pdb -A
 1121  kubectl get pdb -A -o yaml> backup.yaml
 1123  kubectl delete --all pdb -A
 1124  kubectl get pdb -A
 1129  kubectl apply -f backup.yaml
 1130  kubectl get pdb -A
 1132  kubectl version --client
 1205  kubectl config current-context
 1206  kubectl describe nodes
 1207  kubectl get pods -A -o wide
 1208  kubectl describe pod -n ingress-basic nginx-ingress-ingress-nginx-controller-7b7f6f5488-fjn6l
 1209  kubectl describe pod -n ingress-basic nginx-ingress-ingress-nginx-controller-7b7f6f5488-zhg8n
 1211  kubectl config current-context
 1212  kubectl get pdb
 1213  kubectl get pdb -A
 1214  kubectl get pdb -A -o yaml> backup.yaml
 1216  kubectl delete --all pdb -A
 1217  kubectl get pdb -A
 1219  kubectl apply -f backup.yaml
 1220  kubectl get pdb -A
 1227  kubectl config use-contect gisdevjpeaks01
 1228  kubectl config use-context gisdevjpeaks01
 1230  kubectl config use-context gisdevjpeaks01
 1231  kubectl get deployment -n velero
 1232  kubectl get deployment velero -n velero -o yaml
 1237  kubectl get deployment -n velero
 1238  kubectl config use-context gisstgjpeaks01
 1239  kubectl config use-context gisdevjpeaks01
 1240  kubectl get deployment -n velero
 1241  kubectl config use-context gisdevjpeaks01
 1242  kubectl get deployment -n velero
 1243  kubectl get pods -n velero
 1244  kubectl get nodes -o wide
 1245  kubectl debug node/aks-nodepool1-16552246-vmss000001 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n
 1246  kubectl debug node/aks-nodepool1-16552246-vmss000001 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1252  kubectl debug node/aks-nodepool1-16552246-vmss000001 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1253  kubectl config use-context gisdevjpeaks01
 1254  kubectl debug node/aks-nodepool1-16552246-vmss000001 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1256  kubectl debug node/aks-nodepool1-16552246-vmss000001 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1257  kubectl debug node/aks-nodepool1-16552246-vmss000002 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1258  kubectl get nodes -o wide
 1259  kubectl debug node/aks-nodepool1-16552246-vmss000007 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1260  kubectl config use-context gisdevjpeaks01
 1261  kubectl debug node/aks-nodepool1-16552246-vmss000007 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1262  kubectl debug node/aks-nodepool1-16552246-vmss000009 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1263  kubectl debug node/aks-nodepool1-16552246-vmss000001 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1264  kubectl get nodes -o wide
 1265  kubectl debug node/aks-nodepool1-16552246-vmss000001 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1266  kubectl config use-context gisdevjpeaks01
 1267  kubectl get nodes -o wide
 1268  kubectl debug node/aks-nodepool1-16552246-vmss000001 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1269  kubectl debug node/aks-nodepool1-16552246-vmss000007 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1277  kubectl debug node/aks-nodepool1-16552246-vmss000009 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1278  kubectl debug node/aks-nodepool1-16552246-vmss000009 -it -n default
 1280  kubectl debug node/aks-nodepool1-16552246-vmss000001 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1281  kubectl get nodes -o wide
 1282  kubectl debug node/aks-nodepool1-16552246-vmss000001 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1283  kubectl config use-context gisdevjpeaks01
 1284  kubectl debug node/aks-nodepool1-16552246-vmss000001 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1285  kubectl debug node/aks-nodepool1-16552246-vmss000007 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1286  kubectl config use-context gisdevjpeaks01
 1287  kubectl debug node/aks-nodepool1-16552246-vmss000007 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1288  kubectl debug node/aks-nodepool1-16552246-vmss000001 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 -n default
 1290  kubectl version
 1294  kubectl config current-context
 1295  kubectl get pdb -A
 1296  kubectl get pdb -A -o yaml> backup.yaml
 1298  kubectl get pdb -A -o yaml> backup.yaml
 1300  kubectl delete --all pdb -A
 1301  kubectl get pdb -A
 1303  kubectl config current-context
 1304  kubectl apply -f backup.yaml
 1305  kubectl get pdb -A
 1307  kubectl version
 1309  kubectl version
 1311  kubectl logs falco-9zjtt -n falco --timestamps=true
 1313  kubectl logs falco-9zjtt -n falco --timestamps=true -p
 1315  kubectl describe pods falco-9zjtt -n falco
 1317  kubectl get pod -n falco falco-9zjtt -o yaml
 1321  kubectl config current-context
 1322  kubectl get pdb -A
 1323  kubectl get pdb -A -o yaml> backup.yaml
 1325  kubectl delete --all pdb -A
 1326  kubectl get pdb -A
 1327  kubectl config current-context
 1328  kubectl apply -f backup.yaml
 1329  kubectl get pdb -A
 1330  kubectl config use-context gisdebjpeaks01
 1331  kubectl config use-context gisdevjpeaks01
 1332  kubectl get pod -A
 1333  kubectl config use-context gisdevjpeaks01
 1334  kubectl get pod -A
 1335  kubectl exec -it api-tilesets-5484f57f7f-fr6xk -n ingress-basic -- /bin/sh
 1336  kubectl exec -it atlas-backend-8476dbc5d4-wjfvb -n ingress-basic -- /bin/sh
 1341  kubectl get Pod -A
 1342  kubectl cp ./atlas-user.emaffmap-mapbox.mbtiles atlas-backend-8476dbc5d4-wjfvb:/usr/local/src/atlas-ddb/mbtiles/atlas-user.emaffmap-mapbox.mbtiles
 1343  kubectl get pods
 1344  kubectl get pod
 1345  kubectl get pod -A
 1346  kubectl cp ./atlas-user.emaffmap-mapbox.mbtiles atlas-backend-8476dbc5d4-wjfvb:/usr/local/src/atlas-ddb/mbtiles/atlas-user.emaffmap-mapbox.mbtiles -n ingress-basic
 1347  kubectl cp ./atlas-user.emaffmap-mapbox.mbtiles ingress-basic/atlas-backend-8476dbc5d4-wjfvb:/usr/local/src/atlas-ddb/mbtiles/atlas-user.emaffmap-mapbox.mbtiles
 1352  kubectl cp ./atlas-user.emaffmap-mapbox.mbtiles ingress-basic/atlas-backend-8476dbc5d4-wjfvb:/usr/local/src/atlas-ddb/mbtiles/atlas-user.emaffmap-mapbox.mbtiles
 1353  kubectl cp ./upto16.atlas-user.emaffmap-mapbox.mbtiles ingress-basic/atlas-backend-8476dbc5d4-wjfvb:/usr/local/src/atlas-ddb/mbtiles/upto16.atlas-user.emaffmap-mapbox.mbtiles
 1354  kubectl exec -it atlas-backend-8476dbc5d4-wjfvb -n ingress-basic -- /bin/sh
 1356  kubectl use-context gisdevjpeaks01
 1357  kubectl config use-context gisdevjpeaks01
 1358  kubectl get Pod -A
 1360  kubectl cp atlas-user.emaffmap-mapbox.mbtiles atlas-backend-8476dbc5d4-wjfvb:/usr/local/src/atlas-ddb/mbtiles -c atlas-ddb
 1361  kubectl cp atlas-user.emaffmap-mapbox.mbtiles ingress-basic:atlas-backend-8476dbc5d4-wjfvb:/usr/local/src/atlas-ddb/mbtiles -c atlas-ddb
 1362  kubectl cp atlas-user.emaffmap-mapbox.mbtiles ingress-basic:atlas-backend-8476dbc5d4-wjfvb:/usr/local/atlas-ddb/mbtiles -c atlas-ddb
 1363  kubectl cp -n ingress-basic atlas-user.emaffmap-mapbox.mbtiles atlas-backend-8476dbc5d4-wjfvb:/usr/local/atlas-ddb/mbtiles -c atlas-ddb
 1365  kubectl cp atlas-user.emaffmap-mapbox.mbtiles ingress-basic/atlas-backend-8476dbc5d4-wjfvb:/usr/local/atlas-ddb/mbtiles -c atlas-ddb
 1366  kubectl get Pod -A
 1367  kubectl exec -it atlas-backend-8476dbc5d4-wjfvb -n ingress-basic -c atlas-ddb -- /bin/sh
 1368  kubectl cp atlas-user.emaffmap-mapbox.mbtiles ingress-basic/atlas-backend-8476dbc5d4-wjfvb:/usr/local/src/atlas-ddb/mbtiles -c atlas-ddb
 1369  kubectl cp upto16.atlas-user.emaffmap-mapbox.mbtiles ingress-basic/atlas-backend-8476dbc5d4-wjfvb:/usr/local/src/atlas-ddb/mbtiles -c atlas-ddb
 1370  kubectl exec -it atlas-backend-8476dbc5d4-wjfvb -n ingress-basic -c atlas-ddb -- /bin/sh
 1371  kubectl config use-context gisdevjpeaks01
 1373  kubectl get Pod -A
 1374  kubectl exec kubectl exec -it atlas-backend-8476dbc5d4-wjfvb -n ingress-basic -c atlas-ddb -- /bin/sh
 1375  kubectl exec -it atlas-backend-8476dbc5d4-wjfvb -n ingress-basic -c atlas-ddb -- /bin/sh
 1376  kubectl config use-context gisdevjpeaks
 1378  kubectl config use-context gisdevjpeaks01
 1379  kubectl get Pod -A
 1380  kubectl cp atlas-user.nagasaki-converted-by-mapbox.mbtiles ingress-basic/atlas-backend-8476dbc5d4-wjfvb:/usr/local/src/atlas-ddb/mbtiles -c atlas-ddb
 1382  kubectl cp atlas-user.nagasaki-converted-by-mapbox.mbtiles ingress-basic/atlas-backend-8476dbc5d4-wjfvb:/usr/local/src/atlas-ddb/mbtiles -c atlas-ddb
 1383  kubectl exec -it atlas-backend-8476dbc5d4-wjfvb -n ingress-basic -c atlas-ddb -- /bin/sh
 1384  kubectl config use-context gisdevjpeaks01
 1385  kubectl config use-context gisdevjpeaks-1
 1386  kubectl config use-context gisdevjpeaks01
 1390  kubectl get Pod -A
 1391  kubectl cp atlas-user.nagasaki-converted-by-mapbox.mbtiles ingress-basic/atlas-backend-8476dbc5d4-wjfvb:/usr/local/src/atlas-ddb/mbtiles -c atlas-ddb
 1392  kubectl exe -it atlas-backend-8476dbc5d4-wjfvb -n ingress-basic -c atlas-ddb -- /bin/sh
 1393  kubectl exec -it atlas-backend-8476dbc5d4-wjfvb -n ingress-basic -c atlas-ddb -- /bin/sh
 1396  kubectl config current-context
 1397  kubectl get pdb -A
 1398  kubectl get pdb -A -o yaml> backup.yaml
 1400  kubectl delete --all pdb -A
 1401  kubectl get pdb -A
 1402  kubectl config current-context
 1403  kubectl apply -f backup.yaml
 1404  kubectl get pdb -A
 1410  kubectl config use-context gisdevjpeaks01
 1411  kubectl get Pod -A
 1413  kubectl cp atlas-user.kokusaikogyo-50327069_002.mbtiles ingress-basic/atlas-backend-8476dbc5d4-8qk27:/usr/local/src/atlas-ddb/mbtiles -c atlas-ddb
 1414  kubectl cp atlas-user.kokusaikogyo-51327791_002.mbtiles ingress-basic/atlas-backend-8476dbc5d4-8qk27:/usr/local/src/atlas-ddb/mbtiles -c atlas-ddb
 1415  kubectl cp atlas-user.kokusaikogyo-53371519_002.mbtiles ingress-basic/atlas-backend-8476dbc5d4-8qk27:/usr/local/src/atlas-ddb/mbtiles -c atlas-ddb
 1416  kubectl config use-context gisdevjpeaks01
 1423  kubectl config current-context
 1424  kubectl get pdb -A
 1425  kubectl get pdb -A -o yaml> backup.yaml
 1427  kubectl delete --all pdb -A
 1428  kubectl get pdb -A
 1429  kubectl config current-context
 1430  kubectl apply -f backup.yaml
 1431  kubectl get pdb -A
 1433  kubectl version
 1435  kubectl version
 1437  kubectl config use-context gisdevjpeaks01
 1438  kubectl get pod -A
 1439  kubectl exec -it atlas-backend-76c7d6546d-dtjnx -n ingress-basic -c atlas-ddb -- /bin/sh
 1440  kubectl config use-context gisdevjpeaks01
 1441  kubectl get pod -A
 1442  kubectl exec -it atlas-backend-76c7d6546d-dtjnx -n ingress-basic -c atlas-ddb -- /bin/sh
 1443  kubectl config current-context
 1444  kubectl get pdb -A
 1445  kubectl get pdb -A -o yaml> backup.yaml
 1447  kubectl delete --all pdb -A
 1448  kubectl get pdb -A
 1449  kubectl config current-context
 1450  kubectl apply -f backup.yaml
 1451  kubectl get pdb -A
 1456  kubectl config use-context gisdevjpeaks01
 1457  kubectl get pod -A
 1458  kubectl exec -it atlas-backend-76c7d6546d-x9k4t -n ingress-basic -c atlas-ddb -- /bin/sh
 1459  kubectl config current-context
 1460  kubectl get pdb -A
 1461  kubectl get pdb -A -o yaml> backup.yaml
 1463  kubectl delete --all pdb -A
 1464  kubectl get pdb -A
 1465  kubectl config current-context
 1466  kubectl apply -f backup.yaml
 1467  kubectl get pdb -A
 1471  kubectl version
 1476  kubectl version
 1484  kubectl version
 1508  kubectl version
 1510  kubectl version
 1514  kubectl version
 1519  kubectl version
 1522  kubectl get node
 1528  kubectl config current-context
 1529  kubectl get pdb -A
 1530  kubectl get pdb -A -o yaml> backup.yaml
 1532  kubectl delete --all pdb -A
 1533  kubectl get pdb -A
 1534  kubectl get secret --selector=owner=helm
 1535  kubectl version --short | grep -i server
 1539  kubectl config current-context
 1542  kubectl config use-context <コンテキスト名>
 1544  kubectl config current-context
 1547  kubectl apply -f backup.yaml
 1548  kubectl get pdb -A
 1549  kubectl config current-context
 1550  kubectl apply -f backup.yaml
 1551  kubectl get pdb -A
 1554  kubectl get nodes -o wide > get_nodes.txt
 1555  kubectl describe nodes > describe_nodes.txt
 1556  kubectl get pod -A -o wide > get_pods.txt
 1557  kubectl describe pods -A > describe_pods.txt
 1558  kubectl logs falco-ntd8r -n kube-system --timestamps=true > falco-ntd8r.log
 1559  kubectl logs falco-ntd8r -n falco --timestamps=true > falco-ntd8r.log
 1560  kubectl logs falco-ntd8r -n falco --timestamps=true -p > falco-ntd8r_p.log
 1561  kubectl logs falco-lb8ct -n falco --timestamps=true > falco-lb8ct.log
 1562  kubectl logs falco-lb8ct -n falco --timestamps=true -p > falco-lb8ct_p.log
 1563  kubectl logs falco-5nvc6 -n falco --timestamps=true > falco-5nvc6.log
 1564  kubectl logs falco-5nvc6 -n falco --timestamps=true -p > falco-5nvc6_p.log
 1565  kubectl logs atlas-navigation-757f96cd9f-29b9x -n ingress-basic --timestamps=true > atlas-navigation-757f96cd9f-29b9x.log
 1566  kubectl logs atlas-navigation-757f96cd9f-29b9x -n ingress-basic --timestamps=true -p > atlas-navigation-757f96cd9f-29b9x_p.log
 1571  kubectl version
 1574  kubectl get node
 1590  kubectl config use-context gisdevjpeaks01
 1591  kubectl get pod -A
 1592  kubectl get pod -n falcp
 1593  kubectl get pod -n falco
 1594  kubectl get deployment -n falco
 1595  kubectl get deployment -A
 1596  kubectl get pod -n falco
 1609  kubectl config use-context gisdevjpeaks01
 1610  kubectl get pod -A
 1611  kubectl get pod -A -o wide
 1612  kubectl describe pods -A
 1613  kubectl describe pods -n atlas-backend
 1621  kubectl describe pods -n ingress-basic >pod.txt
 1630  kubectl config use-context gisdevjpeaks01
 1648  kubectl get pod -A
 1649  kubectl exec -it atlas-backend-6b66c56fd8-c6qpg -n ingress-basic -c atlas-ddb -- /bin/sh
 1651  kubectl config use-context gisdevjpeaks01
 1652  kubectl get nodes -o wide > get_nodes.txt
 1653  kubectl describe nodes > describe_nodes.txt
 1654  kubectl get pod -A -o wide > get_pods.txt
 1655  kubectl describe pods -A > describe_pods.txt
 1658  history | grep kubectl
 1659  kubectl config current-context
 1661  kubectl config use-context gisdevjpeaks01
 1664  kubectl config current-context
 1665  kubectl get pdb -A
 1666  kubectl get pdb -A -o yaml> backup.yaml
 1668  kubectl delete --all pdb -A
 1669  kubectl get pdb -A
 1670  kubectl config current-context
 1671  kubectl apply -f backup.yaml
 1672  kubectl get pdb -A
 1699  kubectl config current-context
 1700  kubectl config use-context gisdevjpeaks01
 1714  kubectl debug node/nodepool1 -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
 1716  kubectl debug node/aks-nodepool1-16552246-vmss00000i -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
 1717  kubectl config use-context gisdevjpeaks01
 1718  kubectl get secret/velero -n velero
 1719  kubectl get secret/crdential-evelero -n velero
 1720  kubectl get secret/crdential-velero -n velero
 1721  kubectl get deployment/velero -n velero
 1722  kubectl get secret/velero-credentials -n velero
 1736  history kubectl
 1737  history| grep kubectl
 1739  kubectl config use-context gisdevjpeaks01
 1740  kubectl config current-context
 1742  kubectl get nodes -o wide > get_nodes.txt
 1743  kubectl describe nodes > describe_nodes.txt
 1744  kubectl get CSINode -o yaml > get_csinode.txt
 1745  kubectl get nodes -o wide
 1746  kubectl describe nodes
 1747  kubectl get CSINode -o yaml
 1748  kubectl describe nodes | grep atlas-
 1750  kubectl get pvc -o wide -A > get_pvc.txt
 1751  kubectl describe pvc -A > describe_pvc.txt
 1753  kubectl get pv -o wide -A > get_pv.txt
 1754  kubectl describe pv -A > describe_pv.txt
 1756  kubectl get sc -o yaml -A > get_sc.txt
 1757  kubectl describe sc > describe_sc.txt
 1759  kubectl get pod -A -o wide > get_pods.txt
 1760  kubectl describe pods -A > describe_pods.txt
 1761  kubectl describe nodes | grep atlas-
 1762  kubectl logs atlas-backend-6b66c56fd8-c6qpg -n ingress-basic --timestamps=true > atlas-backend-6b66c56fd8-c6qpg.log
 1763  kubectl logs atlas-backend-6b66c56fd8-c6qpg -n ingress-basic --timestamps=true -p > atlas-backend-6b66c56fd8-c6qpg_p.log
 1764  kubectl logs atlas-backend-6b66c56fd8-c6qpg -n ingress-basic --timestamps=true > atlas-backend-6b66c56fd8-c6qpg.log
 1765  kubectl logs atlas-backend-6b66c56fd8-c6qpg -n ingress-basic --timestamps=true -p > atlas-backend-6b66c56fd8-c6qpg_p.log
 1769  kubectl config use-context gisdevjpeaks01
 1771  kubectl cp node-debugger-aks-nodepool1-16552246-vmss00000i-5lqgm:host/var/log/azure logs01
 1777  history | grep kubectl
 1778  kubectl config use-context gisdevjpeaks01
 1779  kubectl config current-context
 1780  kubectl get nodes -o wide
 1781  kubectl debug node/aks-nodepool1-16552246-vmss00000i -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
 1800  kubectl get pod
 1801  history | grep kubectl
 1802  kubectl get pod
 1803  kubectl cp node-debugger-aks-nodepool1-16552246-vmss00000j-rllgb:host/var/log/azure logj01
 1814  kubectl get pod
 1815  kubectl cp node-debugger-aks-nodepool1-16552246-vmss00000k-jcsqq:host/var/log/azure logj01
 1816  kubectl cp node-debugger-aks-nodepool1-16552246-vmss00000j-rllgb:host/var/log/azure logj01
 1817  kubectl cp node-debugger-aks-nodepool1-16552246-vmss00000k-jcsqq:host/var/log/azure logk01
 1829  kubectl get pod
 1830  kubectl cp node-debugger-aks-nodepool1-16552246-vmss00000j-gv6nr:host/var/log/azure logj01
 1837  history | grep kubectl
 1838  kubectl config use-context gisdevjpeaks01
 1839  kubectl config current-context
 1840  kubectl get nodes -o wide
 1841  kubectl debug node/aks-nodepool1-16552246-vmss00000j -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
 1842  kubectl get nodes -o wide
 1843  kubectl debug node/aks-nodepool1-16552246-vmss00000k -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
 1844  kubectl debug node/aks-nodepool1-16552246-vmss00000j -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
 1846  history | grep kubectl
 1847  kubectl config use-context gisdevjpeaks01
 1852  kubectl config use-context gisdevjpeaks01
 1857  history | grep kubectl
 1929  kubectl get nodes
 1930  kubectl get pod -A
 1931  kubectl get nodes
 1933  kubectl get pod -A
 1935  kubectl get nodes
 1941  kubectl get nodes -o wide
 1942  kubectl get pod -o wide -A
 1943  kubectl get nodes -o wide
 1944  kubectl get pod -o wide -A
 1945  kubectl get nodes
 1946  kubectl cordon aks-nodepool1-16552246-vmss00000i
 1947  kubectl drain aks-nodepool1-16552246-vmss00000i --ignore-daemonsets --delete-emptydir-data
 1948  kubectl get pod -o wide -A
 1949  kubectl get pod -o wide
 1950  kubectl drain aks-nodepool1-16552246-vmss00000i --ignore-daemonsets --delete-emptydir-data
 1951  kubectl uncordon aks-nodepool1-16552246-vmss00000i
 1952  kubectl drain aks-nodepool1-16552246-vmss00000i --ignore-daemonsets --delete-emptydir-data
 1953  kubectl delete pod node-debugger-aks-nodepool1-16552246-vmss00000i-5lqgm
 1954  kubectl get pod -o wide
 1955  kubectl delete pod node-debugger-aks-nodepool1-16552246-vmss00000i-mq2vs
 1956  kubectl get pod -o wide
 1957  kubectl delete pod node-debugger-aks-nodepool1-16552246-vmss00000i-zh72f
 1958  kubectl get pod -o wide
 1959  kubectl cordon aks-nodepool1-16552246-vmss00000i
 1960  kubectl drain aks-nodepool1-16552246-vmss00000i --ignore-daemonsets --delete-emptydir-data
 1961  kubectl get pod -o wide -A
 1962  kubectl get nodes -o wide
 1963  kubectl delete node aks-nodepool1-16552246-vmss00000i
 1966  kubectl use-context gisdevjpeaks01
 1967  kubectl config use-context gisdevjpeaks01
 1968  kubectl get pod -A
 1972  kubectl config use-context gisdevjpeaks01
 1974  kubectl get pod -A
 1991  history | grep kubectl
 1992  kubectl config use-context gisdevjpeaks01
 1993  kubectl get pod -A
 1994  kubectl logs falco-54sgs
 1996  kubectl config use-context gisdevjpeaks01
 1997  kubectl get pod -A
 2001  kubectl config use-context gisdevjpeaks01
 2002  kubectl get pod -A
 2003  kubectl exec -it falco-lq494 -n falco
 2004  kubectl exec -it falco-lq494
 2005  kubectl exec --stdin --tty falco-lq494 -- /bin/bash
 2006  kubectl exec --stdin --tty falco-wrsmz -- /bin/bash
 2007  kubectl exec --stdin --tty falco -- /bin/bash
 2010  history | grep kubectl
maffadmin@comdevjpevim01:~$ kubectl get nodes -o wide
NAME   STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION   CONTAINER-RUNTIME
aks-nodepool1-16552246-vmss00000u   Ready   agent   21d   v1.25.6   10.100.179.10   <none>   Ubuntu 22.04.2 LTS   5.15.0-1041-azure   containerd://1.7.1+azure-1
aks-nodepool1-16552246-vmss00000v   Ready   agent   21d   v1.25.6   10.100.179.18   <none>   Ubuntu 22.04.2 LTS   5.15.0-1041-azure   containerd://1.7.1+azure-1
aks-nodepool1-16552246-vmss00000w   Ready   agent   21d   v1.25.6   10.100.179.46   <none>   Ubuntu 22.04.2 LTS   5.15.0-1041-azure   containerd://1.7.1+azure-1
aks-nodepool1-16552246-vmss00000x   Ready   agent   21d   v1.25.6   10.100.179.72   <none>   Ubuntu 22.04.2 LTS   5.15.0-1041-azure   containerd://1.7.1+azure-1
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$ kubectl get pod -o wide
No resources found in default namespace.
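Note: for pods stuck in CrashLoopBackOff, the logs of the previously crashed container and the pod events normally show the failure reason, and neither has been captured in this session yet. A short sketch, using the falco pod names from the listings in this session:

    # logs from the last failed run of the container
    kubectl logs falco-lq494 -n falco --previous --timestamps=true
    # last state, exit code and events for the pod
    kubectl describe pod falco-lq494 -n falco
    # recent events in the namespace, oldest first
    kubectl get events -n falco --sort-by=.lastTimestamp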
maffadmin@comdevjpevim01:~$ kubectl get pod -o wide -n falco
NAME   READY   STATUS   RESTARTS   AGE   IP   NODE   NOMINATED NODE   READINESS GATES
falco-lq494   0/1   Error   157 (5m20s ago)   13h   10.100.179.51   aks-nodepool1-16552246-vmss00000w   <none>   <none>
falco-lws9v   0/1   CrashLoopBackOff   156 (3m34s ago)   13h   10.100.179.19   aks-nodepool1-16552246-vmss00000v   <none>   <none>
falco-nc7fx   0/1   CrashLoopBackOff   156 (4m9s ago)   13h   10.100.179.77   aks-nodepool1-16552246-vmss00000x   <none>   <none>
falco-wrsmz   0/1   CrashLoopBackOff   156 (4m22s ago)   13h   10.100.179.103   aks-nodepool1-16552246-vmss00000u   <none>   <none>
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$ kubectl exec -it falco-lq494 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "falco-lq494" not found
maffadmin@comdevjpevim01:~$ kubectl exec -it falco-lws9v sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "falco-lws9v" not found
maffadmin@comdevjpevim01:~$ kubectl exec -it falco-nc7fx sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "falco-nc7fx" not found
maffadmin@comdevjpevim01:~$ kubectl get deployment
No resources found in default namespace.
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$ kubectl get deployment -n falco
No resources found in falco namespace.
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$ kubectl run falco --image=falco
pod/falco created
maffadmin@comdevjpevim01:~$ kubectl get pod -o wide -n falco
NAME   READY   STATUS   RESTARTS   AGE   IP   NODE   NOMINATED NODE   READINESS GATES
falco-lq494   0/1   CrashLoopBackOff   158 (57s ago)   13h   10.100.179.51   aks-nodepool1-16552246-vmss00000w   <none>   <none>
falco-lws9v   0/1   CrashLoopBackOff   157 (4m28s ago)   13h   10.100.179.19   aks-nodepool1-16552246-vmss00000v   <none>   <none>
falco-nc7fx   0/1   CrashLoopBackOff   157 (5m2s ago)   13h   10.100.179.77   aks-nodepool1-16552246-vmss00000x   <none>   <none>
falco-wrsmz   0/1   Error   158 (5m18s ago)   13h   10.100.179.103   aks-nodepool1-16552246-vmss00000u   <none>   <none>
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$ kubectl get pod --show-labels -n falco
NAME   READY   STATUS   RESTARTS   AGE   LABELS
falco-lq494   0/1   CrashLoopBackOff   158 (90s ago)   13h   app=falco,controller-revision-hash=6fd5696d58,pod-template-generation=2,role=security
falco-lws9v   0/1   CrashLoopBackOff   157 (5m1s ago)   13h   app=falco,controller-revision-hash=6fd5696d58,pod-template-generation=2,role=security
falco-nc7fx   0/1   CrashLoopBackOff   158 (28s ago)   13h   app=falco,controller-revision-hash=6fd5696d58,pod-template-generation=2,role=security
falco-wrsmz   0/1   CrashLoopBackOff   158 (40s ago)   13h   app=falco,controller-revision-hash=6fd5696d58,pod-template-generation=2,role=security
maffadmin@comdevjpevim01:~$ kubectl get pod --all-namespaces
NAMESPACE   NAME   READY   STATUS   RESTARTS   AGE
default   falco   0/1   ImagePullBackOff   0   2m33s
falco   falco-lq494   0/1   CrashLoopBackOff   158 (3m18s ago)   13h
falco   falco-lws9v   0/1   CrashLoopBackOff   158 (104s ago)   13h
falco   falco-nc7fx   0/1   CrashLoopBackOff   158 (2m16s ago)   13h
falco   falco-wrsmz   0/1   CrashLoopBackOff   158 (2m28s ago)   13h
gatekeeper-system   gatekeeper-audit-689f4bc4fd-2xt84   1/1   Running   0   21d
gatekeeper-system   gatekeeper-controller-7cb9856964-gcklz   1/1   Running   0   21d
gatekeeper-system   gatekeeper-controller-7cb9856964-t8svd   1/1   Running   0   21d
ingress-basic   api-accounts-69447575d-kljl5   1/1   Running   0   21d
ingress-basic   api-assets-7d569cd64b-rv4pg   1/1   Running   0   21d
ingress-basic   api-fonts-59456444cb-mjq5q   1/1   Running   0   21d
ingress-basic   api-geocoder-5d77b8f94f-nhzfb   1/1   Running   0   21d
ingress-basic   api-gl-5dcf5d64dc-srmwp   1/1   Running   0   21d
ingress-basic   api-rastertiles-7489fbd85c-pp5js   1/1   Running   0   21d
ingress-basic   api-styles-7c6f95c99c-k5d8g   1/1   Running   0   21d
ingress-basic   api-tilesets-f9bbc46db-cblb8   1/1   Running   0   21d
ingress-basic   api-vectortiles-8bf89849b-z4z7n   1/1   Running   0   21d
ingress-basic   atlas-backend-6b66c56fd8-qmppp   2/2   Running   0   21d
ingress-basic   atlas-core-6b68bd888f-cf6js   1/1   Running   0   21d
ingress-basic   atlas-memcached-67fcfb6b7f-p6wjd   1/1   Running   0   21d
ingress-basic   atlas-minio-bb9788795-n2xhj   1/1   Running   0   21d
ingress-basic   atlas-navigation-74b7ff9cb7-mbkzg   1/2   CrashLoopBackOff   10124 (2m40s ago)   21d
ingress-basic   atlas-redis-b967c8f75-xskvx   1/1   Running   0   21d
ingress-basic   atlas-router-697cb8c596-j874h   1/1   Running   0   21d
ingress-basic   nginx-ingress-ingress-nginx-controller-7b7f6f5488-qknh6   1/1   Running   0   21d
ingress-basic   nginx-ingress-ingress-nginx-controller-7b7f6f5488-thcb2   1/1   Running   0   21d
kube-system   ama-logs-nhn59   2/2   Running   0   21d
kube-system   ama-logs-r7c2b   2/2   Running   0   21d
kube-system   ama-logs-rs-db7769fb-xzd5q   1/1   Running   0   21d
kube-system   ama-logs-sb9c4   2/2   Running   0   21d
kube-system   ama-logs-vncln   2/2   Running   0   21d
kube-system   azure-ip-masq-agent-5smf5   1/1   Running   0   21d
kube-system   azure-ip-masq-agent-m22w2   1/1   Running   0   21d
kube-system   azure-ip-masq-agent-xssxp   1/1   Running   0   21d
kube-system   azure-ip-masq-agent-xvp6h   1/1   Running   0   21d
kube-system   azure-policy-5487cfcd96-tcpzx   1/1   Running   0   21d
kube-system   azure-policy-webhook-b8f4bc669-dwshc   1/1   Running   0   21d
kube-system   cloud-node-manager-cpzl5   1/1   Running   0   21d
kube-system   cloud-node-manager-g9gqd   1/1   Running   0   21d
kube-system   cloud-node-manager-j48st   1/1   Running   0   21d
kube-system   cloud-node-manager-ln9xz   1/1   Running   0   21d
kube-system   coredns-785fcf7bdd-cs65h   1/1   Running   0   21d
kube-system   coredns-785fcf7bdd-f6rmt   1/1   Running   0   21d
kube-system   coredns-autoscaler-686794c454-62x2d   1/1   Running   0   21d
kube-system   csi-azuredisk-node-5tbrk   3/3   Running   0   21d
kube-system   csi-azuredisk-node-fp2r2   3/3   Running   0   21d
kube-system   csi-azuredisk-node-psrtr   3/3   Running   0   21d
kube-system   csi-azuredisk-node-qzzqc   3/3   Running   0   21d
kube-system   csi-azurefile-node-446nl   3/3   Running   0   21d
kube-system   csi-azurefile-node-kdpr2   3/3   Running   0   21d
kube-system   csi-azurefile-node-pz7fc   3/3   Running   0   21d
kube-system   csi-azurefile-node-z6slx   3/3   Running   0   21d
kube-system   konnectivity-agent-7b77d967d7-48r9t   1/1   Running   0   21d
kube-system   konnectivity-agent-7b77d967d7-bggrh   1/1   Running   0   21d
kube-system   kube-proxy-62pj4   1/1   Running   0   21d
kube-system   kube-proxy-dp4qb   1/1   Running   0   21d
kube-system   kube-proxy-s5mjn   1/1   Running   0   21d
kube-system   kube-proxy-sc7qn   1/1   Running   0   21d
kube-system   metrics-server-85756fd984-kxjtf   2/2   Running   1 (19d ago)   21d
kube-system   metrics-server-85756fd984-zh6wv   2/2   Running   0   21d
velero   velero-66d9b97945-zznq7   1/1   Running   0   21d
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
maffadmin@comdevjpevim01:~$
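Note: the ad-hoc pod created with "kubectl run falco --image=falco" sits in ImagePullBackOff in the default namespace because "falco" on its own is not a resolvable image reference. The existing falco-* pods are managed by a DaemonSet rather than a Deployment (there is no Deployment in the falco namespace, and the pod labels carry a controller-revision-hash), so that is the controller to inspect. A sketch of likely follow-up commands; the DaemonSet name "falco" is inferred from the pod names, not confirmed in this session:

    # remove the stray test pod from the default namespace
    kubectl delete pod falco
    # inspect the DaemonSet that owns the crashing pods (name assumed)
    kubectl get daemonset -n falco
    kubectl describe daemonset falco -n falco
    # "falco" alone is not pullable; a fully qualified reference would look like
    # docker.io/falcosecurity/falco:<tag>  (exact tag not known from this session)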