
`kubectl get` should have a way to filter for advanced pods status #49387

Open
simonswine opened this issue Jul 21, 2017 · 41 comments


@simonswine
Member

commented Jul 21, 2017

What happened:

I'd like to have a simple command to check for pods that are currently not ready

What you expected to happen:

I can see a couple of options:

  • There is some magic flag I am not aware of
  • A flag for kubectl get to filter the output using go-template/jsonpath
  • Distinguishing between the pod phase Running & Ready and plain Running
  • A flag to filter on ready status

How to get that currently:

kubectl get pods --all-namespaces -o json  | jq -r '.items[] | select(.status.phase != "Running" or ([ .status.conditions[] | select(.type == "Ready" and .state == false) ] | length ) == 1 ) | .metadata.namespace + "/" + .metadata.name'
@simonswine

Member Author

commented Jul 21, 2017

/kind feature

@simonswine

Member Author

commented Jul 21, 2017

/sig cli

@EtienneDeneuve


commented Aug 28, 2017

Same here. It sounds incredible that you need such complex syntax just to list non-running containers...

@jackzampolin


commented Oct 18, 2017

Ideally I would be able to say something like:

kubectl get pods --namespace foo -l status=pending
@carlossg

Contributor

commented Nov 23, 2017

I had to make a small modification, using .status == "False" instead of .state == false, to get it to work:

kubectl get pods -a --all-namespaces -o json  | jq -r '.items[] | select(.status.phase != "Running" or ([ .status.conditions[] | select(.type == "Ready" and .status == "False") ] | length ) == 1 ) | .metadata.namespace + "/" + .metadata.name'
@dixudx

Member

commented Nov 24, 2017

#50140 provides a new flag --field-selector to filter these pods now.

$ kubectl get pods --field-selector=status.phase!=Running

/close

@asarkar


commented Dec 13, 2017

@dixudx

kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T19:11:02Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4", GitTreeState:"clean", BuildDate:"2017-11-29T22:43:34Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
kubectl get po --field-selector=status.phase==Running -l app=k8s-watcher
Error: unknown flag: --field-selector
@dixudx

Member

commented Dec 13, 2017

@asarkar --field-selector is targeted for v1.9, which is coming out soon.

@simonswine

Member Author

commented Jan 15, 2018

@dixudx thanks for the field-selector PR. But I think this is not quite what I had in mind. I wanted to be able to figure out which pods have one or more containers that are not passing their readiness checks.

Given that I have a non-ready pod (kubectl v1.9.1) showing READY 0/1:

$ kubectl get pods                                       
NAME          READY     STATUS    RESTARTS   AGE
pod-unready   0/1       Running   0          50s

This pod is still in phase Running, so I can't get it using your proposed filter:

$ kubectl get pods --field-selector=status.phase!=Running
No resources found.
@simonswine

Member Author

commented Jan 15, 2018

/reopen

@k8s-ci-robot k8s-ci-robot reopened this Jan 15, 2018

@af732p


commented Jan 18, 2018

Got the same issue.
I would be glad to have something like:
kubectl get pods --field-selector=status.ready!=True
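Until a selector like that exists, the readiness condition can be checked client-side with jq. This is a sketch, assuming jq >= 1.5 is installed; the inline JSON is hypothetical sample data standing in for `kubectl get pods --all-namespaces -o json`:

```shell
# Hypothetical sample standing in for `kubectl get pods -o json` output.
pods='{"items":[
  {"metadata":{"name":"pod-ready"},"status":{"conditions":[{"type":"Ready","status":"True"}]}},
  {"metadata":{"name":"pod-unready"},"status":{"conditions":[{"type":"Ready","status":"False"}]}}]}'

# Keep every pod whose Ready condition is not True.
not_ready=$(echo "$pods" | jq -r \
  '.items[] | select(.status.conditions[]? | select(.type=="Ready" and .status!="True")) | .metadata.name')
echo "$not_ready"
```

Against a real cluster, pipe `kubectl get pods --all-namespaces -o json` into the same jq filter instead of the sample.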

@artemyarulin


commented Feb 21, 2018

Hm, can I use it for getting nested array items? Like I want to do

kubectl get pods --field-selector=status.containerStatuses.restartCount!=0

But it returns an error. I also tried status.containerStatuses..restartCount, which fails the same way: Error from server (BadRequest): Unable to find "pods" that match label selector "", field selector "status.containerStatuses..restartCount==0": field label not supported: status.containerStatuses..restartCount

@jackzampolin


commented Feb 21, 2018

@artemyarulin try status.containerStatuses[*].restartCount==0

@artemyarulin


commented Feb 22, 2018

Thanks, just tried with kubectl v1.9.3/cluster v1.9.2 and it returns the same error: Error from server (BadRequest): Unable to find "pods" that match label selector "", field selector "status.containerStatuses[*].restartCount!=0": field label not supported: status.containerStatuses[*].restartCount. Am I doing something wrong? Does it work for you?
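Until field selectors can reach into arrays, the same restart-count filter can be done client-side with jq. This is a sketch, assuming jq >= 1.5 is installed; the inline JSON is hypothetical sample data standing in for real `kubectl get pods -o json` output:

```shell
# Hypothetical sample standing in for `kubectl get pods -o json` output.
pods='{"items":[
  {"metadata":{"name":"calm"},"status":{"containerStatuses":[{"restartCount":0}]}},
  {"metadata":{"name":"flappy"},"status":{"containerStatuses":[{"restartCount":7}]}}]}'

# Name every pod where any container has restarted at least once.
restarted=$(echo "$pods" | jq -r \
  '.items[] | select(any(.status.containerStatuses[]?; .restartCount > 0)) | .metadata.name')
echo "$restarted"
```

Against a real cluster, pipe `kubectl get pods -o json` into the same jq filter instead of the sample.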

eloycoto added a commit to eloycoto/cilium that referenced this issue Mar 6, 2018
Ginkgo: Fixed WaitCleanAllTerminatingEndpoints
Due to a test flake I discovered that the termination helper didn't work
as expected, and that the Terminating state is not represented in status.phase at all
(kubernetes/kubernetes#49387)

Issue:

```
vagrant@k8s1:~$ kubectl delete pod testds-w7prl
pod "testds-w7prl" deleted
vagrant@k8s1:~$ kubectl get pods
NAME               READY     STATUS        RESTARTS   AGE
netcatds-bhxv4     1/1       Running       0          5m
netcatds-zpzzl     1/1       Running       0          5m
testclient-8qx59   1/1       Running       0          1m
testclient-r9xmm   1/1       Running       0          1m
testds-fwss5       1/1       Running       0          32s
testds-w7prl       0/1       Terminating   0          1m
vagrant@k8s1:~$ kubectl get pods -o "jsonpath='{.items[*].status.phase}'"
'Running Running Running Running^C
vagrant@k8s1:~$ kubectl get pods
NAME               READY     STATUS        RESTARTS   AGE
netcatds-bhxv4     1/1       Running       0          5m
netcatds-zpzzl     1/1       Running       0          5m
testclient-8qx59   1/1       Running       0          1m
testclient-r9xmm   1/1       Running       0          1m
testds-fwss5       1/1       Running       0          40s
testds-w7prl       0/1       Terminating   0          1m
vagrant@k8s1:~$
```

Signed-off-by: Eloy Coto <eloy.coto@gmail.com>
ianvernon added a commit to cilium/cilium that referenced this issue Mar 8, 2018
Ginkgo: Fixed WaitCleanAllTerminatingEndpoints
@migueleliasweb


commented Apr 19, 2018

Sadly, the same thing happens for v1.9.4:

What I'm trying to do here is to get all pods with a given parent uid...

$ kubectl get pod --field-selector='metadata.ownerReferences[*].uid=d83a23e1-37ba-11e8-bccf-0a5d7950f698'
Error from server (BadRequest): Unable to find "pods" that match label selector "", field selector "ownerReferences[*].uid=d83a23e1-37ba-11e8-bccf-0a5d7950f698": field label not supported: ownerReferences[*].uid

Waiting anxiously for this feature •ᴗ•

@dixudx

Member

commented Apr 19, 2018

--field-selector='metadata.ownerReferences[*].uid=d83a23e1-37ba-11e8-bccf-0a5d7950f698'
field label not supported: ownerReferences[*].uid

This filter string is not supported.

For pods, only "metadata.name", "metadata.namespace", "spec.nodeName", "spec.restartPolicy", "spec.schedulerName", "status.phase", "status.podIP" and "status.nominatedNodeName" are supported.

@migueleliasweb If you want to filter out the pods in your case, you can use jq.

$ kubectl get pod -o json | jq '.items | map(select(.metadata.ownerReferences[] | .uid=="d83a23e1-37ba-11e8-bccf-0a5d7950f698"))'

You can also use kubectl's JSONPath support.

@migueleliasweb


commented Apr 19, 2018

Thanks @dixudx. But let me understand a little bit better. If I'm running this query in a cluster with a few thousand pods:

  • Does the APIServer fetch all of them from etcd and then apply in-memory filtering?
  • Or does my kubectl receive all pods and apply the filter locally?
  • Or does the filtering occur inside etcd, so that only the filtered results are returned?
@dixudx

Member

commented Apr 20, 2018

Does the APIServer fetch all of them from etcd and then apply in-memory filtering?
Or does my kubectl receive all pods and apply the filter locally?
Or does the filtering occur inside etcd, so that only the filtered results are returned?

@migueleliasweb If --field-selector is issued when using kubectl, the filtering happens in a cache of the apiserver. The apiserver keeps a single watch open to etcd, watching all objects of a given type without any filtering; the changes delivered from etcd are then stored in that cache.

For --sort-by, the sorting happens on the kubectl client side.

@kvs


commented Apr 26, 2018

This works great for me with kubectl get, but it would also be nice if it applied to delete and describe.

@fejta-bot


commented Jul 25, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@george-angel

Contributor

commented Jul 25, 2018

/remove-lifecycle stale

@YoninL


commented Aug 20, 2018

I'm using kubectl get po --all-namespaces | grep -vE '1/1|2/2|3/3' to list all pods that are not fully Ready.

@clanesf


commented Aug 22, 2018

IMHO this is more than just a feature; it's pretty much a must. The fact that a pod in a CrashLoopBackOff isn't listed by --field-selector=status.phase!=Running makes the whole field-selector mechanism pretty useless. There should be an easy way to get a list of pods that have problems without resorting to parsing JSON.

@eosfor


commented Aug 27, 2018

With PowerShell you can do something like the following.

To return all statuses

Get-PodStatus | ft -autosize

(screenshot of the command's table output omitted)

To filter them out

Get-PodStatus | where { ($_.status -eq "Running") -and ($_.state -eq "ready") } | ft -AutoSize
@sam-untapt


commented Sep 26, 2018

FYI:

kubectl get pods --field-selector=status.phase!=Succeeded

Can be used to filter out completed jobs, which appear to be included by default as of version 217.

Not:

kubectl get pods --field-selector=status.phase!=Completed

which, to me, would be more rational, given that the STATUS column displays 'Completed'.

@danbeaulieu


commented Oct 19, 2018

Is this supposed to work on status.phase? I terminated a node and all of the pods display as Unknown or NodeLost but they aren't filtered by the field-selector:

$ kubectl get pods --field-selector=status.phase=Running --all-namespaces
NAMESPACE     NAME                                          READY   STATUS     RESTARTS   AGE
kube-system   coredns-78fcdf6894-9gc7n                      1/1     Running    0          1h
kube-system   coredns-78fcdf6894-lt58z                      1/1     Running    0          1h
kube-system   etcd-i-0564e0652e0560ac4                      1/1     Unknown    0          1h
kube-system   etcd-i-0af8bbf22a66edc1d                      1/1     Running    0          1h
kube-system   etcd-i-0e780f1e91f5a7116                      1/1     Running    0          1h
kube-system   kube-apiserver-i-0564e0652e0560ac4            1/1     Unknown    0          1h
kube-system   kube-apiserver-i-0af8bbf22a66edc1d            1/1     Running    1          1h
kube-system   kube-apiserver-i-0e780f1e91f5a7116            1/1     Running    1          1h
kube-system   kube-controller-manager-i-0564e0652e0560ac4   1/1     Unknown    1          1h
kube-system   kube-controller-manager-i-0af8bbf22a66edc1d   1/1     Running    0          1h
kube-system   kube-controller-manager-i-0e780f1e91f5a7116   1/1     Running    0          1h
kube-system   kube-router-9kkxh                             1/1     NodeLost   0          1h
kube-system   kube-router-dj9sp                             1/1     Running    0          1h
kube-system   kube-router-n4zzw                             1/1     Running    0          1h
kube-system   kube-scheduler-i-0564e0652e0560ac4            1/1     Unknown    0          1h
kube-system   kube-scheduler-i-0af8bbf22a66edc1d            1/1     Running    0          1h
kube-system   kube-scheduler-i-0e780f1e91f5a7116            1/1     Running    0          1h
kube-system   tiller-deploy-7678f78996-6t84j                1/1     Running    0          1h

I wouldn't expect the non-Running pods to be listed by this query...

@slmingol


commented Nov 16, 2018

Should this field selector work for other object types as well? Doesn't seem to work for pvc.

$ kubectl get pvc --field-selector=status.phase!=Bound
Error from server (BadRequest): Unable to find {"" "v1" "persistentvolumeclaims"} that match label selector "", field selector "status.phase!=Bound": "status.phase" is not a known field selector: only "metadata.name", "metadata.namespace"
@ye


commented Nov 16, 2018

This field selector syntax is still confusing to me; for some reason I can't filter for the "Evicted" status positively (evicted pods only show up when filtering for a phase other than Running). What did I do wrong here?

I did read through https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/ but still can't get it working.

$ kubectl get po --field-selector status.phase!=Running
NAME                        READY     STATUS      RESTARTS   AGE
admin-55d76dc598-sr78x      0/2       Evicted     0          22d
admin-57f6fcc898-df82r      0/2       Evicted     0          17d
admin-57f6fcc898-dt5kb      0/2       Evicted     0          18d
admin-57f6fcc898-jqp9j      0/2       Evicted     0          17d
admin-57f6fcc898-plxhr      0/2       Evicted     0          17d
admin-57f6fcc898-x5kdz      0/2       Evicted     0          17d
admin-57f6fcc898-zgsr7      0/2       Evicted     0          18d
admin-6489584498-t5fzf      0/2       Evicted     0          28d
admin-6b7f5dbb5d-8h9kt      0/2       Evicted     0          9d
admin-6b7f5dbb5d-k57sk      0/2       Evicted     0          9d
admin-6b7f5dbb5d-q7h7q      0/2       Evicted     0          7d
admin-6b7f5dbb5d-sr8j6      0/2       Evicted     0          9d
admin-7454f9b9f7-wrgdk      0/2       Evicted     0          38d
admin-76749dd59d-tj48m      0/2       Evicted     0          22d
admin-78648ccb66-qxgjp      0/2       Evicted     0          17d
admin-795c79f58f-dtcnb      0/2       Evicted     0          25d
admin-7d58ff6cfd-5pt9p      0/2       Evicted     0          4d
admin-7d58ff6cfd-99pzq      0/2       Evicted     0          3d
admin-7d58ff6cfd-9cbjd      0/2       Evicted     0          3d
admin-b5d6d84d6-5q67l       0/2       Evicted     0          12d
admin-b5d6d84d6-fh2ck       0/2       Evicted     0          13d
admin-b5d6d84d6-r4d8b       0/2       Evicted     0          14d
admin-c56558f95-bxxq5       0/2       Evicted     0          7d
api-5445fd6b8b-4jts8        0/2       Evicted     0          3d
api-5445fd6b8b-5b2jp        0/2       Evicted     0          2d
api-5445fd6b8b-7km72        0/2       Evicted     0          4d
api-5445fd6b8b-8tsgf        0/2       Evicted     0          4d
api-5445fd6b8b-ppnxp        0/2       Evicted     0          2d
api-5445fd6b8b-qqnxr        0/2       Evicted     0          2d
api-5445fd6b8b-z77wp        0/2       Evicted     0          2d
api-5445fd6b8b-zjcmg        0/2       Evicted     0          2d
api-5b6647d48b-frbhj        0/2       Evicted     0          9d
api-9459cb775-5cz7f         0/2       Evicted     0          1d

$ kubectl get po --field-selector status.phase=Evicted
No resources found.
$ kubectl get po --field-selector status.phase==Evicted
No resources found.
$ kubectl get po --field-selector status.phase=="Evicted"
No resources found.
$ kubectl get po --field-selector status.phase="Evicted"
No resources found.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.7", GitCommit:"0c38c362511b20a098d7cd855f1314dad92c2780", GitTreeState:"clean", BuildDate:"2018-08-20T10:09:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.6-gke.11", GitCommit:"42df8ec7aef509caba40b6178616dcffca9d7355", GitTreeState:"clean", BuildDate:"2018-11-08T20:06:00Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
@yanijc


commented Jan 9, 2019

There ought to be a way to just list Running pods that have (or have not) passed their readiness check.

Also, where is it documented what the values in the Ready column mean? (0/1, 1/1)

@wavetylor


commented Jan 24, 2019

@ye It's not working because Evicted isn't a status.phase value: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/

Evicted belongs to the reason field of PodStatus: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#podstatus-v1-core

Which, unfortunately, isn't queryable with a field selector at the moment.
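Since reason is not a supported field selector, the Evicted check has to happen client-side; a jq sketch (assuming jq is installed; the inline JSON is hypothetical sample data standing in for real `kubectl get pods -o json` output):

```shell
# Hypothetical sample standing in for `kubectl get pods -o json` output.
pods='{"items":[
  {"metadata":{"name":"api-ok"},"status":{"phase":"Running"}},
  {"metadata":{"name":"api-evicted"},"status":{"phase":"Failed","reason":"Evicted"}}]}'

# Select pods whose status.reason is Evicted.
evicted=$(echo "$pods" | jq -r '.items[] | select(.status.reason == "Evicted") | .metadata.name')
echo "$evicted"
```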

@jmara


commented Feb 6, 2019

Shouldn't CrashLoopBackOff be included, since it's a status.phase according to the pod-lifecycle docs?

17:18:13 $ kubectl get pods --field-selector=status.phase!=Running
No resources found.
17:19:32 $ kubectl get pods|grep CrashLoopBackOff
kubernetes-dashboard-head-57b9585588-lvr5t               0/1     CrashLoopBackOff   2292       8d
17:22:45 $ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
@fejta-bot


commented May 7, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@raelga

Member

commented May 11, 2019

/remove-lifecycle stale

@pawelprazak


commented Jun 6, 2019

Still an issue, 2 years later.

@krish7919


commented Jun 12, 2019

I cannot believe this has been around for so long without a solution.

@luvkrai


commented Jun 19, 2019

I'm using kubectl get pods --all-namespaces | grep -Ev '([0-9]+)/\1'

@albertvaka


commented Jun 21, 2019

This can already be done in kubectl using jsonpath output:

E.g., this will print the namespace and name of each pod in the Running state.

kubectl get pods --all-namespaces -o jsonpath="{range .items[?(@.status.phase == 'Running')]}{.metadata.namespace}{' '}{.metadata.name}{'\n'}{end}"
@luvkrai


commented Jun 21, 2019

@albertvaka that won't show if you have a pod with CrashLoopBackOff

$ kubectl get pods --all-namespaces -o jsonpath="{range .items[?(@.status.phase != 'Running')]}{.metadata.namespace}{' '}{.metadata.name}{'\n'}{end}"
default pod-with-sidecar
my-system glusterfs-brick-0
my-system sticky-scheduler-6f8d74-6mh4q
$ kubectl get pods --all-namespaces | grep -Ev '([0-9]+)/\1'
NAMESPACE       NAME                                        READY     STATUS             RESTARTS   AGE
default         pod-with-sidecar                            1/2       ImagePullBackOff   0          3m
default         pod-with-sidecar2                           1/2       CrashLoopBackOff   4          3m
my-system       glusterfs-brick-0                           0/2       Pending            0          4m
my-system       sticky-scheduler-6f8d74-6mh4q               0/1       ImagePullBackOff   0          9m

Also, I need output formatted similarly to kubectl get pods.

Even doing this didn't help:

$ kubectl get pods --field-selector=status.phase!=Running,status.phase!=Succeeded --all-namespaces
NAMESPACE       NAME                                READY     STATUS              RESTARTS   AGE
default         pod-with-sidecar                    0/2       ContainerCreating   0          37s
my-system       glusterfs-brick-0                   0/2       Pending             0          3m
my-system       sticky-scheduler-6f8d74-6mh4q       0/1       ImagePullBackOff    0          7m
@albertvaka


commented Jun 21, 2019

@albertvaka that won't show if you have a pod with CrashLoopBackOff

That's the point.

Also, I need output formatted similarly to kubectl get pods.

You can customize the columns you want to display with jsonpath.

@jclarksnps


commented Jul 2, 2019

@albertvaka I believe the point here is that there should be a simple way to get all pods that aren't ready, without writing obtuse JSONPath syntax (which I don't think will work anyway, due to CrashLoopBackOff pods being excluded from the filter). The fact that pods in a CrashLoopBackOff state are excluded from a query such as kubectl get pods --field-selector=status.phase!=Running is pretty bizarre. Why can't we just have something simple and straightforward like kubectl get pods --not-ready?

@brysonshepherd


commented Sep 12, 2019

Still an issue. I agree that if I can do this to see a "running" pod:

kubectl get -n kube-system pods -lname=tiller --field-selector=status.phase=Running
NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-55c564dc54-2lfpt   0/1     Running   0          71m

then I should also be able to do something like this to return containers that are not ready:

kubectl get -n kube-system pods -lname=tiller --field-selector=status.containerStatuses[*].ready!=true
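Pending such a selector, the equivalent container-readiness check can be expressed in jq. A sketch, assuming jq >= 1.5; the inline JSON is hypothetical sample data standing in for real `kubectl get pods -o json` output:

```shell
# Hypothetical sample standing in for `kubectl get pods -o json` output.
pods='{"items":[
  {"metadata":{"name":"tiller-ok"},"status":{"containerStatuses":[{"ready":true}]}},
  {"metadata":{"name":"tiller-bad"},"status":{"containerStatuses":[{"ready":false}]}}]}'

# List pods where at least one container is not ready.
unready=$(echo "$pods" | jq -r \
  '.items[] | select(any(.status.containerStatuses[]?; .ready | not)) | .metadata.name')
echo "$unready"
```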
