
Failure of kubectl logs is fatal to the Telepresence session #598

Closed
neocortical opened this Issue Apr 17, 2018 · 5 comments


neocortical commented Apr 17, 2018

I am trying to set up Telepresence and am unable to connect to our kops-configured k8s cluster running in AWS. Any help appreciated. I don't really understand the error messages I'm seeing, except for trying to proxy logs, which fails because we currently have a non-standard logging setup in the k8s cluster.

Local Environment:
OS: OS X 10.11.6
Telepresence version: 0.83 (installed via brew)
kubectl version:

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3", GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean", BuildDate:"2017-08-03T07:00:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}

Python version:

Python 3.6.1 |Anaconda custom (x86_64)| (default, May 11 2017, 13:04:09) 
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin

Remote environment:
kubectl version:

Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.7", GitCommit:"b30876a5539f09684ff9fde266fda10b37738c9c", GitTreeState:"clean", BuildDate:"2018-01-16T21:52:38Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Command output:

$ telepresence --docker-run --rm -it alpine sh
Volumes are rooted at $TELEPRESENCE_ROOT. See https://telepresence.io/howto/volumes.html for details.

Password:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"exit status 21\"": unknown.
Proxy to Kubernetes exited. This is typically due to a lost connection.
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/local/Cellar/telepresence/0.83/libexec/lib/python3.6/site-packages/telepresence/container.py", line 183, in terminate_if_alive
    mount_cleanup()
  File "/usr/local/Cellar/telepresence/0.83/libexec/lib/python3.6/site-packages/telepresence/remote.py", line 284, in cleanup
    runner.get_output(sudo_prefix + ["umount", "-f", mount_dir])
  File "/usr/local/Cellar/telepresence/0.83/libexec/lib/python3.6/site-packages/telepresence/runner.py", line 249, in get_output
    track, "Capturing", "captured", out_cb, err_cb, args, **kwargs
  File "/usr/local/Cellar/telepresence/0.83/libexec/lib/python3.6/site-packages/telepresence/runner.py", line 227, in run_command
    raise CalledProcessError(retcode, args)
subprocess.CalledProcessError: Command '['sudo', 'umount', '-f', '/tmp/tmpi4a80ekr']' returned non-zero exit status 1.
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/local/Cellar/telepresence/0.83/libexec/lib/python3.6/site-packages/telepresence/cleanup.py", line 46, in killall
    killer()
  File "/usr/local/Cellar/telepresence/0.83/libexec/lib/python3.6/site-packages/telepresence/container.py", line 44, in kill
    runner.check_call(sudo + ["docker", "stop", "--time=1", name])
  File "/usr/local/Cellar/telepresence/0.83/libexec/lib/python3.6/site-packages/telepresence/runner.py", line 236, in check_call
    track, "Running", "ran", out_cb, err_cb, args, **kwargs
  File "/usr/local/Cellar/telepresence/0.83/libexec/lib/python3.6/site-packages/telepresence/runner.py", line 227, in run_command
    raise CalledProcessError(retcode, args)
subprocess.CalledProcessError: Command '['docker', 'stop', '--time=1', 'telepresence-1523999897-453038-83847']' returned non-zero exit status 1.

telepresence.log:

   0.0 TEL | Telepresence 0.83 launched at Tue Apr 17 14:17:58 2018
   0.0 TEL |   /usr/local/bin/telepresence --docker-run --rm -it alpine sh
   0.0 TEL | [1] Launching: kubectl version --short
   0.0 TEL | [2] Launching: oc version
   0.0 TEL | [2] [Errno 2] No such file or directory: 'oc': 'oc'
   0.0 TEL | [3] Launching: uname -a
   0.0 TEL | Python 3.6.5 (default, Mar 30 2018, 06:41:49)
   0.0 TEL | [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)]
   0.0   3 | Darwin natedogg.local 15.6.0 Darwin Kernel Version 15.6.0: Tue Jan  9 20:12:05 PST 2018; root:xnu-3248.73.5~1/RELEASE_X86_64 x86_64
   0.0 TEL | [3] exit 0
   0.0 TEL | BEGIN SPAN main.py:418(go_too)
   0.0 TEL | Scout info: {'latest_version': '0.83', 'application': 'telepresence', 'notices': []}
   0.0 TEL | Context: devcluster, namespace: default, kubectl_command: kubectl
   0.0 TEL | [4] Capturing: kubectl --context devcluster cluster-info
   0.8   1 | Client Version: v1.7.3
   0.8   1 | Server Version: v1.8.7
   0.8 TEL | [1] exit 0
   0.8 TEL | [5] Capturing: ssh -V
   0.8 TEL | [6] Capturing: which torsocks
   1.0 TEL | [7] Capturing: which sshfs
   1.0 TEL | BEGIN SPAN main.py:232(start_proxy)
   1.0 TEL | BEGIN SPAN deployment.py:33(create_new_deployment)
   1.0 TEL | [8] Capturing: kubectl --context devcluster --namespace default delete --ignore-not-found svc,deploy --selector=telepresence=9ebd05f9-f4b4-4624-8a3b-5ab03bc655b0
   1.9 TEL | [9] Capturing: kubectl --context devcluster --namespace default run --restart=Always --limits=cpu=100m,memory=256Mi --requests=cpu=25m,memory=64Mi telepresence-1523999876-383612-83847 --image=datawire/telepresence-k8s:0.83 --labels=telepresence=9ebd05f9-f4b4-4624-8a3b-5ab03bc655b0
   6.5 TEL | [9] captured in 4.59 secs.
   6.5 TEL | END SPAN deployment.py:33(create_new_deployment)    5.6s
   6.5 TEL | BEGIN SPAN remote.py:154(get_remote_info)
   6.5 TEL | BEGIN SPAN remote.py:72(get_deployment_json)
   6.5 TEL | [10] Capturing: kubectl --context devcluster --namespace default get deployment -o json --export --selector=telepresence=9ebd05f9-f4b4-4624-8a3b-5ab03bc655b0
   7.3 TEL | END SPAN remote.py:72(get_deployment_json)    0.8s
   7.3 TEL | Expected metadata for pods: {'creationTimestamp': None, 'labels': {'telepresence': '9ebd05f9-f4b4-4624-8a3b-5ab03bc655b0'}}
   7.3 TEL | [11] Capturing: kubectl --context devcluster --namespace default get pod -o json --export
   8.5 TEL | [11] captured in 1.16 secs.
   8.6 TEL | Checking {'app': 'appboyd', 'pod-template-hash': '3175553511', 'tier': 'frontend'} (phase Running)...
   8.6 TEL | Labels don't match.
   8.6 TEL | Checking {'pod-template-hash': '2716574283', 'run': 'curl'} (phase Running)...
   8.6 TEL | Labels don't match.
   8.6 TEL | Checking {'app': 'datadog-agent', 'controller-revision-hash': '1821614485', 'pod-template-generation': '1'} (phase Running)...
   8.6 TEL | Labels don't match.
   8.6 TEL | Checking {'app': 'datadog-agent', 'controller-revision-hash': '1821614485', 'pod-template-generation': '1'} (phase Running)...
   8.6 TEL | Labels don't match.
   8.6 TEL | Checking {'app': 'freegeoip', 'pod-template-hash': '3372574099', 'tier': 'frontend'} (phase Running)...
   8.6 TEL | Labels don't match.
   8.6 TEL | Checking {'app': 'admin-api', 'date': '1523827760', 'pod-template-hash': '4270532164', 'tier': 'backend'} (phase Running)...
   8.6 TEL | Labels don't match.
   8.6 TEL | Checking {'app': 'app-api', 'pod-template-hash': '2186347880', 'tier': 'backend'} (phase Running)...
   8.6 TEL | Labels don't match.
   8.6 TEL | Checking {'app': 'prem-pub', 'pod-template-hash': '352559785', 'tier': 'frontend'} (phase Running)...
   8.6 TEL | Labels don't match.
   8.6 TEL | Checking {'app': 'prem-sub', 'pod-template-hash': '3651246728', 'tier': 'backend'} (phase Running)...
   8.6 TEL | Labels don't match.
   8.6 TEL | Checking {'app': 'kube-state-metrics', 'pod-template-hash': '3291277253'} (phase Running)...
   8.6 TEL | Labels don't match.
   8.6 TEL | Checking {'app': 'nginx', 'pod-template-hash': '1797646116', 'tier': 'frontend'} (phase Running)...
   8.6 TEL | Labels don't match.
   8.6 TEL | Checking {'app': 'nginx', 'pod-template-hash': '1797646116', 'tier': 'frontend'} (phase Running)...
   8.6 TEL | Labels don't match.
   8.6 TEL | Checking {'app': 'nginx-admin', 'pod-template-hash': '3340731770', 'tier': 'frontend'} (phase Running)...
   8.6 TEL | Labels don't match.
   8.6 TEL | Checking {'app': 'nginx-admin', 'pod-template-hash': '3340731770', 'tier': 'frontend'} (phase Running)...
   8.6 TEL | Labels don't match.
   8.6 TEL | Checking {'app': 'sentinel', 'pod-template-hash': '2770931186', 'tier': 'backend'} (phase Running)...
   8.6 TEL | Labels don't match.
   8.6 TEL | Checking {'pod-template-hash': '3898646568', 'telepresence': '9ebd05f9-f4b4-4624-8a3b-5ab03bc655b0'} (phase Pending)...
   8.6 TEL | Looks like we've found our pod!
   8.6 TEL | BEGIN SPAN remote.py:112(wait_for_pod)
   8.6 TEL | [12] Capturing: kubectl --context devcluster --namespace default get pod telepresence-1523999876-383612-83847-7dfdb8b9bd-bxfb6 -o json
   9.4 TEL | END SPAN remote.py:112(wait_for_pod)    0.8s
   9.4 TEL | END SPAN remote.py:154(get_remote_info)    2.9s
   9.4 TEL | BEGIN SPAN main.py:127(connect)
   9.4 TEL | [13] Launching: kubectl --context devcluster --namespace default logs -f telepresence-1523999876-383612-83847-7dfdb8b9bd-bxfb6 --container telepresence-1523999876-383612-83847
   9.4 TEL | [14] Launching: kubectl --context devcluster --namespace default port-forward telepresence-1523999876-383612-83847-7dfdb8b9bd-bxfb6 55700:8022
   9.4 TEL | [15] Running: sudo ifconfig lo0 alias 198.18.0.254
  10.3  13 | Error response from daemon: configured logging reader does not support reading
  10.3 TEL | [13] exit 0
  10.8  14 | Forwarding from 127.0.0.1:55700 -> 8022
  10.8  14 | Forwarding from [::1]:55700 -> 8022
  14.1 TEL | [15] ran in 4.73 secs.
  14.1 TEL | [16] Launching: socat TCP4-LISTEN:55700,bind=198.18.0.254,reuseaddr,fork TCP4:127.0.0.1:55700
  14.1 TEL | [17] Running: ssh -F /dev/null -q -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -p 55700 telepresence@localhost /bin/true
  14.1  14 | Handling connection for 55700
  15.5 TEL | [17] ran in 1.38 secs.
  15.5 TEL | END SPAN main.py:127(connect)    6.1s
  15.5 TEL | BEGIN SPAN main.py:60(get_env_variables)
  15.5 TEL | [18] Capturing: kubectl --context devcluster --namespace default exec telepresence-1523999876-383612-83847-7dfdb8b9bd-bxfb6 --container telepresence-1523999876-383612-83847 env
  17.6 TEL | [18] captured in 2.11 secs.
  17.6 TEL | END SPAN main.py:60(get_env_variables)    2.1s
  17.6 TEL | END SPAN main.py:232(start_proxy)   16.7s
  17.6 TEL | BEGIN SPAN remote.py:240(mount_remote_volumes)
  17.6 TEL | [19] Running: sudo sshfs -p 55700 -F /dev/null -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o allow_other telepresence@localhost:/ /tmp/tmpi4a80ekr
  17.7  14 | Handling connection for 55700
  19.4 TEL | [19] ran in 1.80 secs.
  19.4 TEL | END SPAN remote.py:240(mount_remote_volumes)    1.8s
  19.4 TEL | BEGIN SPAN vpn.py:70(get_proxy_cidrs)
  19.4 TEL | END SPAN vpn.py:70(get_proxy_cidrs)    0.0s
  19.4 TEL | BEGIN SPAN container.py:109(run_docker_command)
  19.4 TEL | [20] Launching: docker run --rm --privileged --name=telepresence-1523999897-453038-83847 datawire/telepresence-local:0.83 proxy '{"port": 55700, "cidrs": ["100.96.4.0/24", "100.96.5.0/24", "100.96.3.0/24", "100.64.0.0/13"], "expose_ports": [], "ip": "198.18.0.254"}'
  19.4 TEL | [21] Running: docker run --network=container:telepresence-1523999897-453038-83847 --rm datawire/telepresence-local:0.83 wait
  19.9  20 | [INFO  tini (1)] Spawned child process 'python3' with pid '6'
  20.0  20 |    0.0 TEL | Telepresence 0.83 launched at Tue Apr 17 21:18:38 2018
  20.0  20 |    0.0 TEL |   /usr/bin/entrypoint.py proxy '{"port": 55700, "cidrs": ["100.96.4.0/24", "100.96.5.0/24", "100.96.3.0/24", "100.64.0.0/13"], "expose_ports": [], "ip": "198.18.0.254"}'
  20.0  20 |    0.0 TEL | [1] Launching: kubectl version --short
  20.0  20 |    0.0 TEL | [1] [Errno 2] No such file or directory: 'kubectl'
  20.0  20 |    0.0 TEL | [2] Launching: oc version
  20.0  20 |    0.0 TEL | [2] [Errno 2] No such file or directory: 'oc'
  20.0  20 |    0.0 TEL | [3] Launching: uname -a
  20.0  20 |    0.0   3 | Linux 17452640d235 4.9.87-linuxkit-aufs #1 SMP Wed Mar 14 15:12:16 UTC 2018 x86_64 Linux
  20.0  20 |    0.0 TEL | Python 3.6.1 (default, Oct  2 2017, 20:46:59)
  20.0  20 |    0.0 TEL | [GCC 6.3.0]
  20.0  20 |    0.0 TEL | [3] exit 0
  20.0  20 |    0.0 TEL | Everything launched. Waiting to exit...
  20.0  20 |    0.0 TEL | BEGIN SPAN cleanup.py:69(wait_for_exit)
  20.1  21 | [INFO  tini (1)] Spawned child process 'python3' with pid '7'
  20.2  20 | Starting sshuttle proxy.
  20.3  20 | firewall manager: Starting firewall with Python version 3.6.1
  20.3  20 | firewall manager: ready method name nat.
  20.3  20 | IPv6 enabled: False
  20.3  20 | UDP enabled: False
  20.3  20 | DNS enabled: True
  20.3  20 | TCP redirector listening on ('127.0.0.1', 12300).
  20.3  20 | DNS listening on ('127.0.0.1', 12300).
  20.3  20 | Starting client with Python version 3.6.1
  20.3  20 | c : connecting to server...
  20.4  14 | Handling connection for 55700
  21.0  20 | Warning: Permanently added '[198.18.0.254]:55700' (ECDSA) to the list of known hosts.
  22.5  20 | Starting server with Python version 3.6.1
  22.5  20 |  s: latency control setting = True
  22.5  20 | c : Connected.
  22.5  20 |  s: available routes:
  22.5  20 |  s:   2/100.96.4.0/24
  22.5  20 | firewall manager: setting up.
  22.5  20 | >> iptables -t nat -N sshuttle-12300
  22.5  20 | >> iptables -t nat -F sshuttle-12300
  22.5  20 | >> iptables -t nat -I OUTPUT 1 -j sshuttle-12300
  22.5  20 | >> iptables -t nat -I PREROUTING 1 -j sshuttle-12300
  22.5  20 | >> iptables -t nat -A sshuttle-12300 -j RETURN --dest 127.0.0.1/32 -p tcp
  22.5  20 | >> iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 100.96.4.0/24 -p tcp --to-ports 12300 -m ttl ! --ttl 42
  22.5  20 | >> iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 100.96.5.0/24 -p tcp --to-ports 12300 -m ttl ! --ttl 42
  22.5  20 | >> iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 100.96.3.0/24 -p tcp --to-ports 12300 -m ttl ! --ttl 42
  22.5  20 | >> iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 100.64.0.0/13 -p tcp --to-ports 12300 -m ttl ! --ttl 42
  22.5  20 | >> iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 192.168.65.1/32 -p udp --dport 53 --to-ports 12300 -m ttl ! --ttl 42
  22.5  20 | >> iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 224.0.0.252/32 -p udp --dport 5355 --to-ports 12300 -m ttl ! --ttl 42
  22.5  20 | conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.
  22.7  20 | c : DNS request from ('172.17.0.5', 39756) to None: 48 bytes
  24.0  21 | [INFO  tini (1)] Main child exited normally (with status '100')
  24.3 TEL | [21] exit 100 in 4.83 secs.
  24.3 TEL | [22] Capturing: docker run --help
  24.3 TEL | END SPAN container.py:109(run_docker_command)    4.9s
  24.3 TEL | Everything launched. Waiting to exit...
  24.3 TEL | BEGIN SPAN cleanup.py:69(wait_for_exit)
  24.4 TEL | [14] exit -15
  24.4  20 | Connection to 198.18.0.254 closed by remote host.
  24.4 TEL | [16] exit 143
  24.4 TEL | [23] Running: docker stop --time=1 telepresence-1523999897-453038-83847
  24.4  20 | >> iptables -t nat -D OUTPUT -j sshuttle-12300
  24.4  20 | >> iptables -t nat -D PREROUTING -j sshuttle-12300
  24.4  20 | >> iptables -t nat -F sshuttle-12300
  24.4  20 | >> iptables -t nat -X sshuttle-12300
  24.4  20 | firewall manager: Error trying to undo /etc/hosts changes.
  24.4  20 | firewall manager: ---> Traceback (most recent call last):
  24.4  20 | firewall manager: --->   File "/usr/lib/python3.6/site-packages/sshuttle/firewall.py", line 274, in main
  24.4  20 | firewall manager: --->     restore_etc_hosts(port_v6 or port_v4)
  24.4  20 | firewall manager: --->   File "/usr/lib/python3.6/site-packages/sshuttle/firewall.py", line 50, in restore_etc_hosts
  24.4  20 | firewall manager: --->     rewrite_etc_hosts({}, port)
  24.4  20 | firewall manager: --->   File "/usr/lib/python3.6/site-packages/sshuttle/firewall.py", line 29, in rewrite_etc_hosts
  24.4  20 | firewall manager: --->     os.link(HOSTSFILE, BAKFILE)
  24.4  20 | firewall manager: ---> OSError: [Errno 18] Cross-device link: '/etc/hosts' -> '/etc/hosts.sbak'
  24.5  20 | c : fatal: server died with error code 255
  24.5  20 | [INFO  tini (1)] Main child exited with signal (with signal 'Terminated')
  25.1  23 | telepresence-1523999897-453038-83847
  25.1 TEL | A subprocess (['kubectl', '--context', 'devcluster', '--namespace', 'default', 'logs', '-f', 'telepresence-1523999876-383612-83847-7dfdb8b9bd-bxfb6', '--container', 'telepresence-1523999876-383612-83847']) died with code 0, killed all processes...
  25.1 TEL | END SPAN cleanup.py:69(wait_for_exit)    0.8s
  25.1 TEL | Shutting down containers...
  25.1 TEL | [24] Capturing: sudo umount -f /tmp/tmpi4a80ekr
  25.1 TEL | [20] exit 143
  25.1  24 | umount: /tmp/tmpi4a80ekr: not currently mounted
  25.1 TEL | [24] exit 1 in 0.01 secs.
  25.1 TEL | [25] Running: sudo ifconfig lo0 -alias 198.18.0.254
  25.2 TEL | [26] Running: docker stop --time=1 telepresence-1523999897-453038-83847
  25.2  26 | Error response from daemon: No such container: telepresence-1523999897-453038-83847
  25.2 TEL | [26] exit 1 in 0.06 secs.
  25.2 TEL | [27] Capturing: kubectl --context devcluster --namespace default delete --ignore-not-found svc,deploy --selector=telepresence=9ebd05f9-f4b4-4624-8a3b-5ab03bc655b0
  31.1 TEL | [27] captured in 5.91 secs.
  31.1 TEL | END SPAN main.py:418(go_too)   31.1s
@ark3 (Contributor) commented Apr 18, 2018

Sorry about the crash. As you point out, kubectl logs fails (or rather, quits immediately with a successful return code). Telepresence treats that subprocess exiting as fatal, so the whole session shuts down. The strange message from Docker is likely because Telepresence launched and then killed the associated proxy container while the main container was still trying to start.

If you cannot enable kubectl logs for your cluster, manually disabling that call (connect(...) function in main.py) would get you running.

And of course we should fix kubectl logs failure being fatal for Telepresence.
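The fix described here (and the interim workaround) boils down to treating the log follower as non-critical: its exit should not trigger session teardown. Below is a minimal, self-contained sketch of that pattern, not Telepresence's actual code; `launch_background` and its arguments are invented for illustration.

```python
import subprocess
import sys
import threading

def launch_background(cmd, critical, on_critical_death):
    """Start cmd in the background.

    If a *critical* process dies, on_critical_death is invoked (in the real
    tool this would tear down the session). Non-critical processes, like a
    best-effort `kubectl logs -f` follower, may exit without consequence.
    """
    proc = subprocess.Popen(cmd, stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)

    def monitor():
        proc.wait()
        if critical:
            on_critical_death(cmd, proc.returncode)

    threading.Thread(target=monitor, daemon=True).start()
    return proc

deaths = []
# Stand-in for the log follower: it exits immediately (as kubectl logs did
# here), but because it is marked non-critical the "session" survives.
log_proc = launch_background(
    [sys.executable, "-c", "print('pretend kubectl logs output')"],
    critical=False,
    on_critical_death=lambda cmd, rc: deaths.append((cmd, rc)))
log_proc.wait()
assert deaths == []  # no teardown was triggered
```

With this structure, the `kubectl logs` launch in `connect()` could simply be registered as non-critical instead of being commented out.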

@ark3 ark3 changed the title Telepresence fails to initialize OSX/AWS/kops Failure of kubectl logs is fatal to the Telepresence session Apr 18, 2018

@neocortical (Author) commented Apr 18, 2018

Thanks for the quick reply, Abhay! Happy to have a) a workaround, and b) another reason to use a more standard logging setup in our cluster. ;)

Cheers,
nate

@nalbury-handy commented Jun 11, 2018

This workaround works super well (as does Telepresence in general! I'm still amazed something like this exists), but I was curious whether there are plans to officially support/handle non-standard logging configurations in the near future?

We currently use the fluentd Docker log driver, and while we hope to switch to fluentd's tail input plugin (which seems to be the standard) in the next year or so, we really want to make Telepresence part of our developer toolkit sooner than that, and we don't really want to fork/build the project just to comment out the kubectl logs subprocess.
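For context: Docker's fluentd log driver ships container stdout/stderr to a fluentd daemon instead of the local json-file log, so `docker logs` (and therefore `kubectl logs`) cannot read it back, which matches the "configured logging reader does not support reading" error in the session log. A typical invocation looks like this (the fluentd address is just an example):

```shell
# Containers started with the fluentd log driver send stdout/stderr to a
# fluentd daemon rather than the local json-file log, so "docker logs"
# (and kubectl logs) fail for them.
docker run --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  alpine echo hello
```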

I'm happy to attempt a PR, but I'm admittedly not the best developer... 🙁 I completely understand this probably isn't a super high priority for you guys.

@ark3 (Contributor) commented Jun 12, 2018

We want to fix this issue, i.e. make kubectl logs optional.

Separately, we want to support other logging setups too, on a case-by-case basis, but we're currently busy with 1) requests/bugs from paying customers and 2) features that have broader community interest, so it might be a while before we get to this.

Can you file an issue specifically for your case, the fluentd stuff? Thanks.

@nalbury-handy commented Jun 14, 2018

Completely understand, will file a separate issue. I'm gonna push our team to move to a more standard logging pipeline (using the fluentd DaemonSet, which tails JSON Docker logs).

Thanks again!

@ark3 ark3 added this to To do in Tel Tracker via automation Feb 1, 2019

ark3 added a commit that referenced this issue Apr 2, 2019

@ark3 ark3 moved this from To do to In progress in Tel Tracker Apr 2, 2019

@ark3 ark3 closed this in #976 Apr 2, 2019

Tel Tracker automation moved this from In progress to Done Apr 2, 2019
