Why my prosody-0 is always pending #77

Closed
Lillyyouwu opened this issue Apr 9, 2023 · 7 comments

@Lillyyouwu commented Apr 9, 2023

Hi, I am trying to deploy Jitsi on a LAN with this values.yaml. I am using Kubernetes v1.22.1 and Helm v3.9.4. My cluster has one master node and one agent node.

publicURL: "https://meet.raccoon.com"
tz: UTC
web:
  image:
    tag: "stable-8252"
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: "nginx"
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
    hosts:
    - host: jitsi.test.com
      paths: [ "/" ]
    tls:
    - hosts:
      - "meet.raccoon.com"
      secretName: meet.raccoon.com-crt
jvb:
  image:
    tag: "stable-8252"
  service:
    enabled: true
    type: LoadBalancer
  publicIPs:
     - 192.168.186.181
jibri:
  image:
    tag: "stable-8252"
jicofo:
  image:
    tag: "stable-8252"
prosody:
  image:
    tag: "stable-8252"

and I run this

helm upgrade --install myjitsi jitsi/jitsi-meet  \
    --create-namespace \
    --namespace jit \
    --values values.yaml

but I found that my prosody-0 pod is always Pending, and my jvb pod keeps restarting. I have these pods:

root@k8s-master:~# kubectl get po -A
NAMESPACE       NAME                                        READY   STATUS      RESTARTS       AGE
cert-manager    cert-manager-5d495db6fc-2zcsc               1/1     Running     10 (10m ago)   2d5h
cert-manager    cert-manager-cainjector-5f9c9d977f-p62ms    1/1     Running     17 (10m ago)   2d5h
cert-manager    cert-manager-webhook-77bf46c6c7-82wvl       1/1     Running     9 (10m ago)    2d5h
ingress-nginx   ingress-nginx-admission-create--1-dw72s     0/1     Completed   0              2d4h
ingress-nginx   ingress-nginx-admission-patch--1-hqcdd      0/1     Completed   0              2d4h
ingress-nginx   ingress-nginx-controller-6c646f59bb-l9xvs   1/1     Running     6 (10m ago)    2d4h
jit             myjitsi-jitsi-meet-jicofo-cf699dc6b-29c68   1/1     Running     0              8m39s
jit             myjitsi-jitsi-meet-jvb-57d7465784-2699d     1/1     Running     4 (65s ago)    8m39s
jit             myjitsi-jitsi-meet-web-7994647b44-np79x     1/1     Running     0              8m39s
jit             myjitsi-prosody-0                           0/1     Pending     0              8m39s
kube-system     coredns-7f6cbbb7b8-8ljzm                    1/1     Running     17 (10m ago)   4d3h
kube-system     coredns-7f6cbbb7b8-bvp7g                    1/1     Running     17 (10m ago)   4d3h
kube-system     etcd-k8s-master                             1/1     Running     17 (10m ago)   4d3h
kube-system     kube-apiserver-k8s-master                   1/1     Running     17 (10m ago)   4d3h
kube-system     kube-controller-manager-k8s-master          1/1     Running     17 (10m ago)   4d3h
kube-system     kube-flannel-ds-9zn6k                       1/1     Running     17 (10m ago)   4d3h
kube-system     kube-flannel-ds-txblz                       1/1     Running     12 (10m ago)   4d2h
kube-system     kube-proxy-6thmm                            1/1     Running     17 (10m ago)   4d3h
kube-system     kube-proxy-rwckt                            1/1     Running     12 (10m ago)   4d2h
kube-system     kube-scheduler-k8s-master                   1/1     Running     17 (10m ago)   4d3h
root@k8s-master:~# 

The logs for 'jvb' are:

JVB 2023-04-09 11:56:21.569 INFO: [12] org.ice4j.ice.harvest.MappingCandidateHarvesters.maybeAdd: Discarding a mapping harvester: org.ice4j.ice.harvest.AwsCandidateHarvester@5897f17
JVB 2023-04-09 11:56:21.569 INFO: [12] org.ice4j.ice.harvest.MappingCandidateHarvesters.initialize: Using org.ice4j.ice.harvest.StaticMappingCandidateHarvester(face=10.244.1.59:9/udp, mask=127.0.0.1:9/udp)
JVB 2023-04-09 11:56:21.569 INFO: [12] org.ice4j.ice.harvest.MappingCandidateHarvesters.initialize: Initialized mapping harvesters (delay=20807ms).  stunDiscoveryFailed=true
JVB 2023-04-09 11:56:26.305 WARNING: [21] [hostname=myjitsi-prosody.jit.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2023-04-09 11:56:26.355 WARNING: [15] [hostname=myjitsi-prosody.jit.svc.cluster.local id=shard0] MucClient.lambda$getConnectAndLoginCallable$9#640: Error connecting:
org.jivesoftware.smack.SmackException$EndpointConnectionException: Could not lookup the following endpoints: RemoteConnectionEndpointLookupFailure(description='DNS lookup exception for myjitsi-prosody.jit.svc.cluster.local' exception='java.net.UnknownHostException: myjitsi-prosody.jit.svc.cluster.local')
	at org.jivesoftware.smack.SmackException$EndpointConnectionException.from(SmackException.java:334)
	at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectUsingConfiguration(XMPPTCPConnection.java:664)
	at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectInternal(XMPPTCPConnection.java:849)
	at org.jivesoftware.smack.AbstractXMPPConnection.connect(AbstractXMPPConnection.java:526)
	at org.jitsi.xmpp.mucclient.MucClient.lambda$getConnectAndLoginCallable$9(MucClient.java:635)
	at org.jitsi.retry.RetryStrategy$TaskRunner.run(RetryStrategy.java:167)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
JVB 2023-04-09 11:56:31.306 WARNING: [21] [hostname=myjitsi-prosody.jit.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2023-04-09 11:56:31.357 WARNING: [14] org.jivesoftware.smackx.ping.PingManager.pingServerIfNecessary: XMPPTCPConnection[not-authenticated] (0) was not authenticated
JVB 2023-04-09 11:56:36.305 WARNING: [21] [hostname=myjitsi-prosody.jit.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2023-04-09 11:56:41.305 WARNING: [21] [hostname=myjitsi-prosody.jit.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2023-04-09 11:56:46.305 WARNING: [21] [hostname=myjitsi-prosody.jit.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2023-04-09 11:56:51.305 WARNING: [21] [hostname=myjitsi-prosody.jit.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2023-04-09 11:56:51.371 WARNING: [15] [hostname=myjitsi-prosody.jit.svc.cluster.local id=shard0] MucClient.lambda$getConnectAndLoginCallable$9#640: Error connecting:
org.jivesoftware.smack.SmackException$EndpointConnectionException: Could not lookup the following endpoints: RemoteConnectionEndpointLookupFailure(description='DNS lookup exception for myjitsi-prosody.jit.svc.cluster.local' exception='java.net.UnknownHostException: myjitsi-prosody.jit.svc.cluster.local: Temporary failure in name resolution')
	at org.jivesoftware.smack.SmackException$EndpointConnectionException.from(SmackException.java:334)
	at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectUsingConfiguration(XMPPTCPConnection.java:664)
	at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectInternal(XMPPTCPConnection.java:849)
	at org.jivesoftware.smack.AbstractXMPPConnection.connect(AbstractXMPPConnection.java:526)
	at org.jitsi.xmpp.mucclient.MucClient.lambda$getConnectAndLoginCallable$9(MucClient.java:635)
	at org.jitsi.retry.RetryStrategy$TaskRunner.run(RetryStrategy.java:167)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
JVB 2023-04-09 11:56:56.305 WARNING: [21] [hostname=myjitsi-prosody.jit.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2023-04-09 11:56:56.372 WARNING: [15] [hostname=myjitsi-prosody.jit.svc.cluster.local id=shard0] MucClient.lambda$getConnectAndLoginCallable$9#640: Error connecting:
org.jivesoftware.smack.SmackException$EndpointConnectionException: Could not lookup the following endpoints: RemoteConnectionEndpointLookupFailure(description='DNS lookup exception for myjitsi-prosody.jit.svc.cluster.local' exception='java.net.UnknownHostException: myjitsi-prosody.jit.svc.cluster.local')
	at org.jivesoftware.smack.SmackException$EndpointConnectionException.from(SmackException.java:334)
	at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectUsingConfiguration(XMPPTCPConnection.java:664)
	at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectInternal(XMPPTCPConnection.java:849)
	at org.jivesoftware.smack.AbstractXMPPConnection.connect(AbstractXMPPConnection.java:526)
	at org.jitsi.xmpp.mucclient.MucClient.lambda$getConnectAndLoginCallable$9(MucClient.java:635)
	at org.jitsi.retry.RetryStrategy$TaskRunner.run(RetryStrategy.java:167)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
JVB 2023-04-09 11:57:01.202 SEVERE: [17] HealthChecker.run#175: Health check failed in PT0.000152S:
java.lang.Exception: Address discovery through STUN failed
	at org.jitsi.videobridge.health.JvbHealthChecker.check(JvbHealthChecker.kt:39)
	at org.jitsi.videobridge.health.JvbHealthChecker.access$check(JvbHealthChecker.kt:25)
	at org.jitsi.videobridge.health.JvbHealthChecker$healthChecker$1.invoke(JvbHealthChecker.kt:31)
	at org.jitsi.videobridge.health.JvbHealthChecker$healthChecker$1.invoke(JvbHealthChecker.kt:31)
	at org.jitsi.health.HealthChecker.run(HealthChecker.kt:144)
	at org.jitsi.utils.concurrent.RecurringRunnableExecutor.run(RecurringRunnableExecutor.java:216)
	at org.jitsi.utils.concurrent.RecurringRunnableExecutor.runInThread(RecurringRunnableExecutor.java:292)
	at org.jitsi.utils.concurrent.RecurringRunnableExecutor$1.run(RecurringRunnableExecutor.java:328)
JVB 2023-04-09 11:57:01.306 WARNING: [21] [hostname=myjitsi-prosody.jit.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.

I don't know how to solve it, probably because of my bad values.yaml... Can anyone help with this? Sincere thanks! T T

@spijet (Collaborator) commented Apr 10, 2023

Hello @Lillyyouwu!

Sorry for the late reply. Can you please share the output of kubectl -n jit get pod myjitsi-prosody-0 with us? I'm almost sure that the cause of the "Pending" status will be mentioned there somewhere.

One possible reason: since Prosody is a StatefulSet, it requires some kind of persistent storage to work properly. If you don't have any dynamic storage provisioners (which I assume you don't, since I don't see any in your pod list) and you didn't set up a static PV beforehand, Prosody will be stuck in "Pending" waiting for persistent storage.
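
For reference, a quick way to check this yourself (a sketch; the namespace and pod name are taken from your kubectl get po -A output above):

# The "Events" section at the bottom explains why the pod can't be scheduled:
kubectl -n jit describe pod myjitsi-prosody-0
# Check whether Prosody's data PVC is Bound or Pending:
kubectl -n jit get pvc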

@Lillyyouwu (Author) commented:

Thank you for replying to me! :D
The relevant output for my prosody-0 is this:

root@k8s-master:/home/raccoon/myjit# kubectl -n jitsi get pod myjitsi-prosody-0
NAME                READY   STATUS    RESTARTS   AGE
myjitsi-prosody-0   0/1     Pending   0          4m48s
root@k8s-master:/home/raccoon/myjit# kubectl get pvc -A
NAMESPACE   NAME                             STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
jitsi       prosody-data-myjitsi-prosody-0   Pending                                                     10m
root@k8s-master:/home/raccoon/myjit# kubectl describe pvc prosody-data-myjitsi-prosody-0 -n jitsi
Name:          prosody-data-myjitsi-prosody-0
Namespace:     jitsi
StorageClass:  
Status:        Pending
Volume:        
Labels:        app.kubernetes.io/instance=myjitsi
               app.kubernetes.io/name=prosody
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       myjitsi-prosody-0
Events:
  Type    Reason         Age                 From                         Message
  ----    ------         ----                ----                         -------
  Normal  FailedBinding  77s (x42 over 11m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

As you say, it's some problem with persistent storage. I also noticed the same situation in #63, but I don't know how to fix it. Do I have to declare something in values.yaml, or should I get a dynamic storage provisioner? (Sorry about these stupid questions TAT, I am very new to k8s, just trying to get Jitsi to work.)

Also, I am planning to deploy Jitsi on KubeEdge later; in that case, which topology do you recommend? ouo

@spijet (Collaborator) commented Apr 11, 2023

There are three ways to solve this problem:

  1. Install any kind of dynamic storage provisioner to allow creation of dynamic PVs in your cluster. One of the simplest provisioners to use is Rancher's local-path-provisioner, available on GitHub (see the install sketch at the end of this comment);
  2. Create a static PVC (and corresponding PV) and set Prosody to use an existing claim. See more here;
  3. Turn off persistence for good. This is the simplest way, although I'm unsure if it will work properly for your use case, as all Prosody data would be lost on every deploy. If you don't do anything fancy and don't create any user accounts in your Jitsi installation, then it'll likely be OK. To disable persistent storage for Prosody, set .Values.prosody.persistence.enabled to false, like this:
# values.yaml
prosody:
  persistence:
    enabled: false

Personally, I'd go with Option 1 and set up a proper persistent storage provisioner, especially considering that you have a two-node cluster.
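
If you go with Option 1, the install is roughly the following (a sketch based on local-path-provisioner's documented manifest; please double-check the URL against the project's README before applying):

# Install the provisioner (this creates the "local-path" StorageClass):
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
# Mark it as the default StorageClass, so that PVCs without an explicit
# storageClassName (like Prosody's) can bind automatically:
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
# On your Kubernetes version the already-Pending PVC won't pick up the new
# default class by itself, so you may need to delete the stuck PVC and the
# prosody-0 pod and let the StatefulSet re-create them.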

@spijet (Collaborator) commented Apr 11, 2023

As for the topology: unfortunately, I don't have any experience with KubeEdge (yet), so I'll try to give you advice based on my general Jitsi experience. If you don't plan to have a ton of users (more than 50) chatting simultaneously, you'll be fine running Jitsi on a single node, provided that you can allocate the following for Jitsi specifically:

  1. Enough network bandwidth (at least 100Mbps up/down, if you plan to use video);
  2. Around 4-6 GiB of RAM (more if you plan to use Jibri for recording and streaming);
  3. Around 2-4 cores of CPU (more if you plan to use Jibri).

Another good thing to keep in mind is that you should install Jitsi close to where most of your users live, e.g. if most of your userbase lives in Southeast Asia, then you should deploy it there or as close as possible so that your users' RTT stays really small.

@Lillyyouwu (Author) commented Apr 16, 2023

Hi, I am still trying to make my Jitsi work. I deployed it with the values.yaml file below, just trying to make it work inside the cluster. I can reach the web page with the port-forwarding strategy, but I can't connect to the web page via the ClusterIP of myjitsi-jitsi-meet-web. Is that normal?

I also deployed Jitsi on KubeEdge with the same values.yaml, and there I can reach the web page via the ClusterIP just fine.

publicURL: "meet.raccoon.com"

jvb:
  useHostPort: true
  # Use public IP of one (or more) of your nodes,
  # or the public IP of an external LB:
  publicIPs:
    - 192.168.186.180
prosody:
  persistence:
    enabled: false

As for the NodePort strategy, is there still any additional service needed for jvb/web like in #63, or is creating a corresponding Ingress enough?

Thank you soooo much for getting back to me. I'm trying out your advice, still looking for what works for me > <

@spijet (Collaborator) commented Apr 17, 2023

Yes, this is normal. ClusterIP services are expected to be available only inside the cluster (i.e. to member nodes and pods), not to outside users.
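
For illustration, a quick way to see the difference (a sketch; check kubectl -n jit get svc for your actual service names and ports):

# Inside the cluster (e.g. from a cluster node or a pod), the ClusterIP answers:
kubectl -n jit get svc                 # note the CLUSTER-IP and PORT(S) of the web service
curl -vk http://<cluster-ip>:<port>    # works from nodes/pods, but not from machines outside the cluster
# From outside the cluster you need port-forwarding (as you did), an Ingress, or a NodePort service.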

As for the connectivity issues, there are two main points:

  1. You need to have a way of accessing the jitsi-web pod from the outside world. You can do it either via Ingress, or with a NodePort service that points to the jitsi-web pod.
  2. You also need to make sure JVB (the thing that handles sound and video transmission) actually announces the public IP addresses of the nodes it's running on, so that end users can connect to it. No private IPs here (unless your whole userbase lives on the same LAN as the k8s cluster). There's a short values.yaml sketch below that covers both points.
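
For illustration, a minimal values.yaml sketch covering both points, using only keys that already appear earlier in this thread; the hostname and IP are placeholders, so substitute whatever your users can actually reach:

# values.yaml (sketch, not a drop-in config)
web:
  ingress:
    enabled: true                # point 1: expose jitsi-web through your nginx Ingress
    hosts:
      - host: meet.raccoon.com   # placeholder: the name your users will open in the browser
        paths: [ "/" ]
jvb:
  useHostPort: true              # point 2: open the media port directly on the node
  publicIPs:
    - 198.51.100.10              # placeholder: an IP your users can actually reach (not a cluster-internal one)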

@spijet (Collaborator) commented May 8, 2023

I'm going to close the issue for now. Please let me know if you have any problems with your Jitsi Meet installation.

spijet closed this as completed May 8, 2023