Why is my prosody-0 always Pending? #77
Hello @Lillyyouwu! Sorry for the late reply. Can you please share the `kubectl describe` output for the pending pod and its PVC? One possible reason might be that, since Prosody is a StatefulSet, its pod cannot be scheduled until its PersistentVolumeClaim is bound to a volume.
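For context, the PVC exists because the chart's Prosody StatefulSet declares a volume claim template. The manifest below is only a hedged illustration of that mechanism, not the chart's actual template; the image, labels, and storage size are assumptions:

```yaml
# Hedged sketch: a StatefulSet with a volumeClaimTemplate creates one PVC per
# replica (here it would be named prosody-data-myjitsi-prosody-0), and the pod
# stays Pending until that PVC is bound to a PersistentVolume.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myjitsi-prosody            # name taken from the thread
spec:
  serviceName: myjitsi-prosody
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: prosody
  template:
    metadata:
      labels:
        app.kubernetes.io/name: prosody
    spec:
      containers:
        - name: prosody
          image: jitsi/prosody     # assumed image, for illustration only
  volumeClaimTemplates:
    - metadata:
        name: prosody-data         # matches the PVC name prefix seen below
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi           # assumed size
```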
Thank you for replying to me! :D

```console
root@k8s-master:/home/raccoon/myjit# kubectl -n jitsi get pod myjitsi-prosody-0
NAME                READY   STATUS    RESTARTS   AGE
myjitsi-prosody-0   0/1     Pending   0          4m48s
root@k8s-master:/home/raccoon/myjit# kubectl get pvc -A
NAMESPACE   NAME                             STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
jitsi       prosody-data-myjitsi-prosody-0   Pending                                                     10m
root@k8s-master:/home/raccoon/myjit# kubectl describe pvc prosody-data-myjitsi-prosody-0 -n jitsi
Name:          prosody-data-myjitsi-prosody-0
Namespace:     jitsi
StorageClass:
Status:        Pending
Volume:
Labels:        app.kubernetes.io/instance=myjitsi
               app.kubernetes.io/name=prosody
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       myjitsi-prosody-0
Events:
  Type    Reason         Age                 From                         Message
  ----    ------         ----                ----                         -------
  Normal  FailedBinding  77s (x42 over 11m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
```

As you say, it is a problem with persistent storage. I also noticed the same situation in #63, but I don't know how to fix it. Do I have to declare something in values.yaml, or should I set up a dynamic storage provisioner? (Sorry about these stupid questions TAT, I am very new to k8s, just trying to get Jitsi working.) Also, I am planning to deploy Jitsi in KubeEdge later; in that case, which topology do you recommend? ouo
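One way to unblock a claim like this without a dynamic provisioner is to create a matching PersistentVolume by hand. This is only a hedged sketch: the PV name, host path, and capacity are assumptions, and the capacity must be at least what the chart's claim requests:

```yaml
# Hypothetical hostPath PersistentVolume. A PVC with an empty storageClassName
# (as in the describe output above) can bind to a PV that also has no storage
# class, as long as the capacity and access modes are compatible.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prosody-data-pv          # assumed name
spec:
  capacity:
    storage: 1Gi                 # assumption: must cover the claim's request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/prosody          # assumed directory on the node
```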
There are three ways to solve this problem. One of them is to disable persistence for Prosody entirely:

```yaml
# values.yaml
prosody:
  persistence:
    enabled: false
```

Personally, I'd go with Option 1 and set up a proper persistent storage provisioner, especially considering that you have a two-node cluster.
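If you go the provisioner route, note that the chart's claim does not name a storage class, so it will only be provisioned dynamically once the cluster has a default StorageClass. As a hedged sketch, assuming you have installed some local-path style provisioner (the class name and provisioner string below are assumptions, not something from this thread), marking its class as the default looks like this:

```yaml
# Hypothetical StorageClass marked as the cluster default so that PVCs without
# an explicit storageClassName get provisioned automatically.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path                                    # assumed class name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path                    # assumed provisioner
volumeBindingMode: WaitForFirstConsumer
```

Keep in mind that on older clusters the default class is only applied to newly created PVCs, so an already-pending claim may need to be deleted and recreated (for example by redeploying the chart) before it binds.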
As for the topology, unfortunately I don't have any experience with KubeEdge (yet), so I'll try to give you advice based on my general experience. If you don't plan to have a ton of users (more than 50) chatting simultaneously, you'll be fine running Jitsi on a single node, provided that you can allocate the following resources for Jitsi specifically:
Also, a good thing to keep in mind is that you should install Jitsi close to where most of your users are located, e.g. if most of your user base is in Southeast Asia, deploy it there or as close as possible so that your users' RTT stays small.
Hi, I am still trying to make my Jitsi work. I deployed Jitsi in KubeEdge with the same values.yaml:

```yaml
publicURL: "meet.raccoon.com"

jvb:
  useHostPort: true
  # Use public IP of one (or more) of your nodes,
  # or the public IP of an external LB:
  publicIPs:
    - 192.168.186.180

prosody:
  persistence:
    enabled: false
```

As for the NodePort strategy, is there still any additional service needed for jvb/web like in #63, or is creating a corresponding Ingress enough? Thank you soooo much for getting back to me. I'm trying out your advice, still looking for what works for me > <
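Regarding the Ingress question: in general, only the web/signalling traffic goes through an Ingress, while JVB media reaches the node's public IP directly over the UDP host port configured above. A hedged sketch of such an Ingress follows, assuming the chart created a web Service named `myjitsi-web` on port 80 and that an nginx ingress controller is installed (all three are assumptions, not taken from the chart):

```yaml
# Hedged sketch of an Ingress for the Jitsi web frontend.
# Service name, port, and ingress class are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myjitsi-web
  namespace: jitsi
spec:
  ingressClassName: nginx            # assumed ingress controller
  rules:
    - host: meet.raccoon.com         # publicURL from the values above
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myjitsi-web    # assumed service name
                port:
                  number: 80         # assumed service port
```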
Yes, this is normal. As for the connectivity issues, there are two main points:
I'm going to close the issue for now. Please let me know if you have any problems with your Jitsi Meet installation.
Hi, I am trying to deploy Jitsi on a LAN with this values.yaml. I am using Kubernetes v1.22.1 and Helm v3.9.4. My cluster has one master node and one agent node, and I ran this install command, but I found that my prosody-0 pod is always Pending and my jvb pod keeps restarting. I have these pods, and the logs for jvb are: I don't know how to solve it, probably because of my bad values.yaml... Can anyone help with this? Sincerely thanks! T T