Provide out of the box clustering #64

Closed
karimhm opened this issue Feb 3, 2021 · 24 comments

karimhm commented Feb 3, 2021

The official ejabberd community server Docker image does not offer any way to form a cluster. It is hard to find resources online regarding clustering ejabberd.

Are PRs accepted?

badlop commented Feb 15, 2021

Of course, if you are able to get it running, it would be great to merge, document, or at least publish it for anybody else interested in clustering.

karimhm commented Feb 15, 2021

I managed to cluster ejabberd automatically using the Kubernetes API. The work is based on the VerneMQ Docker image start script.
All of the VerneMQ Docker source code is currently licensed under the Apache-2.0 License; is this OK?

badlop commented Feb 16, 2021

Well, you can create an ad-hoc repository (or fork) for hosting the required files and documentation. And I can link to your work in the ejabberd Docker documentation, so anybody interested can find it.

Robbilie commented Apr 6, 2021

@karimhm would you mind sharing your script? or at least reviewing mine? :)

https://github.com/Robbilie/kubernetes-ejabberd

If you have additional requirements, let me know; otherwise I would ask @badlop for a review too and create a PR for this repository here :)

karimhm commented Apr 6, 2021

@karimhm would you mind sharing your script? or at least reviewing mine? :)

https://github.com/Robbilie/kubernetes-ejabberd

If you have additional requirements, let me know; otherwise I would ask @badlop for a review too and create a PR for this repository here :)

My Dockerfile looks as follows:

FROM ejabberd/ecs:20.12

ENV EJABBERD_HOSTS=localhost \
    EJABBERD_ERLANG_NODE="ejabberd@$(hostname -f)"

USER root

# Copy the entrypoint and readiness-probe scripts into the image
COPY docker-entrypoint.sh /docker-entrypoint.sh
COPY ready-probe.sh /ready-probe.sh

# Jinja2 renders the config templates; curl and jq query the Kubernetes API
RUN apk add --no-cache py3-jinja2 curl jq \
    && rm -rf /var/cache/apk/* \
    && chmod +x /docker-entrypoint.sh /ready-probe.sh

# Setup runtime environment
USER ejabberd
WORKDIR $HOME

ENTRYPOINT exec /docker-entrypoint.sh

docker-entrypoint.sh looks as follows:

#!/bin/sh

readonly EJABBERD_READY_FILE=$HOME/.ejabberd_ready
readonly EJABBERD_CLUSTER_READY_FILE=$HOME/.ejabberd_cluster_ready

# Mark ejabberd as not ready so the `ready-probe.sh` script would be able to know about it.
if [ -e $EJABBERD_READY_FILE ]; then
    rm $EJABBERD_READY_FILE
fi

## Configuration files
# `ejabberd.yml`
readonly EJABBERD_CONFIG_TEMPLATE=$TEMPLATES_DIR/ejabberd.yml.tpl
readonly EJABBERD_CONFIG_FILE=$HOME/conf/ejabberd.yml
# `ejabberdctl.cfg`
readonly EJABBERD_CTL_CONFIG_TEMPLATE=$TEMPLATES_DIR/ejabberdctl.cfg.tpl
readonly EJABBERD_CTL_CONFIG_FILE=$HOME/conf/ejabberdctl.cfg

readonly JINJA_CMD="import os;
import sys;
import jinja2;
sys.stdout.write(
    jinja2.Template
        (sys.stdin.read()
    ).render(env=os.environ))
"

cat $EJABBERD_CONFIG_TEMPLATE | python3 -c "$JINJA_CMD" > $EJABBERD_CONFIG_FILE
cat $EJABBERD_CTL_CONFIG_TEMPLATE | python3 -c "$JINJA_CMD" > $EJABBERD_CTL_CONFIG_FILE

## Clustering
join_cluster() {
    # No need to look for a cluster to join if joined before.
    if [ -e $EJABBERD_CLUSTER_READY_FILE ]; then
        echo "[entrypoint_script] Skip joining cluster, already joined."
        # Mark ejabberd as ready
        touch $EJABBERD_READY_FILE
        return 0
    fi

    if [ "$EJABBERD_CLUSTER_KUBERNETES_DISCOVERY" = "TRUE" ]; then
        local kubernetes_cluster_name=${EJABBERD_KUBERNETES_CLUSTER_NAME:-cluster.local}
        local kubernetes_namespace=${EJABBERD_KUBERNETES_NAMESPACE:-`cat /var/run/secrets/kubernetes.io/serviceaccount/namespace`}
        local kubernetes_label_selector=${EJABBERD_KUBERNETES_LABEL_SELECTOR:-cluster.local}
        local kubernetes_subdomain=${EJABBERD_KUBERNETES_SUBDOMAIN:-$(curl --silent -X GET $INSECURE --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://kubernetes.default.svc.$kubernetes_cluster_name/api/v1/namespaces/$kubernetes_namespace/pods?labelSelector=$kubernetes_label_selector -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" | jq '.items[0].spec.subdomain' | sed 's/"//g' | tr '\n' '\0')}

        if [ "$kubernetes_subdomain" = "null" ]; then
            EJABBERD_KUBERNETES_HOSTNAME=$EJABBERD_KUBERNETES_POD_NAME.$kubernetes_namespace.svc.$kubernetes_cluster_name
        else
            EJABBERD_KUBERNETES_HOSTNAME=$EJABBERD_KUBERNETES_POD_NAME.$kubernetes_subdomain.$kubernetes_namespace.svc.$kubernetes_cluster_name
        fi

        local join_cluster_result=0
        local pod_names=$(curl --silent -X GET $INSECURE --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://kubernetes.default.svc.$kubernetes_cluster_name/api/v1/namespaces/$kubernetes_namespace/pods?labelSelector=$kubernetes_label_selector -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" | jq '.items[].spec.hostname' | sed 's/"//g' | tr '\n' ' ')

        for pod_name in $pod_names;
        do
            if [ "$pod_name" = "null" ]; then
                echo "[entrypoint_script] No Kubernetes pods were found. This might happen because the current pod is the first pod."
                echo "[entrypoint_script] Skip joining cluster."
                touch $EJABBERD_CLUSTER_READY_FILE
                # Mark ejabberd as ready
                touch $EJABBERD_READY_FILE
                break
            fi
            if [ "$pod_name" != "$EJABBERD_KUBERNETES_POD_NAME" ]; then
                local node_to_join="ejabberd@$pod_name.$kubernetes_subdomain.$kubernetes_namespace.svc.$kubernetes_cluster_name"
                echo "[entrypoint_script] Will join cluster node: '$node_to_join'"

                local response=$($HOME/bin/ejabberdctl ping $node_to_join)
                while [ "$response" != "pong" ]; do
                    echo "[entrypoint_script] Waiting for node: $node_to_join..."
                    sleep 5
                    response=$($HOME/bin/ejabberdctl ping $node_to_join)
                done

                $HOME/bin/ejabberdctl join_cluster $node_to_join
                join_cluster_result=$?

                break
            else
                echo "[entrypoint_script] Skip joining current node: $pod_name"
            fi
        done

        if [ "$join_cluster_result" -eq 0 ]; then
            echo "[entrypoint_script] ejabberd joined the cluster successfully"
            touch $EJABBERD_CLUSTER_READY_FILE
            # Mark ejabberd as ready
            touch $EJABBERD_READY_FILE
        else
            echo "[entrypoint_script] ejabberd failed to join the cluster"
            exit 2
        fi
    else
        echo "[entrypoint_script] Kubernetes clustering is not enabled"
        # Mark ejabberd as ready
        touch $EJABBERD_READY_FILE
    fi
}

## Termination
EJABBERD_PID=0

terminate() {
    local net_interface=$(route | grep '^default' | grep -o '[^ ]*$')
    local ip_address=$(ip -4 addr show $net_interface | grep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | sed -e "s/^[[:space:]]*//" | head -n 1)

    if [ "$EJABBERD_PID" != 0 ]; then
        # Leave the cluster before terminating
        if [ -n "$EJABBERD_KUBERNETES_HOSTNAME" ]; then
            NODE_NAME_TO_TERMINATE=ejabberd@$EJABBERD_KUBERNETES_HOSTNAME
        else
            NODE_NAME_TO_TERMINATE=ejabberd@$ip_address
        fi

        echo "[entrypoint_script] Leaving cluster '$NODE_NAME_TO_TERMINATE'"
        NO_WARNINGS=true $HOME/bin/ejabberdctl leave_cluster $NODE_NAME_TO_TERMINATE
        $HOME/bin/ejabberdctl stop > /dev/null
        $HOME/bin/ejabberdctl stopped > /dev/null

        kill -s TERM $EJABBERD_PID
        exit 0
    fi
}

trap "terminate" SIGTERM

## Start ejabberd
$HOME/bin/ejabberdctl "foreground" &
EJABBERD_PID=$!
$HOME/bin/ejabberdctl started
join_cluster
wait $EJABBERD_PID

ready-probe.sh looks as follows:

#!/bin/sh

if $HOME/bin/ejabberdctl status > /dev/null 2>&1 && [ -e $HOME/.ejabberd_ready ]; then
    exit 0
else
    exit 3
fi

The ready-probe.sh readiness probe is needed to prevent race conditions and to allow sequential startup. It prevents Kubernetes from launching new nodes before the previous ones become ready.
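
Sequential startup of this kind is what a StatefulSet with the default OrderedReady pod management gives you: pod N+1 is only created once pod N passes its readiness probe. A minimal sketch (the image name, Service name and labels here are placeholders, not taken from the scripts above):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ejabberd
spec:
  serviceName: ejabberd-headless      # headless Service that gives each pod a stable DNS name
  replicas: 3
  podManagementPolicy: OrderedReady   # default: create pods one at a time, in order
  selector:
    matchLabels:
      app: ejabberd
  template:
    metadata:
      labels:
        app: ejabberd
    spec:
      containers:
        - name: ejabberd
          image: ejabberd-cluster:latest   # placeholder for an image built from the Dockerfile above
          readinessProbe:
            exec:
              command: ["/bin/sh", "-c", "/ready-probe.sh"]
            initialDelaySeconds: 15
            periodSeconds: 15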

karimhm commented Apr 6, 2021

A subset of the Kubernetes deployment looks as follows:

env:
  - name: EJABBERD_CLUSTER_KUBERNETES_DISCOVERY
    value: "TRUE"
  - name: EJABBERD_KUBERNETES_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: EJABBERD_KUBERNETES_LABEL_SELECTOR
    value: "app=ejabberd"
readinessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - /ready-probe.sh
  initialDelaySeconds: 15
  periodSeconds: 15

Robbilie commented Apr 7, 2021

Did you check out my solution? It's a bit leaner, doesn't need Kubernetes API access, etc.

karimhm commented Apr 7, 2021

Did you check out my solution? It's a bit leaner, doesn't need Kubernetes API access, etc.

Using DNS for node discovery is not solid and often breaks (due to the OS and networking equipment configuration).

A distribution coordinator (used to find or register the first node) is needed; in my example the Kubernetes API is used due to its simplicity and reliability.
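
For comparison, the DNS-based approach relies on the records that a Kubernetes headless Service publishes for its pods; roughly, from inside a pod (the Service name and namespace are hypothetical):

# One A record per ejabberd pod behind the headless Service
nslookup ejabberd-headless.default.svc.cluster.local

# Each pod also gets a stable name such as
#   ejabberd-0.ejabberd-headless.default.svc.cluster.local
# which is what DNS-based scripts use to build the Erlang node names.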

It is possible to use other distribution coordinators, such as:

Robbilie commented Apr 7, 2021

Oh well, I am using the script right now, and since the headless service functionality is intended exactly for this use case, I figured it would be stable enough, since it's a core Kubernetes feature 🤔

badlop commented Jul 27, 2021

This is possible since ejabberd 21.07.

As an example exercise, this docker-compose.yml sets up two nodes: it registers an account on the main node, and then instructs the replica node to join main. Only the replica is accessible to XMPP clients and for administration.

version: '3.7'

services:

  main:
    image: ejabberd/ecs:latest
    environment:
      - ERLANG_NODE_ARG=ejabberd@main
      - ERLANG_COOKIE=dummycookie123
      - CTL_ON_CREATE=register admin localhost asd
      - CTL_ON_START=stats registeredusers ;
                     status
    command: ["foreground"]
    healthcheck:
      test: netstat -nl | grep -q 5222
      start_period: 5s
      interval: 5s
      timeout: 5s
      retries: 120

  replica:
    image: ejabberd/ecs:latest
    depends_on:
      main:
        condition: service_healthy
    healthcheck:
      test: netstat -nl | grep -q 5222
      start_period: 5s
      interval: 5s
      timeout: 5s
      retries: 120
    ports:
      - "5222:5222"
      - "5269:5269"
      - "5280:5280"
      - "5443:5443"
    environment:
      - ERLANG_NODE_ARG=ejabberd@replica
      - ERLANG_COOKIE=dummycookie123
      - CTL_ON_CREATE=join_cluster ejabberd@main
      - CTL_ON_START=stats registeredusers ;
                     check_password admin localhost asd ;
                     status
    command: ["foreground"]
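
A quick way to try the example above (a sketch, assuming it is saved as docker-compose.yml in the current directory):

docker-compose up -d

# Once both containers report healthy, the replica should see both nodes:
docker-compose exec replica bin/ejabberdctl list_cluster
# expected output, roughly:
#   ejabberd@main
#   ejabberd@replica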

@badlop badlop added this to the ecs 21.07 milestone Jul 27, 2021
@badlop badlop self-assigned this Jul 27, 2021
@badlop badlop closed this as completed Jul 27, 2021

karimhm commented Jul 27, 2021

@badlop I believe this should be documented on Docker hub/ejabberd

Robbilie commented Jul 28, 2021

This is not really a solution for Kubernetes though (and frankly it's not a pretty solution for docker-compose either).

In Kubernetes terms you would run a single pod as the master and a Deployment with the slaves. This is really not pretty, and other software, MongooseIM for example, takes the approach that basically everyone takes, which is a StatefulSet…

badlop commented Aug 2, 2021

I believe this should be documented on Docker hub/ejabberd

Ok, documented in 8e9f665

However, the content of https://hub.docker.com/r/ejabberd/ecs/ is a copy of that file and requires a manual update, so it won't see an update until the next version...

I would ask @badlop for a review too

I know almost nothing about Kubernetes, so I cannot review it, decide on it, or offer an alternative Kubernetes solution.

This is not really a solution for Kubernetes though

As Dan Greenburg explained:

Give your son Marvin two sports shirts as a present. The first time he wears one of them,
look at him sadly and say in your Basic Tone of Voice, 'The other one you didn't like?' 

(and frankly it's not a pretty solution for docker-compose either)

Your improvements are welcome.

Robbilie commented Aug 2, 2021

I did provide a link with an example up front which could be used as a basis; Karim posted their script as well :)
https://github.com/Robbilie/kubernetes-ejabberd

@shodanx2

Is this still the best, official docker-compose.yml file?
Why is it necessary to run two instances of the server?

@shodanx2

I gave updating this YAML file a try but failed.
I wanted to use my existing Let's Encrypt certificates, make MUC creation admin-only, and enable STUN.
It starts just fine, but none of the config makes it to the ejabberd.conf file.
Probably a space in the wrong place, but I can't find it.


version: '3.7'
services:
    ejabberd:
        container_name: ejabberd
        ports:
            - '5222:5222'
            - '5269:5269'
            - '5280:5280'
			# line below for STUN/TURN support
            - "5439:5439" 
            - "5443:5443"
        volumes:
		    - /etc/letsencrypt/:/etc/letsencrypt/
        environment:
            - XMPP_DOMAIN=example.de
			- TZ=Europe/Berlin
            - 'EJABBERD_SSLCERT_EXAMPLE_DE=/etc/letsencrypt/live/example.de/fullchain.pem'
			- 'EJABBERD_SSLCERT_PUBSUB_EXAMPLE_DE=/etc/letsencrypt/live/pubsub.example.de/fullchain.pem'
			- 'EJABBERD_SSLCERT_CONFERENCE_EXAMPLE_DE=/etc/letsencrypt/live/conference.example.de/fullchain.pem'
            - 'EJABBERD_ADMINS=admin@example.de admin2@example.de'
            - 'EJABBERD_USERS=admin@example.de:password1234 admin2@example.de:password4567'
			# line below for STUN/TURN support
            - 'EJABBERD_STUN=true'
            - 'EJABBERD_MUC_CREATE_ADMIN_ONLY=true'
#            - 'EJABBERD_LOGLEVEL=4'
#            - CTL_ON_CREATE=
        image: ejabberd/ecs:latest
		restart: unless-stopped
# health check doesn't work, prevents docker-compose from starting "services.ejabberd.healthcheck contains an invalid type, it should be an object"
#        healthcheck:
#            - test: netstat -nl | grep -q 5222
#            - start_period: 5s
#            - interval: 5s
#            - timeout: 5s
#            - retries: 120

badlop commented Oct 24, 2022

Is this still the best, official docker-compose.yml file?

What exact file are you referring to?

Why is it necessary to run two instances of the server?

Not necessary. This issue started with this question and complaint:

The official ejabberd community server Docker image does not offer any way to form a cluster. It is hard to find resources online regarding clustering ejabberd.

I showed that it is possible. Nobody said it is necessary.

none of the config makes it to the ejabberd.conf file

Your compose file uses the ejabberd/ecs image, but you are using environment variables that were introduced and supported by some other image, not by ejabberd/ecs.

@shodanx2

Thanks badlop.
I cannot get this to work.
I will try installing on the host OS instead.
I thought the Docker install would be easier, but I failed.

badlop commented Oct 24, 2022

You failed because you are mixing the official documentation and image with other examples that show configuration for other images that include other features.

If you follow strictly what the docker-ejabberd README says, and the exact example configuration that it links, it works.

@shodanx2

But this leads back here for the docker-compose file #64 (comment)

I was trying to have a single file to "paste-edit-run and server's done"

That's why I posted an attempt at another docker-compose file

I'm making a shell script to do that on the host instead (like I've done to streamline the docker-mailserver installation: docker-mailserver/docker-mailserver#2839)

badlop commented Nov 7, 2022

Ok, I've updated the docker-ejabberd README and the GitHub container documentation so they no longer link to this place.

shodanx2 commented Nov 8, 2022

Great!

If you, or anyone else reading this, are interested in taking this a step further, I think the following could be added to the docker-compose file to streamline server installation:

  1. container creates the database folder and sets its permissions (*1)
  2. container includes the ejabberd.yml example file
  3. fill in the hostname
  4. fill in the SSL certificate

Could you make the docker-compose.yml from the doc into a file that can be fetched with wget?

possibly https://raw.githubusercontent.com/processone/ejabberd/master/docker-compose.yml ?

I've made a small script that does this in one copy and paste

(the script doesn't currently work; something's wrong with the file access: [critical] <0.174.0>@ejabberd_app:start/2:72 Failed to start ejabberd application: Failed to read YAML file '/opt/ejabberd/conf/ejabberd.yml': Syntax error on line 24 at position 2: did not find expected key)

 mkdir database; chown 9000:9000 database
 ejabber_hostname=example.com
 ejabber_admin_password=qwerty
 wget -O ejabberd.yml https://raw.githubusercontent.com/processone/ejabberd/master/ejabberd.yml.example
 sed -i "s/  - localhost/  - localhost\n  - ${ejabber_hostname}/" ejabberd.yml
 sed -i "s/# certfiles:/ certfiles:/" ejabberd.yml
 sed -i "s/#  - \/etc\/letsencrypt\/live\/domain.tld\/fullchain.pem/  - \/etc\/letsencrypt\/live\/${ejabber_hostname}\/fullchain.pem/" ejabberd.yml
 sed -i "s/#  - \/etc\/letsencrypt\/live\/domain.tld\/privkey.pem/  - \/etc\/letsencrypt\/live\/${ejabber_hostname}\/privkey.pem/" ejabberd.yml
 sed -i "s/register admin localhost asd/register admin localhost ${ejabber_admin_password}/" docker-compose.yml
 sed -i "s/      - .\/database:\/opt\/ejabberd\/database/      - .\/database:\/opt\/ejabberd\/database\n      - \/etc\/letsencrypt\/:\/etc\/letsencrypt\//" docker-compose.yml

The above is based on another Docker container I use, docker-mailserver; here is how they handle this.

The hostname is specified like this:

services:
  mailserver:
    hostname: example.com

And they assume the user has already taken care of running certbot on their host, so they just mount the /etc/letsencrypt folder like this. I think this is a safe assumption; even the ejabberd.yml.example file assumes this in its examples:

    volumes:
      - /etc/letsencrypt/:/etc/letsencrypt/

shodanx2 commented Nov 8, 2022

Wow that took a long time to debug


I was getting desperate, I spammed the .pem files into every path I could think of

root@example:/opt/ejabberd# find ./ | grep \.pem
./privkey.pem
./database/privkey.pem
./database/fullchain.pem
./database/opt/ejabberd/conf/privkey.pem
./database/opt/ejabberd/conf/fullchain.pem
./database/conf/privkey.pem
./database/conf/fullchain.pem
./fullchain.pem
./opt/ejabberd/privkey.pem
./opt/ejabberd/fullchain.pem
./opt/ejabberd/conf/privkey.pem
./opt/ejabberd/conf/fullchain.pem
./conf/privkey.pem
./conf/fullchain.pem

Turns out there's an extra space in the script


curse yaml !!

Anyway, it's further along. Here is what I found out.

First, for Let's Encrypt you need to add the alternate domains to your SSL certificate for each subdomain that XMPP requires:

certbot certonly --standalone --expand -d ${ejabber_hostname} -d proxy.${ejabber_hostname} -d pubsub.${ejabber_hostname} -d upload.${ejabber_hostname} -d conference.${ejabber_hostname}

And if you have existing certs, certbot will just create a new folder, so the files end up at something like
/etc/letsencrypt/live/${ejabber_hostname}-0001/fullchain.pem
which is going to break the scripts below.
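
One way around that (an assumption on my part, not something tested in this thread) is to tell certbot to expand the existing certificate lineage instead of creating a new -0001 folder:

# List the existing certificate lineages and the names they cover
certbot certificates

# Expand the existing lineage in place, keeping the original folder name
certbot certonly --standalone --expand --cert-name ${ejabber_hostname} \
    -d ${ejabber_hostname} -d proxy.${ejabber_hostname} -d pubsub.${ejabber_hostname} \
    -d upload.${ejabber_hostname} -d conference.${ejabber_hostname}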

So here is the procedure for now

1st step

mkdir /opt/ejabberd; cd /opt/ejabberd
nano docker-compose.yml

Now paste the YAML text from the container docs:

version: '3.7'

services:

  main:
    image: ghcr.io/processone/ejabberd
    container_name: ejabberd
    environment:
      - CTL_ON_CREATE=register admin localhost asd
      - CTL_ON_START=registered_users localhost ;
                     status
    ports:
      - "5222:5222"
      - "5269:5269"
      - "5280:5280"
      - "5443:5443"
    volumes:
      - ./ejabberd.yml:/opt/ejabberd/conf/ejabberd.yml:ro
      - ./database:/opt/ejabberd/database

press CTRL+X, then y to save, then ENTER

2nd step: change ejabber_hostname and ejabber_admin_password.
This will make your SSL privkey readable to all users so that the ejabberd process can read it (ejabberd does not run as root).

Now repeat with the script below:
nano setup.sh
and copy and paste the script below.

#!/bin/bash
mkdir database; chown 9000:9000 database
ejabber_hostname=example.com
ejabber_admin_password=qwerty
wget -O ejabberd.yml https://raw.githubusercontent.com/processone/ejabberd/master/ejabberd.yml.example
sed -i "s/  - localhost/  - localhost\n  - ${ejabber_hostname}/" ejabberd.yml
sed -i "s/# certfiles:/certfiles:/" ejabberd.yml
sed -i "s/#  - \/etc\/letsencrypt\/live\/domain.tld\/fullchain.pem/  - \/etc\/letsencrypt\/live\/${ejabber_hostname}\/fullchain.pem/" ejabberd.yml
sed -i "s/#  - \/etc\/letsencrypt\/live\/domain.tld\/privkey.pem/  - \/etc\/letsencrypt\/live\/${ejabber_hostname}\/privkey.pem/" ejabberd.yml
sed -i "s/register admin localhost asd/register admin localhost ${ejabber_admin_password}/" docker-compose.yml
sed -i "s/      - .\/database:\/opt\/ejabberd\/database/      - .\/database:\/opt\/ejabberd\/database\n      - \/etc\/letsencrypt\/:\/etc\/letsencrypt\//" docker-compose.yml
chmod go+r /etc/letsencrypt/live/${ejabber_hostname}/privkey.pem
docker-compose up -d
sleep 5
tail -f -n 500  /var/lib/docker/volumes/*/_data/ejabberd.log

and execute the script

chmod +x setup.sh; ./setup.sh

This will dump you immediately into the logs; press CTRL+C to quit.

Here is my log sample,

2022-11-08 03:35:15.478270+00:00 [info] <0.174.0>@ejabberd_config:load/1:82 Loading configuration from /opt/ejabberd/conf/ejabberd.yml
2022-11-08 03:35:16.361756+00:00 [info] <0.174.0>@ejabberd_config:load/1:89 Configuration loaded successfully
2022-11-08 03:35:16.694225+00:00 [info] <0.416.0>@ejabberd_systemd:init/1:103 Got no NOTIFY_SOCKET, notifications disabled
2022-11-08 03:35:16.716965+00:00 [info] <0.419.0>@translate:load/2:127 Building language translation cache
2022-11-08 03:35:17.045509+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia ram table 'ejabberd_commands'
2022-11-08 03:35:17.135566+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia ram table 'route'
2022-11-08 03:35:17.145438+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia ram table 'route_multicast'
2022-11-08 03:35:17.164739+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia ram table 'session'
2022-11-08 03:35:17.169336+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia ram table 'session_counter'
2022-11-08 03:35:17.184422+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia ram table 's2s'
2022-11-08 03:35:17.188221+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia ram table 'temporarily_blocked'
2022-11-08 03:35:17.200205+00:00 [info] <0.415.0>@gen_mod:start_modules/0:130 Loading modules for localhost and example.com
2022-11-08 03:35:17.200455+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia ram table 'mod_register_ip'
2022-11-08 03:35:17.204728+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc table 'sr_group'
2022-11-08 03:35:17.209726+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc table 'sr_user'
2022-11-08 03:35:17.225665+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'privacy'
2022-11-08 03:35:17.253577+00:00 [warning] <0.415.0>@mod_mam:start/2:111 Mnesia backend for mod_mam is not recommended: it's limited to 2GB and often gets corrupted when reaching this limit. SQL backend is recommended. Namely, for small servers SQLite is a preferred choice because it's very easy to configure.
2022-11-08 03:35:17.253888+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'archive_msg'
2022-11-08 03:35:17.258058+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'archive_prefs'
2022-11-08 03:35:17.312026+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc table 'muc_room'
2022-11-08 03:35:17.317295+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc table 'muc_registered'
2022-11-08 03:35:17.321833+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia ram table 'muc_online_room'
2022-11-08 03:35:17.332314+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'vcard'
2022-11-08 03:35:17.337399+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc table 'vcard_search'
2022-11-08 03:35:17.360028+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'motd'
2022-11-08 03:35:17.365316+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'motd_users'
2022-11-08 03:35:17.389411+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia ram table 'bosh'
2022-11-08 03:35:17.393577+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'push_session'
2022-11-08 03:35:17.417474+00:00 [info] <0.668.0>@mod_stun_disco:parse_listener/1:616 Going to offer STUN/TURN service: 172.17.0.2:3478 (udp)
2022-11-08 03:35:17.418542+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'roster'
2022-11-08 03:35:17.436348+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'roster_version'
2022-11-08 03:35:17.505289+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'last_activity'
2022-11-08 03:35:17.524074+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'offline_msg'
2022-11-08 03:35:17.618579+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'caps_features'
2022-11-08 03:35:17.626491+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia ram table 'pubsub_last_item'
2022-11-08 03:35:17.634923+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc table 'pubsub_index'
2022-11-08 03:35:17.645355+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc table 'pubsub_node'
2022-11-08 03:35:17.650120+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc table 'pubsub_state'
2022-11-08 03:35:17.654721+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'pubsub_item'
2022-11-08 03:35:17.664441+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc table 'pubsub_orphan'
2022-11-08 03:35:17.670170+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'private_storage'
2022-11-08 03:35:17.695745+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'mqtt_pub'
2022-11-08 03:35:17.707278+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia ram table 'mqtt_session'
2022-11-08 03:35:17.711884+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia ram table 'mqtt_sub'
2022-11-08 03:35:17.731074+00:00 [info] <0.768.0>@mod_mqtt:init_topic_cache/2:641 Building MQTT cache for localhost, this may take a while
2022-11-08 03:35:17.746896+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia ram table 'bytestream'
2022-11-08 03:35:17.753504+00:00 [warning] <0.415.0>@mod_mam:start/2:111 Mnesia backend for mod_mam is not recommended: it's limited to 2GB and often gets corrupted when reaching this limit. SQL backend is recommended. Namely, for small servers SQLite is a preferred choice because it's very easy to configure.
2022-11-08 03:35:17.756810+00:00 [info] <0.805.0>@mod_stun_disco:parse_listener/1:616 Going to offer STUN/TURN service: 172.17.0.2:3478 (udp)
2022-11-08 03:35:17.761048+00:00 [info] <0.809.0>@mod_mqtt:init_topic_cache/2:641 Building MQTT cache for example.com, this may take a while
2022-11-08 03:35:17.772341+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'passwd'
2022-11-08 03:35:17.777379+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia ram table 'reg_users_counter'
2022-11-08 03:35:17.810848+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc_only table 'oauth_token'
2022-11-08 03:35:17.817812+00:00 [info] <0.366.0>@ejabberd_mnesia:create/2:270 Creating Mnesia disc table 'oauth_client'
2022-11-08 03:35:17.872743+00:00 [info] <0.174.0>@ejabberd_cluster_mnesia:wait_for_sync/1:123 Waiting for Mnesia synchronization to complete
2022-11-08 03:35:17.925037+00:00 [warning] <0.505.0>@ejabberd_pkix:log_warnings/1:393 Invalid certificate in /etc/letsencrypt/live/example.com-0001/fullchain.pem: at line 63: certificate is signed by unknown CA
2022-11-08 03:35:17.986026+00:00 [warning] <0.505.0>@ejabberd_pkix:check_domain_certfiles/1:312 No certificate found matching localhost
2022-11-08 03:35:17.986534+00:00 [warning] <0.505.0>@ejabberd_pkix:check_domain_certfiles/1:312 No certificate found matching pubsub.localhost
2022-11-08 03:35:17.986984+00:00 [warning] <0.505.0>@ejabberd_pkix:check_domain_certfiles/1:312 No certificate found matching upload.localhost
2022-11-08 03:35:17.987459+00:00 [warning] <0.505.0>@ejabberd_pkix:check_domain_certfiles/1:312 No certificate found matching conference.localhost
2022-11-08 03:35:17.987866+00:00 [warning] <0.505.0>@ejabberd_pkix:check_domain_certfiles/1:312 No certificate found matching proxy.localhost
2022-11-08 03:35:17.988951+00:00 [info] <0.174.0>@ejabberd_app:start/2:63 ejabberd 22.10.0 is started in the node ejabberd@localhost in 2.80s
2022-11-08 03:35:17.990797+00:00 [info] <0.499.0>@ejabberd_listener:init/4:160 Start accepting TLS connections at [::]:5223 for ejabberd_c2s
2022-11-08 03:35:17.990818+00:00 [info] <0.502.0>@ejabberd_listener:init/4:160 Start accepting TCP connections at [::]:5280 for ejabberd_http
2022-11-08 03:35:17.990975+00:00 [info] <0.504.0>@ejabberd_listener:init/4:160 Start accepting TCP connections at [::]:1883 for mod_mqtt
2022-11-08 03:35:17.991107+00:00 [info] <0.500.0>@ejabberd_listener:init/4:160 Start accepting TCP connections at [::]:5269 for ejabberd_s2s_in
2022-11-08 03:35:17.991113+00:00 [info] <0.501.0>@ejabberd_listener:init/4:160 Start accepting TLS connections at [::]:5443 for ejabberd_http
2022-11-08 03:35:17.991230+00:00 [info] <0.498.0>@ejabberd_listener:init/4:160 Start accepting TCP connections at [::]:5222 for ejabberd_c2s
2022-11-08 03:35:17.991400+00:00 [info] <0.503.0>@ejabberd_listener:init/4:127 Start accepting UDP connections at [::]:3478 for ejabberd_stun
2022-11-08 03:35:17.991393+00:00 [info] <0.786.0>@ejabberd_listener:init/4:160 Start accepting TCP connections at 172.17.0.2:7777 for mod_proxy65_stream
2022-11-08 03:35:17.997386+00:00 [info] <0.503.0>@ejabberd_stun:prepare_turn_opts/2:133 You have several virtual hosts configured, but option 'auth_realm' is undefined and 'auth_type' is set to 'user', so the TURN relay might not be working properly. Using localhost as a fallback

So, from this point:

it looks like STUN/TURN doesn't work,
and
mod_mam is complaining about not using SQLite, which is apparently "very easy" to configure.
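
For reference, silencing that warning means pointing mod_mam at an SQL backend in ejabberd.yml; a rough sketch for SQLite (the database path is an assumption, and depending on the ejabberd version the SQL schema may need to be loaded separately):

sql_type: sqlite
sql_database: "/opt/ejabberd/database/ejabberd.db"

modules:
  mod_mam:
    db_type: sql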

Oh, wait a second

[warning] <0.505.0>@ejabberd_pkix:log_warnings/1:393 Invalid certificate in /etc/letsencrypt/live/example.com-0001/fullchain.pem: at line 63: certificate is signed by unknown CA

root@example:/opt/ejabberd# openssl crl2pkcs7 -nocrl -certfile /etc/letsencrypt/live/example.com-0001/fullchain.pem | openssl pkcs7 -print_certs -noout
subject=CN = example.com
issuer=C = US, O = Let's Encrypt, CN = R3

subject=C = US, O = Let's Encrypt, CN = R3
issuer=C = US, O = Internet Security Research Group, CN = ISRG Root X1

subject=C = US, O = Internet Security Research Group, CN = ISRG Root X1
issuer=O = Digital Signature Trust Co., CN = DST Root CA X3

Well... looks like it's valid, why is ejabberd complaining ....

root@example:/opt/ejabberd# openssl x509 -noout -modulus -in /etc/letsencrypt/live/example.com-0001/fullchain.pem | openssl md5
MD5(stdin)= 6152XXXXXXXXXXXXXXXXXXXXXXX9816
root@example:/opt/ejabberd# openssl rsa -noout -modulus -in /etc/letsencrypt/live/example.com-0001/privkey.pem | openssl md5
MD5(stdin)= 6152XXXXXXXXXXXXXXXXXXXXXXX9816

keys match....

root@example:/opt/ejabberd# openssl x509 -in /etc/letsencrypt/live/example.com-0001/cert.pem -noout -pubkey
-----BEGIN PUBLIC KEY-----
MIIBIjXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXX
[...]
XXXXXXXXXXXXXXXXXXXXXXXXXXXX
AQAB
-----END PUBLIC KEY-----
root@example:/opt/ejabberd# openssl rsa -in /etc/letsencrypt/live/example.com-0001/privkey.pem -pubout
writing RSA key
-----BEGIN PUBLIC KEY-----
MIIBIjXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXX
[...]
XXXXXXXXXXXXXXXXXXXXXXXXXXXX
AQAB
-----END PUBLIC KEY-----

public keys match

root@example:/opt/ejabberd# openssl x509 -noout -in /etc/letsencrypt/live/example.com-0001/fullchain.pem -dates
notBefore=Nov  8 01:41:55 2022 GMT
notAfter=Feb  6 01:41:54 2023 GMT

dates good

root@example:/opt/ejabberd# openssl verify -CAfile /etc/letsencrypt/live/example.com-0001/fullchain.pem /etc/letsencrypt/live/example.com-0001/cert.pem
C = US, O = Internet Security Research Group, CN = ISRG Root X1
error 2 at 2 depth lookup: unable to get issuer certificate
error /etc/letsencrypt/live/example.com-0001/cert.pem: verification failed

Oh, what's going on there, might not be an ejabberd problem in this case ....
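
A likely explanation: fullchain.pem ends with the ISRG Root X1 certificate that is cross-signed by DST Root CA X3, which is not a self-signed root, so using the chain file itself as -CAfile cannot complete verification. A more meaningful check verifies against the system trust store and passes the chain as untrusted intermediates (the CA path is an assumption):

openssl verify -CApath /etc/ssl/certs \
    -untrusted /etc/letsencrypt/live/example.com-0001/fullchain.pem \
    /etc/letsencrypt/live/example.com-0001/cert.pem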

Oh well, the server works, my Sunday has disappeared, enough of this ...

sando38 commented Jul 14, 2023

Just in case anybody would like to try this helm chart:
https://github.com/sando38/helm-ejabberd

It is under development, and feedback and testing are very much welcome.
Merging upstream is also intended.
