
mod_event_pusher_http doesn't handle muc_light messages #2363

Closed
jasl opened this issue Jul 8, 2019 · 14 comments
Assignees
Labels
community Non ESL issues and PRs

Comments

@jasl
Contributor

jasl commented Jul 8, 2019

MongooseIM version: 3.4
Installed from: source
Erlang/OTP version: 22

Normal user-to-user chat works great, but I found that my HTTP service doesn't receive messages from MUC Light rooms.

Unfortunately, I don't know what's wrong.

Here's my mongooseim.cfg:

%%%
%%%               ejabberd configuration file
%%%
%%%'

%%% The parameters used in this configuration file are explained in more detail
%%% in the ejabberd Installation and Operation Guide.
%%% Please consult the Guide in case of doubts, it is included with
%%% your copy of ejabberd, and is also available online at
%%% http://www.process-one.net/en/ejabberd/docs/

%%% This configuration file contains Erlang terms.
%%% In case you want to understand the syntax, here are the concepts:
%%%
%%%  - The character to comment a line is %
%%%
%%%  - Each term ends in a dot, for example:
%%%      override_global.
%%%
%%%  - A tuple has a fixed definition, its elements are
%%%    enclosed in {}, and separated with commas:
%%%      {loglevel, 4}.
%%%
%%%  - A list can have as many elements as you want,
%%%    and is enclosed in [], for example:
%%%      [http_poll, web_admin, tls]
%%%
%%%    Pay attention that list elements are delimited with commas,
%%%    but no comma is allowed after the last list element. This will
%%%    give a syntax error unlike in more lenient languages (e.g. Python).
%%%
%%%  - A keyword of ejabberd is a word in lowercase.
%%%    Strings are enclosed in "" and can contain spaces, dots, ...
%%%      {language, "en"}.
%%%      {ldap_rootdn, "dc=example,dc=com"}.
%%%
%%%  - This term includes a tuple, a keyword, a list, and two strings:
%%%      {hosts, ["jabber.example.net", "im.example.com"]}.
%%%
%%%  - This config is preprocessed during release generation by a tool which
%%%    interprets double curly braces as substitution markers, so avoid this
%%%    syntax in this file (though it's valid Erlang).
%%%
%%%    So this is OK (though arguably looks quite ugly):
%%%      { {s2s_addr, "example-host.net"}, {127,0,0,1} }.
%%%
%%%    And I can't give an example of what's not OK exactly because
%%%    of this rule.
%%%


%%%.   =======================
%%%'   OVERRIDE STORED OPTIONS

%%
%% Override the old values stored in the database.
%%

%%
%% Override global options (shared by all ejabberd nodes in a cluster).
%%
%% override_global.

%%
%% Override local options (specific for this particular ejabberd node).
%%
%% override_local.

%%
%% Remove the Access Control Lists before new ones are added.
%%
%% override_acls.

%%%.   =========
%%%'   GDPR RELATED
%% When set to true, user data removals will be applied to all modules that implement the GDPR behaviour -
%% even the disabled ones!
%%
{gdpr_removal_for_disabled_modules, false}.

%%%.   =========
%%%'   DEBUGGING

%%
%% loglevel: Verbosity of log files generated by ejabberd.
%% 0: No ejabberd log at all (not recommended)
%% 1: Critical
%% 2: Error
%% 3: Warning
%% 4: Info
%% 5: Debug
%%
{loglevel, 4}.

%%%.   ================
%%%'   SERVED HOSTNAMES

%%
%% hosts: Domains served by ejabberd.
%% You can define one or several, for example:
%% {hosts, ["example.net", "example.com", "example.org"]}.
%%
{hosts, ["im.cybros.dev"]}.

%%
%% route_subdomains: Delegate subdomains to other XMPP servers.
%% For example, if this ejabberd serves example.org and you want
%% to allow communication with an XMPP server called im.example.org.
%%
% {route_subdomains, s2s}.


%%%.   ===============
%%%'   LISTENING PORTS

%%
%% listen: The ports ejabberd will listen on, which service each is handled
%% by and what options to start it with.
%%
{listen, [
  %% BOSH and WS endpoints over HTTP
  {5280, ejabberd_cowboy, [
    {num_acceptors, 10},
    {transport_options, [{max_connections, 1024}]},
    {modules, [
      {"_", "/http-bind", mod_bosh},
      {"_", "/ws-xmpp", mod_websockets, [
        {ejabberd_service, [
          {access, all},
          {shaper_rule, fast},
          {ip, {127, 0, 0, 1}}
        ]},
        {timeout, 600000}, {ping_rate, 2000}
      ]}
    ]}
  ]},

  %% BOSH and WS endpoints over HTTPS
  % {5285, ejabberd_cowboy, [
  %   {num_acceptors, 10},
  %   {transport_options, [{max_connections, 1024}]},
  %   {ssl, [{certfile, "priv/ssl/cybros.io.pem"}, {keyfile, "priv/ssl/cybros.io.key"}, {password, ""}]},
  %   {modules, [
  %     {"_", "/http-bind", mod_bosh},
  %     {"_", "/ws-xmpp", mod_websockets, [
  %       {timeout, 600000}, {ping_rate, 60000}
  %     ]}
  %   ]}
  % ]},

  %% MongooseIM HTTP API: it's important to start it on localhost
  %% or on some private interface only (not accessible from the outside),
  %% or at least on a different port hidden behind a firewall.

  {{8088, "127.0.0.1"}, ejabberd_cowboy, [
    {num_acceptors, 10},
    {transport_options, [{max_connections, 1024}]},
    {modules, [
      {"localhost", "/api", mongoose_api_admin, []}
    ]}
  ]},

  % {8089, ejabberd_cowboy, [
  %   {num_acceptors, 10},
  %   {transport_options, [{max_connections, 1024}]},
  %   {protocol_options, [{compress, true}]},
  %   {ssl, [{certfile, "priv/ssl/cybros.io.pem"}, {keyfile, "priv/ssl/cybros.io.key"}, {password, ""}]},
  %   {modules, [
  %     {"_", "/api/sse", lasse_handler, [mongoose_client_api_sse]},
  %     {"_", "/api/messages/[:with]", mongoose_client_api_messages, []},
  %     {"_", "/api/contacts/[:jid]", mongoose_client_api_contacts, []},
  %     {"_", "/api/rooms/[:id]", mongoose_client_api_rooms, []},
  %     {"_", "/api/rooms/[:id]/config", mongoose_client_api_rooms_config, []},
  %     {"_", "/api/rooms/:id/users/[:user]", mongoose_client_api_rooms_users, []},
  %     {"_", "/api/rooms/[:id]/messages", mongoose_client_api_rooms_messages, []}
  %   ]}
  % ]},

  % {5222, ejabberd_c2s, [
  %   %%
  %   %% If TLS is compiled in and you installed a SSL
  %   %% certificate, specify the full path to the
  %   %% file and uncomment this line:
  %   %%
  %   {certfile, "priv/ssl/server.pem"}, starttls,
  %   %% {zlib, 10000},
  %   %% https://www.openssl.org/docs/apps/ciphers.html#CIPHER_STRINGS
  %   %% {ciphers, "DEFAULT:!EXPORT:!LOW:!SSLv2"},
  %   {access, c2s},
  %   {shaper, c2s_shaper},
  %   {max_stanza_size, 65536},
  %   {protocol_options, ["no_sslv3"]}
  % ]},

  {5269, ejabberd_s2s_in, [
    {shaper, s2s_shaper},
    {max_stanza_size, 131072},
    {protocol_options, ["no_sslv3"]}
  ]}

  %%
  %% ejabberd_service: Interact with external components (transports, ...)
  %%
  % {8888, ejabberd_service, [
  %   {access, all},
  %   {shaper_rule, fast},
  %   {ip, {127, 0, 0, 1}},
  %   {password, "secret"}
  % ]}

  %%
  %% ejabberd_stun: Handles STUN Binding requests
  %%
  % {{3478, udp}, ejabberd_stun, []}
]}.

%%
%% s2s_use_starttls: Enable STARTTLS + Dialback for S2S connections.
%% Allowed values are: false optional required required_trusted
%% You must specify a certificate file.
%%
{s2s_use_starttls, optional}.
%%
%% s2s_certfile: Specify a certificate file.
%%
% {s2s_certfile, "priv/ssl/cybros.io.pem"}.

%% https://www.openssl.org/docs/apps/ciphers.html#CIPHER_STRINGS
% {s2s_ciphers, "DEFAULT:!EXPORT:!LOW:!SSLv2"}.

%%
%% domain_certfile: Specify a different certificate for each served hostname.
%%
% {domain_certfile, "example.org", "/path/to/example_org.pem"}.
% {domain_certfile, "example.com", "/path/to/example_com.pem"}.

%%
%% S2S whitelist or blacklist
%%
%% Default s2s policy for undefined hosts.
%%
{s2s_default_policy, deny}.

%%
%% Allow or deny communication with specific servers.
%%
% {{s2s_host, "goodhost.org"}, allow}.
% {{s2s_host, "badhost.org"}, deny}.
{outgoing_s2s_port, 5269}.

%%
%% IP addresses predefined for specific hosts to skip DNS lookups.
%% Ports defined here take precedence over outgoing_s2s_port.
%% Examples:
%%
% {{s2s_addr, "example-host.net"}, {127,0,0,1}}.
% {{s2s_addr, "example-host.net"}, {{127,0,0,1}, 5269}}.

%%
%% Outgoing S2S options
%%
%% Preferred address families (which to try first) and connect timeout
%% in milliseconds.
%%
% {outgoing_s2s_options, [ipv4, ipv6], 10000}.
%%


%%%.   ==============
%%%'   SESSION BACKEND

{sm_backend, {mnesia, []}}.


%%%.   ==============
%%%'   AUTHENTICATION

%% Advertised SASL mechanisms
{sasl_mechanisms, [cyrsasl_plain]}.

%%
%% auth_method: Method used to authenticate the users.
%% The default method is the internal.
%% If you want to use a different method,
%% comment this line and enable the correct ones.
%%
{auth_method, http}.
{auth_opts, [
  %% Store the plain passwords or hashed for SCRAM:
  {password_format, plain},
  % {password_format, scram}
  % {scram_iterations, 4096} % default

  %%
  %% For auth_http:
  %% auth_http requires {http, Host | global, auth, ..., ...} outgoing pool.
  % {basic_auth, "mongooseim:48Fcdc@7"}
  {connection_pool_size, 10}
]}.

%%%.   ==============
%%%'   OUTGOING CONNECTIONS (e.g. DB)

%% Here you may configure all outgoing connections used by MongooseIM,
%% e.g. to RDBMS (such as MySQL), Riak or external HTTP components.
%% Default MongooseIM configuration uses only Mnesia (non-Mnesia extensions are disabled),
%% so no options here are uncommented out of the box.
%% This section includes configuration examples; for comprehensive guide
%% please consult MongooseIM documentation, page "Outgoing connections":
%% - doc/advanced-configuration/outgoing-connections.md
%% - https://mongooseim.readthedocs.io/en/latest/advanced-configuration/outgoing-connections/

{outgoing_pools, [
  {http, global, auth, [
    {workers, 10}
  ], [
    {server, "http://127.0.0.1:3000"},
    {path_prefix, "/internal/ejabberd/"}
  ]},
  {http, global, default, [
    {workers, 50}
  ], [
    {server, "http://127.0.0.1:3000"},
    {path_prefix, "/internal/ejabberd/"}
  ]},
  {rdbms, global, default, [
    {workers, 10}
  ], [
    {server, {pgsql, "127.0.0.1", 5432, "mongooseim", "mongooseim", "Passw0rd"}}
  ]},
  {redis, global, default, [
    {strategy, random_worker}
  ], [
    {host, "127.0.0.1"},
    {port, 6379},
    {database, 2}
  ]}
]}.

%% == Extra options ==
%%
%% If you use PostgreSQL, have a large database, and need a
%% faster but inexact replacement for "select count(*) from users"
%%
{pgsql_users_number_estimate, true}.
%%
%% rdbms_server_type specifies what database is used over the RDBMS layer
%% Can take values mssql, pgsql, mysql
%% In some cases (for example for MAM with pgsql) it is required to set the proper value.
%%
{rdbms_server_type, pgsql}.

%%%.   ===============
%%%'   TRAFFIC SHAPERS

%%
%% The "normal" shaper limits traffic speed to 1000 B/s
%%
{shaper, normal, {maxrate, 1000}}.

%%
%% The "fast" shaper limits traffic speed to 50000 B/s
%%
{shaper, fast, {maxrate, 50000}}.

%%
%% This option specifies the maximum number of elements in the queue
%% of the FSM. Refer to the documentation for details.
%%
{max_fsm_queue, 1000}.

%%%.   ====================
%%%'   ACCESS CONTROL LISTS

%%
%% The 'admin' ACL grants administrative privileges to XMPP accounts.
%% You can put here as many accounts as you want.
%%
% {acl, admin, {user, "alice", "localhost"}}.
% {acl, admin, {user, "a", "localhost"}}.

%%
%% Blocked users
%%
% {acl, blocked, {user, "baduser", "example.org"}}.
% {acl, blocked, {user, "test"}}.

%%
%% Local users: don't modify this line.
%%
{acl, local, {user_regexp, ""}}.

%%
%% More examples of ACLs
%%
% {acl, jabberorg, {server, "jabber.org"}}.
% {acl, aleksey, {user, "aleksey", "jabber.ru"}}.
% {acl, test, {user_regexp, "^test"}}.
% {acl, test, {user_glob, "test*"}}.

%%
%% Define specific ACLs in a virtual host.
%%
% {host_config, "localhost", [
%   {acl, admin, {user, "bob-local", "localhost"}}
% ]}.

%%%.   ============
%%%'   ACCESS RULES

%% Maximum number of simultaneous sessions allowed for a single user:
{access, max_user_sessions, [{10, all}]}.

%% Maximum number of offline messages that users can have:
{access, max_user_offline_messages, [{5000, admin}, {1000, all}]}.

%% This rule allows access only for local users:
{access, local, [{allow, local}]}.

%% Only non-blocked users can use c2s connections:
{access, c2s, [
  {deny, blocked},
  {allow, all}
]}.

%% For C2S connections, all users except admins use the "normal" shaper
{access, c2s_shaper, [
  {none, admin},
  {normal, all}
]}.

%% All S2S connections use the "fast" shaper
{access, s2s_shaper, [{fast, all}]}.

%% Admins of this server are also admins of the MUC service:
% {access, muc_admin, [{allow, admin}]}.

%% Only accounts of the local ejabberd server can create rooms:
% {access, muc_create, [{allow, local}]}.

%% All users are allowed to use the MUC service:
% {access, muc, [{allow, all}]}.

%% In-band registration allows registration of any possible username.
%% To disable in-band registration, replace 'allow' with 'deny'.
{access, register, [{deny, all}]}.

%% By default the frequency of account registrations from the same IP
%% is limited to 1 account every 10 minutes. To disable, specify: infinity
{registration_timeout, infinity}.

%% Default settings for MAM.
%% To set non-standard value, replace 'default' with 'allow' or 'deny'.
%% Only user can access his/her archive by default.
%% An online user can read room's archive by default.
%% Only an owner can change settings and purge messages by default.
%% Empty list (i.e. `[]`) means `[{deny, all}]`.
{access, mam_set_prefs, [{default, all}]}.
{access, mam_get_prefs, [{default, all}]}.
{access, mam_lookup_messages, [{default, all}]}.
{access, mam_purge_single_message, [{default, all}]}.
{access, mam_purge_multiple_messages, [{default, all}]}.

%% 1 command of the specified type per second.
{shaper, mam_shaper, {maxrate, 1}}.
%% This shaper is primarily for Mnesia overload protection during stress testing.
%% The limit is 1000 operations of each type per second.
{shaper, mam_global_shaper, {maxrate, 1000}}.

{access, mam_set_prefs_shaper, [{mam_shaper, all}]}.
{access, mam_get_prefs_shaper, [{mam_shaper, all}]}.
{access, mam_lookup_messages_shaper, [{mam_shaper, all}]}.
{access, mam_purge_single_message_shaper, [{mam_shaper, all}]}.
{access, mam_purge_multiple_messages_shaper, [{mam_shaper, all}]}.

{access, mam_set_prefs_global_shaper, [{mam_global_shaper, all}]}.
{access, mam_get_prefs_global_shaper, [{mam_global_shaper, all}]}.
{access, mam_lookup_messages_global_shaper, [{mam_global_shaper, all}]}.
{access, mam_purge_single_message_global_shaper, [{mam_global_shaper, all}]}.
{access, mam_purge_multiple_messages_global_shaper, [{mam_global_shaper, all}]}.

%%
%% Define specific Access Rules in a virtual host.
%%
% {host_config, "localhost", [
%   {access, c2s, [{allow, admin}, {deny, all}]},
%   {access, register, [{deny, all}]}
% ]}.

%%%.   ================
%%%'   DEFAULT LANGUAGE

%%
%% language: Default language used for server messages.
%%
{language, "en"}.

%%
%% Set a different default language in a virtual host.
%%
% {host_config, "localhost",
%   [{language, "ru"}]
% }.

%%%.   ================
%%%'   MISCELLANEOUS

{all_metrics_are_global, false}.

%%%.   ========
%%%'   SERVICES

%% Unlike modules, services are started per node and provide either features which are not
%% related to any particular host, or backend stuff which is used by modules.
%% This is handled by `mongoose_service` module.

{services, [
  {service_admin_extra, [
    {submods, [
      node,
      % accounts,
      sessions,
      % vcard,
      gdpr,
      roster,
      last,
      private,
      stanza,
      stats
    ]}
  ]},
  {service_cache, []}
]}.

%%%.   =======
%%%'   MODULES

%%
%% Modules enabled in all mongooseim virtual hosts.
%% For list of possible modules options, check documentation.
%%
{modules, [
  {mod_adhoc, []},
  {mod_disco, [{users_can_see_hidden_services, false}]},
  {mod_commands, []},
  {mod_muc_light_commands, []},
  {mod_last, []},
  {mod_stream_management, [
    % default 100
    % size of a buffer of unacked messages
    % {buffer_max, 100}

    % default 1 - server sends the ack request after each stanza
    % {ack_freq, 1}

    % default: 600 seconds
    % {resume_timeout, 600}
  ]},
  {mod_muc_light, [
    {host, "muclight.@HOST@"},
    {backend, rdbms},
    {access_create, none},
    {all_can_invite, true},
    {all_can_configure, false},
    {blocking, false},
    {max_occupants, 500},
    {rooms_per_page, 50},
    {rooms_in_rosters, true}
  ]},
  {mod_inbox, [
    {backend, rdbms},
    {reset_markers, [displayed, received, acknowledged]},
    {aff_changes, true},
    {remove_on_kicked, true},
    {groupchat, [muclight]}
  ]},
  {mod_offline, [{access_max_user_messages, max_user_offline_messages}]},
  {mod_privacy, []},
  {mod_blocking, []},
  {mod_roster, [
    {backend, rdbms},
    {versioning, true},
    {store_current_id, true}
  ]},
  {mod_sic, []},
  {mod_bosh, []},
  {mod_carboncopy, []},
  {mod_mam_meta, [
    {backend, rdbms},

    % When set to simple, stores messages in XML and full JIDs.
    % When set to internal, stores messages and JIDs in internal format.
    {rdbms_message_format, simple},

    %% Store user preferences in Mnesia (recommended).
    %% The preferences store will be called each time, as a message is routed.
    %% That is why Mnesia is better suited for this job.
    {user_prefs_store, mnesia},

    % Disable full text search in message archive
    {full_text_search, false},

    %% Enables a pool of asynchronous writers. (default)
    %% Messages will be grouped together based on archive id.
    {async_writer, true},

    %% Cache information about users (default)
    {cache_users, true},

    %% Enable archivization for private messages (default)
    {pm, [
      {archive_groupchats, false}
    ]},

    %%
    %% Message Archive Management (MAM) for multi-user chats (MUC).
    %% Enable XEP-0313 for "muc.@HOST@".
    %%
    {muc, [
      {host, "muclight.@HOST@"}
    ]}

    %% Do not use a <stanza-id/> element (by default stanzaid is used)
    % {no_stanzaid_element, true}
  ]},
  {mod_event_pusher, [
    {backends, [
      {http, [
        {pool_name, default},
        {path, "/notifications"},
        {callback_module, mod_event_pusher_http_defaults}
      ]}
    ]}
  ]}
]}.

%%
%% Enable modules with custom options in a specific virtual host
%%
%%{host_config, "localhost",
%% [{ {add, modules},
%%   [
%%    {mod_some_module, []}
%%   ]
%%  }
%% ]}.

%%%.
%%%'

%%% $Id$

%%% Local Variables:
%%% mode: erlang
%%% End:
%%% vim: set filetype=erlang tabstop=8 foldmarker=%%%',%%%. foldmethod=marker:
%%%.
@jasl
Contributor Author

jasl commented Jul 8, 2019

I'm not sure whether it worked on MongooseIM 3.2.0; I forgot to keep a backup.

@jasl
Contributor Author

jasl commented Jul 8, 2019

I've never written Erlang before, but I tried to build my own callback_module based on mod_event_pusher_http_defaults.erl:

%%%----------------------------------------------------------------------
%%% File    : mod_event_pusher_http_custom.erl
%%% Author  : Baibossynv Valery <baibossynov.valery@gmail.com>
%%% Purpose : Message passing via http
%%% Created : 23 Feb 2075 by Piotr Nosek
%%%----------------------------------------------------------------------

-module(mod_event_pusher_http_custom).
-author("baibossynov.valery@gmail.com").

-behaviour(mod_event_pusher_http).

%% API
-export([should_make_req/6, prepare_body/7, prepare_headers/7]).

%% @doc This function determines whether to send an HTTP notification or not.
%% It can be reconfigured by creating a custom module implementing should_make_req/6
%% and adding it to the mod_event_pusher_http settings as {callback_module, Module}.
%% The default behaviour is to send all chat messages with a non-empty body.
should_make_req(_Acc, out, _Packet, _From, _To, _Opts) ->
    true;
should_make_req(_Acc, in, _Packet, _From, _To, _Opts) ->
    true.

prepare_body(_Acc, _Dir, Host, Message, Sender, Receiver, _Opts) ->
    cow_qs:qs([{<<"author">>, Sender},
        {<<"server">>, Host}, {<<"receiver">>, Receiver}, {<<"message">>, Message}]).

prepare_headers(_Acc, _Dir, _Host, _Sender, _Receiver, _Message, _Opts) ->
    [{<<"Content-Type">>, <<"application/x-www-form-urlencoded">>}].

Then I ran make clean && ./rebar3 compile && make rel, and configured mongooseim.cfg:

  {mod_event_pusher, [
    {backends, [
      {http, [
        {pool_name, external_http_api},
        {path, "/notifications"},
        {callback_module, mod_event_pusher_http_custom}
      ]}
    ]}
  ]}

Still no luck; it looks like it doesn't listen for messages from the MUC Light domain.

Is this by design? Any tips to solve this? Many thanks!

@fenek fenek added the community Non ESL issues and PRs label Jul 11, 2019
@fenek fenek self-assigned this Sep 3, 2019
@fenek
Member

fenek commented Sep 3, 2019

Hi @jasl

Is this issue still valid? If so, with your custom module, do you see any errors in the logs? It is true that the default HTTP backend callback ignores everything except 1-1 chat messages. The custom code should at least push something.
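For reference, here is a minimal sketch of a callback that would also accept groupchat stanzas. The module name is hypothetical, and it assumes the should_make_req/6 behaviour shown earlier in this thread, the exml_query helpers, and that delegating to mod_event_pusher_http_defaults for body/header formatting is acceptable:

```erlang
%% Hypothetical example, not shipped with MongooseIM: accept both chat
%% and groupchat messages with a non-empty <body/>.
-module(mod_event_pusher_http_groupchat).
-behaviour(mod_event_pusher_http).

-export([should_make_req/6, prepare_body/7, prepare_headers/7]).

should_make_req(_Acc, _Dir, Packet, _From, _To, _Opts) ->
    Type = exml_query:attr(Packet, <<"type">>),
    Body = exml_query:path(Packet, [{element, <<"body">>}, cdata], <<>>),
    %% Push both 1-1 chat and MUC Light groupchat messages,
    %% but only when they carry a non-empty body.
    (Type =:= <<"chat">> orelse Type =:= <<"groupchat">>)
        andalso Body =/= <<>>.

%% Delegate formatting to the default callback module.
prepare_body(Acc, Dir, Host, Message, Sender, Receiver, Opts) ->
    mod_event_pusher_http_defaults:prepare_body(Acc, Dir, Host, Message,
                                                Sender, Receiver, Opts).

prepare_headers(Acc, Dir, Host, Sender, Receiver, Message, Opts) ->
    mod_event_pusher_http_defaults:prepare_headers(Acc, Dir, Host, Sender,
                                                   Receiver, Message, Opts).
```

Note that a callback like this only matters once the event for the groupchat message actually reaches mod_event_pusher_http; if the hook is never run for the MUC Light domain, no callback will be invoked at all.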

@jasl
Contributor Author

jasl commented Sep 3, 2019

@fenek I'll retry now

@jasl
Contributor Author

jasl commented Sep 3, 2019

@fenek The issue is still valid on the latest master commit with my custom pusher module:

  {mod_event_pusher, [
    {backends, [
      {http, [
        {pool_name, external_http_api},
        {path, "/notifications"},
        {callback_module, mod_event_pusher_http_custom}
      ]}
    ]}
  ]}

I tried this using the REST API:

POST /api/muc-lights/muclight.im.cybros.io/test_room1/messages HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: localhost:8088
Connection: close
User-Agent: Paw/3.1.8 (Macintosh; OS X/10.14.6) GCDHTTPRequest
Content-Length: 48

{"from":"william@im.cybros.io","body":"Yoooooo"}

MongooseIM foreground output:

Root: /Users/jasl/Workspaces/im/MongooseIM/_build/prod/rel/mongooseim
Exec: /Users/jasl/Workspaces/im/MongooseIM/_build/prod/rel/mongooseim/erts-10.4.4/bin/erlexec -noshell -noinput +Bd -boot /Users/jasl/Workspaces/im/MongooseIM/_build/prod/rel/mongooseim/releases/3.4.0/mongooseim -embedded -config /Users/jasl/Workspaces/im/MongooseIM/_build/prod/rel/mongooseim/etc/app.config -args_file /Users/jasl/Workspaces/im/MongooseIM/_build/prod/rel/mongooseim/etc/vm.args -args_file /Users/jasl/Workspaces/im/MongooseIM/_build/prod/rel/mongooseim/etc/vm.dist.args -- foreground
15:15:47.214 [info] msg: "Starting reporters with []\n", options: []
15:15:47.328 [notice] Changed loglevel of log/ejabberd.log to info
15:15:47.357 [info] Application mnesia exited with reason: stopped
15:15:47.749 [notice] Changed loglevel of log/ejabberd.log to info
15:15:47.849 [info] event=starting_pool, pool=http, host=global, tag=auth, pool_opts=[{workers,10}], conn_opts=[{server,"http://127.0.0.1:3000"},{path_prefix,"/internal/ejabberd/"}]
15:15:47.866 [info] event=starting_pool, pool=http, host=global, tag=external_http_api, pool_opts=[{workers,50}], conn_opts=[{server,"http://127.0.0.1:3000"},{path_prefix,"/internal/ejabberd/"}]
15:15:47.867 [info] event=starting_pool, pool=rdbms, host=global, tag=default, pool_opts=[{workers,10}], conn_opts=[{server,{pgsql,"127.0.0.1",5432,"mongooseim","mongooseim","Passw0rd"}}]
15:15:48.174 [info] event=service_startup,status=starting,service=service_cache,options=[]
15:15:48.178 [info] event=service_startup,status=started,service=service_cache
15:15:48.178 [info] event=service_startup,status=starting,service=service_admin_extra,options=[{submods,[node,sessions,gdpr,roster,last,private,stanza,stats]}]
15:15:48.199 [info] event=service_startup,status=started,service=service_admin_extra
15:15:48.413 [info] mod_stream_management starting
15:15:48.549 [info] ejabberd 3.4.0-230-g112e78438 is started in the node mongooseim@localhost

My Rails backend output:

=> Booting Puma
=> Rails 6.0.0 application starting in development
=> Run `rails server --help` for more startup options
Puma starting in single mode...
* Version 4.1.0 (ruby 2.6.3-p62), codename: Fourth and One
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://localhost:3000
Use Ctrl-C to stop
Started GET "/internal/ejabberd/user_exists?user=handsome&server=im.cybros.io&pass=" for 127.0.0.1 at 2019-09-03 15:15:53 +0800
   (3.6ms)  SELECT "schema_migrations"."version" FROM "schema_migrations" ORDER BY "schema_migrations"."version" ASC
Processing by Internal::EjabberdController#user_exists as HTML
  Parameters: {"user"=>"handsome", "server"=>"im.cybros.io", "pass"=>""}
  User Load (5.2ms)  SELECT "users".* FROM "users" WHERE "users"."uid" = $1 LIMIT $2  [["uid", "handsome"], ["LIMIT", 1]]
  ↳ app/controllers/internal/ejabberd_controller.rb:13:in `user_exists'
Completed 200 OK in 108ms (Views: 0.1ms | ActiveRecord: 56.0ms | Allocations: 11222)


Started GET "/internal/ejabberd/user_exists?user=william&server=im.cybros.io&pass=" for 127.0.0.1 at 2019-09-03 15:15:53 +0800
Processing by Internal::EjabberdController#user_exists as HTML
  Parameters: {"user"=>"william", "server"=>"im.cybros.io", "pass"=>""}
  User Load (2.8ms)  SELECT "users".* FROM "users" WHERE "users"."uid" = $1 LIMIT $2  [["uid", "william"], ["LIMIT", 1]]
  ↳ app/controllers/internal/ejabberd_controller.rb:13:in `user_exists'
Completed 200 OK in 4ms (Views: 0.1ms | ActiveRecord: 2.8ms | Allocations: 593)

@fenek
Member

fenek commented Sep 9, 2019

Given that only some MUC Light messages are sent (i.e. no other events are pushed via HTTP for sure), what are the values of these metrics?

  • im.cybros.io.mod_http_notifications.sent.one
  • im.cybros.io.mod_http_notifications.failed.one

@jasl
Contributor Author

jasl commented Sep 9, 2019

@fenek

I haven't tried metrics yet.
Is there an easy way to get these metrics? Do I need to configure Grafana?
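One way to read them without Grafana (a sketch; the exact metric paths vary between MongooseIM versions, so the names below follow fenek's comment and are assumptions) is to attach a remote shell with mongooseimctl debug and query exometer directly:

```erlang
%% In a remote shell attached via `mongooseimctl debug`.
%% List the metrics that actually exist under the host prefix first,
%% then read the two counters fenek asked about (assumed names).
exometer:find_entries([<<"im.cybros.io">>]).
exometer:get_value([<<"im.cybros.io">>, mod_http_notifications, sent]).
exometer:get_value([<<"im.cybros.io">>, mod_http_notifications, failed]).
```

If get_value/1 returns {error, not_found}, the find_entries/1 output should show the spelling your build uses.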

@jasl
Contributor Author

jasl commented Oct 5, 2019

After upgrading to 3.5.0 the issue still exists. Here's a debug-level log output:

Root: /Users/jasl/Workspaces/im/MongooseIM/_build/prod/rel/mongooseim
Exec: /Users/jasl/Workspaces/im/MongooseIM/_build/prod/rel/mongooseim/erts-10.5.1/bin/erlexec -noshell -noinput +Bd -boot /Users/jasl/Workspaces/im/MongooseIM/_build/prod/rel/mongooseim/releases/3.5.0/mongooseim -embedded -config /Users/jasl/Workspaces/im/MongooseIM/_build/prod/rel/mongooseim/etc/app.config -args_file /Users/jasl/Workspaces/im/MongooseIM/_build/prod/rel/mongooseim/etc/vm.args -args_file /Users/jasl/Workspaces/im/MongooseIM/_build/prod/rel/mongooseim/etc/vm.dist.args -- foreground
03:07:31.989 [info] msg: "Starting reporters with []\n", options: []
03:07:32.151 [notice] Changed loglevel of log/ejabberd.log to info
03:07:32.177 [info] Application mnesia exited with reason: stopped
03:07:32.464 [notice] Changed loglevel of log/ejabberd.log to debug
03:07:32.518 [info] event=starting_pool, pool=http, host=global, tag=auth, pool_opts=[{workers,10}], conn_opts=[{server,"http://127.0.0.1:3000"},{path_prefix,"/internal/ejabberd/"}]
03:07:32.530 [info] event=starting_pool, pool=http, host=global, tag=external_http_api, pool_opts=[{workers,50}], conn_opts=[{server,"http://127.0.0.1:3000"},{path_prefix,"/internal/ejabberd/"}]
03:07:32.531 [info] event=starting_pool, pool=rdbms, host=global, tag=default, pool_opts=[{workers,10}], conn_opts=[{server,{pgsql,"127.0.0.1",5432,"mongooseim","mongooseim","Passw0rd"}}]
03:07:32.782 [info] event=service_startup,status=starting,service=service_admin_extra,options=[{submods,[node,sessions,gdpr,roster,last,private,stanza,stats]}]
03:07:32.797 [info] event=service_startup,status=started,service=service_admin_extra
03:07:32.798 [debug] Module mod_mam_rdbms_user started for <<"im.cybros.io">>.
03:07:32.801 [debug] Module mod_mam_rdbms_async_pool_writer started for <<"im.cybros.io">>.
03:07:32.803 [debug] Module mod_adhoc started for <<"im.cybros.io">>.
03:07:32.803 [debug] Module mod_sic started for <<"im.cybros.io">>.
03:07:32.819 [debug] Module mod_privacy started for <<"im.cybros.io">>.
03:07:32.819 [debug] mod_mam starting
03:07:32.819 [debug] Module mod_mam started for <<"im.cybros.io">>.
03:07:32.820 [debug] Module mod_disco started for <<"im.cybros.io">>.
03:07:32.820 [debug] Module mod_event_pusher_http started for <<"im.cybros.io">>.
03:07:32.821 [debug] Module mod_mam_muc_rdbms_async_pool_writer started for <<"im.cybros.io">>.
03:07:32.821 [debug] Module mod_event_pusher_hook_translator started for <<"im.cybros.io">>.
03:07:32.822 [debug] Module mod_event_pusher started for <<"im.cybros.io">>.
03:07:32.822 [debug] Module mod_commands started for <<"im.cybros.io">>.
03:07:32.823 [debug] Module mod_muc_light_commands started for <<"im.cybros.io">>.
03:07:32.823 [debug] mod_mam_muc starting
03:07:32.827 [debug] Module mod_mam_muc started for <<"im.cybros.io">>.
03:07:32.827 [debug] Module mod_blocking started for <<"im.cybros.io">>.
03:07:32.827 [debug] Module mod_mam_rdbms_arch started for <<"im.cybros.io">>.
03:07:32.839 [debug] Module mod_bosh started for <<"im.cybros.io">>.
03:07:32.839 [debug] Module mod_mam_cache_user started for <<"im.cybros.io">>.
03:07:32.862 [debug] Module mod_roster started for <<"im.cybros.io">>.
03:07:32.872 [debug] Module mod_last started for <<"im.cybros.io">>.
03:07:32.885 [debug] Module mod_offline started for <<"im.cybros.io">>.
03:07:32.885 [debug] Module mod_mam_muc_rdbms_arch started for <<"im.cybros.io">>.
03:07:32.885 [debug] Module mod_mam_meta started for <<"im.cybros.io">>.
03:07:32.886 [debug] Module mod_csi started for <<"im.cybros.io">>.
03:07:32.920 [debug] Module mod_muc_light started for <<"im.cybros.io">>.
03:07:32.920 [info] mod_stream_management starting
03:07:32.922 [debug] Module mod_stream_management started for <<"im.cybros.io">>.
03:07:32.943 [debug] event=upsert_query, query=INSERT INTO inbox (luser, lserver, remote_bare_jid, content, unread_count, msg_id, timestamp) VALUES (?, ?, ?, ?, ?, ?, ?) ON CONFLICT (luser, lserver, remote_bare_jid) DO UPDATE SET content = ?, unread_count = ?, msg_id = ?, timestamp = ?
03:07:32.946 [debug] Module mod_inbox started for <<"im.cybros.io">>.
03:07:32.946 [debug] Module mod_carboncopy started for <<"im.cybros.io">>.
03:07:32.971 [debug] Configured Cowboy Routes: [{'_',[{"/http-bind",mod_bosh,[]},{"/ws-xmpp",mod_websockets,[{ejabberd_service,[{access,all},{shaper_rule,fast},{ip,{127,0,0,1}}]},{timeout,600000},{ping_rate,2000}]}]}]
03:07:32.987 [debug] Configured Cowboy Routes: [{"localhost",[{["/api",["/",<<"muc-lights">>,["/:domain"],["/",<<"config">>]]],mongoose_api_admin,[{command_category,<<"muc-lights">>},{command_subcategory,<<"config">>}]},{["/api",["/",<<"muc-lights">>,["/:domain","/:name"],["/",<<"messages">>]]],mongoose_api_admin,[{command_category,<<"muc-lights">>},{command_subcategory,<<"messages">>}]},{["/api",["/",<<"messages">>,["/:caller"],[]]],mongoose_api_admin,[{command_category,<<"messages">>},{command_subcategory,undefined}]},{["/api",["/",<<"messages">>,[],[]]],mongoose_api_admin,[{command_category,<<"messages">>},{command_subcategory,undefined}]},{["/api",["/",<<"contacts">>,[],[]]],mongoose_api_admin,[{command_category,<<"contacts">>},{command_subcategory,undefined}]},{["/api",["/",<<"sessions">>,["/:host"],[]]],mongoose_api_admin,[{command_category,<<"sessions">>},{command_subcategory,undefined}]},{["/api",["/",<<"users">>,["/:host"],[]]],mongoose_api_admin,[{command_category,<<"users">>},{command_subcategory,undefined}]},{["/api",["/",<<"sessions">>,["/:host","/:user","/:res"],[]]],mongoose_api_admin,[{command_category,<<"sessions">>},{command_subcategory,undefined}]},{["/api",["/",<<"users">>,["/:host","/:user"],[]]],mongoose_api_admin,[{command_category,<<"users">>},{command_subcategory,undefined}]},{["/api",["/",<<"contacts">>,["/:caller","/:jid"],[]]],mongoose_api_admin,[{command_category,<<"contacts">>},{command_subcategory,undefined}]},{["/api",["/",<<"muc-lights">>,["/:domain","/:name"],["/",<<"participants">>]]],mongoose_api_admin,[{command_category,<<"muc-lights">>},{command_subcategory,<<"participants">>}]},{["/api",["/",<<"muc-lights">>,["/:domain"],[]]],mongoose_api_admin,[{command_category,<<"muc-lights">>},{command_subcategory,undefined}]},{["/api",["/",<<"contacts">>,["/:caller","/:jid"],["/",<<"manage">>]]],mongoose_api_admin,[{command_category,<<"contacts">>},{command_subcategory,<<"manage">>}]},{["/api",["/",<<"muc-lights">>,["/:domain","/:name","/:
owner"],["/",<<"management">>]]],mongoose_api_admin,[{command_category,<<"muc-lights">>},{command_subcategory,<<"management">>}]},{["/api",["/",<<"contacts">>,["/:caller","/:jid"],[]]],mongoose_api_admin,[{command_category,<<"contacts">>},{command_subcategory,undefined}]},{["/api",["/",<<"contacts">>,["/:caller","/:jids"],["/",<<"multiple">>]]],mongoose_api_admin,[{command_category,<<"contacts">>},{command_subcategory,<<"multiple">>}]},{["/api",["/",<<"commands">>,[],[]]],mongoose_api_admin,[{command_category,<<"commands">>},{command_subcategory,undefined}]},{["/api",["/",<<"messages">>,["/:caller","/:with"],[]]],mongoose_api_admin,[{command_category,<<"messages">>},{command_subcategory,undefined}]},{["/api",["/",<<"users">>,[],[]]],mongoose_api_admin,[{command_category,<<"users">>},{command_subcategory,undefined}]},{["/api",["/",<<"muc-lights">>,["/:domain"],[]]],mongoose_api_admin,[{command_category,<<"muc-lights">>},{command_subcategory,undefined}]},{["/api",["/",<<"users">>,["/:host","/:user"],[]]],mongoose_api_admin,[{command_category,<<"users">>},{command_subcategory,undefined}]},{["/api",["/",<<"contacts">>,["/:caller"],[]]],mongoose_api_admin,[{command_category,<<"contacts">>},{command_subcategory,undefined}]}]},{'_',[]}]
03:07:32.996 [info] ejabberd 3.5.0-1-g4b674dab3 is started in the node mongooseim@localhost
03:07:44.717 [debug] route
	from {jid,<<"william">>,<<"im.cybros.io">>,<<>>,<<"william">>,<<"im.cybros.io">>,<<>>}
	to {jid,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<>>,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<>>}
	packet #{lserver => <<"im.cybros.io">>,mongoose_acc => true,non_strippable => {set,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},origin_location => {ejabberd_router,route,117},origin_pid => <0.1040.0>,origin_stanza => <<"<message type='groupchat'><body>Yoooooo</body></message>">>,ref => #Ref<0.2008143725.2447900674.50813>,stanza => #{element => {xmlel,<<"message">>,[{<<"type">>,<<"groupchat">>}],[{xmlel,<<"body">>,[],[{xmlcdata,<<"Yoooooo">>}]}]},from_jid => {jid,<<"william">>,<<"im.cybros.io">>,<<>>,<<"william">>,<<"im.cybros.io">>,<<>>},name => <<"message">>,ref => #Ref<0.2008143725.2447900674.50812>,to_jid => {jid,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<>>,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<>>},type => <<"groupchat">>},timestamp => {1570,302464,717643}}
03:07:44.717 [debug] Using module mongoose_router_global
03:07:44.722 [debug] filter passed
03:07:44.722 [debug] routing skipped
03:07:44.722 [debug] Using module mongoose_router_localdomain
03:07:44.725 [debug] filter passed
03:07:44.739 [debug] Incoming room packet.
03:07:44.764 [debug] route
	from {jid,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>}
	to {jid,<<"handsome">>,<<"im.cybros.io">>,<<>>,<<"handsome">>,<<"im.cybros.io">>,<<>>}
	packet #{lserver => <<"muclight.im.cybros.io">>,mongoose_acc => true,non_strippable => {set,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},origin_location => {ejabberd_router,route,117},origin_pid => <0.1040.0>,origin_stanza => <<"<message to='handsome@im.cybros.io' id='1570-302464-731076' type='groupchat' from='test_room1@muclight.im.cybros.io/william@im.cybros.io'><body>Yoooooo</body><stanza-id by='test_room1@muclight.im.cybros.io' id='B51EH2CUJ4G1' xmlns='urn:xmpp:sid:0'/></message>">>,ref => #Ref<0.2008143725.2447900674.50829>,stanza => #{element => {xmlel,<<"message">>,[{<<"to">>,<<"handsome@im.cybros.io">>},{<<"id">>,<<"1570-302464-731076">>},{<<"type">>,<<"groupchat">>},{<<"from">>,<<"test_room1@muclight.im.cybros.io/william@im.cybros.io">>}],[{xmlel,<<"body">>,[],[{xmlcdata,<<"Yoooooo">>}]},{xmlel,<<"stanza-id">>,[{<<"by">>,<<"test_room1@muclight.im.cybros.io">>},{<<"id">>,<<"B51EH2CUJ4G1">>},{<<"xmlns">>,<<"urn:xmpp:sid:0">>}],[]}]},from_jid => {jid,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>},name => <<"message">>,ref => #Ref<0.2008143725.2447900674.50828>,to_jid => {jid,<<"handsome">>,<<"im.cybros.io">>,<<>>,<<"handsome">>,<<"im.cybros.io">>,<<>>},type => <<"groupchat">>},timestamp => {1570,302464,764382}}
03:07:44.764 [debug] Using module mongoose_router_global
03:07:44.764 [debug] filter passed
03:07:44.764 [debug] routing skipped
03:07:44.764 [debug] Using module mongoose_router_localdomain
03:07:44.764 [debug] filter passed
03:07:44.766 [debug] Making request 'user_exists' for user handsome@im.cybros.io...
03:07:44.778 [debug] Request result: 200: <<"true">>
03:07:44.788 [debug] Receive packet
    from {jid,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>}
    to {jid,<<"handsome">>,<<"im.cybros.io">>,<<>>,<<"handsome">>,<<"im.cybros.io">>,<<>>}
    packet {xmlel,<<"message">>,[{<<"to">>,<<"handsome@im.cybros.io">>},{<<"id">>,<<"1570-302464-731076">>},{<<"type">>,<<"groupchat">>},{<<"from">>,<<"test_room1@muclight.im.cybros.io/william@im.cybros.io">>}],[{xmlel,<<"body">>,[],[{xmlcdata,<<"Yoooooo">>}]},{xmlel,<<"stanza-id">>,[{<<"by">>,<<"test_room1@muclight.im.cybros.io">>},{<<"id">>,<<"B51EH2CUJ4G1">>},{<<"xmlns">>,<<"urn:xmpp:sid:0">>}],[]}]}.
03:07:44.793 [debug] local route
	from {jid,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>}
	to {jid,<<"handsome">>,<<"im.cybros.io">>,<<>>,<<"handsome">>,<<"im.cybros.io">>,<<>>}
	packet {xmlel,<<"message">>,[{<<"to">>,<<"hand"...>>},{<<"id">>,<<...>>},{<<...>>,...},{...}],[{xmlel,<<...>>,...},{xmlel,...}]}
03:07:44.793 [debug] session manager
	from {jid,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>}
	to {jid,<<"handsome">>,<<"im.cybros.io">>,<<>>,<<"handsome">>,<<"im.cybros.io">>,<<>>}
	packet #{lserver => <<"muclight.im.cybros.io">>,mongoose_acc => true,non_strippable => {set,0,16,16,...},origin_location => {ejabberd_router,route,117},origin_pid => <0.1040.0>,origin_stanza => <<"<mes"...>>,ref => #Ref<0.2008143725.2447900674.50829>,...}
03:07:44.793 [debug] routing done
03:07:44.799 [debug] route
	from {jid,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>}
	to {jid,<<"william">>,<<"im.cybros.io">>,<<>>,<<"william">>,<<"im.cybros.io">>,<<>>}
	packet #{lserver => <<"muclight.im.cybros.io">>,mongoose_acc => true,non_strippable => {set,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},origin_location => {ejabberd_router,route,117},origin_pid => <0.1040.0>,origin_stanza => <<"<message to='william@im.cybros.io' id='1570-302464-731076' type='groupchat' from='test_room1@muclight.im.cybros.io/william@im.cybros.io'><body>Yoooooo</body><stanza-id by='test_room1@muclight.im.cybros.io' id='B51EH2CUJ4G1' xmlns='urn:xmpp:sid:0'/></message>">>,ref => #Ref<0.2008143725.2447900673.41392>,stanza => #{element => {xmlel,<<"message">>,[{<<"to">>,<<"william@im.cybros.io">>},{<<"id">>,<<"1570-302464-731076">>},{<<"type">>,<<"groupchat">>},{<<"from">>,<<"test_room1@muclight.im.cybros.io/william@im.cybros.io">>}],[{xmlel,<<"body">>,[],[{xmlcdata,<<"Yoooooo">>}]},{xmlel,<<"stanza-id">>,[{<<"by">>,<<"test_room1@muclight.im.cybros.io">>},{<<"id">>,<<"B51EH2CUJ4G1">>},{<<"xmlns">>,<<"urn:xmpp:sid:0">>}],[]}]},from_jid => {jid,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>},name => <<"message">>,ref => #Ref<0.2008143725.2447900673.41391>,to_jid => {jid,<<"william">>,<<"im.cybros.io">>,<<>>,<<"william">>,<<"im.cybros.io">>,<<>>},type => <<"groupchat">>},timestamp => {1570,302464,799694}}
03:07:44.799 [debug] Using module mongoose_router_global
03:07:44.799 [debug] filter passed
03:07:44.800 [debug] routing skipped
03:07:44.800 [debug] Using module mongoose_router_localdomain
03:07:44.800 [debug] filter passed
03:07:44.800 [debug] Making request 'user_exists' for user william@im.cybros.io...
03:07:44.807 [debug] Request result: 200: <<"true">>
03:07:44.815 [debug] Receive packet
    from {jid,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>}
    to {jid,<<"william">>,<<"im.cybros.io">>,<<>>,<<"william">>,<<"im.cybros.io">>,<<>>}
    packet {xmlel,<<"message">>,[{<<"to">>,<<"william@im.cybros.io">>},{<<"id">>,<<"1570-302464-731076">>},{<<"type">>,<<"groupchat">>},{<<"from">>,<<"test_room1@muclight.im.cybros.io/william@im.cybros.io">>}],[{xmlel,<<"body">>,[],[{xmlcdata,<<"Yoooooo">>}]},{xmlel,<<"stanza-id">>,[{<<"by">>,<<"test_room1@muclight.im.cybros.io">>},{<<"id">>,<<"B51EH2CUJ4G1">>},{<<"xmlns">>,<<"urn:xmpp:sid:0">>}],[]}]}.
03:07:44.816 [debug] local route
	from {jid,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>}
	to {jid,<<"william">>,<<"im.cybros.io">>,<<>>,<<"william">>,<<"im.cybros.io">>,<<>>}
	packet {xmlel,<<"message">>,[{<<"to">>,<<"will"...>>},{<<"id">>,<<...>>},{<<...>>,...},{...}],[{xmlel,<<...>>,...},{xmlel,...}]}
03:07:44.816 [debug] session manager
	from {jid,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>,<<"test_room1">>,<<"muclight.im.cybros.io">>,<<"william@im.cybros.io">>}
	to {jid,<<"william">>,<<"im.cybros.io">>,<<>>,<<"william">>,<<"im.cybros.io">>,<<>>}
	packet #{lserver => <<"muclight.im.cybros.io">>,mongoose_acc => true,non_strippable => {set,0,16,16,...},origin_location => {ejabberd_router,route,117},origin_pid => <0.1040.0>,origin_stanza => <<"<mes"...>>,ref => #Ref<0.2008143725.2447900673.41392>,...}
03:07:44.816 [debug] routing done
03:07:44.816 [debug] routing done
03:07:46.764 [debug] Flushed 1 entries.

@fenek (Member) commented Oct 7, 2019

This is actually very strange, because with your mod_event_pusher_http_custom I was able to get these notifications sent to a REST endpoint.

The config I've used is:

{outgoing_pools, [
  {http, global, httppush, [{workers, 50}], [{server, "http://localhost:80"}]}
]}.

with default nginx from Docker running on port 80, so I can see if any requests are made.

{mod_event_pusher, [
  {backends, [
    {http, [
      {pool_name, httppush},
      {path, "/notifications"},
      {callback_module, mod_event_pusher_http_custom}
    ]}
  ]}
]}.

Obviously I'm getting 404 errors, but at least something is delivered to the REST endpoint:

2019/10/07 16:04:06 [error] 7#7: *3 open() "/usr/share/nginx/html/notifications" failed (2: No such file or directory), client: 172.17.0.1, server: localhost, request: "POST /notifications HTTP/1.1", host: "localhost"
172.17.0.1 - - [07/Oct/2019:16:04:06 +0000] "POST /notifications HTTP/1.1" 404 153 "-" "-" "-"
172.17.0.1 - - [07/Oct/2019:16:05:38 +0000] "POST /notifications HTTP/1.1" 404 153 "-" "-" "-"
2019/10/07 16:05:38 [error] 7#7: *4 open() "/usr/share/nginx/html/notifications" failed (2: No such file or directory), client: 172.17.0.1, server: localhost, request: "POST /notifications HTTP/1.1", host: "localhost"

@fenek (Member) commented Oct 7, 2019

Could you please try the same? I mean calling a local web server, to rule out a problem with the real endpoint or the network.
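In place of nginx, any tiny web server that logs POST bodies will do. Here is a minimal sketch in Python; the port 8080 is an assumption, so point the outgoing pool's `server` option at `http://localhost:8080` (the `/notifications` path matches the config above):

```python
# A throwaway endpoint that accepts POSTed notifications and logs them,
# useful for checking whether mod_event_pusher_http sends anything at all.
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the request body and print it alongside the request path.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(f"POST {self.path}: {body.decode('utf-8', errors='replace')}")
        # Acknowledge with 204 so the pusher does not see an error.
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    # Port 8080 is an assumption; adjust to match your outgoing pool.
    HTTPServer(("localhost", 8080), NotificationHandler).serve_forever()
```

If requests show up here but not at the real endpoint, the problem is with the endpoint or the network rather than with MongooseIM.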

@jasl (Contributor, Author) commented Oct 7, 2019

@fenek
Of course, and thanks for your patience. I'll set up a clean dev environment and try again.

@fenek (Member) commented Oct 28, 2019

@jasl

Any news on this issue? :)

@jasl (Contributor, Author) commented Oct 28, 2019

@fenek Sorry, I haven't been working on this for the last few weeks; I may have time to try next week.

Update (I hope this doesn't disturb anyone): I've been too busy these months; I will look into this in December.

@jasl (Contributor, Author) commented Jan 10, 2020

Sorry for taking so long to respond! The problem is gone now.

I simply bought a new computer and the problem disappeared.

@jasl jasl closed this as completed Jan 10, 2020