Identify only stored users on offline routing #3395

Merged
merged 1 commit into master from does_user_exist_corrections on Nov 10, 2021

Conversation

NelsonVides (Collaborator)

When we arrive at this point, it is because the user we're sending a
message to has not been found online, so here the routing mechanism
needs to verify whether the user actually exists, to know if it should
go the 'offline_message_hook' or the 'service_unavailable' way. But it
would actually be incorrect to check for anonymous users at this point:
we have already verified that the receiver is not online, and anonymous
accounts are ephemeral, existing only as long as the user is online, so
checking for anonymous accounts is unnecessary and redundant. To make
things worse, there is a race condition here: an anonymous account that
wasn't connected before may be connected by the time we check again, and
then this code takes the wrong path!

This fix not only cleans up all that anonymous-account mess, but also
gives us a chance to use the users cache at this point more
opportunistically.
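
For context, the decision being fixed looks roughly like the Erlang
sketch below. This is a minimal illustration; all names in it
(offline_routing_sketch, route_offline/3, does_stored_user_exist/2) are
assumptions for the sake of the example, not the exact identifiers
changed by this PR.

    %% A minimal, self-contained sketch of the routing decision described
    %% above. Module and function names are illustrative, not the exact
    %% ones from this change.
    -module(offline_routing_sketch).
    -export([route_offline/3]).

    %% We only get here once the session manager has found no online
    %% session for To, so we consult only the persistent auth backends
    %% (and, opportunistically, the users cache). The anonymous backend
    %% is skipped on purpose: anonymous accounts cannot exist while the
    %% user is offline, and re-checking them opens the race where such an
    %% account logs in between the session lookup and this existence check.
    route_offline(HostType, To, Packet) ->
        case does_stored_user_exist(HostType, To) of
            true ->
                %% The account exists in storage: hand the stanza to the
                %% 'offline_message_hook' (e.g. so mod_offline can store it).
                {route_via, offline_message_hook, To, Packet};
            false ->
                %% No such account: bounce with 'service-unavailable'.
                {bounce_with, service_unavailable, To, Packet}
        end.

    %% Stand-in for the real lookup, which in MongooseIM goes through
    %% ejabberd_auth; returning 'false' keeps the sketch runnable.
    does_stored_user_exist(_HostType, _To) ->
        false.

Because the existence check goes only to persistent storage, its answer
cannot flip depending on whether an ephemeral anonymous session happens
to be live at the moment of the check, which is exactly the race the
commit message describes.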

@arcusfelis (Contributor) left a comment


OK

codecov bot commented Nov 10, 2021

Codecov Report

Merging #3395 (8196b28) into master (8f92708) will increase coverage by 0.04%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master    #3395      +/-   ##
==========================================
+ Coverage   80.69%   80.74%   +0.04%     
==========================================
  Files         397      397              
  Lines       32199    32200       +1     
==========================================
+ Hits        25984    26000      +16     
+ Misses       6215     6200      -15     
Impacted Files Coverage Δ
src/ejabberd_sm.erl 84.36% <100.00%> (+0.05%) ⬆️
src/ejabberd.erl 45.00% <0.00%> (-10.00%) ⬇️
src/mongoose_lib.erl 82.22% <0.00%> (-2.23%) ⬇️
src/mod_muc_log.erl 78.11% <0.00%> (-0.26%) ⬇️
src/pubsub/mod_pubsub_db_rdbms.erl 95.34% <0.00%> (+0.25%) ⬆️
src/pubsub/mod_pubsub_db_mnesia.erl 92.85% <0.00%> (+0.42%) ⬆️
src/auth/ejabberd_auth_rdbms.erl 56.96% <0.00%> (+0.60%) ⬆️
src/mod_bosh.erl 94.36% <0.00%> (+2.11%) ⬆️
src/domain/mongoose_domain_sql.erl 85.71% <0.00%> (+2.85%) ⬆️
src/global_distrib/mod_global_distrib_receiver.erl 84.44% <0.00%> (+3.33%) ⬆️
... and 1 more

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data

mongoose-im commented Nov 10, 2021

small_tests_24 / small_tests / 8196b28
Reports root / small


internal_mnesia_24 / internal_mnesia / 8196b28
Reports root / big
OK: 1587 / Failed: 0 / User-skipped: 297 / Auto-skipped: 0


small_tests_23 / small_tests / 8196b28
Reports root / small


dynamic_domains_pgsql_mnesia_24 / pgsql_mnesia / 8196b28
Reports root / big
OK: 2722 / Failed: 0 / User-skipped: 186 / Auto-skipped: 0


dynamic_domains_pgsql_mnesia_23 / pgsql_mnesia / 8196b28
Reports root / big
OK: 2721 / Failed: 1 / User-skipped: 186 / Auto-skipped: 0

domain_removal_SUITE:last_removal:last_removal
{error,
  {test_case_failed,
    {has_stanzas_but_shouldnt,
      {client,
        <<"alicE_last_removal_31.201594@domain.example.com/res1">>,
        escalus_tcp,<0.2296.2>,
        [{event_manager,<0.2292.2>},
         {server,<<"domain.example.com">>},
         {username,<<"alicE_last_removal_31.201594">>},
         {resource,<<"res1">>}],
        [{event_client,
           [{event_manager,<0.2292.2>},
            {server,<<"domain.example.com">>},
            {username,<<"alicE_last_removal_31.201594">>},
            {resource,<<"res1">>}]},
         {resource,<<"res1">>},
         {username,<<"alicE_last_removal_31.201594">>},
         {server,<<"domain.example.com">>},
         {host,<<"localhost">>},
         {port,5222},
         {auth,{escalus_auth,auth_plain}},
         {wspath,undefined},
         {username,<<"alicE_last_removal_31.201594">>},
         {server,<<"domain.example.com">>},
         {host,<<"localhost">>},
         {password,<<"matygrysa">>},
         {stream_id,<<"4a45549e26109e90">>}]},
      [{xmlel,<<"presence">>,
         [{<<"from">>,
           <<"alicE_last_removal_31.201594@domain.example.com/res1">>},
          {<<"to">>,
           <<"alice_last_removal_31.201594@domain.example.com/res1">>},
          {<<"xml:lang">>,<<"en">>}],
         []}]}}}

Report log


dynamic_domains_mysql_redis_24 / mysql_redis / 8196b28
Reports root / big
OK: 2711 / Failed: 1 / User-skipped: 203 / Auto-skipped: 0

mam_SUITE:rdbms_simple_prefs_cases:messages_filtered_when_prefs_default_policy_is_never
{error,{test_case_failed,"ASSERT EQUAL\n\tExpected []\n\tValue [ok,ok]\n"}}

Report log


dynamic_domains_mssql_mnesia_24 / odbc_mssql_mnesia / 8196b28
Reports root / big
OK: 2722 / Failed: 0 / User-skipped: 186 / Auto-skipped: 0


ldap_mnesia_24 / ldap_mnesia / 8196b28
Reports root / big
OK: 1514 / Failed: 0 / User-skipped: 370 / Auto-skipped: 0


ldap_mnesia_23 / ldap_mnesia / 8196b28
Reports root / big
OK: 1514 / Failed: 0 / User-skipped: 370 / Auto-skipped: 0


pgsql_mnesia_24 / pgsql_mnesia / 8196b28
Reports root / big
OK: 3121 / Failed: 0 / User-skipped: 183 / Auto-skipped: 0


elasticsearch_and_cassandra_24 / elasticsearch_and_cassandra_mnesia / 8196b28
Reports root / big
OK: 1892 / Failed: 0 / User-skipped: 297 / Auto-skipped: 0


mysql_redis_24 / mysql_redis / 8196b28
Reports root / big
OK: 3110 / Failed: 1 / User-skipped: 200 / Auto-skipped: 0

mam_SUITE:rdbms_cache_prefs_cases:messages_filtered_when_prefs_default_policy_is_never
{error,{test_case_failed,"ASSERT EQUAL\n\tExpected []\n\tValue [ok]\n"}}

Report log


pgsql_mnesia_23 / pgsql_mnesia / 8196b28
Reports root / big
OK: 3121 / Failed: 0 / User-skipped: 183 / Auto-skipped: 0


mssql_mnesia_24 / odbc_mssql_mnesia / 8196b28
Reports root / big
OK: 3121 / Failed: 0 / User-skipped: 183 / Auto-skipped: 0


riak_mnesia_24 / riak_mnesia / 8196b28
Reports root / big
OK: 1738 / Failed: 0 / User-skipped: 298 / Auto-skipped: 0


dynamic_domains_pgsql_mnesia_23 / pgsql_mnesia / 8196b28
Reports root / big
OK: 2722 / Failed: 0 / User-skipped: 186 / Auto-skipped: 0

@chrzaszcz (Member) left a comment

Looks good!

@chrzaszcz chrzaszcz merged commit 3042745 into master Nov 10, 2021
@chrzaszcz chrzaszcz deleted the does_user_exist_corrections branch November 10, 2021 12:17
@Premwoik Premwoik modified the milestones: 5.1.0, 5.0.0 May 25, 2022