%%
%% %CopyrightBegin%
%%
%% Copyright Ericsson AB 1996-2024. All Rights Reserved.
%%
%% Licensed under the Apache License, Version 2.0 (the "License");
%% you may not use this file except in compliance with the License.
%% You may obtain a copy of the License at
%%
%% http://www.apache.org/licenses/LICENSE-2.0
%%
%% Unless required by applicable law or agreed to in writing, software
%% distributed under the License is distributed on an "AS IS" BASIS,
%% WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
%% See the License for the specific language governing permissions and
%% limitations under the License.
%%
%% %CopyrightEnd%
%%
-module(global).
-moduledoc """
A global name registration facility.
This module consists of the following services:
- Registration of global names
- Global locks
- Maintenance of the fully connected network
[](){: #prevent_overlapping_partitions }
As of OTP 25, `global` will by default prevent overlapping partitions due to
network issues by actively disconnecting from nodes that report that they have
lost connections to other nodes. This will cause fully connected partitions to
form instead of leaving the network in a state with overlapping partitions.
> #### Warning {: .warning }
>
> Prevention of overlapping partitions can be disabled using the
> [`prevent_overlapping_partitions`](kernel_app.md#prevent_overlapping_partitions)
> Kernel parameter, making `global` behave like it used to do. This is,
> however, problematic for all applications expecting a fully connected network
> to be provided, such as for example `mnesia`, but also for `global` itself. A
> network of overlapping partitions might cause the internal state of `global`
> to become inconsistent. Such an inconsistency can remain even after such
> partitions have been brought together to form a fully connected network again.
> The effect on other applications that expect a fully connected network to be
> maintained may vary, but they might misbehave in very subtle, hard-to-detect
> ways during such a partitioning. Since you might get hard-to-detect issues
> without this fix, you are _strongly_ advised _not_ to disable this fix. Also
> note that this fix _has_ to be enabled on _all_ nodes in the network in order
> to work properly.
> #### Note {: .info }
>
> None of the above services will be reliably delivered unless both of the
> kernel parameters [`connect_all`](kernel_app.md#connect_all) and
> [`prevent_overlapping_partitions`](kernel_app.md#prevent_overlapping_partitions)
> are enabled. Calls to the `global` API will, however, _not_ fail even though
> one or both of them are disabled. You will just get unreliable results.
These services are controlled through the process `global_name_server` that
exists on every node. The global name server starts automatically when a node is
started. Here, the term _global_ means across a system consisting of many
Erlang nodes.
The ability to globally register names is a central concept in the programming
of distributed Erlang systems. In this module, the equivalent of the
[`register/2`](`register/2`) and [`whereis/1`](`whereis/1`) BIFs (for local name
registration) are provided, but for a network of Erlang nodes. A registered name
is an alias for a process identifier (pid). The global name server monitors
globally registered pids. If a process terminates, the name is also globally
unregistered.
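For example, a minimal sketch of registering, looking up, and unregistering a
name (the name `my_service` and the spawned process are illustrative):

```erlang
%% Register a process under a global name, look it up, then clean up.
Pid = spawn(fun() -> receive stop -> ok end end),
yes = global:register_name(my_service, Pid),
Pid = global:whereis_name(my_service),   %% resolved from the local table
_ = global:unregister_name(my_service),
Pid ! stop.
```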
The registered names are stored in replica global name tables on every node.
There is no central storage point. Thus, the translation of a name to a pid is
fast, as it is always done locally. For any action resulting in a change to the
global name table, all tables on other nodes are automatically updated.
Global locks have lock identities and are set on a specific resource. For
example, the specified resource can be a pid. When a global lock is set, access
to the locked resource is denied for all resources other than the lock
requester.
Both the registration and lock services are atomic. All nodes involved in these
actions have the same view of the information.
The global name server also performs the critical task of continuously
monitoring changes in node configuration. If a node that runs a globally
registered process goes down, the name is globally unregistered. To this end,
the global name server subscribes to `nodeup` and `nodedown` messages sent from
module `net_kernel`. Relevant Kernel application variables in this context are
[`net_setuptime`](kernel_app.md#net_setuptime), [`net_ticktime`](kernel_app.md#net_ticktime),
and [`dist_auto_connect`](kernel_app.md#dist_auto_connect).
The name server also maintains a fully connected network. For example, if node
`N1` connects to node `N2` (which is already connected to `N3`), the global name
servers on the nodes `N1` and `N3` ensure that `N1` and `N3` are also connected.
If this is not possible, the name registration service cannot be used, but the
lock mechanism still works.
If the global name server fails to connect nodes (`N1` and `N3` in the example),
a warning event is sent to the error logger. The presence of such an event does
not exclude the nodes from connecting later (you can, for example, try command
`rpc:call(N1, net_adm, ping, [N2])` in the Erlang shell), but it indicates a
network problem.
> #### Note {: .info }
>
> If the fully connected network is not set up properly, try first to increase
> the value of `net_setuptime`.
## See Also
`m:global_group`, `m:net_kernel`
""".
-behaviour(gen_server).
%% Global provides global registration of process names. The names are
%% dynamically kept up to date with the entire network. Global can
%% operate in two modes: in a fully connected network, or in a
%% non-fully connected network. In the latter case, the name
%% registration mechanism won't work.
%% As a separate service Global also provides global locks.
%% External exports
-export([start/0, start_link/0, stop/0, sync/0, sync/1,
whereis_name/1, register_name/2,
register_name/3, register_name_external/2, register_name_external/3,
unregister_name_external/1,re_register_name/2, re_register_name/3,
unregister_name/1, registered_names/0, send/2,
set_lock/1, set_lock/2, set_lock/3,
del_lock/1, del_lock/2,
trans/2, trans/3, trans/4,
random_exit_name/3, random_notify_name/3, notify_all_name/3,
disconnect/0]).
%% Internal exports
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2,
code_change/3, resolve_it/4, get_locker/0]).
-export([info/0]).
-include_lib("stdlib/include/ms_transform.hrl").
%% Set this variable to 'allow' to allow several names of a process.
%% This is for backward compatibility only; the functionality is broken.
-define(WARN_DUPLICATED_NAME, global_multi_name_action).
%% Undocumented Kernel variable.
-define(N_CONNECT_RETRIES, global_connect_retries).
-define(DEFAULT_N_CONNECT_RETRIES, 0).
%% Time that we keep information about multicasted lost_connection
%% messages...
-define(lost_conn_info_cleanup_time, 60*60*1000).
%%% In certain places in the server, calling io:format hangs everything,
%%% so we'd better use erlang:display/1.
%%% my_tracer is used in testsuites
%% uncomment this if tracing is wanted
%%-define(DEBUG, true).
-ifdef(DEBUG).
-define(trace(T), erlang:display({format, node(), cs(), T})).
cs() ->
{_Big, Small, Tiny} = erlang:timestamp(),
(Small rem 100) * 100 + (Tiny div 10000).
%-define(trace(T), (catch my_tracer ! {node(), {line,?LINE}, T})).
-else.
-define(trace(_), ok).
-endif.
-define(MAX_64BIT_SMALL_INT, ((1 bsl 59) - 1)).
-define(MIN_64BIT_SMALL_INT, (-(1 bsl 59))).
%% These are the protocol versions:
%% Vsn 1 is the original protocol.
%% Vsn 2 is enhanced with code to take care of registration of names from
%% non erlang nodes, e.g. C-nodes.
%% Vsn 3 is enhanced with a tag in the synch messages to distinguish
%% different synch sessions from each other, see OTP-2766.
%% Vsn 4 uses a single, permanent, locker process, but works like vsn 3
%% when communicating with vsn 3 nodes. (-R10B)
%% Vsn 5 uses an ordered list of self() and HisTheLocker when locking
%% nodes in the own partition. (R11B-)
%% Vsn 6 does not send any message between locker processes on different
%% nodes, but uses the server as a proxy.
%% Vsn 7 - propagate global versions between nodes, so we always know
%% versions of known nodes
%% - optional "prevent overlapping partitions" fix supported
%% Vsn 8 - "verify connection" part of the protocol preventing
%% deadlocks in connection setup due to locker processes
%% being out of sync
%% - "prevent overlapping partitions" fix also for systems
%% configured to use global groups
%% Current version of global does not support vsn 4 or earlier.
-define(vsn, 8).
%% Version when the "verify connection" part of the protocol
%% was introduced.
-define(verify_connect_vsn, 8).
%% Version where "prevent overlapping partitions" fix for global groups
%% was introduced.
-define(gg_pop_vsn, 8).
%% Version when the "prevent overlapping partitions" fix was introduced.
-define(pop_vsn, 7).
%% Version when the "propagate global protocol versions" feature
%% was introduced.
-define(pgpv_vsn, 7).
%%-----------------------------------------------------------------
%% connect_all = boolean() - true if we are supposed to set up a
%% fully connected net
%% known = #{} - Map of known nodes including protocol version
%% as well as some other information. See state
%% record declaration below for more info.
%% synced = [Node] - all nodes that have the same names as us
%% resolvers = [{Node, MyTag, Resolver}] -
%% the tag separating different synch sessions,
%% and the pid of the name resolver process
%% syncers = [pid()] - all current syncers processes
%% node_name = atom() - our node name (can change if distribution
%% is started/stopped dynamically)
%%
%% In addition to these, we keep info about messages arrived in
%% the process dictionary:
%% {pre_connect, Node} = {Vsn, InitMsg} - init_connect msgs that
%% arrived before nodeup
%% {wait_lock, Node} = {exchange, NameList, _NamelistExt} | lock_is_set
%% - see comment below (handle_cast)
%% {save_ops, Node} = {resolved, HisKnown, NamesExt, Res} | [operation()]
%% - save the ops between exchange and resolved
%% {prot_vsn, Node} = Vsn - the exchange protocol version (not used now)
%% {sync_tag_my, Node} = My tag, used at synchronization with Node
%% {sync_tag_his, Node} = The Node's tag, used at synchronization
%% {lock_id, Node} = The resource locking the partitions
%%-----------------------------------------------------------------
-record(conf, {connect_all :: boolean(),
prevent_over_part :: boolean()
}).
-record(state, {conf = #conf{} :: #conf{},
known = #{} :: #{
%% Known connected node with protocol
%% version as value
node() => non_neg_integer(),
%% Connecting node, not yet known, with
%% protocol version as value
{pending, node()} => non_neg_integer(),
%% Node currently being removed
{removing, node()} => yes,
%% Connection id of connected nodes
{connection_id, node()} => integer()
},
synced = [] :: [node()],
resolvers = [],
syncers = [] :: [pid()],
node_name = node() :: node(),
the_locker, the_registrar, trace,
global_lock_down = false :: boolean()
}).
-type state() :: #state{}.
%%% There are also ETS tables used for bookkeeping of locks and names
%%% (the first position is the key):
%%%
%%% global_locks (set): {ResourceId, LockRequesterId, [{Pid,ref()}]}
%%% Pid is locking ResourceId, ref() is the monitor ref.
%%% global_names (set): {Name, Pid, Method, ref()}
%%% Registered names. ref() is the monitor ref.
%%% global_names_ext (set): {Name, Pid, RegNode}
%%% External registered names (C-nodes).
%%%
%%% Helper tables:
%%% global_pid_names (bag): {Pid, Name} | {ref(), Name}
%%% Name(s) registered for Pid.
%%% There is one {Pid, Name} and one {ref(), Name} for every Pid.
%%% ref() is the same ref() as in global_names.
%%% global_pid_ids (bag): {Pid, ResourceId} | {ref(), ResourceId}
%%% Resources locked by Pid.
%%% ref() is the same ref() as in global_locks.
%%%
%%% global_lost_connections (set):
%%% {{NodeA, NodeB}, {ExtendedCreationA, OpIdA, Timer}}
%%% Information about lost connections (reported by NodeA) used by
%%% the "prevent overlapping partitions" fix. The timer is used
%%% to remove the entry when it is not needed anymore.
%%%
%%% global_pid_names is a 'bag' for backward compatibility.
%%% (Before vsn 5 more than one name could be registered for a process.)
%%%
%%% R11B-3 (OTP-6341): The list of pids in the table 'global_locks'
%%% was replaced by a list of {Pid, Ref}, where Ref is a monitor ref.
%%% It was necessary to use monitors to fix bugs regarding locks that
%%% were never removed. The signal {async_del_lock, ...} has been
%%% kept for backward compatibility. It can be removed later.
%%%
%%% R11B-4 (OTP-6428): Monitors are used for registered names.
%%% The signal {delete_name, ...} has been kept for backward compatibility.
%%% It can be removed later as can the deleter process.
%%% An extra process calling erlang:monitor() is sometimes created.
%%% The new_nodes message has been augmented with the global lock id.
%%%
%%% R14A (OTP-8527): The deleter process has been removed.
%%%
%%% Erlang/OTP 22.0: The extra process calling erlang:monitor() is removed.
-doc false.
start() ->
gen_server:start({local, global_name_server}, ?MODULE, [], []).
-doc false.
start_link() ->
gen_server:start_link({local, global_name_server}, ?MODULE, [], []).
-doc false.
stop() ->
gen_server:call(global_name_server, stop, infinity).
-doc """
Synchronizes the global name server with all nodes known to this node.
These are the nodes that are returned from [`nodes()`](`erlang:nodes/0`). When
this function returns, the global name server receives global information from
all nodes. This function can be called when new nodes are added to the network.
The only possible error reason `Reason` is
`{"global_groups definition error", Error}`.
""".
-spec sync() -> 'ok' | {'error', Reason :: term()}.
sync() ->
case check_sync_nodes() of
{error, _} = Error ->
Error;
SyncNodes ->
gen_server:call(global_name_server, {sync, SyncNodes}, infinity)
end.
-doc false.
-spec sync([node()]) -> 'ok' | {'error', Reason :: term()}.
sync(Nodes) ->
case check_sync_nodes(Nodes) of
{error, _} = Error ->
Error;
SyncNodes ->
gen_server:call(global_name_server, {sync, SyncNodes}, infinity)
end.
-doc """
Sends message `Msg` to the pid globally registered as `Name`.
If `Name` is not a globally registered name, the calling function exits with
reason `{badarg, {Name, Msg}}`.
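For example, a sketch that tolerates the name not being registered (the name
`logger_proxy` is illustrative):

```erlang
%% send/2 exits with {badarg, {Name, Msg}} if Name is unregistered;
%% catch turns that exit into a term we can match on.
case catch global:send(logger_proxy, {log, "hello"}) of
    Pid when is_pid(Pid)                        -> ok;
    {'EXIT', {badarg, {logger_proxy, _Msg}}}    -> not_registered
end.
```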
""".
-spec send(Name, Msg) -> Pid when
Name :: term(),
Msg :: term(),
Pid :: pid().
send(Name, Msg) ->
case whereis_name(Name) of
Pid when is_pid(Pid) ->
Pid ! Msg,
Pid;
undefined ->
exit({badarg, {Name, Msg}})
end.
%% See OTP-3737.
-doc """
Returns the pid with the globally registered name `Name`. Returns `undefined` if
the name is not globally registered.
""".
-spec whereis_name(Name) -> pid() | 'undefined' when
Name :: term().
whereis_name(Name) ->
where(Name).
%%-----------------------------------------------------------------
%% Method = function(Name, Pid1, Pid2) -> Pid | Pid2 | none
%% Method is called if a name conflict is detected when two nodes
%% are connecting to each other. It is supposed to return one of
%% the Pids or 'none'. If a pid is returned, that pid is
%% registered as Name on all nodes. If 'none' is returned, the
%% Name is unregistered on all nodes. If anything else is returned,
%% the Name is unregistered as well.
%% Method is called once at one of the nodes where the processes reside
%% only. If different Methods are used for the same name, it is
%% undefined which one of them is used.
%% Method blocks the name registration, but does not affect global locking.
%%-----------------------------------------------------------------
-doc(#{equiv => register_name(Name, Pid, fun random_exit_name/3)}).
-spec register_name(Name, Pid) -> 'yes' | 'no' when
Name :: term(),
Pid :: pid().
register_name(Name, Pid) when is_pid(Pid) ->
register_name(Name, Pid, fun random_exit_name/3).
-type method() :: fun((Name :: term(), Pid :: pid(), Pid2 :: pid()) ->
pid() | 'none').
-doc """
Globally associates name `Name` with a pid, that is, globally notifies all nodes
of a new global name in a network of Erlang nodes.
When new nodes are added to the network, they are informed of the globally
registered names that already exist. The network is also informed of any global
names in newly connected nodes. If any name clashes are discovered, function
`Resolve` is called. Its purpose is to decide which pid is correct. If the
function crashes, or returns anything other than one of the pids, the name is
unregistered. This function is called once for each name clash.
> #### Warning {: .warning }
>
> If you plan to change code without restarting your system, you must use an
> external fun (`fun Module:Function/Arity`) as function `Resolve`. If you use a
> local fun, you can never replace the code for the module that the fun belongs
> to.
Three predefined resolve functions exist:
[`random_exit_name/3`](`random_exit_name/3`),
[`random_notify_name/3`](`random_notify_name/3`), and
[`notify_all_name/3`](`notify_all_name/3`).
This function is completely synchronous, that is, when this function returns,
the name is either registered on all nodes or none.
The function returns `yes` if successful, `no` if it fails. For example, `no` is
returned if an attempt is made to register an already registered process or to
register a process with a name that is already in use.
> #### Note {: .info }
>
> Releases up to and including Erlang/OTP R10 did not check if the process was
> already registered. The global name table could therefore become inconsistent.
> The old (buggy) behavior can be chosen by giving the Kernel application
> variable `global_multi_name_action` the value `allow`.
If a process with a registered name dies, or the node goes down, the name is
unregistered on all nodes.
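A minimal sketch of a custom resolve function, here one that keeps the first
pid and logs the clash (the function and module names are illustrative; note
the external-fun form so the module remains upgradable):

```erlang
%% Resolve a name clash by keeping Pid1 and logging the event.
resolve_keep_first(Name, Pid1, Pid2) ->
    logger:warning("global name clash for ~tp: keeping ~p, dropping ~p",
                   [Name, Pid1, Pid2]),
    Pid1.

%% Usage (external fun, as recommended above):
%%   global:register_name(my_name, self(), fun my_module:resolve_keep_first/3).
```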
""".
-spec register_name(Name, Pid, Resolve) -> 'yes' | 'no' when
Name :: term(),
Pid :: pid(),
Resolve :: method().
register_name(Name, Pid, Method0) when is_pid(Pid) ->
Method = allow_tuple_fun(Method0),
Fun = fun(Nodes) ->
case (where(Name) =:= undefined) andalso check_dupname(Name, Pid) of
true ->
_ = gen_server:multi_call(Nodes,
global_name_server,
{register, Name, Pid, Method}),
yes;
_ ->
no
end
end,
?trace({register_name, self(), Name, Pid, Method}),
gen_server:call(global_name_server, {registrar, Fun}, infinity).
check_dupname(Name, Pid) ->
case ets:lookup(global_pid_names, Pid) of
[] ->
true;
PidNames ->
case application:get_env(kernel, ?WARN_DUPLICATED_NAME) of
{ok, allow} ->
true;
_ ->
S = "global: ~w registered under several names: ~tw\n",
Names = [Name | [Name1 || {_Pid, Name1} <- PidNames]],
logger:log(error, S, [Pid, Names]),
false
end
end.
-doc "Removes the globally registered name `Name` from the network of Erlang nodes.".
-spec unregister_name(Name) -> _ when
Name :: term().
unregister_name(Name) ->
case where(Name) of
undefined ->
ok;
_ ->
Fun = fun(Nodes) ->
_ = gen_server:multi_call(Nodes,
global_name_server,
{unregister, Name}),
ok
end,
?trace({unregister_name, self(), Name}),
gen_server:call(global_name_server, {registrar, Fun}, infinity)
end.
-doc(#{equiv => re_register_name(Name, Pid, fun random_exit_name/3)}).
-spec re_register_name(Name, Pid) -> 'yes' when
Name :: term(),
Pid :: pid().
re_register_name(Name, Pid) when is_pid(Pid) ->
re_register_name(Name, Pid, fun random_exit_name/3).
-doc """
Atomically changes the registered name `Name` on all nodes to refer to `Pid`.
Function `Resolve` has the same behavior as in
[`register_name/2,3`](`register_name/2`).
""".
-spec re_register_name(Name, Pid, Resolve) -> 'yes' when
Name :: term(),
Pid :: pid(),
Resolve :: method().
re_register_name(Name, Pid, Method0) when is_pid(Pid) ->
Method = allow_tuple_fun(Method0),
Fun = fun(Nodes) ->
_ = gen_server:multi_call(Nodes,
global_name_server,
{register, Name, Pid, Method}),
yes
end,
?trace({re_register_name, self(), Name, Pid, Method}),
gen_server:call(global_name_server, {registrar, Fun}, infinity).
-doc "Returns a list of all globally registered names.".
-spec registered_names() -> [Name] when
Name :: term().
registered_names() ->
MS = ets:fun2ms(fun({Name,_Pid,_M,_R}) -> Name end),
ets:select(global_names, MS).
%%-----------------------------------------------------------------
%% The external node (e.g. a C-node) registers the name on an Erlang
%% node which links to the process (an Erlang node has to be used
%% since there is no global_name_server on the C-node). If the Erlang
%% node dies the name is to be unregistered on all nodes. Normally
%% node(Pid) is compared to the node that died, but that does not work
%% for external nodes (the process does not run on the Erlang node
%% that died). Therefore a table of all names registered by external
%% nodes is kept up-to-date on all nodes.
%%
%% Note: if the Erlang node dies an EXIT signal is also sent to the
%% C-node due to the link between the global_name_server and the
%% registered process. [This is why the link has been kept despite
%% the fact that monitors do the job now.]
%%-----------------------------------------------------------------
-doc false.
register_name_external(Name, Pid) when is_pid(Pid) ->
register_name_external(Name, Pid, fun random_exit_name/3).
-doc false.
register_name_external(Name, Pid, Method) when is_pid(Pid) ->
Fun = fun(Nodes) ->
case where(Name) of
undefined ->
_ = gen_server:multi_call(Nodes,
global_name_server,
{register_ext, Name, Pid,
Method, node()}),
yes;
_Pid -> no
end
end,
?trace({register_name_external, self(), Name, Pid, Method}),
gen_server:call(global_name_server, {registrar, Fun}, infinity).
-doc false.
unregister_name_external(Name) ->
unregister_name(Name).
-doc "A lock id used to set or delete lock `ResourceId` on behalf of `LockRequesterId`.".
-type id() :: {ResourceId :: term(), LockRequesterId :: term()}.
-doc(#{equiv => set_lock(Id, [node() | nodes()], infinity)}).
-spec set_lock(Id) -> boolean() when
Id :: id().
set_lock(Id) ->
set_lock(Id, [node() | nodes()], infinity, 1).
-type retries() :: non_neg_integer() | 'infinity'.
-doc(#{equiv => set_lock(Id, Nodes, infinity)}).
-spec set_lock(Id, Nodes) -> boolean() when
Id :: id(),
Nodes :: [node()].
set_lock(Id, Nodes) ->
set_lock(Id, Nodes, infinity, 1).
-doc """
Sets a lock on the specified nodes using `t:id/0`.
If a lock already exists on `ResourceId` for a requester other than
`LockRequesterId`, and `Retries` is not equal to `0`, the process sleeps for a
while and tries to
execute the action later. When `Retries` attempts have been made, `false` is
returned, otherwise `true`. If `Retries` is `infinity`, `true` is eventually
returned (unless the lock is never released).
This function is completely synchronous.
If a process that holds a lock dies, or the node goes down, the locks held by
the process are deleted.
The global name server keeps track of all processes sharing the same lock, that
is, if two processes set the same lock, both processes must delete the lock.
This function does not address the problem of a deadlock. A deadlock can never
occur as long as processes only lock one resource at a time. A deadlock can
occur if some processes try to lock two or more resources. It is up to the
application to detect and rectify a deadlock.
> #### Note {: .info }
>
> Avoid the following values of `ResourceId`, otherwise Erlang/OTP does not work
> properly:
>
> - `dist_ac`
> - `global`
> - `mnesia_adjust_log_writes`
> - `mnesia_table_lock`
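For example, a sketch of taking and releasing a lock explicitly (the resource
term and the work function `critical_section/0` are illustrative):

```erlang
Id = {my_resource, self()},
case global:set_lock(Id, [node() | nodes()], 5) of
    true ->
        try
            critical_section()            %% hypothetical work function
        after
            global:del_lock(Id)           %% always release the lock
        end;
    false ->
        {error, lock_not_acquired}        %% gave up after 5 retries
end.
```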
""".
-spec set_lock(Id, Nodes, Retries) -> boolean() when
Id :: id(),
Nodes :: [node()],
Retries :: retries().
set_lock(Id, Nodes, Retries) when is_integer(Retries), Retries >= 0 ->
set_lock(Id, Nodes, Retries, 1);
set_lock(Id, Nodes, infinity) ->
set_lock(Id, Nodes, infinity, 1).
set_lock({_ResourceId, _LockRequesterId}, [], _Retries, _Times) ->
true;
set_lock({_ResourceId, _LockRequesterId} = Id, Nodes, Retries, Times) ->
?trace({set_lock,{me,self()},Id,{nodes,Nodes},
{retries,Retries}, {times,Times}}),
case set_lock_on_nodes(Id, Nodes) of
true ->
?trace({set_lock_true, Id}),
true;
false=Reply when Retries =:= 0 ->
Reply;
false ->
random_sleep(Times),
set_lock(Id, Nodes, dec(Retries), Times+1)
end.
-doc(#{equiv => del_lock(Id, [node() | nodes()])}).
-spec del_lock(Id) -> 'true' when
Id :: id().
del_lock(Id) ->
del_lock(Id, [node() | nodes()]).
-doc "Deletes the lock `Id` synchronously.".
-spec del_lock(Id, Nodes) -> 'true' when
Id :: id(),
Nodes :: [node()].
del_lock({_ResourceId, _LockRequesterId} = Id, Nodes) ->
?trace({del_lock, {me,self()}, Id, {nodes,Nodes}}),
_ = gen_server:multi_call(Nodes, global_name_server, {del_lock, Id}),
true.
-type trans_fun() :: function() | {module(), atom()}.
-doc(#{equiv => trans(Id, Fun, [node() | nodes()], infinity)}).
-spec trans(Id, Fun) -> Res | aborted when
Id :: id(),
Fun :: trans_fun(),
Res :: term().
trans(Id, Fun) -> trans(Id, Fun, [node() | nodes()], infinity).
-doc(#{equiv => trans(Id, Fun, Nodes, infinity)}).
-spec trans(Id, Fun, Nodes) -> Res | aborted when
Id :: id(),
Fun :: trans_fun(),
Nodes :: [node()],
Res :: term().
trans(Id, Fun, Nodes) -> trans(Id, Fun, Nodes, infinity).
-doc """
Sets a lock on `Id` (using `set_lock/3`).
If this succeeds, `Fun()` is evaluated and the result `Res` is returned.
Returns `aborted` if the lock attempt fails. If `Retries` is set to `infinity`,
the transaction does not abort.
`infinity` is the default setting and is used if no value is specified for
`Retries`.
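For example, a sketch wrapping a critical section (the resource term and the
work function `update_counter/0` are illustrative):

```erlang
%% trans/4 acquires the lock, runs the fun, and releases the lock,
%% returning the fun's result or 'aborted' if the lock was not obtained.
case global:trans({my_counter, self()},
                  fun() -> update_counter() end,
                  [node() | nodes()],
                  10) of
    aborted -> retry_later;
    Value   -> Value
end.
```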
""".
-spec trans(Id, Fun, Nodes, Retries) -> Res | aborted when
Id :: id(),
Fun :: trans_fun(),
Nodes :: [node()],
Retries :: retries(),
Res :: term().
trans(Id, Fun, Nodes, Retries) ->
case set_lock(Id, Nodes, Retries) of
true ->
try
Fun()
after
del_lock(Id, Nodes)
end;
false ->
aborted
end.
-doc false.
info() ->
gen_server:call(global_name_server, info, infinity).
-doc """
Disconnect from all other nodes known to `global`.
A list of node names (in an unspecified order) is returned which corresponds to
the nodes that were disconnected. All disconnect operations performed have completed when
`global:disconnect/0` returns.
The disconnects will be made in such a way that only the current node will be
removed from the cluster of `global` nodes. If
[`prevent_overlapping_partitions`] is
enabled and you disconnect, from other nodes in the cluster of `global` nodes,
by other means, `global` on the other nodes may partition the remaining nodes in
order to ensure that no overlapping partitions appear. Even if
[`prevent_overlapping_partitions`] is disabled, you should preferably use
`global:disconnect/0` in order to remove the current node from a cluster of `global`
nodes, since you otherwise likely _will_ create overlapping partitions which
might [cause problems](`m:global#prevent_overlapping_partitions`).
Note that if the node is going to be halted, there is _no_ need to remove it
from a cluster of `global` nodes explicitly by calling `global:disconnect/0`
before halting it. The removal from the cluster is taken care of automatically
when the node halts regardless of whether [`prevent_overlapping_partitions`] is
enabled or not.
If the current node has been configured to be part of a
[_global group_](`m:global_group`), only connected and/or synchronized nodes in
that group are known to `global`, so `global:disconnect/0` will _only_
disconnect from those nodes. If the current node is _not_ part of a _global
group_, all [connected visible nodes](`erlang:nodes/0`) will be known to
`global`, so `global:disconnect/0` will disconnect from all those nodes.
Note that information about connected nodes does not instantaneously reach
`global`, so the caller might see a node in the result returned by
[`nodes()`](`erlang:nodes/0`) while it is still not known to `global`. The
disconnect operation will, however, still not cause any overlapping partitions
when [`prevent_overlapping_partitions`] is enabled. If
[`prevent_overlapping_partitions`] is disabled, overlapping partitions might form
in this case.
Note that when [`prevent_overlapping_partitions`] is enabled, you may see
warning reports on other nodes when they detect that the current node has
disconnected. In this case, these reports are completely harmless and can be
ignored.

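As an illustration, a node that should leave the cluster of `global` nodes but
keep running can call `global:disconnect/0` and inspect the result. This is a
minimal sketch; the logging is illustrative only and not part of the API:

```erlang
%% Leave the cluster of global nodes in a controlled manner.
%% disconnect/0 returns the names of the nodes that were
%% disconnected from (in an unspecified order).
Disconnected = global:disconnect(),
logger:info("Left the cluster of global nodes; disconnected from ~p",
            [Disconnected])
```

On a node that is not connected to any other `global` nodes, the returned list
is empty.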
[`prevent_overlapping_partitions`]: kernel_app.md#prevent_overlapping_partitions
""".
-doc(#{since => <<"OTP 25.1">>}).
-spec disconnect() -> [node()].
disconnect() ->
gen_server:call(global_name_server, disconnect, infinity).
%%%-----------------------------------------------------------------
%%% Call-back functions from gen_server
%%%-----------------------------------------------------------------
-doc false.
-spec init([]) -> {'ok', state()}.
init([]) ->
_ = process_flag(async_dist, true),
process_flag(trap_exit, true),
%% Monitor all 'nodeup'/'nodedown' messages of visible nodes.
%% In case
%%
%% * no global group is configured, we use these as is. This
%% way we know that 'nodeup' comes before any traffic from
%% the node on the newly established connection and 'nodedown'
%% comes after any traffic on this connection from the node.
%%
%% * global group is configured, we ignore 'nodeup' and instead
%% rely on 'group_nodeup' messages passed by global_group and
%% filter 'nodedown' based on if the node is part of our group
%% or not. We need to be prepared for traffic from the node
%% on the newly established connection arriving before the
%% 'group_nodeup'. 'nodedown' will however not arrive until
%% all traffic from the node on this connection has arrived.
%%
%% In case a connection goes down and then up again, the
%% 'nodedown' for the old connection is nowadays guaranteed to
%% be delivered before the 'nodeup' for the new connection.
%%
%% By keeping track of connection_id for all connections we
%% can differentiate between different instances of connections
%% to the same node.
ok = net_kernel:monitor_nodes(true, #{connection_id => true}),
%% There are most likely no connected nodes at this stage,
%% but check to make sure...
Known = lists:foldl(fun ({N, #{connection_id := CId}}, Cs) ->
Cs#{{connection_id, N} => CId}
end,
#{},
nodes(visible, #{connection_id => true})),
_ = ets:new(global_locks, [set, named_table, protected]),
_ = ets:new(global_names, [set, named_table, protected,
{read_concurrency, true}]),
_ = ets:new(global_names_ext, [set, named_table, protected]),
_ = ets:new(global_pid_names, [bag, named_table, protected]),
_ = ets:new(global_pid_ids, [bag, named_table, protected]),
_ = ets:new(global_lost_connections, [set, named_table, protected]),
_ = ets:new(global_node_resources, [set, named_table, protected]),
%% This is for troubleshooting only.
DoTrace = os:getenv("GLOBAL_HIGH_LEVEL_TRACE") =:= "TRUE",
T0 = case DoTrace of
true ->
send_high_level_trace(),
[];
false ->
no_trace
end,
Ca = case application:get_env(kernel, connect_all) of
{ok, CaBool} when is_boolean(CaBool) ->
CaBool;
{ok, CaInvalid} ->
error({invalid_parameter_value, connect_all, CaInvalid});
undefined ->
CaBool = case init:get_argument(connect_all) of
{ok, [["false" | _] | _]} ->
false;
_ ->
true
end,
ok = application:set_env(kernel, connect_all, CaBool,
[{timeout, infinity}]),
CaBool
end,
POP = case application:get_env(kernel,
prevent_overlapping_partitions) of
{ok, PopBool} when is_boolean(PopBool) ->
PopBool;
{ok, PopInvalid} ->
error({invalid_parameter_value,
prevent_overlapping_partitions,
PopInvalid});
undefined ->
true
end,
S = #state{the_locker = start_the_locker(DoTrace),
known = Known,
trace = T0,
the_registrar = start_the_registrar(),
conf = #conf{connect_all = Ca,
prevent_over_part = POP}},
_ = rand:seed(default,
(erlang:monotonic_time(nanosecond) rem 1000000000)
+ (erlang:system_time(nanosecond) rem 1000000000)),
CreX = ((rand:uniform(?MAX_64BIT_SMALL_INT - ?MIN_64BIT_SMALL_INT)
- 1) band (bnot ((1 bsl 32) -1))),
put(creation_extension, CreX),
{ok, trace_message(S, {init, node()}, [])}.
%%-----------------------------------------------------------------
%% Connection algorithm
%% ====================
%% This algorithm solves the problem with partitioned nets as well.
%%
%% The main idea in the algorithm is that when two nodes connect, they
%% try to set a lock in their own partition (i.e. all nodes already
%% known to them; partitions are not necessarily disjoint). When the
%% lock is set in each partition, these two nodes send each other a
list with all registered names in the respective partition (*). If no
conflict is found, the name tables are just updated. If a conflict is found,
%% a resolve function is called once for each conflict. The result of
%% the resolving is sent to the other node. When the names are
%% exchanged, all other nodes in each partition are informed of the
%% other nodes, and they ping each other to form a fully connected
%% net.
%%
%% A few remarks:
%%
%% (*) When this information is being exchanged, no one is allowed to
%% change the global register table. All calls to register etc are
%% protected by a lock. If a registered process dies during this
%% phase the name is unregistered on the local node immediately,
%% but the unregistration on other nodes will take place when the
%% deleter manages to acquire the lock. This is necessary to
%% prevent names from spreading to nodes where they cannot be
%% deleted.
%%
%% - It is assumed that nodeups and nodedowns arrive in an orderly
%%   fashion: for every node, nodeup is followed by nodedown, and vice
%%   versa. "Double" nodeups and nodedowns must never occur. It is
%%   the responsibility of net_kernel to ensure this.
%%
%% - There is always a delay between the termination of a registered
%% process and the removal of the name from Global's tables. This
%% delay can sometimes be quite substantial. Global guarantees that
%% the name will eventually be removed, but there is no
%% synchronization between nodes; the name can be removed from some
%% node(s) long before it is removed from other nodes.
%%
%% - Global cannot handle problems with the distribution very well.
%% Depending on the value of the kernel variable 'net_ticktime' long
%% delays may occur. This does not affect the handling of locks but
%% will block name registration.
%%
%% - Old synch session messages may linger on in the message queue of
%% global_name_server after the sending node has died. The tags of
%% such messages do not match the current tag (if there is one),
%% which makes it possible to discard those messages and cancel the
%% corresponding lock.
%%
%% - The lockers begin locking operations as soon as the init_connect
%%   messages have been exchanged and do not wait for init_connect_ack.
%%   They could even complete before init_connect_ack messages are
%%   received. The init_connect_ack messages are only there to confirm
%%   that both nodes have the same view of which connect session is
%%   ongoing. If the lockers get out of sync, the lock will not be able
%%   to be acquired on both nodes. The out-of-sync lock operation will
%%   be detected when the init_connect_ack message is received and the
%%   operation can be cancelled and then restarted.
%%
%% Suppose nodes A and B connect, and C is connected to A.
%% Here's the algorithm's flow:
%%
%% Node A
%% ------
%% << {nodeup, B}
%% TheLocker ! {nodeup, ..., Node, ...} (there is one locker per node)
%% B ! {init_connect, ..., {..., TheLockerAtA, ...}}
%% << {init_connect, TheLockerAtB}
%% B ! {init_connect_ack, ...}
%% << {init_connect_ack, ...}
%% [The lockers try to set the lock]
%% << {lock_is_set, B, ...}
%% [Now, lock is set in both partitions]
%% B ! {exchange, A, Names, ...}
%% << {exchange, B, Names, ...}
%% [solve conflict]
%% B ! {resolved, A, ResolvedA, KnownAtA, ...}
%% << {resolved, B, ResolvedB, KnownAtB, ...}
%% C ! {new_nodes, ResolvedAandB, [B]}
%%
%% When cancelling a connect, the remote node is nowadays also
%% informed using:
%% B ! {cancel_connect, ...}
%%
%% Node C
%% ------
%% << {new_nodes, ResolvedOps, NewNodes}
%% [insert Ops]
%% ping(NewNodes)
%% << {nodeup, B}
%% <ignore this one>
%%
%% Several things can disturb this picture.
%%
%% First, the init_connect message may arrive _before_ the nodeup
%% message due to delay in net_kernel. We handle this by keeping track
%% of these messages in the pre_connect variable in our state.
%%
%% Of course, we must also handle the case where some node goes down
%% during the connection.
%%
%%-----------------------------------------------------------------
%% Messages in the protocol
%% ========================
%% 1. Between global_name_servers on connecting nodes
%% {init_connect, Vsn, Node, InitMsg}
%% InitMsg = {locker, _Unused, HisKnown, HisTheLocker}
%% {exchange, Node, ListOfNames, _ListOfNamesExt, Tag}
%% {resolved, Node, HisOps, HisKnown, _Unused, ListOfNamesExt, Tag}
%% HisKnown = list of known nodes in Node's partition
%% 2. Between global_name_server and my locker (the local locker)
%% {nodeup, Node, Tag}
%% {his_the_locker, Pid, {Vsn, HisKnown}, MyKnown}
%% {cancel, Node, Tag, Fun | no_fun}
%% {add_to_known, Nodes}
%% {remove_from_known, Nodes}
%% {lock_set, Pid, LockIsSet, HisKnown} (acting as proxy)
%% 3. Between lockers on connecting nodes (via proxy global_name_server)
%% {lock_set, Pid, LockIsSet, HisKnown}
%% loop until both lockers have LockIsSet =:= true,
%% then send to global_name_server {lock_is_set, Node, Tag, LockId}
%% 4. Connecting node's global_name_server informs other nodes in the same
%% partition about hitherto unknown nodes in the other partition
%% {new_nodes, Node, Ops, ListOfNamesExt, NewNodes, ExtraInfo}
%% 5. Between global_name_server and resolver
%% {resolve, NameList, Node} to resolver
%% {exchange_ops, Node, Tag, Ops, Resolved} from resolver
%% 6. sync protocol, between global_name_servers in different partitions
%% {in_sync, Node, IsKnown}
%% sent by each node to all new nodes (Node becomes known to them)