More term cleanup for tutorial website (#2059)
qqu0127 committed Apr 27, 2022
1 parent fc35b54 commit 8824d22558a062dfd6ae7b8947224ed6a8e8709e
Showing 19 changed files with 315 additions and 315 deletions.
@@ -72,7 +72,7 @@ When the idealstate mode is set to AUTO_REBALANCE, Helix controls both the locat
"IDEAL_STATE_MODE" : "AUTO_REBALANCE",
"NUM_PARTITIONS" : "3",
"REPLICAS" : "2",
- "STATE_MODEL_DEF_REF" : "MasterSlave",
+ "STATE_MODEL_DEF_REF" : "LeaderStandby",
}
"listFields" : {
"MyResource_0" : [],
@@ -92,20 +92,20 @@ If there are 3 nodes in the cluster, then Helix will internally compute the idea
"simpleFields" : {
"NUM_PARTITIONS" : "3",
"REPLICAS" : "2",
- "STATE_MODEL_DEF_REF" : "MasterSlave",
+ "STATE_MODEL_DEF_REF" : "LeaderStandby",
},
"mapFields" : {
"MyResource_0" : {
- "N1" : "MASTER",
- "N2" : "SLAVE",
+ "N1" : "LEADER",
+ "N2" : "STANDBY",
},
"MyResource_1" : {
- "N2" : "MASTER",
- "N3" : "SLAVE",
+ "N2" : "LEADER",
+ "N3" : "STANDBY",
},
"MyResource_2" : {
- "N3" : "MASTER",
- "N1" : "SLAVE",
+ "N3" : "LEADER",
+ "N1" : "STANDBY",
}
}
}
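The computed ideal state above simply rotates the leader across the three nodes, with each standby on the next node over. Helix's real placement algorithm considers more factors, but a minimal round-robin sketch (the function name and structure are illustrative, not Helix API) reproduces the mapping shown:

```python
def compute_full_auto_assignment(resource, nodes, num_partitions, num_replicas):
    """Illustrative round-robin placement: the leader of partition i lands on
    nodes[i % n]; the remaining replicas become standbys on the next nodes."""
    mapping = {}
    n = len(nodes)
    for i in range(num_partitions):
        states = {nodes[i % n]: "LEADER"}
        for r in range(1, num_replicas):
            states[nodes[(i + r) % n]] = "STANDBY"
        mapping[f"{resource}_{i}"] = states
    return mapping
```

With 3 partitions, 2 replicas, and nodes N1..N3, this yields exactly the mapFields shown above.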
@@ -125,7 +125,7 @@ When the idealstate mode is set to AUTO, Helix only controls STATE of the replic
"IDEAL_STATE_MODE" : "AUTO",
"NUM_PARTITIONS" : "3",
"REPLICAS" : "2",
- "STATE_MODEL_DEF_REF" : "MasterSlave",
+ "STATE_MODEL_DEF_REF" : "LeaderStandby",
}
"listFields" : {
"MyResource_0" : [node1, node2],
@@ -136,7 +136,7 @@ When the idealstate mode is set to AUTO, Helix only controls STATE of the replic
}
}
```
- In this mode when node1 fails, unlike in AUTO-REBALANCE mode the partition is not moved from node1 to others nodes in the cluster. Instead, Helix will decide to change the state of MyResource_0 in N2 based on the system constraints. For example, if a system constraint specified that there should be 1 Master and if the Master failed, then node2 will be made the new master.
+ In this mode, when node1 fails, unlike in AUTO-REBALANCE mode the partition is not moved from node1 to other nodes in the cluster. Instead, Helix will decide to change the state of MyResource_0 on N2 based on the system constraints. For example, if a system constraint specifies that there should be 1 Leader and the Leader fails, then node2 will be made the new Leader.
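The failover behavior described above can be sketched as follows. This is an illustrative model, not Helix code: on a node failure, drop that node's replicas and, for any partition left without a leader, promote one surviving standby in place, rather than moving the partition elsewhere.

```python
def handle_node_failure(current_state, failed_node):
    """Sketch of non-rebalancing failover: remove the failed node's replicas;
    if a partition lost its LEADER, promote one surviving STANDBY.
    Partitions are never moved to other nodes."""
    new_state = {}
    for partition, replicas in current_state.items():
        survivors = {n: s for n, s in replicas.items() if n != failed_node}
        if "LEADER" not in survivors.values():
            for node in sorted(survivors):
                survivors[node] = "LEADER"  # promote one standby in place
                break
        new_state[partition] = survivors
    return new_state
```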

#### CUSTOM

@@ -150,27 +150,27 @@ Within this callback, the application can recompute the idealstate. Helix will t
"IDEAL_STATE_MODE" : "CUSTOM",
"NUM_PARTITIONS" : "3",
"REPLICAS" : "2",
- "STATE_MODEL_DEF_REF" : "MasterSlave",
+ "STATE_MODEL_DEF_REF" : "LeaderStandby",
},
"mapFields" : {
"MyResource_0" : {
- "N1" : "MASTER",
- "N2" : "SLAVE",
+ "N1" : "LEADER",
+ "N2" : "STANDBY",
},
"MyResource_1" : {
- "N2" : "MASTER",
- "N3" : "SLAVE",
+ "N2" : "LEADER",
+ "N3" : "STANDBY",
},
"MyResource_2" : {
- "N3" : "MASTER",
- "N1" : "SLAVE",
+ "N3" : "LEADER",
+ "N1" : "STANDBY",
}
}
}
```

- For example, the current state of the system might be 'MyResource_0' -> {N1:MASTER,N2:SLAVE} and the application changes the ideal state to 'MyResource_0' -> {N1:SLAVE,N2:MASTER}. Helix will not blindly issue MASTER-->SLAVE to N1 and SLAVE-->MASTER to N2 in parallel since it might result in a transient state where both N1 and N2 are masters.
- Helix will first issue MASTER-->SLAVE to N1 and after its completed it will issue SLAVE-->MASTER to N2.
+ For example, the current state of the system might be 'MyResource_0' -> {N1:LEADER,N2:STANDBY} and the application changes the ideal state to 'MyResource_0' -> {N1:STANDBY,N2:LEADER}. Helix will not blindly issue LEADER-->STANDBY to N1 and STANDBY-->LEADER to N2 in parallel since it might result in a transient state where both N1 and N2 are leaders.
+ Helix will first issue LEADER-->STANDBY to N1 and after it is completed, it will issue STANDBY-->LEADER to N2.
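The ordering rule above (demote before promote, so no intermediate step ever has two leaders) can be sketched like this; the function and its tuple format are illustrative, not Helix API:

```python
def order_transitions(current, target):
    """Sketch of safe transition ordering: issue LEADER->STANDBY demotions
    before STANDBY->LEADER promotions, so the 'at most one leader' constraint
    holds at every intermediate step."""
    demotions, promotions = [], []
    for node, state in current.items():
        if state == target[node]:
            continue
        transition = (node, state, target[node])
        if state == "LEADER":
            demotions.append(transition)
        else:
            promotions.append(transition)
    return demotions + promotions
```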


### State Machine Configuration
@@ -186,10 +186,10 @@ Apart from providing the state machine configuration, one can specify the constr

For example one can say
Master:1. Max number of replicas in Master state at any time is 1.
- OFFLINE-SLAVE:5 Max number of Offline-Slave transitions that can happen concurrently in the system
+ OFFLINE-STANDBY:5. Max number of Offline-Standby transitions that can happen concurrently in the system

STATE PRIORITY
- Helix uses greedy approach to satisfy the state constraints. For example if the state machine configuration says it needs 1 master and 2 slaves but only 1 node is active, Helix must promote it to master. This behavior is achieved by providing the state priority list as MASTER,SLAVE.
+ Helix uses a greedy approach to satisfy the state constraints. For example, if the state machine configuration says it needs 1 leader and 2 standbys but only 1 node is active, Helix must promote it to leader. This behavior is achieved by providing the state priority list as LEADER,STANDBY.
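The greedy behavior described above can be sketched as follows; this is an illustrative model of the priority-list idea, not Helix's actual assignment code:

```python
def assign_states(active_nodes, state_counts, priority):
    """Greedy sketch: fill states in priority order. With only one live node
    and priority LEADER,STANDBY, that node still gets LEADER, and the
    standby slots simply go unfilled."""
    assignment = {}
    nodes = list(active_nodes)
    for state in priority:
        for _ in range(state_counts.get(state, 0)):
            if not nodes:
                return assignment
            assignment[nodes.pop(0)] = state
    return assignment
```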

STATE TRANSITION PRIORITY
Helix tries to fire as many transitions as possible in parallel to reach the stable state without violating constraints. By default Helix simply sorts the transitions alphabetically and fires as many as it can without violating the constraints.
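The default behavior described above (alphabetical ordering, firing as many transitions as constraints allow) can be sketched as a simple batching step; the function name and the single `max_parallel` cap are illustrative stand-ins for Helix's richer constraint checks:

```python
def fire_transitions(pending, max_parallel):
    """Sketch of the default scheduler: sort pending transitions
    alphabetically and fire up to the allowed number; the rest wait
    for the next pass."""
    batch = sorted(pending)[:max_parallel]
    remaining = [t for t in pending if t not in batch]
    return batch, remaining
```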
@@ -262,7 +262,7 @@ Since all state changes in the system are triggered through transitions, Helix c
Helix allows applications to set thresholds on transitions. A threshold can be set at multiple scopes:

* MessageType e.g STATE_TRANSITION
- * TransitionType e.g SLAVE-MASTER
+ * TransitionType e.g STANDBY-LEADER
* Resource e.g database
* Node i.e per node max transitions in parallel.
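The scope list above can be modeled as a lookup table. How Helix actually combines overlapping scopes is not specified here; this sketch assumes the tightest matching threshold wins, and all names in the table are illustrative:

```python
def max_in_flight(thresholds, message_type, transition, resource, node):
    """Hypothetical resolution rule: the effective cap for a transition is
    the smallest threshold among the scopes that match it."""
    scopes = [message_type, transition, resource, node]
    limits = [thresholds[s] for s in scopes if s in thresholds]
    return min(limits) if limits else None

# Example thresholds at each scope (values are made up for illustration)
caps = {
    "STATE_TRANSITION": 100,  # MessageType scope
    "STANDBY-LEADER": 1,      # TransitionType scope
    "database": 10,           # Resource scope
    "node1": 5,               # Node scope
}
```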

@@ -161,7 +161,7 @@ chmod +x *.sh
* Add a resource to cluster

```
- curl -d 'jsonParameters={"command":"addResource","resourceGroupName":"MyDB","partitions":"8","stateModelDefRef":"MasterSlave" }' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups
+ curl -d 'jsonParameters={"command":"addResource","resourceGroupName":"MyDB","partitions":"8","stateModelDefRef":"LeaderStandby" }' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups
```
```
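The same `jsonParameters` form field the curl command posts can be built programmatically. This sketch only constructs the payload (the endpoint URL and field names are taken from the curl example above; sending the request is left out):

```python
import json

def add_resource_payload(resource, partitions, state_model):
    """Build the jsonParameters form field for the addResource admin call,
    mirroring the curl example."""
    command = {
        "command": "addResource",
        "resourceGroupName": resource,
        "partitions": str(partitions),
        "stateModelDefRef": state_model,
    }
    return {"jsonParameters": json.dumps(command)}
```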

* _/clusters/{clusterName}/resourceGroups/{resourceName}_
@@ -202,15 +202,15 @@ chmod +x *.sh
"NUM_PARTITIONS" : "8",
"REBALANCE_MODE" : "SEMI_AUTO",
"REPLICAS" : "0",
- "STATE_MODEL_DEF_REF" : "MasterSlave",
+ "STATE_MODEL_DEF_REF" : "LeaderStandby",
"STATE_MODEL_FACTORY_NAME" : "DEFAULT"
},
"listFields" : {
},
"mapFields" : {
"MyDB_0" : {
- "localhost_1001" : "MASTER",
- "localhost_1002" : "SLAVE"
+ "localhost_1001" : "LEADER",
+ "localhost_1002" : "STANDBY"
}
}
}
@@ -57,7 +57,7 @@ Helix has four options for rebalancing, in increasing order of customization by

When the rebalance mode is set to FULL_AUTO, Helix controls both the location of the replica along with the state. This option is useful for applications where creation of a replica is not expensive.

- For example, consider this system that uses a MasterSlave state model, with 3 partitions and 2 replicas in the ideal state.
+ For example, consider this system that uses a LeaderStandby state model, with 3 partitions and 2 replicas in the ideal state.

```
{
@@ -66,7 +66,7 @@ For example, consider this system that uses a MasterSlave state model, with 3 pa
"REBALANCE_MODE" : "FULL_AUTO",
"NUM_PARTITIONS" : "3",
"REPLICAS" : "2",
- "STATE_MODEL_DEF_REF" : "MasterSlave",
+ "STATE_MODEL_DEF_REF" : "LeaderStandby",
}
"listFields" : {
"MyResource_0" : [],
@@ -78,28 +78,28 @@ For example, consider this system that uses a MasterSlave state model, with 3 pa
}
```

- If there are 3 nodes in the cluster, then Helix will balance the masters and slaves equally. The ideal state is therefore:
+ If there are 3 nodes in the cluster, then Helix will balance the leaders and standbys equally. The ideal state is therefore:

```
{
"id" : "MyResource",
"simpleFields" : {
"NUM_PARTITIONS" : "3",
"REPLICAS" : "2",
- "STATE_MODEL_DEF_REF" : "MasterSlave",
+ "STATE_MODEL_DEF_REF" : "LeaderStandby",
},
"mapFields" : {
"MyResource_0" : {
- "N1" : "MASTER",
- "N2" : "SLAVE",
+ "N1" : "LEADER",
+ "N2" : "STANDBY",
},
"MyResource_1" : {
- "N2" : "MASTER",
- "N3" : "SLAVE",
+ "N2" : "LEADER",
+ "N3" : "STANDBY",
},
"MyResource_2" : {
- "N3" : "MASTER",
- "N1" : "SLAVE",
+ "N3" : "LEADER",
+ "N1" : "STANDBY",
}
}
}
@@ -112,7 +112,7 @@ When one node fails, Helix redistributes its 15 tasks to the remaining 3 nodes,

When the application needs to control the placement of the replicas, use the SEMI_AUTO rebalance mode.

- Example: In the ideal state below, the partition \'MyResource_0\' is constrained to be placed only on node1 or node2. The choice of _state_ is still controlled by Helix. That means MyResource_0.MASTER could be on node1 and MyResource_0.SLAVE on node2, or vice-versa but neither would be placed on node3.
+ Example: In the ideal state below, the partition \'MyResource_0\' is constrained to be placed only on node1 or node2. The choice of _state_ is still controlled by Helix. That means MyResource_0.LEADER could be on node1 and MyResource_0.STANDBY on node2, or vice-versa but neither would be placed on node3.

```
{
@@ -121,7 +121,7 @@ Example: In the ideal state below, the partition \'MyResource_0\' is constrained
"REBALANCE_MODE" : "SEMI_AUTO",
"NUM_PARTITIONS" : "3",
"REPLICAS" : "2",
- "STATE_MODEL_DEF_REF" : "MasterSlave",
+ "STATE_MODEL_DEF_REF" : "LeaderStandby",
}
"listFields" : {
"MyResource_0" : [node1, node2],
@@ -133,16 +133,16 @@ Example: In the ideal state below, the partition \'MyResource_0\' is constrained
}
```

- The MasterSlave state model requires that a partition has exactly one MASTER at all times, and the other replicas should be SLAVEs. In this simple example with 2 replicas per partition, there would be one MASTER and one SLAVE. Upon failover, a SLAVE has to assume mastership, and a new SLAVE will be generated.
+ The LeaderStandby state model requires that a partition has exactly one LEADER at all times, and the other replicas should be STANDBYs. In this simple example with 2 replicas per partition, there would be one LEADER and one STANDBY. Upon failover, a STANDBY has to assume leadership, and a new STANDBY will be generated.

- In this mode when node1 fails, unlike in FULL_AUTO mode the partition is _not_ moved from node1 to node3. Instead, Helix will decide to change the state of MyResource_0 on node2 from SLAVE to MASTER, based on the system constraints.
+ In this mode when node1 fails, unlike in FULL_AUTO mode the partition is _not_ moved from node1 to node3. Instead, Helix will decide to change the state of MyResource_0 on node2 from STANDBY to LEADER, based on the system constraints.
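The SEMI_AUTO split of responsibilities (application fixes placement via the preference list, Helix picks states) can be sketched as follows. The assumption here, labeled as such, is that the first node in each preference list hosts the highest-priority state; the function name is illustrative:

```python
def semi_auto_assign(preference_lists, states_by_priority):
    """Sketch of SEMI_AUTO: replicas live only on the nodes named in the
    preference list; states are handed out in priority order, with the
    first listed node (assumed) getting the top state."""
    mapping = {}
    for partition, nodes in preference_lists.items():
        mapping[partition] = {
            node: states_by_priority[min(i, len(states_by_priority) - 1)]
            for i, node in enumerate(nodes)
        }
    return mapping
```

Note that a node outside the preference list (node3 here) never receives a replica, matching the placement constraint described above.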

### CUSTOMIZED

Helix offers a third mode called CUSTOMIZED, in which the application controls the placement _and_ state of each replica. The application needs to implement a callback interface that Helix invokes when the cluster state changes.
Within this callback, the application can recompute the idealstate. Helix will then issue appropriate transitions such that _Idealstate_ and _Currentstate_ converges.

- Here\'s an example, again with 3 partitions, 2 replicas per partition, and the MasterSlave state model:
+ Here\'s an example, again with 3 partitions, 2 replicas per partition, and the LeaderStandby state model:

```
{
@@ -151,26 +151,26 @@ Here\'s an example, again with 3 partitions, 2 replicas per partition, and the M
"REBALANCE_MODE" : "CUSTOMIZED",
"NUM_PARTITIONS" : "3",
"REPLICAS" : "2",
- "STATE_MODEL_DEF_REF" : "MasterSlave",
+ "STATE_MODEL_DEF_REF" : "LeaderStandby",
},
"mapFields" : {
"MyResource_0" : {
- "N1" : "MASTER",
- "N2" : "SLAVE",
+ "N1" : "LEADER",
+ "N2" : "STANDBY",
},
"MyResource_1" : {
- "N2" : "MASTER",
- "N3" : "SLAVE",
+ "N2" : "LEADER",
+ "N3" : "STANDBY",
},
"MyResource_2" : {
- "N3" : "MASTER",
- "N1" : "SLAVE",
+ "N3" : "LEADER",
+ "N1" : "STANDBY",
}
}
}
```

- Suppose the current state of the system is 'MyResource_0' \-\> {N1:MASTER, N2:SLAVE} and the application changes the ideal state to 'MyResource_0' \-\> {N1:SLAVE,N2:MASTER}. While the application decides which node is MASTER and which is SLAVE, Helix will not blindly issue MASTER\-\-\>SLAVE to N1 and SLAVE\-\-\>MASTER to N2 in parallel, since that might result in a transient state where both N1 and N2 are masters, which violates the MasterSlave constraint that there is exactly one MASTER at a time. Helix will first issue MASTER\-\-\>SLAVE to N1 and after it is completed, it will issue SLAVE\-\-\>MASTER to N2.
+ Suppose the current state of the system is 'MyResource_0' \-\> {N1:LEADER, N2:STANDBY} and the application changes the ideal state to 'MyResource_0' \-\> {N1:STANDBY,N2:LEADER}. While the application decides which node is LEADER and which is STANDBY, Helix will not blindly issue LEADER\-\-\>STANDBY to N1 and STANDBY\-\-\>LEADER to N2 in parallel, since that might result in a transient state where both N1 and N2 are leaders, which violates the LeaderStandby constraint that there is exactly one LEADER at a time. Helix will first issue LEADER\-\-\>STANDBY to N1 and after it is completed, it will issue STANDBY\-\-\>LEADER to N2.

### USER_DEFINED
