@@ -111,11 +111,11 @@ The `ConsensusType` is represented by three values: `Type`, `Metadata`, and
  changed while in maintenance mode.
* `Metadata` will be empty if the `Type` is kafka, but must carry valid Raft
  metadata if the `ConsensusType` is `etcdraft`. More on this below.
- * `State` is either `NORMAL`, when the channel is processing transactions, or
-   `MAINTENANCE`, during the migration process.
+ * `State` is either `STATE_NORMAL`, when the channel is processing transactions, or
+   `STATE_MAINTENANCE`, during the migration process.

In the first step of the channel configuration update, only change the `State`
- from `NORMAL` to `MAINTENANCE`. Do not change the `Type` or the `Metadata` field
+ from `STATE_NORMAL` to `STATE_MAINTENANCE`. Do not change the `Type` or the `Metadata` field
yet. Note that the `Type` should currently be `kafka`.

While in maintenance mode, normal transactions, config updates unrelated to
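For reference, after this first configuration update a decoded `ConsensusType` value should look roughly like the sketch below (JSON layout as emitted by `configtxlator`; the `version` value is illustrative):

```json
{
  "mod_policy": "Admins",
  "value": {
    "metadata": null,
    "state": "STATE_MAINTENANCE",
    "type": "kafka"
  },
  "version": "1"
}
```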
@@ -131,7 +131,7 @@ continue the migration process).
**Verify** that each ordering service node has entered maintenance mode on each
of the channels. This can be done by fetching the last config block and making
sure that the `Type`, `Metadata`, `State` on each channel is `kafka`, empty
- (recall that there is no metadata for Kafka), and `MAINTENANCE`, respectively.
+ (recall that there is no metadata for Kafka), and `STATE_MAINTENANCE`, respectively.

If the channels have been updated successfully, the ordering service is now
ready for backup.
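One way to perform this verification is sketched below; the channel name, orderer endpoint, and CA file path are placeholders for your own values:

```bash
# Fetch the last config block of the channel (placeholders: channel name, endpoint, CA file).
peer channel fetch config config_block.pb -o orderer.example.com:7050 \
  -c mychannel --tls --cafile "$ORDERER_CA"

# Decode the block and extract the ConsensusType value for inspection.
configtxlator proto_decode --input config_block.pb --type common.Block |
  jq '.data.data[0].payload.data.config.channel_group.groups.Orderer.values.ConsensusType.value'
# Expect: "type": "kafka", "metadata": null, "state": "STATE_MAINTENANCE"
```

Repeat this check for every channel, against every ordering service node.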
@@ -153,7 +153,7 @@ service and then the ordering service nodes.

The next step in the migration process is another channel configuration update
for each channel. In this configuration update, switch the `Type` to `etcdraft`
- (for Raft) while keeping the `State` in `MAINTENANCE`, and fill in the
+ (for Raft) while keeping the `State` in `STATE_MAINTENANCE`, and fill in the
`Metadata` configuration. It is highly recommended that the `Metadata` configuration be
identical on all channels. If you want to establish different consenter sets
with different nodes, you will be able to reconfigure the `Metadata` configuration
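For illustration, a filled-in `Metadata` configuration in the decoded JSON form might look like the sketch below; the hostname, port, and base64-encoded TLS certificates are placeholders, and the option values shown are common defaults rather than requirements:

```json
{
  "consenters": [
    {
      "client_tls_cert": "<base64-encoded client TLS cert>",
      "host": "orderer1.example.com",
      "port": 7050,
      "server_tls_cert": "<base64-encoded server TLS cert>"
    }
  ],
  "options": {
    "election_tick": 10,
    "heartbeat_tick": 1,
    "max_inflight_blocks": 5,
    "snapshot_interval_size": 16777216,
    "tick_interval": "500ms"
  }
}
```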
@@ -168,7 +168,7 @@ Then, validate that each ordering service node has committed the `ConsensusType`
change configuration update by pulling and inspecting the configuration of each
channel.

- Note: the transaction that changes the `ConsensusType` must be the last
+ Note: For each channel, the transaction that changes the `ConsensusType` must be the last
configuration transaction before restarting the nodes (in the next step). If
some other configuration transaction happens after this step, the nodes will
most likely crash on restart, or result in undefined behavior.
@@ -180,7 +180,13 @@ Note: exit of maintenance mode **must** be done **after** restart.
After the `ConsensusType` update has been completed on each channel, stop all
ordering service nodes, stop all Kafka brokers and Zookeepers, and then restart
only the ordering service nodes. They should restart as Raft nodes, form a cluster per
- channel, and elect a leader on each channel. Make sure to **validate** that a
+ channel, and elect a leader on each channel.
+
+ **Note**: Since a Raft-based ordering service requires mutual TLS between orderer nodes,
+ **additional configurations** are required before you start them again; see
+ [Section: Local Configuration](./raft_configuration.md#local-configuration) for more details.
+
+ After the restart process has finished, make sure to **validate** that a
leader has been elected on each channel by inspecting the node logs (you can see
what to look for below). This will confirm that the process has been completed
successfully.
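As a minimal sketch of the local configuration the note above points to, the relevant `orderer.yaml` keys are shown here; all file paths are placeholders for your own TLS material:

```yaml
General:
  TLS:
    Enabled: true
    PrivateKey: /var/hyperledger/orderer/tls/server.key
    Certificate: /var/hyperledger/orderer/tls/server.crt
    RootCAs:
      - /var/hyperledger/orderer/tls/ca.crt
  Cluster:
    # Credentials the node presents when dialing other orderers (mutual TLS).
    ClientCertificate: /var/hyperledger/orderer/tls/server.crt
    ClientPrivateKey: /var/hyperledger/orderer/tls/server.key
```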
@@ -206,11 +212,11 @@ In this example `node 2` reports that a leader was elected (the leader is

Perform another channel configuration update on each channel (sending the config
update to the same ordering node you have been sending configuration updates to
- until now), switching the `State` from `MAINTENANCE` to `NORMAL`. Start with the
+ until now), switching the `State` from `STATE_MAINTENANCE` to `STATE_NORMAL`. Start with the
system channel, as usual. If it succeeds on the ordering system channel,
migration is likely to succeed on all channels. To verify, fetch the last config
block of the system channel from the ordering node, verifying that the `State`
- is now `NORMAL`. For completeness, verify this on each ordering node.
+ is now `STATE_NORMAL`. For completeness, verify this on each ordering node.

When this process is completed, the ordering service is now ready to accept all
transactions on all channels. If you stopped your peers and application as
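The same fetch-and-decode check used earlier can confirm the flip; assuming the same placeholder names:

```bash
# Verify the channel has left maintenance mode (placeholders as above).
configtxlator proto_decode --input config_block.pb --type common.Block |
  jq -r '.data.data[0].payload.data.config.channel_group.groups.Orderer.values.ConsensusType.value.state'
# Expected output: STATE_NORMAL
```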
@@ -236,7 +242,7 @@ There are a few states which might indicate migration has failed:

1. Some nodes crash or shutdown.
2. There is no record of a successful leader election per channel in the logs.
- 3. The attempt to flip to `NORMAL` mode on the system channel fails.
+ 3. The attempt to flip to `STATE_NORMAL` mode on the system channel fails.

<!--- Licensed under Creative Commons Attribution 4.0 International License
https://creativecommons.org/licenses/by/4.0/) -->