Commit 7a46e60

updates from summit

1 parent ae77df1 commit 7a46e60

File tree

1 file changed: +37 -23 lines changed

docs/specifications/servers-of-happiness.rst

Lines changed: 37 additions & 23 deletions
@@ -194,39 +194,53 @@ Calculating Share Placements
 
 We calculate share placement like so:
 
-1. Query *2N* servers for existing shares.
+0. Start with an ordered list of servers. Maybe *2N* of them.
 
-2. Construct a bipartite graph of *readonly* servers to shares, where an edge
-   exists between an arbitrary readonly server S and an arbitrary share T if
-   and only if S currently holds T.
+1. Query all servers for existing shares.
 
-3. Calculate a maximum matching graph of that bipartite graph. There may be
-   more than one maximum matching for this graph; we choose one of them
-   arbitrarily.
+2. Construct a bipartite graph G1 of *readonly* servers to pre-existing
+   shares, where an edge exists between an arbitrary readonly server S and an
+   arbitrary share T if and only if S currently holds T.
 
-4. Construct a bipartite graph of servers (whether readonly or readwrite) to
-   shares, removing any servers and shares used in the maximum matching graph
-   from step 3. Let an edge exist between server S and share T if and only if
-   S already holds T.
+3. Calculate a maximum matching graph of G1 (a set of S->T edges that has or
+   is-tied-for the highest "happiness score"). There is a clever efficient
+   algorithm for this, named "Ford-Fulkerson". There may be more than one
+   maximum matching for this graph; we choose one of them arbitrarily, but
+   prefer earlier servers. Call this particular placement M1. The placement
+   maps shares to servers, where each share appears at most once, and each
+   server appears at most once.
 
-5. Calculate the maximum matching graph of the new graph.
+4. Construct a bipartite graph G2 of readwrite servers to pre-existing
+   shares. Then remove any edge (from G2) that uses a server or a share found
+   in M1. Let an edge exist between server S and share T if and only if S
+   already holds T.
 
-6. Construct a bipartite graph of servers (whether readonly or readwrite) to
-   shares, removing any servers and shares used in the maximum matching graphs
-   from steps 3 and 5. Let an edge exist between server S and share T if and
-   only if S *could* hold T (i.e. S is readwrite and S has enough available
-   space to hold a share of at least T's size).
+5. Calculate a maximum matching graph of G2, call this M2, again preferring
+   earlier servers.
 
-7. Calculate the maximum matching graph of the new graph.
+6. Construct a bipartite graph G3 of (only readwrite) servers to shares. Let
+   an edge exist between server S and share T if and only if S already has T,
+   or *could* hold T (i.e. S has enough available space to hold a share of at
+   least T's size). Then remove (from G3) any servers and shares used in M1
+   or M2 (note that we retain servers/shares that were in G1/G2 but *not* in
+   the M1/M2 subsets)
 
-8. Renew the shares on their respective servers from steps 3 and 5.
+7. Calculate a maximum matching graph of G3, call this M3, preferring earlier
+   servers. The final placement table is the union of M1+M2+M3.
 
-9. Place share T on server S if an edge exists between S and T in the maximum
-   matching graph from step 7.
+8. Renew the shares on their respective servers from M1 and M2.
 
-10. If any placements from step 7 fail, remove the server from the set of
-    possible servers and regenerate the matchings. XXX go back to step 4?
+9. Upload share T to server S if an edge exists between S and T in M3.
 
+10. If any placements from step 9 fail, mark the server as read-only. Go back
+    to step 2 (since we may discover a server is/has-become read-only, or has
+    failed, during step 9).
+
+Rationale (Step 4): when we see pre-existing shares on read-only servers, we
+prefer to rely upon those (rather than the ones on read-write servers), so we
+can maybe use the read-write servers for new shares. If we picked the
+read-write server's share, then we couldn't re-use that server for new ones
+(we only rely upon each server for one share, more or less).
 
 Properties of Upload Strategy of Happiness
 ------------------------------------------
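The revised procedure in this diff can be sketched in a few lines. The following is a minimal Python illustration, not Tahoe-LAFS's actual implementation: `place_shares`, `holds`, and `capacity` are hypothetical names, and the matching helper uses a plain augmenting-path search (the Ford-Fulkerson idea the diff mentions), visiting servers in their given order so that earlier servers are preferred.

```python
# Hypothetical sketch of the placement steps above (not Tahoe-LAFS code).

def maximum_matching(edges, servers):
    """edges: dict server -> set of shares it may hold.
    servers: ordered list; earlier servers get first pick.
    Returns a matching as dict share -> server (each side used at most once)."""
    match = {}  # share -> server

    def augment(server, seen):
        # Try to give `server` a share, re-routing earlier assignments if needed.
        for share in edges.get(server, ()):
            if share in seen:
                continue
            seen.add(share)
            if share not in match or augment(match[share], seen):
                match[share] = server
                return True
        return False

    for server in servers:
        augment(server, set())
    return match


def place_shares(servers, readonly, holds, capacity, all_shares):
    """servers: ordered list of server names; readonly: subset that is read-only;
    holds[s]: set of shares server s already holds; capacity[s]: True if a
    readwrite server s can accept a new share; all_shares: shares to place.
    Returns (renew, upload) tables mapping share -> server."""
    # Steps 2-3: M1 = matching of readonly servers to their pre-existing shares.
    ro = [s for s in servers if s in readonly]
    m1 = maximum_matching({s: holds[s] for s in ro}, ro)

    # Steps 4-5: M2 = matching of readwrite servers to the remaining
    # pre-existing shares, with M1's servers and shares removed.
    g2 = {s: holds[s] - set(m1)
          for s in servers if s not in readonly and s not in m1.values()}
    m2 = maximum_matching(g2, list(g2))

    # Steps 6-7: M3 = matching of the remaining readwrite servers to shares
    # they hold or could hold, with M1/M2's servers and shares removed.
    placed = set(m1) | set(m2)
    busy = set(m1.values()) | set(m2.values())
    g3 = {s: (holds[s] | (all_shares if capacity[s] else set())) - placed
          for s in servers if s not in readonly and s not in busy}
    m3 = maximum_matching(g3, list(g3))

    # Step 8: renew the M1/M2 placements; step 9: upload the M3 placements.
    return {**m1, **m2}, m3
```

The retry loop of step 10 (mark a failed server read-only and restart from step 2) is left out of the sketch; a caller would re-invoke `place_shares` with an updated `readonly` set.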

0 commit comments