
Spread ushards fairly across nodes #5

Merged
merged 1 commit into master Sep 2, 2011

Conversation

@rnewson
Member

rnewson commented Aug 26, 2011

Spread ushards across as many nodes as possible.

BugzID: 12058

@kocolosk
Member

kocolosk commented Aug 29, 2011

I can't quite tell if this patch does spread the shards evenly across nodes or not. The Node is appended to the end of the list regardless of whether it matched any Shards. Shouldn't we be retrying the Node first if it doesn't match on a particular iteration? Also, should we be rotating the node list a bit based on the DbName so that we don't choose too many shards from the nodes that sort first in the list?
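
As an illustration of the rotation idea only (a minimal sketch with a hypothetical helper name, not the code under review), the node list could be rotated by a stable hash of the database name so that every database does not start assigning from the node that sorts first:

%% Rotate Nodes by an offset derived from DbName, so different databases
%% begin their ushard assignment at different points in the node list.
rotate_nodes(DbName, Nodes) ->
    Offset = erlang:phash2(DbName, length(Nodes)),
    {Front, Back} = lists:split(Offset, Nodes),
    Back ++ Front.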

@rnewson
Member

rnewson commented Aug 29, 2011

Good ideas, will implement.


@kocolosk
Member

kocolosk commented Aug 31, 2011

So, there's still something slightly funny about this. Maybe it's not a big deal. If the node at the head of the Nodes list doesn't match, we retry it the next time around. That's cool. But the node that did get assigned as the owner for this shard still sits in the same place in the list, so it may end up with too many shards. Right?
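
A hedged sketch of the tweak being discussed (copies_of/2 and node_of/1 are hypothetical helpers, and this is not the patch itself): the node chosen as the owner of a range moves to the back of the list, while skipped nodes keep their position and are retried first for the next range.

%% Pick the first node in Nodes that holds a copy of Range. Return that
%% copy together with a node list in which the chosen node has moved to
%% the back; skipped nodes stay near the front and are tried first next.
pick_owner(Range, Nodes, Copies) ->
    pick_owner(Range, Nodes, Copies, []).

pick_owner(Range, [], _Copies, _Skipped) ->
    %% No node in the list holds a live copy of this range.
    throw({range_not_available, Range});
pick_owner(Range, [Node | Rest], Copies, Skipped) ->
    case [S || S <- copies_of(Range, Copies), node_of(S) =:= Node] of
        [Shard | _] ->
            {Shard, lists:reverse(Skipped) ++ Rest ++ [Node]};
        [] ->
            pick_owner(Range, Rest, Copies, [Node | Skipped])
    end.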

Robert Newson
Spread ushards fairly across nodes
Spread ushards across as many nodes as possible.

BugzID: 12058
@kocolosk
Member

kocolosk commented Aug 31, 2011

Summarizing IRC, current patch looks good, but we want to determine if the range_not_available exception is the optimal way to bubble this error up to the HTTP layer in practice.

@rnewson
Member

rnewson commented Sep 1, 2011

This is how the error is currently marshalled to the HTTP layer:

< HTTP/1.1 500 Internal Server Error
< X-Couch-Request-ID: bbe24f0f
< Server: CouchDB/1.1.0 (Erlang OTP/R14B03)
< Date: Thu, 01 Sep 2011 13:30:43 GMT
< Content-Type: text/plain;charset=utf-8
< Content-Length: 67
< Cache-Control: must-revalidate
<
{"error":"range_not_available","reason":"[1431655765,2863311529]"}

@kocolosk merged commit 8ab35f8 into master Sep 2, 2011

iilyak pushed a commit that referenced this pull request Jul 15, 2015
