Completed draft of start a local cluster #1
Conversation
start-a-local-cluster.md
Outdated
Mention also that nothing is stored persistently, so there's no ability to continue where you left off if you restart the server.
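If that's because the draft starts the node with an in-memory store, a minimal illustration would look roughly like the following. This is an assumption about the draft, not a quote from it, and the flag syntax is the one used by later cockroach releases:

```shell
# Assumed setup: an in-memory store keeps nothing on disk,
# so restarting the node loses all data and there is no way to resume where you left off.
cockroach start --insecure --store=type=mem,size=1GiB
```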
And it's not that it's not suited for actual development. Better to say, "so it's suitable only for limited testing and development."
LGTM.
Thanks, @spencerkimball. Pushed edits based on your feedback. One question I have for you or @mberhault:
start-a-local-cluster.md
Outdated
In secure mode, you must specify https://<your local host>:26257. This will also give a security warning that must be clicked past.
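As an illustration (not taken from the page itself), checking that URL from the command line looks roughly like this; -k plays the same role as clicking past the browser's warning about the self-signed certificate:

```shell
# Sketch: in secure mode the UI is served over HTTPS on the port noted above.
# curl -k skips certificate verification, analogous to dismissing the browser warning.
curl -k https://localhost:26257/
```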
Thanks, @bdarnell. Updated that bit. Let me know if you see any other issues.
start-a-local-cluster.md
Outdated
You're using /temp/data/ here, but temp/data in two places above. I would say use either just plain data or /tmp/data, consistently.
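Whichever form the page picks should match the store path passed to the start command; a hypothetical example using /tmp/data:

```shell
# Hypothetical: keep the store path consistent everywhere it appears on the page.
cockroach start --insecure --store=path=/tmp/data
```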
Good catches, @mberhault. I've updated the file and am going to merge it with master now.
Completed draft of start a local cluster
It was broken for a few main reasons:

1. There was no guarantee that important system data wouldn't end up on the two nodes that we're bringing down, since we were only using two localities.
2. Even if we used 3 localities, the pattern of starting up a 3-node cluster first and then adding more nodes means that the system ranges may not be properly spread across the localities, since in v1.1 we don't proactively move data around to improve diversity.
3. A read-only query isn't guaranteed to hang even if a range is unavailable. If we only kill the 2 non-leaseholders, the leaseholder will still be able to keep extending its lease (via node liveness) and serve reads.

To fix (1), I've modified this to spin up a 9-node cluster across 3 localities. To fix (2), I've spun up all the nodes before running cockroach init; we can go back to the old way of doing this once the labs use v2.0. To fix (3), I've switched from demoing a SELECT to using an INSERT.
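The commands below are a rough sketch of that flow, not the lab's exact steps: flag spellings follow v1.x cockroach (--port/--http-port), and the locality names, store directories, ports, and the test.kv table are hypothetical.

```shell
# Sketch: start all nine nodes (three per hypothetical locality) before initializing,
# so system ranges can be placed across all three localities from the outset.
for i in 1 2 3 4 5 6 7 8 9; do
  region="r$(( (i - 1) / 3 + 1 ))"   # hypothetical locality names: r1, r2, r3
  cockroach start --insecure \
    --store=node${i} \
    --locality=region=${region} \
    --port=$((26256 + i)) \
    --http-port=$((8079 + i)) \
    --join=localhost:26257,localhost:26258,localhost:26259 \
    --background
done

# Only after every node is running:
cockroach init --insecure --host=localhost --port=26257

# A write must reach quorum, so with two of a range's three replicas down this INSERT
# hangs until a node rejoins (a read could still be served by the surviving leaseholder).
# Assumes a test.kv table created earlier in the lab.
cockroach sql --insecure --host=localhost --port=26257 \
  -e "INSERT INTO test.kv (k, v) VALUES (1, 'a')"
```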
I have a first draft of "Start a Local Cluster". Please review http://cockroach-draft-docs.s3-website-us-east-1.amazonaws.com/start-a-local-cluster.html and send thoughts and feedback.
@spencerkimball, I've assigned this to Marc, but please send along your suggestions, too.