docs for installing cockroachdb #2
Merged
Conversation
jseldess pushed a commit that referenced this pull request on Jan 15, 2016:
docs for installing cockroachdb
Amruta-Ranade added a commit that referenced this pull request on Dec 20, 2017:
Incorporating comments (squashed; the skipped second commit message was "Cross-referenced openssl in deployment docs")
Amruta-Ranade added a commit that referenced this pull request on Dec 20, 2017:
Incorporated comments (squashed; the skipped second commit message was "Added licensing content and updated sidebar nav")
a-robinson added a commit to a-robinson/docs that referenced this pull request on Feb 12, 2018 (the same commit message appears on six further commits added to a-robinson/docs on Feb 12–13, 2018):

It was broken for a few main reasons:

1. There was no guarantee that important system data wouldn't end up on the two nodes that we're bringing down, since we were only using two localities.
2. Even if we used 3 localities, the pattern of starting up a 3-node cluster first and then adding more nodes means that the system ranges may not be properly spread across the localities, since in v1.1 we don't proactively move data around to improve diversity.
3. A read-only query isn't guaranteed to hang even if a range is unavailable. If we only kill the 2 non-leaseholders, the leaseholder will still be able to keep extending its lease (via node liveness) and serve reads.

To fix cockroachdb#1, I've modified this to spin up a 9-node cluster across 3 localities. To fix cockroachdb#2, I've spun up all the nodes before running cockroach init. We can go back to the old way of doing this once the labs use v2.0. To fix cockroachdb#3, I've switched from demo-ing a SELECT to using an INSERT.
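For context, here is a minimal sketch of the lab flow that commit message describes: start all nine nodes across three localities before running cockroach init, then demo unavailability with a write rather than a read. It is not the lab's actual script; the locality names, ports, store paths, and the test.kv table are illustrative assumptions, and the flag spellings are the current cockroach CLI forms.

```python
# Sketch (assumed values) of the revised lab flow: start all 9 nodes across
# 3 localities *before* running `cockroach init`, so system ranges can be
# spread across localities, then demonstrate unavailability with a write.
import subprocess
import time

LOCALITIES = ["region=us-east", "region=us-central", "region=us-west"]  # assumed names

nodes = []
for i in range(9):
    locality = LOCALITIES[i // 3]  # three nodes per locality
    nodes.append(subprocess.Popen([
        "cockroach", "start",
        "--insecure",                                  # local demo cluster only
        f"--store=node{i + 1}",
        f"--listen-addr=localhost:{26257 + i}",
        f"--http-addr=localhost:{8080 + i}",
        f"--locality={locality}",
        "--join=localhost:26257,localhost:26258,localhost:26259",
    ]))

time.sleep(5)  # crude wait for the nodes to come up; a real script would poll

# Only after every node is running is the cluster initialized.
subprocess.run(["cockroach", "init", "--insecure", "--host=localhost:26257"], check=True)

# The demo uses an INSERT instead of a SELECT: with the two non-leaseholders
# down, a surviving leaseholder can keep extending its lease and serving
# reads, so only a write is guaranteed to hang when the range loses quorum.
subprocess.run([
    "cockroach", "sql", "--insecure", "--host=localhost:26257",
    "-e", "INSERT INTO test.kv (k, v) VALUES (1, 'hello')",  # hypothetical table
], check=True)
```

Starting every node before init means the initial system ranges are placed with all three localities already visible, instead of being created on a 3-node, single-locality cluster and left there, which is the v1.1 behavior the commit message calls out.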
ericharmeling pushed a commit that referenced this pull request on Mar 31, 2020:
PK changes doc update #2
Amruta-Ranade pushed a commit that referenced this pull request on Oct 6, 2020:
Replication zone variables