Conversation

jseldess
Contributor

No description provided.

jseldess pushed a commit that referenced this pull request Jan 15, 2016
@jseldess jseldess merged commit a1803c1 into master Jan 15, 2016
@jseldess jseldess deleted the jseldess/install-cockroachdb branch January 15, 2016 03:54
@dianasaur323 dianasaur323 mentioned this pull request Nov 18, 2016
@jseldess jseldess mentioned this pull request Nov 28, 2016
Amruta-Ranade added a commit that referenced this pull request Dec 20, 2017
# This is the 1st commit message:

Incorporating comments

# The commit message #2 will be skipped:

# Cross-referenced openssl in deployment docs
Amruta-Ranade added a commit that referenced this pull request Dec 20, 2017
# This is the 1st commit message:

Incorporated comments

# The commit message #2 will be skipped:

# Added licensing content and updated sidebar nav
a-robinson added a commit to a-robinson/docs that referenced this pull request Feb 12, 2018
It was broken for a few main reasons:

1. There was no guarantee that important system data wouldn't end up on
the two nodes we bring down, since we were only using two
localities.
2. Even if we used 3 localities, the pattern of starting up a 3-node
cluster first and then adding more nodes means that the system ranges
may not be properly spread across the localities, since in v1.1 we
don't proactively move data around to improve diversity.
3. A read-only query isn't guaranteed to hang even if a range is
unavailable. If we only kill the 2 non-leaseholders, the leaseholder
will still be able to keep extending its lease (via node liveness)
and serve reads.

To fix cockroachdb#1, I've modified this to spin up a 9-node cluster across 3
localities.
To fix cockroachdb#2, I've spun up all the nodes before running cockroach init.
We can go back to the old way of doing this once the labs use v2.0.
To fix cockroachdb#3, I've switched from demoing a SELECT to using an INSERT.
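The three fixes above can be sketched as a command sequence. This is a minimal illustration only, not the lab's actual script: flag names follow the current `cockroach` CLI (v1.1 used `--host`/`--port` instead of `--listen-addr`), and the table `test.kv` is a hypothetical example.

```shell
# Sketch: 9 local nodes spread evenly across 3 synthetic localities (fix #1).
for i in $(seq 1 9); do
  loc=$(( (i - 1) / 3 ))   # nodes 1-3 -> dc0, 4-6 -> dc1, 7-9 -> dc2
  cockroach start --insecure \
    --store=node$i \
    --listen-addr=localhost:$((26256 + i)) \
    --http-addr=localhost:$((8079 + i)) \
    --join=localhost:26257,localhost:26258,localhost:26259 \
    --locality=datacenter=dc$loc \
    --background
done

# Init only after ALL nodes are up, so the system ranges can be placed
# with full locality diversity from the start (fix #2).
cockroach init --insecure --host=localhost:26257

# Demonstrate unavailability with a write rather than a read (fix #3):
# an INSERT needs a quorum of replicas, so it hangs if the range loses
# one, whereas a leaseholder can keep serving reads via node liveness.
cockroach sql --insecure --host=localhost:26257 \
  -e "INSERT INTO test.kv VALUES (1, 'hello')"
```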
a-robinson added a commit to a-robinson/docs that referenced this pull request Feb 12, 2018
@cockroach-teamcity
Member

This change is Reviewable

a-robinson added a commit to a-robinson/docs that referenced this pull request Feb 12, 2018
a-robinson added a commit to a-robinson/docs that referenced this pull request Feb 12, 2018
a-robinson added a commit to a-robinson/docs that referenced this pull request Feb 12, 2018
a-robinson added a commit to a-robinson/docs that referenced this pull request Feb 13, 2018
a-robinson added a commit to a-robinson/docs that referenced this pull request Feb 13, 2018
lnhsingh pushed a commit that referenced this pull request Mar 15, 2018
# This is the 1st commit message:

Merge pull request #2676 from cockroachdb/MattJ_PR

Improve IMPORT 2.0 Docs

minor edits / clarifications

# This is the commit message #2:

minor edits

# This is the commit message #3:

edits based on feedback

# This is the commit message #4:

edit example
ericharmeling pushed a commit that referenced this pull request Mar 31, 2020
lnhsingh pushed a commit that referenced this pull request Sep 17, 2020
Amruta-Ranade pushed a commit that referenced this pull request Oct 6, 2020
Simran-B pushed a commit to Simran-B/docs that referenced this pull request Aug 21, 2025