Conversation

jseldess
Contributor

I have a first draft of "Start a Local Cluster". Please review http://cockroach-draft-docs.s3-website-us-east-1.amazonaws.com/start-a-local-cluster.html and send thoughts and feedback.

@spencerkimball, I've assigned this to Marc, but please send along your suggestions, too.

Member

Mention also that nothing is stored persistently, so there's no ability to continue where you left off if you restart the server.
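
A minimal sketch of that behavior, assuming the in-memory store syntax from a later cockroach CLI (the flags in the 2016-era binary under discussion may differ):

    # Start a single node whose store lives entirely in RAM.
    cockroach start-single-node --insecure --store=type=mem,size=1GiB

    # Stopping the process discards everything; a restart begins from an
    # empty cluster rather than resuming prior state.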

Member

And it's not that it's not suited for actual development. Better to say, "so it's suitable only for limited testing and development."

@spencerkimball
Member

LGTM.

@jseldess
Contributor Author

Thanks, @spencerkimball. Pushed edits based on your feedback. One question I have, for you or @mberhault:

  • Under "Standard Mode", I'm also planning to combine steps 2 and 4 into a single step for creating all certificates. This will reduce number of steps by 1.

@bdarnell
Contributor

In secure mode, you must specify https://<your local host>:26257. The browser will also show a security warning that must be clicked past, since the cluster's certificate isn't signed by a publicly trusted authority.
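
One way to check that endpoint from a shell; curl's -k flag skips certificate verification, the command-line analogue of clicking past the browser warning (the URL and port are taken from the comment above):

    curl -k https://localhost:26257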

@jseldess
Contributor Author

Thanks, @bdarnell. Updated that bit. Let me know if you see any other issues.

@mberhault
Contributor

You're using /temp/data/ here, but temp/data in two places above.
I would say either just plain data or /tmp/data.
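
For example, if the doc settles on /tmp/data, every command would use that same path (this sketch assumes a later CLI's flag syntax, not necessarily the draft's):

    cockroach start-single-node --insecure --store=/tmp/data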

@jseldess
Contributor Author

Good catches, @mberhault. I've updated the file and am going to merge it with master now.

jseldess pushed a commit that referenced this pull request Jan 15, 2016
Completed draft of start a local cluster
@jseldess jseldess merged commit 5980fd9 into master Jan 15, 2016
@jseldess jseldess deleted the jseldess/create-a-local-cluster branch January 15, 2016 03:43
jseldess pushed a commit that referenced this pull request May 30, 2016
jseldess pushed a commit that referenced this pull request Oct 16, 2016
a-robinson added a commit to a-robinson/docs that referenced this pull request Feb 12, 2018
It was broken for a few main reasons:

1. There was no guarantee that important system data wouldn't end up on
the two nodes that we're bringing down, since we were only using two
localities.
2. Even if we used 3 localities, the pattern of starting up a 3-node
cluster first and then adding more nodes means that the system ranges
may not be properly spread across the localities, since in v1.1 we
don't proactively move data around to improve diversity.
3. A read-only query isn't guaranteed to hang even if a range is
unavailable. If we only kill the 2 non-leaseholders, the leaseholder
will still be able to keep extending its lease (via node liveness)
and serve reads.

To fix #1, I've modified this to spin up a 9-node cluster across 3
localities.
To fix #2, I've spun up all the nodes before running cockroach init.
We can go back to the old way of doing this once the labs use v2.0.
To fix #3, I've switched from demoing a SELECT to using an INSERT.
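
A hedged shell sketch of the revised flow, with flag syntax from a recent cockroach CLI rather than the v1.1-era commands (the ports, store paths, and test table here are illustrative assumptions):

    # Start all nine nodes first, three per locality, so system ranges can
    # be spread across all three localities before the cluster goes live.
    for i in 1 2 3 4 5 6 7 8 9; do
      region="us-$(( (i - 1) / 3 + 1 ))"   # nodes 1-3 -> us-1, 4-6 -> us-2, 7-9 -> us-3
      cockroach start --insecure \
        --store=node${i} \
        --locality=region=${region} \
        --listen-addr=localhost:$((26256 + i)) \
        --http-addr=localhost:$((8079 + i)) \
        --join=localhost:26257,localhost:26258,localhost:26259 \
        --background
    done

    # Initialize the cluster only after every node is up.
    cockroach init --insecure --host=localhost:26257

    # Demo with a write: an INSERT hangs if its range loses quorum, whereas
    # a read can still be served by a surviving leaseholder.
    cockroach sql --insecure --host=localhost:26257 \
      -e 'CREATE DATABASE IF NOT EXISTS test' \
      -e 'CREATE TABLE IF NOT EXISTS test.kv (k INT PRIMARY KEY)' \
      -e 'INSERT INTO test.kv VALUES (1)'
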
@cockroach-teamcity
Member

This change is Reviewable

a-robinson added a commit to a-robinson/docs that referenced this pull request Feb 13, 2018
Simran-B pushed a commit to Simran-B/docs that referenced this pull request Jul 23, 2019
lnhsingh pushed a commit that referenced this pull request Sep 17, 2020
Amruta-Ranade pushed a commit that referenced this pull request Oct 2, 2020
rmloveland added a commit that referenced this pull request May 1, 2025
Simran-B pushed a commit to Simran-B/docs that referenced this pull request Aug 21, 2025
Simran-B added a commit to Simran-B/docs that referenced this pull request Aug 21, 2025
* migration tool documentation

* Update arangosync-migration-tool.md

* Update arangosync-migration-tool.md

* update

* Update deployments.md

* Update deployments.md

* Update arangosync-migration-tool.md

* Update arangosync-migration-tool.md

* Remove line breaks from rendered hint block HTML to avoid literal </div> in output

* Undo hint block workaround

* Proper indentation for sublists

* Suggestions from code review #1

Co-authored-by: Nikita Vaniasin <nikita.vanyasin@gmail.com>

* Suggestions from code review

* Moved to Oasis manual and other adjustments

* moved images to oasis

* fix broken links

* second attempt

* added download links and applied suggestions

* Applied suggestions

Co-authored-by: ansoboleva <93702078+ansoboleva@users.noreply.github.com>
Co-authored-by: Simran Spiller <simran@arangodb.com>
Co-authored-by: Nikita Vaniasin <nikita.vanyasin@gmail.com>