diff --git a/_includes/metric-names.md b/_includes/metric-names.md new file mode 100644 index 00000000000..80098b223b9 --- /dev/null +++ b/_includes/metric-names.md @@ -0,0 +1,246 @@ +Name | Help +-----|----- +`addsstable.applications` | Number of SSTable ingestions applied (i.e., applied by Replicas) +`addsstable.copies` | Number of SSTable ingestions that required copying files during application +`addsstable.proposals` | Number of SSTable ingestions proposed (i.e., sent to Raft by lease holders) +`build.timestamp` | Build information +`capacity.available` | Available storage capacity +`capacity.reserved` | Capacity reserved for snapshots +`capacity.used` | Used storage capacity +`capacity` | Total storage capacity +`clock-offset.meannanos` | Mean clock offset with other nodes in nanoseconds +`clock-offset.stddevnanos` | Std dev clock offset with other nodes in nanoseconds +`compactor.compactingnanos` | Number of nanoseconds spent compacting ranges +`compactor.compactions.failure` | Number of failed compaction requests sent to the storage engine +`compactor.compactions.success` | Number of successful compaction requests sent to the storage engine +`compactor.suggestionbytes.compacted` | Number of logical bytes compacted from suggested compactions +`compactor.suggestionbytes.queued` | Number of logical bytes in suggested compactions in the queue +`compactor.suggestionbytes.skipped` | Number of logical bytes in suggested compactions which were not compacted +`distsender.batches.partial` | Number of partial batches processed +`distsender.batches` | Number of batches processed +`distsender.errors.notleaseholder` | Number of NotLeaseHolderErrors encountered +`distsender.rpc.sent.local` | Number of local RPCs sent +`distsender.rpc.sent.nextreplicaerror` | Number of RPCs sent due to per-replica errors +`distsender.rpc.sent` | Number of RPCs sent +`exec.error` | Number of batch KV requests that failed to execute on this node +`exec.latency` | Latency in nanoseconds of batch KV requests executed on this node +`exec.success` | Number of batch KV requests executed successfully on this node +`gcbytesage` | Cumulative age of non-live data in seconds +`gossip.bytes.received` | Number of received gossip bytes +`gossip.bytes.sent` | Number of sent gossip bytes +`gossip.connections.incoming` | Number of active incoming gossip connections +`gossip.connections.outgoing` | Number of active outgoing gossip connections +`gossip.connections.refused` | Number of refused incoming gossip connections +`gossip.infos.received` | Number of received gossip Info objects +`gossip.infos.sent` | Number of sent gossip Info objects +`intentage` | Cumulative age of intents in seconds +`intentbytes` | Number of bytes in intent KV pairs +`intentcount` | Count of intent keys +`keybytes` | Number of bytes taken up by keys +`keycount` | Count of all keys +`lastupdatenanos` | Time in nanoseconds since Unix epoch at which bytes/keys/intents metrics were last updated +`leases.epoch` | Number of replica leaseholders using epoch-based leases +`leases.error` | Number of failed lease requests +`leases.expiration` | Number of replica leaseholders using expiration-based leases +`leases.success` | Number of successful lease requests +`leases.transfers.error` | Number of failed lease transfers +`leases.transfers.success` | Number of successful lease transfers +`livebytes` | Number of bytes of live data (keys plus values) +`livecount` | Count of live keys +`liveness.epochincrements` | Number of times this node has incremented its liveness 
epoch +`liveness.heartbeatfailures` | Number of failed node liveness heartbeats from this node +`liveness.heartbeatlatency` | Node liveness heartbeat latency in nanoseconds +`liveness.heartbeatsuccesses` | Number of successful node liveness heartbeats from this node +`liveness.livenodes` | Number of live nodes in the cluster (will be 0 if this node is not itself live) +`node-id` | node ID with labels for advertised RPC and HTTP addresses +`queue.consistency.pending` | Number of pending replicas in the consistency checker queue +`queue.consistency.process.failure` | Number of replicas which failed processing in the consistency checker queue +`queue.consistency.process.success` | Number of replicas successfully processed by the consistency checker queue +`queue.consistency.processingnanos` | Nanoseconds spent processing replicas in the consistency checker queue +`queue.gc.info.abortspanconsidered` | Number of AbortSpan entries old enough to be considered for removal +`queue.gc.info.abortspangcnum` | Number of AbortSpan entries fit for removal +`queue.gc.info.abortspanscanned` | Number of transactions present in the AbortSpan scanned from the engine +`queue.gc.info.intentsconsidered` | Number of 'old' intents +`queue.gc.info.intenttxns` | Number of associated distinct transactions +`queue.gc.info.numkeysaffected` | Number of keys with GC'able data +`queue.gc.info.pushtxn` | Number of attempted pushes +`queue.gc.info.resolvesuccess` | Number of successful intent resolutions +`queue.gc.info.resolvetotal` | Number of attempted intent resolutions +`queue.gc.info.transactionspangcaborted` | Number of GC'able entries corresponding to aborted txns +`queue.gc.info.transactionspangccommitted` | Number of GC'able entries corresponding to committed txns +`queue.gc.info.transactionspangcpending` | Number of GC'able entries corresponding to pending txns +`queue.gc.info.transactionspanscanned` | Number of entries in transaction spans scanned from the engine +`queue.gc.pending` | Number of pending replicas in the GC queue +`queue.gc.process.failure` | Number of replicas which failed processing in the GC queue +`queue.gc.process.success` | Number of replicas successfully processed by the GC queue +`queue.gc.processingnanos` | Nanoseconds spent processing replicas in the GC queue +`queue.raftlog.pending` | Number of pending replicas in the Raft log queue +`queue.raftlog.process.failure` | Number of replicas which failed processing in the Raft log queue +`queue.raftlog.process.success` | Number of replicas successfully processed by the Raft log queue +`queue.raftlog.processingnanos` | Nanoseconds spent processing replicas in the Raft log queue +`queue.raftsnapshot.pending` | Number of pending replicas in the Raft repair queue +`queue.raftsnapshot.process.failure` | Number of replicas which failed processing in the Raft repair queue +`queue.raftsnapshot.process.success` | Number of replicas successfully processed by the Raft repair queue +`queue.raftsnapshot.processingnanos` | Nanoseconds spent processing replicas in the Raft repair queue +`queue.replicagc.pending` | Number of pending replicas in the replica GC queue +`queue.replicagc.process.failure` | Number of replicas which failed processing in the replica GC queue +`queue.replicagc.process.success` | Number of replicas successfully processed by the replica GC queue +`queue.replicagc.processingnanos` | Nanoseconds spent processing replicas in the replica GC queue +`queue.replicagc.removereplica` | Number of replica removals attempted by the replica gc queue 
+`queue.replicate.addreplica` | Number of replica additions attempted by the replicate queue +`queue.replicate.pending` | Number of pending replicas in the replicate queue +`queue.replicate.process.failure` | Number of replicas which failed processing in the replicate queue +`queue.replicate.process.success` | Number of replicas successfully processed by the replicate queue +`queue.replicate.processingnanos` | Nanoseconds spent processing replicas in the replicate queue +`queue.replicate.purgatory` | Number of replicas in the replicate queue's purgatory, awaiting allocation options +`queue.replicate.rebalancereplica` | Number of replica rebalancer-initiated additions attempted by the replicate queue +`queue.replicate.removedeadreplica` | Number of dead replica removals attempted by the replicate queue (typically in response to a node outage) +`queue.replicate.removereplica` | Number of replica removals attempted by the replicate queue (typically in response to a rebalancer-initiated addition) +`queue.replicate.transferlease` | Number of range lease transfers attempted by the replicate queue +`queue.split.pending` | Number of pending replicas in the split queue +`queue.split.process.failure` | Number of replicas which failed processing in the split queue +`queue.split.process.success` | Number of replicas successfully processed by the split queue +`queue.split.processingnanos` | Nanoseconds spent processing replicas in the split queue +`queue.tsmaintenance.pending` | Number of pending replicas in the time series maintenance queue +`queue.tsmaintenance.process.failure` | Number of replicas which failed processing in the time series maintenance queue +`queue.tsmaintenance.process.success` | Number of replicas successfully processed by the time series maintenance queue +`queue.tsmaintenance.processingnanos` | Nanoseconds spent processing replicas in the time series maintenance queue +`raft.commandsapplied` | Count of Raft commands applied +`raft.enqueued.pending` | Number of pending outgoing messages in the Raft Transport queue +`raft.heartbeats.pending` | Number of pending heartbeats and responses waiting to be coalesced +`raft.process.commandcommit.latency` | Latency histogram in nanoseconds for committing Raft commands +`raft.process.logcommit.latency` | Latency histogram in nanoseconds for committing Raft log entries +`raft.process.tickingnanos` | Nanoseconds spent in store.processRaft() processing replica.Tick() +`raft.process.workingnanos` | Nanoseconds spent in store.processRaft() working +`raft.rcvd.app` | Number of MsgApp messages received by this store +`raft.rcvd.appresp` | Number of MsgAppResp messages received by this store +`raft.rcvd.dropped` | Number of dropped incoming Raft messages +`raft.rcvd.heartbeat` | Number of (coalesced, if enabled) MsgHeartbeat messages received by this store +`raft.rcvd.heartbeatresp` | Number of (coalesced, if enabled) MsgHeartbeatResp messages received by this store +`raft.rcvd.prevote` | Number of MsgPreVote messages received by this store +`raft.rcvd.prevoteresp` | Number of MsgPreVoteResp messages received by this store +`raft.rcvd.prop` | Number of MsgProp messages received by this store +`raft.rcvd.snap` | Number of MsgSnap messages received by this store +`raft.rcvd.timeoutnow` | Number of MsgTimeoutNow messages received by this store +`raft.rcvd.transferleader` | Number of MsgTransferLeader messages received by this store +`raft.rcvd.vote` | Number of MsgVote messages received by this store +`raft.rcvd.voteresp` | Number of MsgVoteResp 
messages received by this store +`raft.ticks` | Number of Raft ticks queued +`raftlog.behind` | Number of Raft log entries followers on other stores are behind +`raftlog.truncated` | Number of Raft log entries truncated +`range.adds` | Number of range additions +`range.raftleadertransfers` | Number of raft leader transfers +`range.removes` | Number of range removals +`range.snapshots.generated` | Number of generated snapshots +`range.snapshots.normal-applied` | Number of applied snapshots +`range.snapshots.preemptive-applied` | Number of applied pre-emptive snapshots +`range.splits` | Number of range splits +`ranges.unavailable` | Number of ranges with fewer live replicas than needed for quorum +`ranges.underreplicated` | Number of ranges with fewer live replicas than the replication target +`ranges` | Number of ranges +`rebalancing.writespersecond` | Number of keys written (i.e., applied by raft) per second to the store, averaged over a large time period as used in rebalancing decisions +`replicas.commandqueue.combinedqueuesize` | Number of commands in all CommandQueues combined +`replicas.commandqueue.combinedreadcount` | Number of read-only commands in all CommandQueues combined +`replicas.commandqueue.combinedwritecount` | Number of read-write commands in all CommandQueues combined +`replicas.commandqueue.maxoverlaps` | Largest number of overlapping commands seen when adding to any CommandQueue +`replicas.commandqueue.maxreadcount` | Largest number of read-only commands in any CommandQueue +`replicas.commandqueue.maxsize` | Largest number of commands in any CommandQueue +`replicas.commandqueue.maxtreesize` | Largest number of intervals in any CommandQueue's interval tree +`replicas.commandqueue.maxwritecount` | Largest number of read-write commands in any CommandQueue +`replicas.leaders_not_leaseholders` | Number of replicas that are Raft leaders whose range lease is held by another store +`replicas.leaders` | Number of raft leaders +`replicas.leaseholders` | Number of lease holders +`replicas.quiescent` | Number of quiesced replicas +`replicas.reserved` | Number of replicas reserved for snapshots +`replicas` | Number of replicas +`requests.backpressure.split` | Number of backpressured writes waiting on a Range split +`requests.slow.commandqueue` | Number of requests that have been stuck for a long time in the command queue +`requests.slow.distsender` | Number of requests that have been stuck for a long time in the dist sender +`requests.slow.lease` | Number of requests that have been stuck for a long time acquiring a lease +`requests.slow.raft` | Number of requests that have been stuck for a long time in raft +`rocksdb.block.cache.hits` | Count of block cache hits +`rocksdb.block.cache.misses` | Count of block cache misses +`rocksdb.block.cache.pinned-usage` | Bytes pinned by the block cache +`rocksdb.block.cache.usage` | Bytes used by the block cache +`rocksdb.bloom.filter.prefix.checked` | Number of times the bloom filter was checked +`rocksdb.bloom.filter.prefix.useful` | Number of times the bloom filter helped avoid iterator creation +`rocksdb.compactions` | Number of table compactions +`rocksdb.flushes` | Number of table flushes +`rocksdb.memtable.total-size` | Current size of memtable in bytes +`rocksdb.num-sstables` | Number of rocksdb SSTables +`rocksdb.read-amplification` | Number of disk reads per query +`rocksdb.table-readers-mem-estimate` | Memory used by index and filter blocks +`round-trip-latency` | Distribution of round-trip latencies with other nodes in nanoseconds 
+`security.certificate.expiration.ca` | Expiration timestamp in seconds since Unix epoch for the CA certificate. 0 means no certificate or error. +`security.certificate.expiration.node` | Expiration timestamp in seconds since Unix epoch for the node certificate. 0 means no certificate or error. +`sql.bytesin` | Number of sql bytes received +`sql.bytesout` | Number of sql bytes sent +`sql.conns` | Number of active sql connections +`sql.ddl.count` | Number of SQL DDL statements +`sql.delete.count` | Number of SQL DELETE statements +`sql.distsql.exec.latency` | Latency in nanoseconds of DistSQL statement execution +`sql.distsql.flows.active` | Number of distributed SQL flows currently active +`sql.distsql.flows.total` | Number of distributed SQL flows executed +`sql.distsql.queries.active` | Number of distributed SQL queries currently active +`sql.distsql.queries.total` | Number of distributed SQL queries executed +`sql.distsql.select.count` | Number of DistSQL SELECT statements +`sql.distsql.service.latency` | Latency in nanoseconds of DistSQL request execution +`sql.exec.latency` | Latency in nanoseconds of SQL statement execution +`sql.insert.count` | Number of SQL INSERT statements +`sql.mem.current` | Current sql statement memory usage +`sql.mem.distsql.current` | Current sql statement memory usage for distsql +`sql.mem.distsql.max` | Memory usage per sql statement for distsql +`sql.mem.max` | Memory usage per sql statement +`sql.mem.session.current` | Current sql session memory usage +`sql.mem.session.max` | Memory usage per sql session +`sql.mem.txn.current` | Current sql transaction memory usage +`sql.mem.txn.max` | Memory usage per sql transaction +`sql.misc.count` | Number of other SQL statements +`sql.query.count` | Number of SQL queries +`sql.select.count` | Number of SQL SELECT statements +`sql.service.latency` | Latency in nanoseconds of SQL request execution +`sql.txn.abort.count` | Number of SQL transaction ABORT statements +`sql.txn.begin.count` | Number of SQL transaction BEGIN statements +`sql.txn.commit.count` | Number of SQL transaction COMMIT statements +`sql.txn.rollback.count` | Number of SQL transaction ROLLBACK statements +`sql.update.count` | Number of SQL UPDATE statements +`sys.cgo.allocbytes` | Current bytes of memory allocated by cgo +`sys.cgo.totalbytes` | Total bytes of memory allocated by cgo, but not released +`sys.cgocalls` | Total number of cgo call +`sys.cpu.sys.ns` | Total system cpu time in nanoseconds +`sys.cpu.sys.percent` | Current system cpu percentage +`sys.cpu.user.ns` | Total user cpu time in nanoseconds +`sys.cpu.user.percent` | Current user cpu percentage +`sys.fd.open` | Process open file descriptors +`sys.fd.softlimit` | Process open FD soft limit +`sys.gc.count` | Total number of GC runs +`sys.gc.pause.ns` | Total GC pause in nanoseconds +`sys.gc.pause.percent` | Current GC pause percentage +`sys.go.allocbytes` | Current bytes of memory allocated by go +`sys.go.totalbytes` | Total bytes of memory allocated by go, but not released +`sys.goroutines` | Current number of goroutines +`sys.rss` | Current process RSS +`sys.uptime` | Process uptime in seconds +`sysbytes` | Number of bytes in system KV pairs +`syscount` | Count of system KV pairs +`timeseries.write.bytes` | Total size in bytes of metric samples written to disk +`timeseries.write.errors` | Total errors encountered while attempting to write metrics to disk +`timeseries.write.samples` | Total number of metric samples written to disk +`totalbytes` | Total number of bytes taken up by keys 
and values including non-live data +`tscache.skl.read.pages` | Number of pages in the read timestamp cache +`tscache.skl.read.rotations` | Number of page rotations in the read timestamp cache +`tscache.skl.write.pages` | Number of pages in the write timestamp cache +`tscache.skl.write.rotations` | Number of page rotations in the write timestamp cache +`txn.abandons` | Number of abandoned KV transactions +`txn.aborts` | Number of aborted KV transactions +`txn.autoretries` | Number of automatic retries to avoid serializable restarts +`txn.commits1PC` | Number of committed one-phase KV transactions +`txn.commits` | Number of committed KV transactions (including 1PC) +`txn.durations` | KV transaction durations in nanoseconds +`txn.restarts.deleterange` | Number of restarts due to a forwarded commit timestamp and a DeleteRange command +`txn.restarts.possiblereplay` | Number of restarts due to possible replays of command batches at the storage layer +`txn.restarts.serializable` | Number of restarts due to a forwarded commit timestamp and isolation=SERIALIZABLE +`txn.restarts.writetooold` | Number of restarts due to a concurrent writer committing first +`txn.restarts` | Number of restarted KV transactions +`valbytes` | Number of bytes taken up by values +`valcount` | Count of all values diff --git a/_includes/v19.1/orchestration/kubernetes-prometheus-alertmanager.md b/_includes/v19.1/orchestration/kubernetes-prometheus-alertmanager.md index 3a8532095e8..795ec32f5df 100644 --- a/_includes/v19.1/orchestration/kubernetes-prometheus-alertmanager.md +++ b/_includes/v19.1/orchestration/kubernetes-prometheus-alertmanager.md @@ -42,7 +42,8 @@ If you're on Hosted GKE, before starting, make sure the email address associated {% include copy-clipboard.html %} ~~~ shell - $ kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.20/bundle.yaml + $ kubectl apply \ + -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.20/bundle.yaml ~~~ ~~~ @@ -68,7 +69,8 @@ If you're on Hosted GKE, before starting, make sure the email address associated {% include copy-clipboard.html %} ~~~ shell - $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/prometheus.yaml + $ kubectl apply \ + -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/prometheus.yaml ~~~ ~~~ @@ -115,7 +117,8 @@ Active monitoring helps you spot problems early, but it is also essential to sen {% include copy-clipboard.html %} ~~~ shell - $ kubectl create secret generic alertmanager-cockroachdb --from-file=alertmanager.yaml=alertmanager-config.yaml + $ kubectl create secret generic alertmanager-cockroachdb \ + --from-file=alertmanager.yaml=alertmanager-config.yaml ~~~ ~~~ @@ -139,7 +142,8 @@ Active monitoring helps you spot problems early, but it is also essential to sen {% include copy-clipboard.html %} ~~~ shell - $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alertmanager.yaml + $ kubectl apply \ + -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alertmanager.yaml ~~~ ~~~ @@ -168,7 +172,8 @@ Active monitoring helps you spot problems early, but it is also essential to sen {% include copy-clipboard.html %} ~~~ shell - $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alert-rules.yaml + $ kubectl apply \ + -f 
https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alert-rules.yaml ~~~ ~~~ diff --git a/_includes/v19.1/orchestration/kubernetes-remove-nodes-insecure.md b/_includes/v19.1/orchestration/kubernetes-remove-nodes-insecure.md index 06cce9aff79..00c4f01c84b 100644 --- a/_includes/v19.1/orchestration/kubernetes-remove-nodes-insecure.md +++ b/_includes/v19.1/orchestration/kubernetes-remove-nodes-insecure.md @@ -1,4 +1,4 @@ -To safely remove a node from your cluster, you must first decommission the node and only then adjust the `--replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node. +To safely remove a node from your cluster, you must first decommission the node and only then adjust the `Replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node. {{site.data.alerts.callout_danger}} If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Decommission Nodes](remove-nodes.html). @@ -9,17 +9,22 @@ If you remove nodes without first telling CockroachDB to decommission them, you
{% include copy-clipboard.html %} ~~~ shell - $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \ - -- node status --insecure --host=cockroachdb-public + $ kubectl run cockroachdb -it \ + --image=cockroachdb/cockroach:{{page.release_info.version}} \ + --rm \ + --restart=Never \ + -- node status \ + --insecure \ + --host=cockroachdb-public ~~~ ~~~ id | address | build | started_at | updated_at | is_available | is_live +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true + 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true + 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true + 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true + 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true (4 rows) ~~~ @@ -28,17 +33,22 @@ If you remove nodes without first telling CockroachDB to decommission them, you
{% include copy-clipboard.html %} ~~~ shell - $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \ - -- node status --insecure --host=my-release-cockroachdb-public + $ kubectl run cockroachdb -it \ + --image=cockroachdb/cockroach:{{page.release_info.version}} \ + --rm \ + --restart=Never \ + -- node status \ + --insecure \ + --host=my-release-cockroachdb-public ~~~ ~~~ id | address | build | started_at | updated_at | is_available | is_live +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true + 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true + 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true + 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true + 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true (4 rows) ~~~
@@ -52,16 +62,26 @@ If you remove nodes without first telling CockroachDB to decommission them, you
{% include copy-clipboard.html %} ~~~ shell - $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \ - -- node decommission --insecure --host=cockroachdb-public + $ kubectl run cockroachdb -it \ + --image=cockroachdb/cockroach:{{page.release_info.version}} \ + --rm \ + --restart=Never \ + -- node decommission \ + --insecure \ + --host=cockroachdb-public ~~~
{% include copy-clipboard.html %} ~~~ shell - $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \ - -- node decommission --insecure --host=my-release-cockroachdb-public + $ kubectl run cockroachdb -it \ + --image=cockroachdb/cockroach:{{page.release_info.version}} \ + --rm \ + --restart=Never \ + -- node decommission \ + --insecure \ + --host=my-release-cockroachdb-public ~~~
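+
+ If you want to inspect per-node decommissioning status yourself, `cockroach node status` accepts a `--decommission` flag that adds the relevant columns. A minimal sketch (the pod name `cockroachdb-status` is arbitrary; substitute `my-release-cockroachdb-public` as the host for a Helm deployment):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl run cockroachdb-status -it \
+ --image=cockroachdb/cockroach:{{page.release_info.version}} \
+ --rm \
+ --restart=Never \
+ -- node status \
+ --decommission \
+ --insecure \
+ --host=cockroachdb-public
+ ~~~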
@@ -85,7 +105,7 @@ If you remove nodes without first telling CockroachDB to decommission them, you No more data reported on target nodes. Please verify cluster health before removing the nodes. ~~~ -3. Once the node has been decommissioned, use the `kubectl scale` command to remove a pod from your StatefulSet: +3. Once the node has been decommissioned, remove a pod from your StatefulSet:
{% include copy-clipboard.html %} @@ -101,10 +121,10 @@ If you remove nodes without first telling CockroachDB to decommission them, you
{% include copy-clipboard.html %} ~~~ shell - $ kubectl scale statefulset my-release-cockroachdb --replicas=3 - ~~~ - - ~~~ - statefulset "my-release-cockroachdb" scaled + $ helm upgrade \ + my-release \ + stable/cockroachdb \ + --set Replicas=3 \ + --reuse-values ~~~
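+
+ Once the upgrade is applied, the StatefulSet controller deletes the highest-numbered pod, which is the one you just decommissioned. If you want to confirm this, list the pods again; the decommissioned pod should no longer appear (a quick check, assuming the `my-release` name used above):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~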
diff --git a/_includes/v19.1/orchestration/kubernetes-remove-nodes-secure.md b/_includes/v19.1/orchestration/kubernetes-remove-nodes-secure.md index adf42307280..2d98eefbfef 100644 --- a/_includes/v19.1/orchestration/kubernetes-remove-nodes-secure.md +++ b/_includes/v19.1/orchestration/kubernetes-remove-nodes-secure.md @@ -1,4 +1,4 @@ -To safely remove a node from your cluster, you must first decommission the node and only then adjust the `--replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node. +To safely remove a node from your cluster, you must first decommission the node and only then adjust the `Replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node. {{site.data.alerts.callout_danger}} If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Decommission Nodes](remove-nodes.html). @@ -9,16 +9,19 @@ If you remove nodes without first telling CockroachDB to decommission them, you
{% include copy-clipboard.html %} ~~~ shell - $ kubectl exec -it cockroachdb-client-secure -- ./cockroach node status --certs-dir=/cockroach-certs --host=cockroachdb-public + $ kubectl exec -it cockroachdb-client-secure \ + -- ./cockroach node status \ + --certs-dir=/cockroach-certs \ + --host=cockroachdb-public ~~~ ~~~ id | address | build | started_at | updated_at | is_available | is_live +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true + 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true + 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true + 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true + 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true (4 rows) ~~~
@@ -26,16 +29,19 @@ If you remove nodes without first telling CockroachDB to decommission them, you
{% include copy-clipboard.html %} ~~~ shell - $ kubectl exec -it cockroachdb-client-secure -- ./cockroach node status --certs-dir=/cockroach-certs --host=my-release-cockroachdb-public + $ kubectl exec -it cockroachdb-client-secure \ + -- ./cockroach node status \ + --certs-dir=/cockroach-certs \ + --host=my-release-cockroachdb-public ~~~ ~~~ id | address | build | started_at | updated_at | is_available | is_live +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true + 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true + 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true + 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true + 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true (4 rows) ~~~
@@ -51,14 +57,20 @@ If you remove nodes without first telling CockroachDB to decommission them, you
{% include copy-clipboard.html %} ~~~ shell - $ kubectl exec -it cockroachdb-client-secure -- ./cockroach node decommission --insecure --host=cockroachdb-public + $ kubectl exec -it cockroachdb-client-secure \ + -- ./cockroach node decommission \ + --certs-dir=/cockroach-certs \ + --host=cockroachdb-public ~~~
{% include copy-clipboard.html %} ~~~ shell - $ kubectl exec -it cockroachdb-client-secure -- ./cockroach node decommission --insecure --host=my-release-cockroachdb-public + $ kubectl exec -it cockroachdb-client-secure \ + -- ./cockroach node decommission \ + --certs-dir=/cockroach-certs \ + --host=my-release-cockroachdb-public ~~~
@@ -98,10 +110,10 @@ If you remove nodes without first telling CockroachDB to decommission them, you
{% include copy-clipboard.html %} ~~~ shell - $ kubectl scale statefulset my-release-cockroachdb --replicas=3 - ~~~ - - ~~~ - statefulset "my-release-cockroachdb" scaled + $ helm upgrade \ + my-release \ + stable/cockroachdb \ + --set Replicas=3 \ + --reuse-values ~~~
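+
+ If you want to verify that the release now records three replicas, `helm get values` prints the user-supplied values for the release (a quick check, assuming the `my-release` name used above):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm get values my-release
+ ~~~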
diff --git a/_includes/v19.1/orchestration/kubernetes-scale-cluster.md b/_includes/v19.1/orchestration/kubernetes-scale-cluster.md index 61df086548b..82ad56a3f5d 100644 --- a/_includes/v19.1/orchestration/kubernetes-scale-cluster.md +++ b/_includes/v19.1/orchestration/kubernetes-scale-cluster.md @@ -1,12 +1,11 @@ -The Kubernetes cluster contains 4 nodes, one master and 3 workers. Pods get placed only on worker nodes, so to ensure that you do not have two pods on the same node (as recommended in our [production best practices](recommended-production-settings.html)), you need to add a new worker node and then edit your StatefulSet configuration to add another pod. -The Kubernetes cluster we created contains 3 nodes that pods can be run on. To ensure that you do not have two pods on the same node (as recommended in our [production best practices](recommended-production-settings.html)), you need to add a new node and then edit your StatefulSet configuration to add another pod. +The Kubernetes cluster contains 4 nodes, one master and 3 workers. Pods get placed only on worker nodes, so to ensure that you do not have two pods on the same node (as recommended in our [production best practices](recommended-production-settings.html)), you need to add a new worker node and then edit your StatefulSet configuration to add another pod for the new CockroachDB node. 1. Add a worker node: - On GKE, [resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster). - On GCE, resize your [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/). - On AWS, resize your [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html). -2. Use the `kubectl scale` command to add a pod to your StatefulSet: +2. Add a pod for the new CockroachDB node:
{% include copy-clipboard.html %} @@ -22,10 +21,33 @@ The Kubernetes cluster we created contains 3 nodes that pods can be run on. To e
{% include copy-clipboard.html %} ~~~ shell - $ kubectl scale statefulset my-release-cockroachdb --replicas=4 + $ helm upgrade \ + my-release \ + stable/cockroachdb \ + --set Replicas=4 \ + --reuse-values ~~~ ~~~ - statefulset "my-release-cockroachdb" scaled + Release "my-release" has been upgraded. Happy Helming! + LAST DEPLOYED: Tue May 14 14:06:43 2019 + NAMESPACE: default + STATUS: DEPLOYED + + RESOURCES: + ==> v1beta1/PodDisruptionBudget + NAME AGE + my-release-cockroachdb-budget 51m + + ==> v1/Pod(related) + + NAME READY STATUS RESTARTS AGE + my-release-cockroachdb-0 1/1 Running 0 38m + my-release-cockroachdb-1 1/1 Running 0 39m + my-release-cockroachdb-2 1/1 Running 0 39m + my-release-cockroachdb-3 0/1 Pending 0 0s + my-release-cockroachdb-init-nwjkh 0/1 Completed 0 39m + + ... ~~~
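+
+ The new pod starts out `Pending` while a persistent volume is provisioned for it; once it is `Running`, the new CockroachDB node has joined the cluster. To follow its progress, you can watch it directly (a sketch, assuming the pod name shown in the output above):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pod my-release-cockroachdb-3 --watch
+ ~~~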
diff --git a/_includes/v19.1/orchestration/kubernetes-upgrade-cluster.md b/_includes/v19.1/orchestration/kubernetes-upgrade-cluster.md index 858274a51fc..5788cc05e38 100644 --- a/_includes/v19.1/orchestration/kubernetes-upgrade-cluster.md +++ b/_includes/v19.1/orchestration/kubernetes-upgrade-cluster.md @@ -19,14 +19,19 @@ Kubernetes knows how to carry out a safe rolling upgrade process of the Cockroac
{% include copy-clipboard.html %} ~~~ shell - $ kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public + $ kubectl exec -it cockroachdb-client-secure \ + -- ./cockroach sql \ + --certs-dir=/cockroach-certs \ + --host=cockroachdb-public ~~~
{% include copy-clipboard.html %} ~~~ shell - $ kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=my-release-cockroachdb-public + $ kubectl exec -it cockroachdb-client-secure \ + -- ./cockroach sql \ + --certs-dir=/cockroach-certs \ + --host=my-release-cockroachdb-public ~~~
@@ -38,16 +43,26 @@ Kubernetes knows how to carry out a safe rolling upgrade process of the Cockroac
{% include copy-clipboard.html %} ~~~ shell - $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \ - -- sql --insecure --host=cockroachdb-public + $ kubectl run cockroachdb -it \ + --image=cockroachdb/cockroach \ + --rm \ + --restart=Never \ + -- sql \ + --insecure \ + --host=cockroachdb-public ~~~
{% include copy-clipboard.html %} ~~~ shell - $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \ - -- sql --insecure --host=my-release-cockroachdb-public + $ kubectl run cockroachdb -it \ + --image=cockroachdb/cockroach \ + --rm \ + --restart=Never \ + -- sql \ + --insecure \ + --host=my-release-cockroachdb-public ~~~
@@ -57,15 +72,24 @@ Kubernetes knows how to carry out a safe rolling upgrade process of the Cockroac {% include copy-clipboard.html %} ~~~ sql - > SET CLUSTER SETTING cluster.preserve_downgrade_option = '2.0'; + > SET CLUSTER SETTING cluster.preserve_downgrade_option = '2.1'; ~~~ -2. Kick off the upgrade process by changing the desired Docker image. To do so, pick the version that you want to upgrade to, then run the following command, replacing "VERSION" with your desired new version: + 3. Exit the SQL shell and delete the temporary pod: + + {% include copy-clipboard.html %} + ~~~ sql + > \q + ~~~ + +2. Kick off the upgrade process by changing the desired Docker image:
{% include copy-clipboard.html %} ~~~ shell - $ kubectl patch statefulset cockroachdb --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:VERSION"}]' + $ kubectl patch statefulset cockroachdb \ + --type='json' \ + -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:v19.1.0"}]' ~~~ ~~~ @@ -74,17 +98,27 @@ Kubernetes knows how to carry out a safe rolling upgrade process of the Cockroac
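+
+ Before the pods begin restarting, you can confirm that the patch took effect by reading the image back out of the StatefulSet spec. This is a sketch using the same `jsonpath` approach as the per-pod check later in this procedure (substitute `my-release-cockroachdb` for a Helm deployment):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get statefulset cockroachdb \
+ -o jsonpath='{.spec.template.spec.containers[0].image}'
+ ~~~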
+ + {{site.data.alerts.callout_info}} + For Helm, you must remove the cluster initialization job from when the cluster was created before the cluster version can be changed. + {{site.data.alerts.end}} + {% include copy-clipboard.html %} ~~~ shell - $ kubectl patch statefulset my-release-cockroachdb --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:VERSION"}]' + $ kubectl delete job my-release-cockroachdb-init ~~~ - ~~~ - statefulset "my-release0-cockroachdb" patched + {% include copy-clipboard.html %} + ~~~ shell + $ helm upgrade \ + my-release \ + stable/cockroachdb \ + --set ImageTag=v19.1.0 \ + --reuse-values ~~~
-3. If you then check the status of your cluster's pods, you should see one of them being restarted: +3. If you then check the status of your cluster's pods, you should see them being restarted: {% include copy-clipboard.html %} ~~~ shell @@ -103,19 +137,25 @@ Kubernetes knows how to carry out a safe rolling upgrade process of the Cockroac
~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 2m - my-release-cockroachdb-1 1/1 Running 0 2m - my-release-cockroachdb-2 1/1 Running 0 2m - my-release-cockroachdb-3 0/1 Terminating 0 1m + NAME READY STATUS RESTARTS AGE + my-release-cockroachdb-0 1/1 Running 0 2m + my-release-cockroachdb-1 1/1 Running 0 3m + my-release-cockroachdb-2 1/1 Running 0 3m + my-release-cockroachdb-3 0/1 ContainerCreating 0 25s + my-release-cockroachdb-init-nwjkh 0/1 ContainerCreating 0 6s ~~~ + + {{site.data.alerts.callout_info}} + Ignore the pod for cluster initialization. It is re-created as a byproduct of the StatefulSet configuration but does not impact your existing cluster. + {{site.data.alerts.end}}
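+
+ Instead of polling `kubectl get pods`, you can also block until the rolling restart has finished. This is a sketch that assumes the chart's StatefulSet uses the `RollingUpdate` update strategy (the Kubernetes default):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl rollout status statefulset my-release-cockroachdb
+ ~~~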
4. This will continue until all of the pods have restarted and are running the new image. To check the image of each pod to determine whether they've all be upgraded, run: {% include copy-clipboard.html %} ~~~ shell - $ kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}' + $ kubectl get pods \ + -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}' ~~~
@@ -136,9 +176,15 @@ Kubernetes knows how to carry out a safe rolling upgrade process of the Cockroac ~~~
+ You can also check the CockroachDB version of each node in the Admin UI: + + Version in UI after upgrade + 5. Finish the upgrade. - {{site.data.alerts.callout_info}}This step is relevant only when upgrading from v2.1.x to v19.1. For upgrades within the v19.1.x series, skip this step.{{site.data.alerts.end}} + {{site.data.alerts.callout_info}} + This step is relevant only when upgrading from v2.1.x to v19.1. For upgrades within the v19.1.x series, skip this step. + {{site.data.alerts.end}} If you disabled auto-finalization in step 1 above, monitor the stability and performance of your cluster for as long as you require to feel comfortable with the upgrade (generally at least a day). If during this time you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary. @@ -151,14 +197,20 @@ Kubernetes knows how to carry out a safe rolling upgrade process of the Cockroac
{% include copy-clipboard.html %} ~~~ shell - $ kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public + $ kubectl exec -it cockroachdb-client-secure \ + -- ./cockroach sql \ + --certs-dir=/cockroach-certs \ + --host=cockroachdb-public ~~~
{% include copy-clipboard.html %} ~~~ shell - $ kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=my-release-cockroachdb-public + $ kubectl exec -it cockroachdb-client-secure \ + -- ./cockroach sql \ + --certs-dir=/cockroach-certs \ + --host=my-release-cockroachdb-public ~~~
@@ -169,16 +221,26 @@ Kubernetes knows how to carry out a safe rolling upgrade process of the Cockroac
{% include copy-clipboard.html %} ~~~ shell - $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \ - -- sql --insecure --host=cockroachdb-public + $ kubectl run cockroachdb -it \ + --image=cockroachdb/cockroach \ + --rm \ + --restart=Never \ + -- sql \ + --insecure \ + --host=cockroachdb-public ~~~
{% include copy-clipboard.html %} ~~~ shell - $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \ - -- sql --insecure --host=my-release-cockroachdb-public + $ kubectl run cockroachdb -it \ + --image=cockroachdb/cockroach \ + --rm \ + --restart=Never \ + -- sql \ + --insecure \ + --host=my-release-cockroachdb-public ~~~
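+
+ After you run the `RESET CLUSTER SETTING` statement in the next step, you can confirm from the same SQL shell that the upgrade has finalized by checking the active cluster version; it reports the new version once finalization completes:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > SHOW CLUSTER SETTING version;
+ ~~~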
@@ -190,3 +252,10 @@ Kubernetes knows how to carry out a safe rolling upgrade process of the Cockroac ~~~ sql > RESET CLUSTER SETTING cluster.preserve_downgrade_option; ~~~ + + 3. Exit the SQL shell and delete the temporary pod: + + {% include copy-clipboard.html %} + ~~~ sql + > \q + ~~~ diff --git a/_includes/v19.1/orchestration/start-cockroachdb-helm-insecure.md b/_includes/v19.1/orchestration/start-cockroachdb-helm-insecure.md index 85123dc02e2..3270e8e38af 100644 --- a/_includes/v19.1/orchestration/start-cockroachdb-helm-insecure.md +++ b/_includes/v19.1/orchestration/start-cockroachdb-helm-insecure.md @@ -64,7 +64,7 @@ You can customize your deployment by passing [configuration parameters](https://github.com/helm/charts/tree/master/stable/cockroachdb#configuration) to `helm install` using the `--set key=value[,key=value]` flag. For a production cluster, you should consider modifying the `Storage` and `StorageClass` parameters. This chart defaults to 100 GiB of disk space per pod, but you may want more or less depending on your use case, and the default persistent volume `StorageClass` in your environment may not be what you want for a database (e.g., on GCE and Azure the default is not SSD). {{site.data.alerts.end}} -4. Confirm that three pods are `Running` successfully: +4. Confirm that three pods are `Running` successfully and that the one-time cluster initialization has `Completed`: {% include copy-clipboard.html %} ~~~ shell @@ -72,10 +72,11 @@ ~~~ ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 48s - my-release-cockroachdb-1 1/1 Running 0 47s - my-release-cockroachdb-2 1/1 Running 0 47s + NAME READY STATUS RESTARTS AGE + my-release-cockroachdb-0 1/1 Running 0 1m + my-release-cockroachdb-1 1/1 Running 0 1m + my-release-cockroachdb-2 1/1 Running 0 1m + my-release-cockroachdb-init-k6jcr 0/1 Completed 0 1m ~~~ 5. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: diff --git a/_includes/v19.1/orchestration/start-cockroachdb-helm-secure.md b/_includes/v19.1/orchestration/start-cockroachdb-helm-secure.md index 6dd954db9d9..cf121e07e75 100644 --- a/_includes/v19.1/orchestration/start-cockroachdb-helm-secure.md +++ b/_includes/v19.1/orchestration/start-cockroachdb-helm-secure.md @@ -151,7 +151,7 @@ certificatesigningrequest "default.client.root" approved ~~~ -7. Confirm that cluster initialization has completed successfully, with each pod showing `1/1` under `READY`: +7. Confirm that cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`: {% include copy-clipboard.html %} ~~~ shell @@ -159,10 +159,11 @@ ~~~ ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 8m - my-release-cockroachdb-1 1/1 Running 0 8m - my-release-cockroachdb-2 1/1 Running 0 8m + NAME READY STATUS RESTARTS AGE + my-release-cockroachdb-0 1/1 Running 0 8m + my-release-cockroachdb-1 1/1 Running 0 8m + my-release-cockroachdb-2 1/1 Running 0 8m + my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h ~~~ 8. 
Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: diff --git a/_includes/v19.1/orchestration/start-cockroachdb-insecure.md b/_includes/v19.1/orchestration/start-cockroachdb-insecure.md index aeab8b2e9e3..b96fb803d46 100644 --- a/_includes/v19.1/orchestration/start-cockroachdb-insecure.md +++ b/_includes/v19.1/orchestration/start-cockroachdb-insecure.md @@ -2,7 +2,8 @@ {% include copy-clipboard.html %} ~~~ shell - $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml + $ kubectl create \ + -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml ~~~ ~~~ @@ -63,7 +64,8 @@ {% include copy-clipboard.html %} ~~~ shell - $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml + $ kubectl create \ + -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml ~~~ ~~~ diff --git a/_includes/v19.1/orchestration/start-cockroachdb-secure.md b/_includes/v19.1/orchestration/start-cockroachdb-secure.md index 0231d5a2e38..e3b6a2ae9b7 100644 --- a/_includes/v19.1/orchestration/start-cockroachdb-secure.md +++ b/_includes/v19.1/orchestration/start-cockroachdb-secure.md @@ -6,7 +6,8 @@ If you want to use a different certificate authority than the one Kubernetes use {% include copy-clipboard.html %} ~~~ shell - $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml + $ kubectl create \ + -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml ~~~ ~~~ @@ -133,7 +134,8 @@ If you want to use a different certificate authority than the one Kubernetes use {% include copy-clipboard.html %} ~~~ shell - $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml + $ kubectl create \ + -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml ~~~ ~~~ diff --git a/_includes/v19.1/orchestration/start-kubernetes.md b/_includes/v19.1/orchestration/start-kubernetes.md index 8efc14b8414..2cca3ca0bb5 100644 --- a/_includes/v19.1/orchestration/start-kubernetes.md +++ b/_includes/v19.1/orchestration/start-kubernetes.md @@ -46,7 +46,9 @@ Choose whether you want to orchestrate CockroachDB with Kubernetes using the hos {% include copy-clipboard.html %} ~~~ shell - $ kubectl create clusterrolebinding $USER-cluster-admin-binding --clusterrole=cluster-admin --user= + $ kubectl create clusterrolebinding $USER-cluster-admin-binding \ + --clusterrole=cluster-admin \ + --user= ~~~ ~~~ diff --git a/_includes/v19.1/orchestration/test-cluster-insecure.md b/_includes/v19.1/orchestration/test-cluster-insecure.md index e0758f4ded3..fabe390fc1e 100644 --- a/_includes/v19.1/orchestration/test-cluster-insecure.md +++ b/_includes/v19.1/orchestration/test-cluster-insecure.md @@ -3,16 +3,26 @@
{% include copy-clipboard.html %} ~~~ shell - $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \ - -- sql --insecure --host=cockroachdb-public + $ kubectl run cockroachdb -it \ + --image=cockroachdb/cockroach:{{page.release_info.version}} \ + --rm \ + --restart=Never \ + -- sql \ + --insecure \ + --host=cockroachdb-public ~~~
{% include copy-clipboard.html %} ~~~ shell - $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \ - -- sql --insecure --host=my-release-cockroachdb-public + $ kubectl run cockroachdb -it \ + --image=cockroachdb/cockroach:{{page.release_info.version}} \ + --rm \ + --restart=Never \ + -- sql \ + --insecure \ + --host=my-release-cockroachdb-public ~~~
diff --git a/_includes/v19.1/orchestration/test-cluster-secure.md b/_includes/v19.1/orchestration/test-cluster-secure.md index 1d57b929fee..9094717bed8 100644 --- a/_includes/v19.1/orchestration/test-cluster-secure.md +++ b/_includes/v19.1/orchestration/test-cluster-secure.md @@ -5,7 +5,8 @@ To use the built-in SQL client, you need to launch a pod that runs indefinitely {% include copy-clipboard.html %} ~~~ shell - $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml + $ kubectl create \ + -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml ~~~ ~~~ @@ -18,7 +19,10 @@ To use the built-in SQL client, you need to launch a pod that runs indefinitely {% include copy-clipboard.html %} ~~~ shell - $ kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public + $ kubectl exec -it cockroachdb-client-secure \ + -- ./cockroach sql \ + --certs-dir=/cockroach-certs \ + --host=cockroachdb-public ~~~ ~~~ @@ -26,12 +30,14 @@ To use the built-in SQL client, you need to launch a pod that runs indefinitely # All statements must be terminated by a semicolon. # To exit: CTRL + D. # - # Server version: CockroachDB CCL v1.1.2 (linux amd64, built 2017/11/02 19:32:03, go1.8.3) (same version as client) - # Cluster ID: 3292fe08-939f-4638-b8dd-848074611dba + # Client version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) + # Server version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) + + # Cluster ID: 256a8705-e348-4e3a-ab12-e1aba96857e4 # # Enter \? for a brief introduction. # - root@cockroachdb-public:26257/> + root@cockroachdb-public:26257/defaultdb> ~~~ 3. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): @@ -57,11 +63,9 @@ To use the built-in SQL client, you need to launch a pod that runs indefinitely ~~~ ~~~ + id | balance +----+---------+ - | id | balance | - +----+---------+ - | 1 | 1000.5 | - +----+---------+ + 1 | 1000.50 (1 row) ~~~ @@ -112,7 +116,10 @@ To use the built-in SQL client, you need to launch a pod that runs indefinitely {% include copy-clipboard.html %} ~~~ shell - $ kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=my-release-cockroachdb-public + $ kubectl exec -it cockroachdb-client-secure \ + -- ./cockroach sql \ + --certs-dir=/cockroach-certs \ + --host=my-release-cockroachdb-public ~~~ ~~~ @@ -120,12 +127,14 @@ To use the built-in SQL client, you need to launch a pod that runs indefinitely # All statements must be terminated by a semicolon. # To exit: CTRL + D. # - # Server version: CockroachDB CCL v1.1.2 (linux amd64, built 2017/11/02 19:32:03, go1.8.3) (same version as client) - # Cluster ID: 3292fe08-939f-4638-b8dd-848074611dba + # Client version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) + # Server version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) + + # Cluster ID: 256a8705-e348-4e3a-ab12-e1aba96857e4 # # Enter \? for a brief introduction. # - root@my-release-cockroachdb-public:26257/> + root@my-release-cockroachdb-public:26257/defaultdb> ~~~ 3. 
Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): @@ -151,11 +160,9 @@ To use the built-in SQL client, you need to launch a pod that runs indefinitely ~~~ ~~~ + id | balance +----+---------+ - | id | balance | - +----+---------+ - | 1 | 1000.5 | - +----+---------+ + 1 | 1000.50 (1 row) ~~~ diff --git a/images/v19.1/kubernetes-upgrade.png b/images/v19.1/kubernetes-upgrade.png new file mode 100644 index 00000000000..497559cef73 Binary files /dev/null and b/images/v19.1/kubernetes-upgrade.png differ