diff --git a/site/content/3.10/deploy/oneshard.md b/site/content/3.10/deploy/oneshard.md
index f6f9ee6dc2..00800087ae 100644
--- a/site/content/3.10/deploy/oneshard.md
+++ b/site/content/3.10/deploy/oneshard.md
@@ -10,14 +10,20 @@ archetype: default
 {{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
 
 The OneShard option for ArangoDB clusters restricts all collections of a
-database to a single shard and places them on one DB-Server node. This way,
-whole queries can be pushed to and executed on that server, massively reducing
-cluster-internal communication. The Coordinator only gets back the final result.
+database to a single shard so that every collection has `numberOfShards` set to `1`,
+and all leader shards are placed on one DB-Server node. This way, whole queries
+can be pushed to and executed on that server, massively reducing cluster-internal
+communication. The Coordinator only gets back the final result.
 
 Queries are always limited to a single database, and with the data of a whole
 database on a single node, the OneShard option allows running transactions with
 ACID guarantees on shard leaders.
 
+Collections can have replicas by setting a `replicationFactor` greater than `1`
+as usual. For each replica, the follower shards are all placed on one DB-Server
+node when using the OneShard option. This allows for a quick failover in case
+the DB-Server with the leader shards fails.
+
 A OneShard setup is highly recommended for most graph use cases and join-heavy
 queries.
 
diff --git a/site/content/3.11/deploy/oneshard.md b/site/content/3.11/deploy/oneshard.md
index f6f9ee6dc2..00800087ae 100644
--- a/site/content/3.11/deploy/oneshard.md
+++ b/site/content/3.11/deploy/oneshard.md
@@ -10,14 +10,20 @@ archetype: default
 {{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
 
 The OneShard option for ArangoDB clusters restricts all collections of a
-database to a single shard and places them on one DB-Server node. This way,
-whole queries can be pushed to and executed on that server, massively reducing
-cluster-internal communication. The Coordinator only gets back the final result.
+database to a single shard so that every collection has `numberOfShards` set to `1`,
+and all leader shards are placed on one DB-Server node. This way, whole queries
+can be pushed to and executed on that server, massively reducing cluster-internal
+communication. The Coordinator only gets back the final result.
 
 Queries are always limited to a single database, and with the data of a whole
 database on a single node, the OneShard option allows running transactions with
 ACID guarantees on shard leaders.
 
+Collections can have replicas by setting a `replicationFactor` greater than `1`
+as usual. For each replica, the follower shards are all placed on one DB-Server
+node when using the OneShard option. This allows for a quick failover in case
+the DB-Server with the leader shards fails.
+
 A OneShard setup is highly recommended for most graph use cases and join-heavy
 queries.
 
diff --git a/site/content/3.12/deploy/oneshard.md b/site/content/3.12/deploy/oneshard.md
index f6f9ee6dc2..00800087ae 100644
--- a/site/content/3.12/deploy/oneshard.md
+++ b/site/content/3.12/deploy/oneshard.md
@@ -10,14 +10,20 @@ archetype: default
 {{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
 
 The OneShard option for ArangoDB clusters restricts all collections of a
-database to a single shard and places them on one DB-Server node. This way,
-whole queries can be pushed to and executed on that server, massively reducing
-cluster-internal communication. The Coordinator only gets back the final result.
+database to a single shard so that every collection has `numberOfShards` set to `1`,
+and all leader shards are placed on one DB-Server node. This way, whole queries
+can be pushed to and executed on that server, massively reducing cluster-internal
+communication. The Coordinator only gets back the final result.
 
 Queries are always limited to a single database, and with the data of a whole
 database on a single node, the OneShard option allows running transactions with
 ACID guarantees on shard leaders.
 
+Collections can have replicas by setting a `replicationFactor` greater than `1`
+as usual. For each replica, the follower shards are all placed on one DB-Server
+node when using the OneShard option. This allows for a quick failover in case
+the DB-Server with the leader shards fails.
+
 A OneShard setup is highly recommended for most graph use cases and join-heavy
 queries.
 
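As a sketch of what the added prose describes, a OneShard database with replicated shards can be created via ArangoDB's `POST /_api/database` endpoint by setting `sharding` to `"single"` and a `replicationFactor` greater than `1` in the `options` object (the database name `shop` and the factor of `2` are illustrative assumptions, not taken from the patch):

```json
{
  "name": "shop",
  "options": {
    "sharding": "single",
    "replicationFactor": 2
  }
}
```

With `"sharding": "single"`, every collection later created in `shop` gets `numberOfShards` fixed to `1`; the `replicationFactor` of `2` adds one follower copy of each shard, placed together on a second DB-Server for quick failover as described in the diff.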