diff --git a/modules/ROOT/content-nav.adoc b/modules/ROOT/content-nav.adoc
index 649c9410d..5b768f600 100644
--- a/modules/ROOT/content-nav.adoc
+++ b/modules/ROOT/content-nav.adoc
@@ -130,25 +130,8 @@
*** xref:database-administration/aliases/manage-aliases-standard-databases.adoc[]
*** xref:database-administration/aliases/manage-aliases-composite-databases.adoc[]
*** xref:database-administration/aliases/remote-database-alias-configuration.adoc[]
-** Composite databases
-*** xref:database-administration/composite-databases/concepts.adoc[]
-*** xref:database-administration/composite-databases/create-composite-databases.adoc[]
-*** xref:database-administration/composite-databases/list-composite-databases.adoc[]
-*** xref:database-administration/composite-databases/alter-composite-databases.adoc[]
-*** xref:database-administration/composite-databases/start-stop-composite-databases.adoc[]
-*** xref:database-administration/composite-databases/delete-composite-databases.adoc[]
-*** xref:database-administration/composite-databases/querying-composite-databases.adoc[]
-*** xref:database-administration/composite-databases/sharding-with-copy.adoc[]
** xref:database-administration/routing-decisions.adoc[]
-* xref:database-internals/index.adoc[]
-** xref:database-internals/transaction-management.adoc[]
-** xref:database-internals/concurrent-data-access.adoc[]
-** xref:database-internals/transaction-logs.adoc[]
-** xref:database-internals/checkpointing.adoc[]
-** xref:database-internals/store-formats.adoc[]
-** xref:database-internals/neo4j-admin-store-info.adoc[]
-
* xref:clustering/index.adoc[]
** xref:clustering/introduction.adoc[]
** Setting up a cluster
@@ -177,6 +160,41 @@
** xref:clustering/server-syntax.adoc[]
** xref:clustering/glossary.adoc[]
+* Scalability
+** xref:scalability/concepts.adoc[]
+** xref:scalability/scaling-with-neo4j.adoc[]
+** Composite databases
+*** xref:scalability/composite-databases/concepts.adoc[]
+*** xref:scalability/composite-databases/create-composite-databases.adoc[]
+*** xref:scalability/composite-databases/list-composite-databases.adoc[]
+*** xref:scalability/composite-databases/alter-composite-databases.adoc[]
+*** xref:scalability/composite-databases/start-stop-composite-databases.adoc[]
+*** xref:scalability/composite-databases/delete-composite-databases.adoc[]
+*** xref:scalability/composite-databases/querying-composite-databases.adoc[]
+*** xref:scalability/composite-databases/sharding-with-copy.adoc[]
+//*** xref:scalability/composite-databases/scaling-with-composite-databases.adoc[]
+** Property sharding (Preview feature)
+*** xref:scalability/sharded-property-databases/overview.adoc[]
+*** xref:scalability/sharded-property-databases/planning-and-sizing.adoc[]
+*** xref:scalability/sharded-property-databases/configuration.adoc[]
+*** xref:scalability/sharded-property-databases/data-ingestion.adoc[]
+*** xref:scalability/sharded-property-databases/starting-stopping-sharded-databases.adoc[]
+*** xref:scalability/sharded-property-databases/listing-sharded-databases.adoc[]
+*** xref:scalability/sharded-property-databases/altering-sharded-databases.adoc[]
+*** xref:scalability/sharded-property-databases/deleting-sharded-databases.adoc[]
+*** xref:scalability/sharded-property-databases/role-based-access-control.adoc[]
+*** xref:scalability/sharded-property-databases/admin-operations.adoc[]
+*** xref:scalability/sharded-property-databases/security.adoc[]
+*** xref:scalability/sharded-property-databases/limitations-and-considerations.adoc[]
+
+* xref:database-internals/index.adoc[]
+** xref:database-internals/transaction-management.adoc[]
+** xref:database-internals/concurrent-data-access.adoc[]
+** xref:database-internals/transaction-logs.adoc[]
+** xref:database-internals/checkpointing.adoc[]
+** xref:database-internals/store-formats.adoc[]
+** xref:database-internals/neo4j-admin-store-info.adoc[]
+
* xref:backup-restore/index.adoc[]
** xref:backup-restore/planning.adoc[]
** xref:backup-restore/modes.adoc[]
diff --git a/modules/ROOT/images/scalability/horizontal-scaling-strategies.svg b/modules/ROOT/images/scalability/horizontal-scaling-strategies.svg
new file mode 100644
index 000000000..47bfdd1d3
--- /dev/null
+++ b/modules/ROOT/images/scalability/horizontal-scaling-strategies.svg
@@ -0,0 +1,82 @@
+
diff --git a/modules/ROOT/images/scalability/property-shard-deployment.svg b/modules/ROOT/images/scalability/property-shard-deployment.svg
new file mode 100644
index 000000000..f451d46df
--- /dev/null
+++ b/modules/ROOT/images/scalability/property-shard-deployment.svg
@@ -0,0 +1,77 @@
+
diff --git a/modules/ROOT/images/scalability/sharded-architecture.svg b/modules/ROOT/images/scalability/sharded-architecture.svg
new file mode 100644
index 000000000..9475c840d
--- /dev/null
+++ b/modules/ROOT/images/scalability/sharded-architecture.svg
@@ -0,0 +1,41 @@
+
diff --git a/modules/ROOT/pages/backup-restore/consistency-checker.adoc b/modules/ROOT/pages/backup-restore/consistency-checker.adoc
index 5c434ef51..5a5cc86fb 100644
--- a/modules/ROOT/pages/backup-restore/consistency-checker.adoc
+++ b/modules/ROOT/pages/backup-restore/consistency-checker.adoc
@@ -141,7 +141,7 @@ The following are examples of how to check the consistency of a database, a dump
[NOTE]
====
-`neo4j-admin database check` cannot be applied to xref:database-administration/composite-databases/concepts.adoc[Composite databases].
+`neo4j-admin database check` cannot be applied to xref:scalability/composite-databases/concepts.adoc[Composite databases].
It must be run directly on the databases that are associated with that Composite database.
====
diff --git a/modules/ROOT/pages/backup-restore/copy-database.adoc b/modules/ROOT/pages/backup-restore/copy-database.adoc
index 3d17183db..7d788ec61 100644
--- a/modules/ROOT/pages/backup-restore/copy-database.adoc
+++ b/modules/ROOT/pages/backup-restore/copy-database.adoc
@@ -11,7 +11,7 @@ You can use the `neo4j-admin database copy` command to copy a database, create a
====
* `neo4j-admin database copy` preserves the node IDs (unless `--compact-node-store` is used), but the relationships get new IDs.
* `neo4j-admin database copy` is not supported for use on the `system` database.
-* `neo4j-admin database copy` is not supported for use on xref:database-administration/composite-databases/concepts.adoc[Composite databases].
+* `neo4j-admin database copy` is not supported for use on xref:scalability/composite-databases/concepts.adoc[Composite databases].
It must be run directly on the databases that are associated with that Composite database.
* `neo4j-admin database copy` is an IOPS-intensive process.
For more information, see <>.
@@ -300,7 +300,7 @@ Labels are processed independently, i.e., the filter ignores any node with a lab
[TIP]
====
-For a detailed example of how to use `neo4j-admin database copy` to filter out data for sharding a database, see xref:database-administration/composite-databases/sharding-with-copy.adoc[Sharding data with the `copy` command].
+For a detailed example of how to use `neo4j-admin database copy` to filter out data for sharding a database, see xref:scalability/composite-databases/sharding-with-copy.adoc[Sharding data with the `copy` command].
====
[[compact-database]]
diff --git a/modules/ROOT/pages/backup-restore/offline-backup.adoc b/modules/ROOT/pages/backup-restore/offline-backup.adoc
index 238438007..79805efed 100644
--- a/modules/ROOT/pages/backup-restore/offline-backup.adoc
+++ b/modules/ROOT/pages/backup-restore/offline-backup.adoc
@@ -109,7 +109,7 @@ The command creates a file called _database.dump_ where `database` is the databa
[NOTE]
====
-`neo4j-admin database dump` cannot be applied to xref:database-administration/composite-databases/concepts.adoc[Composite databases].
+`neo4j-admin database dump` cannot be applied to xref:scalability/composite-databases/concepts.adoc[Composite databases].
It must be run directly on the databases that are associated with that Composite database.
====
diff --git a/modules/ROOT/pages/backup-restore/planning.adoc b/modules/ROOT/pages/backup-restore/planning.adoc
index bb05e236b..f801fd242 100644
--- a/modules/ROOT/pages/backup-restore/planning.adoc
+++ b/modules/ROOT/pages/backup-restore/planning.adoc
@@ -172,7 +172,7 @@ The following table summarizes the commands' capabilities and usage.
[NOTE]
====
-The Neo4j Admin commands `backup`, `restore`, `dump`, `load`, `copy`, and `check-consistency` are not supported for use on xref:database-administration/composite-databases/concepts.adoc[Composite databases].
+The Neo4j Admin commands `backup`, `restore`, `dump`, `load`, `copy`, and `check-consistency` are not supported for use on xref:scalability/composite-databases/concepts.adoc[Composite databases].
They must be run directly on the databases that are associated with that Composite database.
====
@@ -196,7 +196,7 @@ By default, a database backup includes only the database contents.
If you choose to include metadata, the backup also stores the role-based access control (RBAC) settings associated with the database.
When restoring, you have the flexibility to define the target topology (how many primaries and secondaries are desired for the database), which may differ from the topology at backup time.
-The database will then be allocated across the available servers according to that topology.
+The database will then be allocated across the available servers according to that topology.
====
[[backup-planning-databases]]
diff --git a/modules/ROOT/pages/backup-restore/restore-dump.adoc b/modules/ROOT/pages/backup-restore/restore-dump.adoc
index 1677b7dcd..30f0957d7 100644
--- a/modules/ROOT/pages/backup-restore/restore-dump.adoc
+++ b/modules/ROOT/pages/backup-restore/restore-dump.adoc
@@ -117,7 +117,7 @@ For more information, see xref:clustering/databases.adoc#cluster-seed[Seed a clu
[NOTE]
====
-`neo4j-admin database load` cannot be applied to xref:database-administration/composite-databases/concepts.adoc[Composite databases].
+`neo4j-admin database load` cannot be applied to xref:scalability/composite-databases/concepts.adoc[Composite databases].
It must be run directly on the databases that are associated with that Composite database.
====
diff --git a/modules/ROOT/pages/configuration/configuration-settings.adoc b/modules/ROOT/pages/configuration/configuration-settings.adoc
index eadd1e3ad..ed08b5eb0 100644
--- a/modules/ROOT/pages/configuration/configuration-settings.adoc
+++ b/modules/ROOT/pages/configuration/configuration-settings.adoc
@@ -8,7 +8,7 @@ Refer to xref:configuration/neo4j-conf.adoc#_configuration_settings[The neo4j.co
For lists of deprecated and removed configuration settings in 2025.x, refer to the page xref:changes-deprecations-removals.adoc[Changes, deprecations, and removals in Neo4j 2025.x].
-To list all available configuration settings on a Neo4j server, run the link:https://neo4j.com/docs/cypher-manual/5/clauses/listing-settings[`SHOW SETTINGS`] command.
+To list all available configuration settings on a Neo4j server, run the link:{neo4j-docs-base-uri}/cypher-manual/5/clauses/listing-settings[`SHOW SETTINGS`] command.
== Dynamic configuration settings
diff --git a/modules/ROOT/pages/configuration/plugins.adoc b/modules/ROOT/pages/configuration/plugins.adoc
index 9e89e657f..d25787380 100644
--- a/modules/ROOT/pages/configuration/plugins.adoc
+++ b/modules/ROOT/pages/configuration/plugins.adoc
@@ -150,7 +150,7 @@ For more information on configuring the plugins, see the respective documentatio
* link:https://neo4j.com/docs/bloom-user-guide/current/bloom-installation/[Bloom documentation]
* link:https://neo4j.com/docs/aura/fleet-management/setup/[Fleet management documentation]
* link:https://neo4j.com/docs/graph-data-science/current/installation/neo4j-server/[GDS documentation]
-* link:https://neo4j.com/docs/cypher-manual/current/genai-integrations/[GenAI documentation]
+* link:{neo4j-docs-base-uri}/cypher-manual/current/genai-integrations/[GenAI documentation]
* link:https://neo4j.com/labs/neosemantics/[Neosemantics documentation]
. Restart Neo4j for the plugins to be loaded and available for use.
diff --git a/modules/ROOT/pages/database-administration/aliases/manage-aliases-composite-databases.adoc b/modules/ROOT/pages/database-administration/aliases/manage-aliases-composite-databases.adoc
index c7277e2b7..203e01e38 100644
--- a/modules/ROOT/pages/database-administration/aliases/manage-aliases-composite-databases.adoc
+++ b/modules/ROOT/pages/database-administration/aliases/manage-aliases-composite-databases.adoc
@@ -56,7 +56,7 @@ For a description of all the returned columns of this command, and for ways in w
[[create-composite-database-alias]]
== Create database aliases in composite databases
-Both local and remote database aliases can be part of a xref::database-administration/composite-databases/concepts.adoc[composite database].
+Both local and remote database aliases can be part of a xref::scalability/composite-databases/concepts.adoc[composite database].
The database alias consists of two parts, separated by a dot: the namespace and the alias name.
diff --git a/modules/ROOT/pages/database-administration/aliases/manage-aliases-standard-databases.adoc b/modules/ROOT/pages/database-administration/aliases/manage-aliases-standard-databases.adoc
index e45b96881..898846c3b 100644
--- a/modules/ROOT/pages/database-administration/aliases/manage-aliases-standard-databases.adoc
+++ b/modules/ROOT/pages/database-administration/aliases/manage-aliases-standard-databases.adoc
@@ -44,7 +44,7 @@ The home database for users can be set to an alias, which will be resolved to th
Starting with Neo4j 2025.04, a database alias can also be set as the DBMS default database.
This page describes managing database aliases for standard databases.
-For aliases created as part of a xref:database-administration/composite-databases/concepts.adoc[composite database], see xref:database-administration/aliases/manage-aliases-composite-databases.adoc[].
+For aliases created as part of a xref:scalability/composite-databases/concepts.adoc[composite database], see xref:database-administration/aliases/manage-aliases-composite-databases.adoc[].
[[manage-aliases-list]]
== List database aliases
diff --git a/modules/ROOT/pages/database-administration/index.adoc b/modules/ROOT/pages/database-administration/index.adoc
index d36b16ecd..78dc0335e 100644
--- a/modules/ROOT/pages/database-administration/index.adoc
+++ b/modules/ROOT/pages/database-administration/index.adoc
@@ -83,10 +83,10 @@ image::manage-dbs-community.svg[title="A default Neo4j installation.",role=popup
.An installation of Neo4j with multiple active databases, named `marketing`, `sales`, and `hr`:
image::manage-dbs-enterprise.svg[title="A multiple database Neo4j installation.",role=popup]
-For details about the `system` database in a clustered environment, refer to xref:clustering/databases.adoc#cluster-system-db[Managing databases in a cluster -> The `system` database].
+For details about the `system` database in a clustered environment, refer to xref:clustering/databases.adoc#cluster-system-db[Managing databases in a cluster -> The `system` database].
== Composite databases
A Composite database is a logical grouping of multiple graphs contained in other, standard databases.
A Composite database defines an _execution context_ and a (limited) _transaction domain_.
-For more information, see xref:database-administration/composite-databases/concepts.adoc[Composite databases].
\ No newline at end of file
+For more information, see xref:scalability/composite-databases/concepts.adoc[Composite databases].
\ No newline at end of file
diff --git a/modules/ROOT/pages/database-administration/routing-decisions.adoc b/modules/ROOT/pages/database-administration/routing-decisions.adoc
index 12cbd590c..608ef97d2 100644
--- a/modules/ROOT/pages/database-administration/routing-decisions.adoc
+++ b/modules/ROOT/pages/database-administration/routing-decisions.adoc
@@ -21,7 +21,7 @@ Step 2: Reuse open transaction::
* If not, then proceed to step 3.
Step 3: Determine the type of the target database (execution context type)::
* If the target database is a database in this DBMS, then the context type is _Internal_.
-* If the target database is a xref::database-administration/composite-databases/concepts.adoc[Composite database], then the context type is _Composite_. +
+* If the target database is a xref::scalability/composite-databases/concepts.adoc[Composite database], then the context type is _Composite_. +
+
[NOTE]
====
diff --git a/modules/ROOT/pages/introduction.adoc b/modules/ROOT/pages/introduction.adoc
index ef04fe280..4f1c06401 100644
--- a/modules/ROOT/pages/introduction.adoc
+++ b/modules/ROOT/pages/introduction.adoc
@@ -246,7 +246,7 @@ a| APOC 450+ link:https://neo4j.com/docs/apoc/5/[Core Procedures and Functions]
|
| {check-mark}
-| xref:database-administration/composite-databases/concepts.adoc[Composite databases]
+| xref:scalability/composite-databases/concepts.adoc[Composite databases]
|
| {check-mark}
diff --git a/modules/ROOT/pages/database-administration/composite-databases/alter-composite-databases.adoc b/modules/ROOT/pages/scalability/composite-databases/alter-composite-databases.adoc
similarity index 100%
rename from modules/ROOT/pages/database-administration/composite-databases/alter-composite-databases.adoc
rename to modules/ROOT/pages/scalability/composite-databases/alter-composite-databases.adoc
diff --git a/modules/ROOT/pages/database-administration/composite-databases/concepts.adoc b/modules/ROOT/pages/scalability/composite-databases/concepts.adoc
similarity index 90%
rename from modules/ROOT/pages/database-administration/composite-databases/concepts.adoc
rename to modules/ROOT/pages/scalability/composite-databases/concepts.adoc
index e48095027..183c663dc 100644
--- a/modules/ROOT/pages/database-administration/composite-databases/concepts.adoc
+++ b/modules/ROOT/pages/scalability/composite-databases/concepts.adoc
@@ -13,7 +13,7 @@ Local database aliases target databases within the same DBMS, while remote datab
For more information, see xref:database-administration/aliases/manage-aliases-composite-databases.adoc[].
Composite databases are managed using Cypher administrative commands.
-For a detailed example of how to create a Composite database and add database aliases to it, see xref:database-administration/composite-databases/querying-composite-databases.adoc[Set up and query composite databases].
+For a detailed example of how to create a Composite database and add database aliases to it, see xref:scalability/composite-databases/querying-composite-databases.adoc[Set up and query composite databases].
Composite databases cannot guarantee compatibility between constituents from different Neo4j versions.
Constituents from versions without breaking changes should work fine, apart from newly-added features.
@@ -26,7 +26,7 @@ Composite databases have the following characteristics:
* Can be deployed in standalone and cluster deployments.
* Managed using Cypher commands, such as `CREATE COMPOSITE DATABASE` and `CREATE ALIAS`.
* You can shard an existing database with the help of the `neo4j-admin copy` command.
-See xref:database-administration/composite-databases/sharding-with-copy.adoc[Sharding data with the copy command] for details.
+See xref:scalability/composite-databases/sharding-with-copy.adoc[Sharding data with the copy command] for details.
* Use the existing user for local constituents or the user credentials defined by the remote aliases for remote constituents.
* Do not support privileges, index, and constraint management commands.
These must be defined on the constituent target database in the respective DBMS.
@@ -45,7 +45,7 @@ Data sharding is when you have two graphs that share the *same model* (same labe
For example, you can deploy shards on separate servers, splitting the load on resources and storage.
Or, you can deploy shards in different locations, to be able to manage them independently or split the load on network traffic.
An existing database can be sharded with the help of the `neo4j-admin database copy` command.
-For an example, see xref:database-administration/composite-databases/sharding-with-copy.adoc[Sharding data with the copy command].
+For an example, see xref:scalability/composite-databases/sharding-with-copy.adoc[Sharding data with the copy command].
Connecting data across graphs::
Because relationships cannot span across graphs, to query your data, you have to federate the graphs by
diff --git a/modules/ROOT/pages/database-administration/composite-databases/create-composite-databases.adoc b/modules/ROOT/pages/scalability/composite-databases/create-composite-databases.adoc
similarity index 100%
rename from modules/ROOT/pages/database-administration/composite-databases/create-composite-databases.adoc
rename to modules/ROOT/pages/scalability/composite-databases/create-composite-databases.adoc
diff --git a/modules/ROOT/pages/database-administration/composite-databases/delete-composite-databases.adoc b/modules/ROOT/pages/scalability/composite-databases/delete-composite-databases.adoc
similarity index 100%
rename from modules/ROOT/pages/database-administration/composite-databases/delete-composite-databases.adoc
rename to modules/ROOT/pages/scalability/composite-databases/delete-composite-databases.adoc
diff --git a/modules/ROOT/pages/database-administration/composite-databases/list-composite-databases.adoc b/modules/ROOT/pages/scalability/composite-databases/list-composite-databases.adoc
similarity index 100%
rename from modules/ROOT/pages/database-administration/composite-databases/list-composite-databases.adoc
rename to modules/ROOT/pages/scalability/composite-databases/list-composite-databases.adoc
diff --git a/modules/ROOT/pages/database-administration/composite-databases/querying-composite-databases.adoc b/modules/ROOT/pages/scalability/composite-databases/querying-composite-databases.adoc
similarity index 98%
rename from modules/ROOT/pages/database-administration/composite-databases/querying-composite-databases.adoc
rename to modules/ROOT/pages/scalability/composite-databases/querying-composite-databases.adoc
index 94cefa9e9..264d13600 100644
--- a/modules/ROOT/pages/database-administration/composite-databases/querying-composite-databases.adoc
+++ b/modules/ROOT/pages/scalability/composite-databases/querying-composite-databases.adoc
@@ -84,7 +84,7 @@ CREATE ALIAS `cineasts.upcoming`
====
-For more information about composite databases and database aliases in composite databases, see xref:database-administration/composite-databases/concepts.adoc[], and xref:database-administration/aliases/manage-aliases-composite-databases.adoc[].
+For more information about composite databases and database aliases in composite databases, see xref:scalability/composite-databases/concepts.adoc[], and xref:database-administration/aliases/manage-aliases-composite-databases.adoc[].
[[composite-databases-queries-graph-selection]]
== Graph selection
diff --git a/modules/ROOT/pages/database-administration/composite-databases/sharding-with-copy.adoc b/modules/ROOT/pages/scalability/composite-databases/sharding-with-copy.adoc
similarity index 100%
rename from modules/ROOT/pages/database-administration/composite-databases/sharding-with-copy.adoc
rename to modules/ROOT/pages/scalability/composite-databases/sharding-with-copy.adoc
diff --git a/modules/ROOT/pages/database-administration/composite-databases/start-stop-composite-databases.adoc b/modules/ROOT/pages/scalability/composite-databases/start-stop-composite-databases.adoc
similarity index 100%
rename from modules/ROOT/pages/database-administration/composite-databases/start-stop-composite-databases.adoc
rename to modules/ROOT/pages/scalability/composite-databases/start-stop-composite-databases.adoc
diff --git a/modules/ROOT/pages/scalability/concepts.adoc b/modules/ROOT/pages/scalability/concepts.adoc
new file mode 100644
index 000000000..91a7f9f87
--- /dev/null
+++ b/modules/ROOT/pages/scalability/concepts.adoc
@@ -0,0 +1,91 @@
+:page-role: new-2025.10 enterprise-edition not-on-aura
+:description: This page describes the concepts behind scalability with Neo4j.
+= Concepts
+
+Scalability is a crucial aspect of database management: a scalable system handles changing demands by adding or removing resources as a database's workload requires.
+Neo4j supports multiple strategies to achieve scalability, enabling systems to handle larger datasets, more concurrent users, and higher query complexity without compromising performance or availability, that is, the system's resiliency.
+The three main strategies are:
+
+* xref:clustering/setup/analytics-cluster.adoc[Analytics clustering] -- for horizontal read scalability.
+* xref:scalability/composite-databases/concepts.adoc[Composite databases] -- for federated queries and distributed data management.
+* xref:scalability/sharded-property-databases/overview.adoc[Property sharding] -- for handling massive property-heavy graphs.
+
+== What is scalability?
+
+Scalability is a system's ability to handle an increasing workload without compromising performance.
+There are two primary methods to achieve scalability:
+
+.Scaling methods
+[options="header", cols="1,1,1a,1a"]
+|===
+| Method
+| Description
+| Pros
+| Cons
+
+| Vertical Scaling (Scaling Up / Down)
+| Increase or decrease the capacity of a single server by adding or removing CPUs, memory, or storage.
+| Simple to manage.
+| * Physical limits. +
+* Difficult to make online changes.
+
+| Horizontal Scaling (Scaling Out / In)
+| Distribute the workload by adding more servers or reduce the infrastructure by removing existing servers.
+| * Greater scalability and fault tolerance. +
+* Easier to make online changes.
+| More complex to manage.
+|===
+
+== What is database scalability?
+
+Database scalability is the ability of a database management system (DBMS) to handle changing demands.
+To scale properly, a database must apply strategies that cover all areas: data access, data manipulation in memory, and database computing.
+
+Strategies include:
+
+* **Vertical Scaling**
+** Optimize usage (e.g., granular locks, partitioning)
+** Optimize physical resources (multi-threading, tiered storage)
+
+* **Horizontal Scaling** (distributed computing architectures):
+
+** *Shared Everything*: All servers share data and memory. +
+In this model, data is shared between disk and memory across all servers in a cluster, and requests can be satisfied by any combination of servers.
+This flexibility introduces complexity, as the cluster must implement a way to avoid contention when multiple servers try to update the same data simultaneously.
+
+** *Shared Nothing*: Each server manages its own partition of the database, called a *shard*. +
+Every update request is handled by a single cluster member, which eliminates single points of failure and makes the architecture more fault-tolerant.
+
+image::scalability/horizontal-scaling-strategies.svg[title="Example of a shared-everything approach where all the servers share the storage vs. a shared-nothing approach (e.g., property sharding).", role="middle"]
+
+== What is graph database scalability?
+
+Graph database scalability refers to the ability of a database to handle different amounts of data and workloads without compromising performance.
+It includes:
+
+* *Data volume* -- involves ensuring a consistent SLA in both query and administration response times, even as the size of the data for storage and retrieval expands. +
+Volume also depends on the data types stored; vectors, for example, occupy a large amount of space.
+
+* *Query volume*
+** Read queries + write queries.
+** Queries and user concurrency -- the aim is to ensure a linear response time during the execution of concurrent queries against the same database.
+** Query complexity -- ensure a response time in line with the complexity of a query; see the `PROFILE` sketch after this list.
+The complexity of a query can be estimated from the combination of:
+*** Steps to execute
+*** Rows to retrieve
+*** Total DB hits
+*** Total memory allocation
+*** Total execution time
+
+* *Admin volume*
+** Data ingestion/extraction -- When scaling data ingestion/extraction, the goal is to maintain a linear response time when ingesting or extracting an increasing set of data.
+This objective remains true regardless of the volume of stored data, provided a similar data structure is used.
+** Multi-tenancy -- In SaaS and AaaS environments, the cost of scaling should grow linearly with the number of tenants.
+For more general services, such as DBaaS (e.g., Aura), scalability should also be linear across all the scalability factors mentioned here.
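+
+Several of these complexity factors, such as DB hits, memory allocation, and execution time, can be observed with Cypher's `PROFILE`; a minimal sketch with an illustrative pattern:
+
+[source, cypher]
+----
+PROFILE
+MATCH (p:Person)-[:KNOWS]->(friend)
+RETURN friend.name
+----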
+
diff --git a/modules/ROOT/pages/scalability/scaling-with-neo4j.adoc b/modules/ROOT/pages/scalability/scaling-with-neo4j.adoc
new file mode 100644
index 000000000..72ce210e8
--- /dev/null
+++ b/modules/ROOT/pages/scalability/scaling-with-neo4j.adoc
@@ -0,0 +1,148 @@
+:page-role: new-2025.10 enterprise-edition not-on-aura
+:description: Scaling strategies with Neo4j.
+= Scaling with Neo4j
+
+Neo4j offers various options for scaling, tailored to specific use cases and requirements.
+The supported scaling strategies include:
+
+* *Data replication via Neo4j analytics clustering (read scalability)* -- A Neo4j cluster is a high-availability cluster with multi-DB support.
+It is a collection of servers running Neo4j that are configured to communicate with each other.
+This means that servers and databases are decoupled: servers provide computation and storage power for databases to use.
+Each database relies on its own cluster architecture, organized into primaries (with a minimum of 3 for high availability) and secondaries (for read scaling).
+Scalability, allocation/reallocation, service elasticity, load balancing, and automatic routing are automatically provided (or they can be finely controlled).
++
+xref:clustering/setup/analytics-cluster.adoc[Neo4j analytics cluster] is good for:
+
+** Horizontal read scalability.
+** Always-on, high availability with disaster recovery and rolling upgrades (Neo4j 5.0+).
+** Flexible infrastructure, from one to many copies of the same database.
+** Service-specific servers (analytical/transactional workloads, data science, reporting, etc.).
+** Multi-region, multi-tenant, SaaS-style scalability.
+
+* *Data federation and sharding via composite databases* -- Neo4j allows you to query multiple Neo4j databases with a single federated query.
+The data is partitioned into smaller, more manageable pieces, called shards.
+Each shard can be stored on a separate server, splitting the load on resources and storage.
+Alternatively, you can deploy shards in different locations, allowing you to manage them independently or split the load on network traffic.
+Composite databases are good for:
+
+** Accessing remote databases, queries executed on federated data.
+** Parallel execution of sub-queries on large data volumes.
+** Horizontal read and write scalability.
++
+Sharding logic can be based on sharding functions, time-based sharding, or other sharding keys.
+The main advantage is obtained by combining Neo4j clustering and composite databases.
+
+* *Data distribution via Infinigraph* -- using a distributed graph architecture to extend a single system without fragmenting the graph.
+//This allows, in theory, the unlimited growth of a graph.
++
+label:preview[Preview feature] xref:scalability/sharded-property-databases/overview.adoc[Property sharding] (part of Infinigraph) allows you to decouple the properties attached to nodes and relationships and store them in separate graphs.
+This architecture enables the independent scaling of property data, allowing for the handling of high volumes, heavy queries, and high read concurrency.
+
+The following table summarizes the similarities and differences between analytics clustering, composite databases, and sharded property databases:
+
+.Similarities and differences between analytics cluster, composite databases, and sharded property databases
+[cols="2,4a,4a,4a",frame="topbot",options="header"]
+|===
+|
+| Analytics cluster
+| Composite database
+| Sharded property database
+
+| *Typical use cases*
+| High Availability +
+GDS dedicated server
+| Federated data +
+Time-based sharding +
+Application-based access
+| Graphs with a large volume of properties +
+Ideal for vector and full-text search
+
+| *Scalability*
+| *Data volume:* limited to single server size +
+*Read concurrency*: horizontal scale on multiple instances
+| *Data volume:* unlimited +
+*Read concurrency*: horizontal scale on multiple instances +
+*Write concurrency*: horizontal scale depending on the graph model
+| *Data volume:* up to 100TB +
+*Read concurrency*: horizontal scale on multiple instances +
+*Write concurrency*: single instance
+
+| *Transactions*
+| Causal consistency +
+Standard transaction management
+| Parallel read transactions +
+Single-shard write transactions +
+`CALL {...} IN TRANSACTIONS` for multiple, isolated read/write transactions with manual error handling
+| Parallel read & write transactions on all shards +
+Standard transaction management
+
+| *Data load*
+| Initial and incremental data import via neo4j-admin and Aura importer
+| Manually orchestrated import +
+Ad-hoc, project-based, sharded import
+| Initial and incremental data import via neo4j-admin and Aura importer
+
+| *Cypher queries*
+| Single database queries.
+| Parallel execution on shards. +
+Single database queries must be modified according to the sharding rules. +
+Automated shard pruning using sharding functions.
+| Parallel execution on shards. +
+Single database queries run as is. +
+Automated shard pruning based on node selection.
+
+| *User tools*
+| All tools supported.
+| Work with Browser and Cypher Shell. +
+Tools used on individual shards and Bloom are not supported on composite databases.
+| All tools supported.
+
+| *Admin tools*
+| All tools supported.
+| Tools used on individual shards are not supported on composite databases.
+| All tools supported.
+
+| *Libraries*
+| All libraries supported.
+| Supported on individual shards.
+| All libraries supported.
+|===
+
+//TODO
+//Admin considerations
+
+// == Property sharding (Preview feature)
+
+// Sharded property databases
+
+// * Admin considerations
+// * Workloads
+// ** Analytical workloads
+// ** Transactional workloads
+// ** Hybrid/Mixed workloads
+// * Applications and Services
+// ** Multi-tenant services
+// ** Ad-hoc applications
+// ** Tools and user queries
+
+//== Scaling at a glance
+//Here we can talk about what we must consider, in practical terms, if we want to create a scalable solution with Neo4j. The topics here are still generic, we will use this list to address scalability with composite and sharded properties.
+
+// * Ingestion
+// ** Offline ingestion
+// ** Online ingestion
+// ** Data streaming
+// * User Operations
+// ** Concurrency
+// ** Read/Write ratio
+// ** Heavy reads (query complexity)
+// ** Heavy writes (query complexity)
+// * Extraction
+// ** Offline extraction
+// ** Online extraction
+// * Admin Operations
+// ** Server administration & Deployment
+// ** Backup and recovery
+// ** System failovers
+// ** Data archive
+// ** Data compaction
+
diff --git a/modules/ROOT/pages/scalability/sharded-property-databases/admin-operations.adoc b/modules/ROOT/pages/scalability/sharded-property-databases/admin-operations.adoc
new file mode 100644
index 000000000..d5c0119d2
--- /dev/null
+++ b/modules/ROOT/pages/scalability/sharded-property-databases/admin-operations.adoc
@@ -0,0 +1,144 @@
+:page-role: new-2025.10 enterprise-edition not-on-aura
+:description: Admin operations for sharded property databases
+:keywords: sharded property databases, sharding, admin operations, aliases, servers, backup, recovery, failover
+= Admin operations
+
+Sharded property databases are managed similarly to standard Neo4j databases, with some differences in certain administrative operations.
+
+== Managing aliases for sharded databases
+
+When creating an alias for a sharded database, use the virtual database name when specifying it as the alias target.
+The following example shows how to create the alias `foo` for the sharded database `foo-sharded`:
+
+[source, cypher]
+----
+CREATE ALIAS foo FOR DATABASE `foo-sharded`
+----
+
+== Managing servers with sharded databases
+
+Graph references in server management commands must refer to shards.
+References to the virtual sharded database are rejected or ignored.
+
+The following example shows how to enable a server and allow allocating the property shard `foo-sharded-p000`:
+
+[source, cypher]
+----
+ENABLE SERVER 'serverId' OPTIONS { allowedDatabases: ['foo-sharded-p000'] }
+----
+
+== Resizing and resharding
+
+=== Resizing
+You can resize a sharded property database by adding or removing property shards.
+You can select more shards than needed to start with and allow space for their data to grow, as the Neo4j cluster allows databases to be moved based on server availability.
+For example, ten property shards can be initially hosted on five servers (two shards per server), and additional servers can be added as needed.
+For details on managing databases and servers in a cluster, see xref:clustering/databases.adoc[Managing databases in a cluster] and xref:clustering/servers.adoc[Managing servers in a cluster].
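+
+A sketch of scaling out, assuming a new server (the identifier `newServerId` is hypothetical) has joined the cluster:
+
+[source, cypher]
+----
+// Allow the new server to host property shards, then rebalance the databases.
+ENABLE SERVER 'newServerId' OPTIONS { allowedDatabases: ['foo-sharded-p000', 'foo-sharded-p001'] };
+REALLOCATE DATABASES;
+----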
+
+=== Resharding
+
+You can reshard your data via the `neo4j-admin database copy` command.
+See xref:scalability/sharded-property-databases/data-ingestion.adoc#splitting-existing-db-into-shards[Splitting an existing database into shards] for more information.
+
+//TODO: We should talk about co-location, adding/removing servers in a cluster and say what is supported and what is not.
+
+[[backup-and-restore]]
+== Backup and restore
+
+A sharded property database is a database made up of multiple databases.
+This means that when you want to back up a database, you must back up all the shards individually, resulting in a sharded property database backup that is composed of multiple smaller backup chains.
+
+Backup chains for each shard are produced using the `neo4j-admin database backup` command.
+The graph shard's backup chain must contain one full artifact and zero or more differential artifacts.
+Each property shard’s backup chain must contain only one full backup and no differential backups.
+In practical terms, this means that to back up a sharded property database, you start with a full backup of the graph shard and then of all the property shards; any subsequent differential backups only need to cover the graph shard.
+This is because the transaction log of the property shards is the same as the graph shard log and is simply filtered when applied, so only the graph shard log is required for a restore.
+
+For example, assume there is a sharded property database called `foo` with a graph shard and two property shards.
+A backup must be taken of each shard, for example:
+
+[source,shell]
+----
+bin/neo4j-admin database backup "foo*" --to-path=/backups --from=localhost:6361 --remote-address-resolution
+----
+
+The `--remote-address-resolution` option requires `internal.dbms.cluster.experimental_protocol_version.dbms_enabled=true` to be set in both the _neo4j.conf_ and _neo4j-admin.conf_ files.
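+
+For example, the same line must appear in both files:
+
+[source, properties]
+----
+# Required in neo4j.conf and neo4j-admin.conf for --remote-address-resolution
+internal.dbms.cluster.experimental_protocol_version.dbms_enabled=true
+----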
+
+You can then check the validity of the resulting backups using:
+
+[source,shell]
+----
+bin/neo4j-admin database backup validate "foo" --from-path=s3://bucket/backups
+----
+
+The output will indicate whether the backups are valid.
+For example:
+
+[result]
+----
+| DATABASE | PATH | STATUS |
+| foo-g000 | /bucket/backups/foo-g000-2025-06-11T21-04-42.backup | OK |
+| foo-p000 | /bucket/backups/foo-p000-2025-06-11T21-04-37.backup | OK |
+| foo-p001 | /bucket/backups/foo-p001-2025-06-11T21-04-40.backup | OK |
+----
+
+If valid, the backups can be used to seed a sharded property database:
+
+[source,cypher]
+----
+CYPHER 25 CREATE DATABASE baz SET GRAPH SHARD { TOPOLOGY 3 PRIMARIES 0 SECONDARIES }
+SET PROPERTY SHARDS { COUNT 2 TOPOLOGY 1 REPLICA }
+OPTIONS {seedUri:"s3://bucket/backups/"};
+----
+
+Due to potential synchronization issues that might occur when shard backups are not on the exact same transaction IDs (since backups can be taken in parallel or sequentially), the restore process is designed to be very lenient to different shards at different transaction IDs.
+As a result, a sharded property database backup is considered valid if the store files of each property shard are within the range of transactions recorded in the graph shard’s transaction log.
+
+For example, assume the graph shard’s store files are at tx 10 with transaction logs from tx 11-36, property shard 1’s store files are at tx 13, and property shard 2’s store files are at tx 30.
+At restore time, all databases can be recovered and made consistent up to transaction 36.
+
+You can use the command `neo4j-admin database backup validate` to check whether a collection of backup chains for a database is valid.
+
+Additional actions may be required to create a validated backup if a property shard is ahead or behind the range of transactions in the graph shard backup chain.
+
+.Example output
+[result]
+----
+| DATABASE | PATH | STATUS |
+| foo-g000 | /backups/foo-g000-2025-06-11T21-04-42.backup | OK |
+| foo-p000 | /backups/foo-p000-2025-06-11T21-04-37.backup | Backup is behind (3 < 5) the graph shard backup chain |
+| foo-p001 | /backups/foo-p001-2025-06-11T21-04-40.backup | Backup is ahead (12 > 8) of the graph shard backup chain |
+----
+
+To form a validated backup, you must ensure that each property shard’s store files are within the range of transactions recorded in the graph shard’s transaction log.
+In the example above, property shard `foo-p000` is behind the graph shard backup chain, and property shard `foo-p001` is ahead of the graph shard backup chain.
+To form a valid sharded property database backup, you need to:
+
+* Take a full backup of the property shard `foo-p000` so that its store at least includes transaction 5.
+* Take a differential backup of the graph shard so that at least transaction 12 is included in its transaction log, so `foo-p001` is included in its range.
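+
+A sketch of these two repair backups, using the backup command's `--type` option to force full and differential artifacts (paths and addresses follow the earlier examples):
+
+[source,shell]
+----
+# Full backup of the lagging property shard, bringing its store past tx 5
+bin/neo4j-admin database backup foo-p000 --type=FULL --to-path=/backups --from=localhost:6361
+# Differential backup of the graph shard, extending its chain to include tx 12
+bin/neo4j-admin database backup foo-g000 --type=DIFF --to-path=/backups --from=localhost:6361
+----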
+
+Once a valid sharded properties database backup is created, differential backups can be performed by taking differential backups of the graph shard, extending the range of the graph shard chain.
+Continuing with the example, the graph chain contains transactions from 11 to 36, property shard 1’s store files are at 13, and property shard 2’s store files are at 30.
+You then take a differential backup of the graph shard containing transactions 37 to 50.
+At restore time, all databases can be recovered up to transaction 50 and made consistent.
+
+== System failovers
+
+In a sharded property database, property shards pull transaction log entries from the graph shard and apply them to their stores.
+Thus, it is required that the graph shard does not prune an entry from its transaction log until every replica of each property shard has pulled and applied that entry.
+Failure to meet this requirement will make a given replica of a property shard unusable.
+
+If a property shard replica does fall behind the transaction log range available on the graph shard, you can recover it by:
+
+. Connect to the `system` database on the server hosting the affected replica using the _bolt://_ scheme.
+//. Quarantining the replica using xref:procedures.adoc#procedure_dbms_quarantineDatabase[`dbms.quarantineDatabase()`].
+. Unquarantine the replica using xref:procedures.adoc#procedure_dbms_unquarantineDatabase[`dbms.unquarantineDatabase()`] with the `replaceStateReplaceStore` option.
+This forces the replica to copy the database store files from another replica of the property shard.
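+
+A hypothetical invocation of the procedure (the exact signature is an assumption; check the procedure reference):
+
+[source, cypher]
+----
+// Assumed signature: shard name plus the recovery option as a string.
+CALL dbms.unquarantineDatabase("foo-sharded-p000", "replaceStateReplaceStore")
+----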
+
+If all replicas of a given property shard are behind, then the sharded property database as a whole becomes unusable.
+This is an irrecoverable state.
+Up until this point, losing replicas reduces fault tolerance, but the database remains available.
+When a sharded property database becomes irrecoverable, it needs to be dropped and recreated from a backup.
+See <>.
+
+One mechanism to avoid property shards falling out of range of the graph shard’s transaction log is to set a sufficiently large transaction log prune time on the graph shard.
+See xref:scalability/sharded-property-databases/limitations-and-considerations.adoc#setting-suitable-tx-log-retention-policy[Setting a suitable transaction log retention policy].
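+
+A sketch of such a policy in _neo4j.conf_, using the standard transaction log retention setting; the value is illustrative and should cover your slowest replica's expected lag:
+
+[source, properties]
+----
+# Keep at least 7 days of transaction logs so property shard replicas can catch up.
+db.tx_log.rotation.retention_policy=7 days
+----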
diff --git a/modules/ROOT/pages/scalability/sharded-property-databases/altering-sharded-databases.adoc b/modules/ROOT/pages/scalability/sharded-property-databases/altering-sharded-databases.adoc
new file mode 100644
index 000000000..9baca0e26
--- /dev/null
+++ b/modules/ROOT/pages/scalability/sharded-property-databases/altering-sharded-databases.adoc
@@ -0,0 +1,84 @@
+:page-role: new-2025.10 enterprise-edition not-on-aura
+:description: Describes how to alter sharded property databases in Neo4j.
+
+= Altering sharded property databases
+
+You can alter a sharded property database on two levels.
+You can change the entire sharded database by targeting the virtual database name with `ALTER DATABASE`, or alter a specific shard by targeting that shard's name, as shown in the examples below.
+
+== Syntax
+
+[options="header", width="100%", cols="1m,5a"]
+|===
+| Command | Syntax
+
+| ALTER DATABASE
+|
+[source, syntax, role="noheader"]
+----
+ALTER DATABASE name [IF EXISTS]
+{
+ SET ACCESS {READ ONLY \| READ WRITE} \|
+ SET GRAPH SHARD {
+ SET TOPOLOGY ((n PRIMAR{Y\|IES}) \| (m SECONDAR{Y\|IES}))+
+ } \|
+ SET PROPERTY SHARDS {
+ SET TOPOLOGY n REPLICA[S]
+ }
+}
+[WAIT [n [SEC[OND[S]]]]\|NOWAIT]
+----
+
+| ALTER DATABASE
+|
+[source, syntax, role="noheader"]
+----
+ALTER DATABASE name [IF EXISTS]
+SET TOPOLOGY (
+ (n PRIMAR{Y\|IES}) \|
+ (m SECONDAR{Y\|IES})
+)
+[WAIT [n [SEC[OND[S]]]] \| NOWAIT]
+----
+
+| ALTER DATABASE
+|
+[source, syntax, role="noheader"]
+----
+ALTER DATABASE name [IF EXISTS]
+SET TOPOLOGY (
+ n REPLICA[S]
+)
+[WAIT [n [SEC[OND[S]]]] \| NOWAIT]
+----
+
+|===
+
+== Example 1: Change the topology of the graph shard and all property shards
+
+[source, cypher]
+----
+ALTER DATABASE `foo-sharded`
+SET GRAPH SHARD {
+ SET TOPOLOGY 1 PRIMARY 2 SECONDARIES
+}
+SET PROPERTY SHARDS {
+ SET TOPOLOGY 1 REPLICA
+};
+----
+
+== Example 2: Change the topology of the graph shard
+
+[source, cypher]
+----
+ALTER DATABASE `foo-sharded-g000`
+SET TOPOLOGY 1 PRIMARY 2 SECONDARIES;
+----
+
+== Example 3: Change the topology of a specific property shard
+
+[source, cypher]
+----
+ALTER DATABASE `foo-sharded-p000`
+SET TOPOLOGY 2 REPLICAS;
+----
\ No newline at end of file
diff --git a/modules/ROOT/pages/scalability/sharded-property-databases/configuration.adoc b/modules/ROOT/pages/scalability/sharded-property-databases/configuration.adoc
new file mode 100644
index 000000000..2fb98262c
--- /dev/null
+++ b/modules/ROOT/pages/scalability/sharded-property-databases/configuration.adoc
@@ -0,0 +1,30 @@
+:page-role: new-2025.10 enterprise-edition not-on-aura
+:description: This page describes the system requirements and configuration settings for sharded property databases.
+= System requirements and configuration
+
+== System requirements
+
+Sharded property databases have the same xref:installation/requirements.adoc[system requirements] as Neo4j 2025.10 and later versions.
+
+== Configuration settings
+
+To enable property sharding in your cluster, you must configure the following additional settings on each server:
+
+[options="header", width="100%", cols="4m,4a"]
+|===
+| Configuration setting | Description
+
+| internal.dbms.sharded_property_database.enabled=true
+| By default, the sharded property database is disabled.footnote:[Property sharding is a preview feature. For details, see xref:scalability/sharded-property-databases/overview.adoc[Property sharding overview].]
+
+| db.query.default_language=CYPHER_25
+| Ensures that any database created will use Cypher 25 (unless users specifically override the default version in the `CREATE DATABASE` command).
+See xref:configuration/cypher-version-configuration.adoc[Configure the Cypher default version] and link:https://neo4j.com/docs/cypher-manual/25/queries/select-version/[Cypher Manual -> Select Cypher version].
+
+| internal.dbms.cluster.experimental_protocol_version.dbms_enabled=true
+| Allows users to take valid backups of a sharded database.
+|===
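+
+A minimal _neo4j.conf_ excerpt combining the three settings above:
+
+[source, properties]
+----
+internal.dbms.sharded_property_database.enabled=true
+db.query.default_language=CYPHER_25
+internal.dbms.cluster.experimental_protocol_version.dbms_enabled=true
+----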
+
diff --git a/modules/ROOT/pages/scalability/sharded-property-databases/create-spd-database.adoc b/modules/ROOT/pages/scalability/sharded-property-databases/create-spd-database.adoc
new file mode 100644
index 000000000..6750a1bf7
--- /dev/null
+++ b/modules/ROOT/pages/scalability/sharded-property-databases/create-spd-database.adoc
@@ -0,0 +1,103 @@
+:description: This page describes how to create a sharded property database using the `CREATE DATABASE` command.
+:page-role: new-2025.10 enterprise-edition not-on-aura
+:keywords: sharded property database, CREATE DATABASE, Cypher 25
+= `CREATE DATABASE` command with sharded databases
+
+You can create a sharded database using the Cypher command `CREATE DATABASE` (requires Cypher 25, introduced alongside Neo4j 2025.06.0).
+For details on configuring the Cypher version, see xref:configuration/cypher-version-configuration.adoc[Configure the Cypher default version].
+
+
+== Syntax
+
+[options="header", width="100%", cols="1m,5a"]
+|===
+| Command | Syntax
+
+| CREATE DATABASE
+|
+[source, syntax, role="noheader"]
+----
+CREATE DATABASE name [IF NOT EXISTS]
+[[SET] GRAPH SHARD {
+ [TOPOLOGY n PRIMAR{Y\|IES} [m SECONDAR{Y\|IES}]]
+}]
+[SET] PROPERT{Y\|IES} SHARDS {
+ COUNT n [TOPOLOGY m REPLICA[S]]
+}
+[OPTIONS "{" option: value[, ...] "}"]
+[WAIT [n [SEC[OND[S]]]]\|NOWAIT]
+----
+|===
+
+When creating a sharded database, the following are created:
+
+* A virtual sharded database `name`.
+* A single graph shard with the name `name-g000`.
+* A number of property shards with the names `name-p000`, `name-p001`, and so on.
+The `COUNT` value in `SET PROPERTY SHARDS` specifies the number of property shards.
+
+[NOTE]
+====
+`CREATE OR REPLACE` does not replace an existing sharded database.
+====
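+
+For example, the following sketch (the name `foo-sharded` and the shard count are illustrative) creates the virtual database `foo-sharded`, the graph shard `foo-sharded-g000`, and the property shards `foo-sharded-p000` and `foo-sharded-p001`:
+
+[source, cypher]
+----
+CYPHER 25 CREATE DATABASE `foo-sharded`
+SET GRAPH SHARD { TOPOLOGY 1 PRIMARY }
+SET PROPERTY SHARDS { COUNT 2 TOPOLOGY 1 REPLICA }
+----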
+
+== Options
+
+The `CREATE DATABASE` command can have a map of options, e.g., `OPTIONS {key: 'value'}`.
+For sharded databases, only the seeding option is supported.
+
+The following table describes the `seedUri` option:
+
+[frame="topbot", grid="cols", cols="<1s,<4"]
+|===
+| *Key*
m| seedUri
+| *Value*
+a| URI to a folder containing all the backups or a list of dumps/backups.
+
+[NOTE]
+The folder notation only works for backups, not dumps.
+
+When specifying each artifact manually, the key of the map is the name of the shard: `databaseName-g000` for the graph shard, or `databaseName-p000` onwards for the property shards, with the last property shard named `databaseName-px` where `x = numShards - 1`.
+| *Description*
+a| Defines an identical seed from an external source, which will be used to seed all servers. For more information, see xref::database-administration/standard-databases/seed-from-uri.adoc[Seed from a URI].
+| *Example*
+|
+[source, syntax, role="noheader"]
+----
+seedUri: {
+ `foo-sharded-g000`: "s3://bucket/folder/foo-g000.backup",
+ `foo-sharded-p000`: "s3://bucket/folder/foo-p000.backup",
+ `foo-sharded-p001`: "s3://bucket/folder/foo-p001.backup"
+ }
+----
+Or
+[source, syntax, role="noheader"]
+----
+seedUri: "s3://bucket/folder/"
+----
+|===
+
+== Default numbers for topology
+
+Sharded property databases use the Neo4j cluster topology.
+Therefore, you need to consider how the following settings will affect the creation of your sharded property database.
+
+[options="header", width="100%", cols="4m,1m,1m,3a"]
+|===
+| Configuration setting
+| Default value
+| Valid values
+| Description
+
+|initial.dbms.default_primaries_count
+| 1
+| [1-10]
+| The default number of primaries for the graph shard when the database is created.
+
+|initial.dbms.default_secondaries_count
+| 0
+| [0-19]
+| The default number of secondaries for the graph shard when the database is created.
+|===
\ No newline at end of file
diff --git a/modules/ROOT/pages/scalability/sharded-property-databases/data-ingestion.adoc b/modules/ROOT/pages/scalability/sharded-property-databases/data-ingestion.adoc
new file mode 100644
index 000000000..35f2876cb
--- /dev/null
+++ b/modules/ROOT/pages/scalability/sharded-property-databases/data-ingestion.adoc
@@ -0,0 +1,204 @@
+:page-role: new-2025.10 enterprise-edition not-on-aura
+:description: Methods for ingesting data into a sharded property database.
+:keywords: sharded property database, data ingestion, import, neo4j-admin, LOAD CSV, CREATE DATABASE
+= Data ingestion
+
+There are several ways to load data into a sharded property database:
+
+* Creating a sharded property database from delimited files using `neo4j-admin database import`.
+* Creating a sharded property database by splitting an existing Neo4j database.
+* Loading data transactionally into an existing sharded property database.
+* Incrementally updating an existing sharded property database.
+
+== Offline ingestion
+
+You can use the offline ingestion methods to initially import data into a sharded property database.
+This is useful when you want to import in bulk before starting your application, for incremental imports later on, or for splitting an existing database into shards.
+
+=== Initial import from delimited files
+
+You can use the xref:import.adoc#import-tool-full[`neo4j-admin database import full`] command to import data from delimited files into a sharded property database as in a standard Neo4j database.
+This is particularly useful for large datasets that you want to import in bulk before starting your application or for incremental imports later on.
+You can specify the `--property-shard-count` option to define the number of property shards you want to create.
+This will help distribute the data across multiple servers in a Neo4j cluster.
+
+[NOTE]
+====
+If you are creating the property shards on a self-managed server, the server that executes the `neo4j-admin database import` command must have sufficient storage space available for all of the property shards that will be created.
+====
+
+The following example shows how to import a set of CSV files, back them up to S3 using the `--target-location` and `--target-format` options, and then create a database using those seeds in S3.
+
+. Using the `neo4j-admin database import` command, import data into the `foo-sharded` database, creating one graph shard and three property shards.
+If the process is running on the same server as another Neo4j DBMS process, the latter must be stopped.
+The `--target-location` and `--target-format` options take the outputs of the import, turn them into uncompressed backups, and upload them to a location ready to be seeded from.
++
+[source, shell]
+----
+neo4j-admin database import full foo-sharded --nodes=nodes.csv --nodes=movies.csv --relationships=relationships.csv --input-type=csv --property-shard-count=3 --schema=schema.cypher --target-location=s3://bucket/folder/ --target-format=backup
+----
+
+. Create the database foo-sharded as a sharded property database by seeding it from your backups in the AWS S3 bucket:
++
+[source, cypher]
+----
+CREATE DATABASE `foo-sharded`
+DEFAULT LANGUAGE CYPHER 25
+PROPERTY SHARDS { COUNT 3 }
+OPTIONS {
+ seedUri: "s3://bucket/folder/"
+};
+----
+
+The cluster automatically distributes the data across its servers.
+For more information on seed providers, see xref:database-administration/standard-databases/seed-from-uri.adoc[Create a database from a URI].
+
+=== Incremental import / offline updates
+
+You can use the `neo4j-admin database import incremental` command to import data into an existing database.
+This is particularly useful for large batches of data that you want to add to an existing sharded property database.
+It allows you to do faster updates than is possible transactionally.
+
+. Stop the `foo-sharded` database if it is running.
+See xref:scalability/sharded-property-databases/starting-stopping-sharded-databases.adoc[Starting and stopping a sharded property database] for instructions.
+
+. Run the `neo4j-admin database import incremental` command, specifying the `--property-shard-count` option to define the number of property shards, the `--target-location` and `--target-format` options to upload the resulting stores to a location ready for re-creating the databases, and the CSV files with which you want to update your existing data.
+See xref:import.adoc#import-tool-incremental[Incremental import] for more information and instructions.
+
+[source, shell]
+----
+neo4j-admin database import incremental foo-sharded --nodes=nodes.csv --nodes=movies.csv --relationships=relationships.csv --input-type=csv --property-shard-count=3 --schema=schema.cypher --target-location=s3://bucket/folder/ --target-format=backup
+----
+
+. Re-create your database using the `dbms.recreateDatabase()` procedure, or follow step 2 of the initial import procedure above, creating a new database from the resulting stores in the same way as for a normal offline incremental import.
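++
+A minimal sketch of the first option; the `seedURI` key in the options map of `dbms.recreateDatabase()` is an assumption based on the seed-from-URI mechanism:
++
+[source, cypher]
+----
+// Assumption: seeds the re-created database from the backups produced above
+CALL dbms.recreateDatabase('foo-sharded', {seedURI: 's3://bucket/folder/'})
+----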
+
+
+=== Importing data using `LOAD CSV`
+
+You can use `LOAD CSV` to import data into a sharded property database.
+This is especially useful when you want to import small to medium-sized datasets (up to 10 million records) from local and remote files, including cloud URIs.
+For more information, see link:{neo4j-docs-base-uri}/cypher-manual/current/clauses/load-csv/[Cypher Manual -> `LOAD CSV`] and link:https://neo4j.com/docs/getting-started/cypher-intro/load-csv/[Getting Started guide -> Tutorial: Import CSV data using `LOAD CSV`].
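+
+A minimal sketch, assuming a hypothetical CSV file with `id` and `name` headers; `CREATE` is used rather than `MERGE` for the reason given in the note below:
+
+[source, cypher]
+----
+LOAD CSV WITH HEADERS FROM 'https://example.com/people.csv' AS row
+CREATE (p:Person {id: toInteger(row.id), name: row.name})
+----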
+
+[NOTE]
+====
+Transactional Cypher statements involving `MERGE` or relationship creation have not yet been optimized.
+As a result, `LOAD CSV` is not performant for anything larger than 100K records.
+====
+
+[[splitting-existing-db-into-shards]]
+=== Splitting an existing database into shards
+
+You can use the `neo4j-admin database copy` command to split an existing database into shards.
+It works in the same way as a standard database copy with a few additional arguments.
+You must set `--property-shard-count` to a value greater than `0` to indicate that you want to create a sharded property database.
+If `--to-format` is set to a value other than `spd_block`, a warning is printed in the log stating that the given format is ignored.
+If `--to-format` is `spd_block` and `--property-shard-count` is not set, an exception is thrown prompting you to specify the number of shards.
+
+The following example shows how to split the existing `foo` database into a new database called `foo-sharded` with three property shards in a cluster deployment.
+If you are using a standalone server, you can skip step 2.
+
+. On one of the servers, copy the data from the `foo` database into the database `foo-sharded`, creating one graph shard and three property shards.
+The `foo` database must be stopped.
++
+[source, shell]
+----
+neo4j-admin database copy foo foo-sharded --copy-schema --property-shard-count 3 --target-location=s3://bucket/folder/ --target-format=backup
+----
++
+For more information about the syntax and options of the `neo4j-admin database copy` command, see xref:backup-restore/copy-database.adoc[Copy a database store].
+
+
+. Create the database `foo-sharded` as a sharded property database by seeding it from your backups in the AWS S3 bucket:
++
+[source, cypher]
+----
+CREATE DATABASE `foo-sharded`
+DEFAULT LANGUAGE CYPHER 25
+PROPERTY SHARDS { COUNT 3 }
+OPTIONS {
+  seedURI: "s3://bucket/folder/"
+};
+----
++
+The cluster automatically distributes the data across its servers.
+For more information on seed providers, see xref:database-administration/standard-databases/seed-from-uri.adoc[Create a database from a URI].
+
+== Online ingestion
+
+You can use the online ingestion methods to import data into a sharded property database.
+This is useful for smaller datasets or when you want to create a new database from an existing one.
+
+=== Creating an empty sharded property database
+
+You can create an empty sharded database using the `CREATE DATABASE` command.
+The command allows you to specify the number of property shards and the topology of the graph shard.
+The following examples show how to create an empty sharded database with different configurations.
+
+==== Example 1: Create an empty sharded database with the default topology (1 primary, no secondaries, and 1 replica per property shard)
+
+[source, cypher]
+----
+CYPHER 25 CREATE DATABASE `foo-sharded`
+PROPERTY SHARDS { COUNT 3 };
+----
+
+==== Example 2: Create an empty sharded database with a custom topology
+
+[source, cypher]
+----
+CYPHER 25 CREATE DATABASE `foo-sharded`
+ SET GRAPH SHARD { TOPOLOGY 1 PRIMARY 0 SECONDARIES }
+ SET PROPERTY SHARDS { COUNT 3 TOPOLOGY 1 REPLICAS };
+----
+
+==== Example 3: Create an empty sharded database with a custom high-availability topology
+
+[source, cypher]
+----
+CYPHER 25 CREATE DATABASE `foo-sharded`
+  SET GRAPH SHARD { TOPOLOGY 3 PRIMARIES 0 SECONDARIES }
+ SET PROPERTY SHARDS { COUNT 3 TOPOLOGY 2 REPLICAS };
+----
+
+=== Creating a sharded database from a URI
+
+You can create a new sharded property database by seeding it from one or more URIs.
+This is useful when you want to create a new database as a copy of an existing one, or when you want to seed a new database with data from another source.
+For more information on how seeding from a URI works, see xref:database-administration/standard-databases/seed-from-uri.adoc[Create a database from a URI].
+
+The following examples show how to create a sharded database with seeding from one or several URIs.
+
+==== Example 1: Create a sharded database with seeding from one URI
+
+[source, cypher]
+----
+CYPHER 25 CREATE DATABASE `foo-sharded`
+PROPERTY SHARDS { COUNT 3 }
+OPTIONS { seedURI: "s3://bucket/folder/" };
+----
+
+==== Example 2: Create a sharded database with seeding from one URI with a different name
+
+This example is similar to example 1, but the system looks for shards named `other-database-g000`, and so on, in the seed location.
+
+[source, cypher]
+----
+CYPHER 25 CREATE DATABASE `foo-sharded`
+PROPERTY SHARDS { COUNT 3 }
+OPTIONS { seedURI: "s3://bucket/folder/", seedSourceDatabase: "other-database" };
+----
+
+==== Example 3: Create a sharded database with seeding from multiple URIs
+
+The URIs need to be keyed by the name of the shard that they seed.
+The shard names are `databaseName-g000` for the graph shard, and `databaseName-p000` through `databaseName-pXXX` for the property shards, where `XXX` is the number of property shards minus 1, zero-padded to three digits.
+
+[source, cypher]
+----
+CYPHER 25 CREATE DATABASE `foo-sharded`
+PROPERTY SHARDS { COUNT 3 }
+OPTIONS { seedURI: {
+  `foo-sharded-g000`: "s3://bucket/folder/foo-g000.dump",
+  `foo-sharded-p000`: "s3://bucket/folder/foo-p000.dump",
+  `foo-sharded-p001`: "s3://bucket/folder/foo-p001.dump",
+  `foo-sharded-p002`: "s3://bucket/folder/foo-p002.dump"
+} };
+----
\ No newline at end of file
diff --git a/modules/ROOT/pages/scalability/sharded-property-databases/deleting-sharded-databases.adoc b/modules/ROOT/pages/scalability/sharded-property-databases/deleting-sharded-databases.adoc
new file mode 100644
index 000000000..6609dfb91
--- /dev/null
+++ b/modules/ROOT/pages/scalability/sharded-property-databases/deleting-sharded-databases.adoc
@@ -0,0 +1,28 @@
+:page-role: new-2025.10 enterprise-edition not-on-aura
+:description: Instructions for deleting sharded property databases in Neo4j.
+:keywords: sharded databases, delete sharded databases, drop sharded databases, drop database
+= Deleting sharded property databases
+
+Sharded databases can be deleted using the `DROP DATABASE` command.
+Note that you must drop all database aliases before dropping a database, unless you use the `CASCADE ALIASES` option.
+
+.Syntax
+[options="header", width="100%", cols="1m,5a"]
+|===
+| Command | Syntax
+
+| DROP DATABASE
+|
+[source, syntax, role="noheader"]
+----
+DROP DATABASE name [IF EXISTS]
+[RESTRICT \| CASCADE ALIAS[ES]] [DESTROY [DATA]]
+[WAIT [n [SEC[OND[S]]]]\|NOWAIT]
+----
+|===
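+
+A minimal example based on the syntax above, assuming a sharded database named `foo-sharded`:
+
+[source, cypher]
+----
+DROP DATABASE `foo-sharded` IF EXISTS CASCADE ALIASES
+----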
+
+[NOTE]
+====
+Dropping the virtual sharded database cascades to all shards.
+Individual shards cannot be dropped.
+====
diff --git a/modules/ROOT/pages/scalability/sharded-property-databases/limitations-and-considerations.adoc b/modules/ROOT/pages/scalability/sharded-property-databases/limitations-and-considerations.adoc
new file mode 100644
index 000000000..b546f391d
--- /dev/null
+++ b/modules/ROOT/pages/scalability/sharded-property-databases/limitations-and-considerations.adoc
@@ -0,0 +1,154 @@
+:page-role: new-2025.10 enterprise-edition not-on-aura
+:description:
+= Limitations and considerations
+
+== Non-supported features
+
+=== CDC
+
+CDC is not supported in this version.
+
+=== Unsupported procedures
+
+The following procedures are not supported by sharded property databases:
+
+* `cdc.earliest()`
+* `cdc.current()`
+* `cdc.query()`
+* `db.cdc.earliest()`
+* `db.cdc.current()`
+* `db.cdc.query()`
+* `db.cdc.translateId()`
+* `db.index.fulltext.awaitEventuallyConsistentIndexRefresh()`
+* `db.listLocks()`
+* `dbms.listPools()`
+* `dbms.listActiveLocks()`
+* `dbms.scheduler.jobs()`
+* `dbms.scheduler.failedJobs()`
+
+
+[NOTE]
+====
+It is strongly recommended not to use `dbms.setConfigValue()` on sharded property databases.
+Because sharded property databases run in a clustered environment, the procedure must be run against each cluster member and is not propagated to other members.
+In particular, `dbms.setConfigValue()` cannot be used to set read-only behavior, as the two settings `server.databases.read_only` and `server.databases.writable` are not compatible with sharded property databases.
+The correct way of setting read/write access is by using `ALTER DATABASE`, as shown in the example after this note.
+See xref:scalability/sharded-property-databases/altering-sharded-databases.adoc[Altering sharded property databases] for details.
+====
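+
+A minimal sketch of the `ALTER DATABASE` approach, assuming a sharded database named `foo-sharded`:
+
+[source, cypher]
+----
+ALTER DATABASE `foo-sharded` SET ACCESS READ ONLY
+----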
+
+=== Property-based access control (PBAC)
+
+PBAC is not supported in this version.
+
+== Performance considerations
+
+=== Queries with `MERGE` clause
+
+Queries containing a `MERGE` clause are very slow at any meaningful scale.
+Due to their query plans, they are likely to cause a nested loop join, which does not currently perform well on sharded property databases.
+
+=== Filtering on properties in paths
+
+Queries that need to check a property on every relationship between two nodes before they can traverse the next relationship may see performance issues.
+For example, the following query must fetch each `[k:KNOWS]` relationship between people and check its properties before it can traverse on to the next person:
+
+[source, cypher]
+----
+MATCH (n:Person)-[k:KNOWS*1..]->(m:Person)
+WHERE all(r IN k WHERE r.creationDate = 1268465841718)
+RETURN n, k, m
+----
+
+This could be rewritten to perform better as follows:
+
+[source, cypher]
+----
+MATCH (n:Person)-[k:KNOWS {creationDate: 1268465841718}]->+(m:Person)
+RETURN n, k, m
+----
+
+However, not all queries can be rewritten in this way.
+
+=== Call in transactions for batch write operations
+
+Because of the write architecture, batching writes into larger transactions gives significant performance benefits.
+This is also true for single-instance databases, but the performance difference is more pronounced in sharded property databases.
+
+For example, consider the following pseudocode, which runs a separate Cypher update for each row:
+
+[source]
+----
+node_updates = [
+ { id: 1, name: "Alice", age: 30 },
+ { id: 2, name: "Bob", age: 25 },
+ { id: 3, name: "Charlie", age: 40 }
+]
+
+FOR each update IN node_updates DO
+ EXECUTE Cypher:
+ MATCH (n:Person {id: update.id})
+ SET n.name = update.name,
+ n.age = update.age
+END FOR
+----
+
+It can be rewritten as follows to perform better:
+
+[source, cypher]
+----
+WITH [
+ {id: 1, name: "Alice", age: 30},
+ {id: 2, name: "Bob", age: 25},
+ {id: 3, name: "Charlie", age: 40}
+] AS updates
+
+UNWIND updates AS u
+MATCH (n:Person {id: u.id})
+SET n.name = u.name,
+ n.age = u.age
+----
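+
+For larger batches, the writes can additionally be committed in fixed-size chunks with `CALL { ... } IN TRANSACTIONS`.
+A minimal sketch, assuming the updates live in a hypothetical CSV file with `id`, `name`, and `age` headers; note that `CALL ... IN TRANSACTIONS` must run in an implicit (auto-commit) transaction:
+
+[source, cypher]
+----
+LOAD CSV WITH HEADERS FROM 'https://example.com/updates.csv' AS row
+CALL (row) {
+  MATCH (n:Person {id: toInteger(row.id)})
+  SET n.name = row.name,
+      n.age = toInteger(row.age)
+} IN TRANSACTIONS OF 10000 ROWS
+----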
+
+== Other considerations
+
+=== `neo4j-admin database copy` to a sharded property database
+
+When using the `neo4j-admin database copy` command with `--property-shard-count` greater than `0` to split an existing database into shards, it is not possible to copy in place, meaning you cannot replace your existing database with a sharded property database.
+Instead, you must specify a new name or set `--to-path-data` and `--to-path-txn` or `--target-location={path|uri}` and `--target-format={database|backup}` to a new DBMS location.
+
+=== `USE` clause with sharded databases
+
+When targeting a sharded database in a `USE` clause, use its virtual database name or an alias in the graph reference.
+Targeting a shard directly is not supported.
+
+For example:
+
+[source, cypher]
+----
+USE `neo4j-sharded` MATCH (n) RETURN n
+----
+
+=== Cypher 5
+
+Cypher 5 is not supported for sharded property databases.
+Although some Cypher 5 queries may work, their behavior is not guaranteed.
+You must use Cypher 25, which is the default for creating sharded property databases.
+See xref:configuration/cypher-version-configuration.adoc[Configure the Cypher default version].
+
+[[setting-suitable-tx-log-retention-policy]]
+=== Setting a suitable transaction log retention policy
+
+Property shards pull transaction log entries from the graph shard and apply them to their stores.
+The graph shard therefore must not prune an entry from its transaction log until every replica of every property shard has pulled and applied that entry.
+Failure to maintain this requirement can render a sharded property database irrecoverable.
+To ensure that enough transaction logs are kept, set xref:configuration/configuration-settings.adoc#config_db.tx_log.rotation.retention_policy[`db.tx_log.rotation.retention_policy`] accordingly.
+A suitable heuristic is to ensure that the retained transaction log covers the transactions written between successive full backups of the sharded property database.
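+
+A minimal sketch in _neo4j.conf_; the two-day window is an illustrative assumption sized to cover the interval between successive full backups:
+
+[source, properties]
+----
+# Assumption: full backups are taken at least every two days.
+db.tx_log.rotation.retention_policy=2 days
+----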
+
+[NOTE]
+====
+Ensure that there is enough disk space for the retained transaction logs so that the server does not run out of space.
+====
+
+
+=== Controlling the property shard transaction log pull frequency
+
+The interval at which property shards pull transaction log entries from the graph shard is controlled by `internal.dbms.sharded_property_database.property_pull_interval` (defaults to 10ms).
+Write performance can often be improved by setting this value lower, at the cost of more polling on the graph shard from the property shards.
+However, the impact of this has not yet been fully tested.
\ No newline at end of file
diff --git a/modules/ROOT/pages/scalability/sharded-property-databases/listing-sharded-databases.adoc b/modules/ROOT/pages/scalability/sharded-property-databases/listing-sharded-databases.adoc
new file mode 100644
index 000000000..605d77b67
--- /dev/null
+++ b/modules/ROOT/pages/scalability/sharded-property-databases/listing-sharded-databases.adoc
@@ -0,0 +1,38 @@
+:page-role: new-2025.10 enterprise-edition not-on-aura
+:description: Listing sharded property databases
+= Listing sharded property databases
+
+Sharded databases are listed as standard databases in the `type` column when you execute `SHOW DATABASES`.
+
+* The status is either:
+** Equal to the status shared by all shards.
+** `Mixed`, if some shards have a different status.
+The `statusMessage` indicates the number of shards with each status, e.g., online (4) and starting (1).
+* The associated graph shard is listed as `graph shard` in the `type` column.
+* The associated property shards are listed as `property shard` in the `type` column.
+* The `shardTxnLag` column displays the number of transactions the current shard is behind compared to the most up-to-date shard allocation of the sharded database.
+The lag is expressed as a negative integer.
+In contrast, the `replicationLag` column shows the number of transactions the current database is behind compared to the most up-to-date allocation of that database.
+
+.Command output
+[result]
+----
++------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| name | type | aliases | access | address | role | writer | requestedStatus | statusMessage | default | home | constituents | shardTxnLag |
++------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| "spd" | "standard" | [] | "read-write" | "localhost:7687" | "primary" | TRUE | "online" | "" | FALSE | FALSE | [] | NULL |
+| "spd-g000" | "graph shard" | [] | "read-write" | "localhost:7687" | "primary" | TRUE | "online" | "" | FALSE | FALSE | [] | 0 |
+| "spd-p000" | "property shard" | [] | "read-write" | "localhost:7687" | "property shard replica" | FALSE | "online" | "" | FALSE | FALSE | [] | 0 |
+| "spd-p001" | "property shard" | [] | "read-write" | "localhost:7687" | "property shard replica" | FALSE | "online" | "" | FALSE | FALSE | [] | 0 |
+| "spd-p002" | "property shard" | [] | "read-write" | "localhost:7687" | "property shard replica" | FALSE | "online" | "" | FALSE | FALSE | [] | 0 |
+| "system" | "system" | [] | "read-write" | "localhost:7687" | "primary" | TRUE | "online" | "" | FALSE | FALSE | [] | NULL |
++------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+----
+
+[NOTE]
+====
+Some columns are empty for the virtual sharded database (the row with the name passed into the create command); their values can be found on the row of the associated graph shard instead, e.g., `lastCommittedTxn` or `replicationLag`.
+The shard rows are only visible if the user has the `CREATE/DROP/ALTER DATABASE`, `SET DATABASE ACCESS`, or `DATABASE MANAGEMENT` privilege.
+The shard rows can be distinguished by the `type` column, which has the values `graph shard` and `property shard`, respectively.
+====
+
diff --git a/modules/ROOT/pages/scalability/sharded-property-databases/overview.adoc b/modules/ROOT/pages/scalability/sharded-property-databases/overview.adoc
new file mode 100644
index 000000000..3a33cf439
--- /dev/null
+++ b/modules/ROOT/pages/scalability/sharded-property-databases/overview.adoc
@@ -0,0 +1,50 @@
+:page-role: new-2025.10 enterprise-edition not-on-aura
+:description: This page describes property sharding and how it works.
+= Overview
+
+.Preview Feature
+[IMPORTANT]
+====
+The *property sharding feature is offered AS-IS* as described in your agreement with Neo4j and should only be used for internal development purposes.
+
+When this feature becomes generally available, you will need to upgrade to the latest Neo4j version (which may require downtime) to use the feature for non-development purposes.
+
+*Enabling the preview feature (internal parameter):* +
+By default, the sharded property database is disabled.
+Use the internal setting `internal.dbms.sharded_property_database.enabled=true` to enable it.
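+
+For example, in _neo4j.conf_:
+
+[source, properties]
+----
+internal.dbms.sharded_property_database.enabled=true
+----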
+
+During the Preview period, customers with active contracts may contact Neo4j Support through the standard support channels.
+Please note that any cases related to the Preview feature will be classified as Severity 4 by default, in accordance with the link:https://neo4j.com/terms/support-terms/[Support Terms].
+====
+
+== What is property sharding?
+
+Property sharding decouples the properties associated with nodes and relationships from the graph structure and stores them in separate graphs.
+The graph structure, comprising nodes and relationships, is stored in a single "graph shard".
+The properties associated with these nodes and relationships are distributed across multiple "property shards".
+This architecture enables the independent scaling of property data, allowing for the handling of larger volumes of properties without impacting the performance of the graph structure. +
+At the same time, it also allows for the optimization of storage for graph data, as the graph shards are designed to store more graph-centric information without the overhead of properties. +
+The sharded property database behaves like a standard database.
+It provides ACID guarantees for write transactions and full API support, i.e., Cypher queries, the drivers' APIs, and the Neo4j Java internal API.
+
+image::scalability/sharded-architecture.svg[title="High-level sketch of property sharding.", role="middle"]
+
+== How it works
+
+The graph shard contains only nodes and relationships without the properties.
+Each shard is simply a standard Neo4j database with some custom behaviors.
+
+All node and relationship properties are distributed evenly across the property shards using a sharding (hash) function.
+Each property shard contains equivalent unconnected nodes and relationships, each of which has the same unique identifier as the one in the graph shard, and stores the properties associated with those elements.
+This means that each entity in the graph shard has one and only one corresponding entity in one of the property shards, and that property shard will serve all requests for that entity.
+
+The entire system is deployed into a Neo4j cluster, with the graph shard being a regular Raft group (see xref:clustering/setup/routing.adoc[Leadership, routing, and load balancing]).
+This setup provides all the failover and availability guarantees and allows the addition of primaries and secondaries as in a normal Neo4j cluster.
+
+Property shards use replicas (property shard copies) for data redundancy and failover.
+Each property shard can have multiple replicas to ensure data availability.
+If a property shard replica fails, another replica can take over its responsibilities.
+
+Transactionality concerns are handled entirely on the graph shards using Raft consensus, the same way as in a standard database.
+However, the graph shard and a replica of the property shard must communicate with each other to ensure that ACID compliance is maintained.
+Internal bookmarks are used to ensure that the cluster can read its own writes.
diff --git a/modules/ROOT/pages/scalability/sharded-property-databases/planning-and-sizing.adoc b/modules/ROOT/pages/scalability/sharded-property-databases/planning-and-sizing.adoc
new file mode 100644
index 000000000..da06a478c
--- /dev/null
+++ b/modules/ROOT/pages/scalability/sharded-property-databases/planning-and-sizing.adoc
@@ -0,0 +1,52 @@
+:page-role: new-2025.10 enterprise-edition not-on-aura
+:description: This page describes the planning and sizing of sharded property databases.
+
+= Planning and sizing
+
+== Planning the topology of a sharded property database
+
+The sharded property database is deployed into a Neo4j cluster with the graph shard being a regular xref:clustering/setup/routing.adoc#clustering-elections-and-leadership[Raft group].
+This means that you should deploy the graph shard cluster with a topology consisting of at least three servers hosting databases in primary mode (read and write, RW) for high availability.
+Additional primaries may be added to support a higher fault tolerance.
+If high availability is not required, you can create a database with a single primary host for minimum write latency and cost efficiency.
+
+Databases in secondary mode can be added to the graph shard to scale out read workloads.
+Secondaries act like caches for the graph data and are fully capable of executing read-only (RO) queries and procedures.
+
+Replicas contain the property data.
+The property data is replicated from the databases in primary mode (RW) via transaction log shipping.
+Replicas periodically poll an upstream server for new transactions and have these shipped over.
+They are not in a Raft group and do not have the same high availability features as the graph shards.
+To achieve high availability of the replicas containing the property shards, it is recommended to have multiple replicas per property shard.
+The fault tolerance for a property shard is calculated with the formula `M = F + 1`, where `M` is the number of replicas required to tolerate `F` faults.
+For example, to tolerate one failed replica (`F = 1`), each property shard needs two replicas.
+
+The following diagram illustrates a sample architecture of a high availability property sharding deployment, which comprises a graph shard, graph shard secondaries, and 4 property shards with 2 replicas for each property shard:
+
+image::scalability/property-shard-deployment.svg[title="Sample architecture of a property sharding deployment.", role="middle"]
+
+== Planning the sizing of a sharded property database
+
+Property sharding relies on the capabilities provided by Neo4j clustering for managing and sizing the infrastructure.
+More specifically:
+
+* Some servers can be associated with the graph shard databases.
+These servers can further be separated and restricted into primary and secondary members of the cluster.
+
+* Other servers can be associated with the property shard databases.
+It is important to consider the number of available servers, along with the number of shards and replicas (i.e., multiple copies of the same shard for high availability and read scalability).
+
+* Data in sharded property databases is evenly distributed across the property shards.
+It is recommended to keep each shard database at a size that suits the available hardware and allows administrative operations to run relatively smoothly.
+For example, on commodity virtual or physical hardware, the size of each shard database must not exceed 1 to 3 TB.
+
+* If a sharded property database starts relatively small and is expected to grow over time, it is recommended to create more property shards than initially required; these may at first be co-located on the same server.
+
+* As the database grows in size, additional servers may be added to allow hardware resharding of the database.
+This administrative change happens during normal online operations.
+
+* Database resharding, i.e., changing the number of property shards, can be executed offline using the `neo4j-admin database copy` command.
+See xref:scalability/sharded-property-databases/data-ingestion.adoc#splitting-existing-db-into-shards[Splitting an existing database into shards].
+
+The block format (see xref:database-internals/store-formats.adoc[Store formats]) is required for both the graph shard and the property shard.
+For accurate sizing estimation, contact your Neo4j representative for assistance.
+
diff --git a/modules/ROOT/pages/scalability/sharded-property-databases/role-based-access-control.adoc b/modules/ROOT/pages/scalability/sharded-property-databases/role-based-access-control.adoc
new file mode 100644
index 000000000..974191b00
--- /dev/null
+++ b/modules/ROOT/pages/scalability/sharded-property-databases/role-based-access-control.adoc
@@ -0,0 +1,21 @@
+:page-role: new-2025.10 enterprise-edition not-on-aura
+:description: Managing role-based access control in sharded property databases.
+= Role-based access control (RBAC)
+
+Role-based access control works as on a standard database.
+For details, see xref:authentication-authorization/manage-privileges.adoc[Role-based access control].
+
+However, privileges are granted on the virtual database.
+Explicitly granting privileges on a shard is not supported.
+
+The following example shows how to grant the `MATCH` privilege on the sharded database:
+
+[source, cypher]
+----
+GRANT MATCH { prop } ON GRAPH `foo-sharded` NODES A TO reader
+----
+
+[NOTE]
+====
+Property-based access control is not supported.
+====
diff --git a/modules/ROOT/pages/scalability/sharded-property-databases/security.adoc b/modules/ROOT/pages/scalability/sharded-property-databases/security.adoc
new file mode 100644
index 000000000..79c96f176
--- /dev/null
+++ b/modules/ROOT/pages/scalability/sharded-property-databases/security.adoc
@@ -0,0 +1,6 @@
+:page-role: new-2025.10 enterprise-edition not-on-aura
+:description: Security considerations for sharded property databases
+= Security
+
+The sharded property databases implement the same security principles as a standard Neo4j database deployment.
+For details on securing the database and encrypting communications between the servers, see xref:security/index.adoc[Security].
diff --git a/modules/ROOT/pages/scalability/sharded-property-databases/starting-stopping-sharded-databases.adoc b/modules/ROOT/pages/scalability/sharded-property-databases/starting-stopping-sharded-databases.adoc
new file mode 100644
index 000000000..e1c6be081
--- /dev/null
+++ b/modules/ROOT/pages/scalability/sharded-property-databases/starting-stopping-sharded-databases.adoc
@@ -0,0 +1,14 @@
+:page-role: new-2025.10 enterprise-edition not-on-aura
+:description: Starting and stopping sharded property databases
+= Starting and stopping sharded property databases
+
+You can start and stop a sharded database using the `START DATABASE name` and `STOP DATABASE name` commands.
+
+If the name is the sharded database name, the graph shard and all its property shards are started or stopped.
+If the name is a shard name, only the specified shard is started or stopped.
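+
+For example, assuming a sharded database named `foo-sharded`:
+
+[source, cypher]
+----
+STOP DATABASE `foo-sharded`;
+START DATABASE `foo-sharded`;
+----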
+
+[WARNING]
+====
+Stopping a single shard makes the database inaccessible.
+Therefore, this should only be done if you need to perform some action that requires only one shard to be offline, such as the recovery of a single shard.
+====
diff --git a/modules/ROOT/pages/tutorial/tutorial-composite-database.adoc b/modules/ROOT/pages/tutorial/tutorial-composite-database.adoc
index 9b9a8a706..0b8045b28 100644
--- a/modules/ROOT/pages/tutorial/tutorial-composite-database.adoc
+++ b/modules/ROOT/pages/tutorial/tutorial-composite-database.adoc
@@ -685,4 +685,4 @@ Then, using the returned product IDs, it queries both `db1` and `db2` *in parall
You have just learned how to store and retrieve data from multiple databases using a single Cypher query.
-For more details on Composite databases, see xref:database-administration/composite-databases/concepts.adoc[].
+For more details on Composite databases, see xref:scalability/composite-databases/concepts.adoc[].